CERN Machine Learning

A quick look into my involvement with bringing state-of-the-art Machine Learning techniques to High Energy Physics

Projects

Dark Knowledge for the Matrix Element Method (MEM)

The MEM is a Physics-driven, computationally expensive technique which is widely used in HEP due to the clear physical insights it provides. It directly connects theoretical and experimental Physics by allowing us to estimate several parameters at once using integration over phase space and the detector response. Its output is usually combined with high-level features, such as the 4-vectors of the particles in the event and the missing transverse energy, as inputs to a NN, in order to augment the raw features with a Physics-instilled engineered variable.

Our goal is to use Dark Knowledge to construct a net able to mimic the performance of the MEM while reducing the computational requirements. This net is expected to capture not only the performance but also the overall shape of the MEM output distribution, in order to preserve its discriminating power as an input feature to the original NN. Our benchmark is the ATLAS $t\bar{t}H$, $H \rightarrow b\bar{b}$ analysis.
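As a rough illustration, a distillation setup along these lines might look like the Keras sketch below, in which a small "student" network is trained to reproduce a precomputed MEM discriminant so it can be evaluated cheaply at analysis time. The data and variable names (event_features, mem_discriminant) are placeholders, not the actual analysis code.

```python
# Minimal distillation sketch (placeholder data and names): train a small
# "student" network to reproduce a MEM discriminant computed offline.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Placeholder inputs: high-level event features (4-vectors, MET, ...) and the
# MEM discriminant precomputed for each training event.
n_events, n_features = 10000, 20
event_features = np.random.normal(size=(n_events, n_features)).astype("float32")
mem_discriminant = np.random.uniform(size=(n_events, 1)).astype("float32")

student = keras.Sequential([
    keras.Input(shape=(n_features,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(64, activation="relu"),
    # Sigmoid keeps the student output in the same [0, 1] range as the discriminant.
    layers.Dense(1, activation="sigmoid"),
])

# Mean squared error pushes the student to match the full shape of the MEM
# output distribution, not just a binary decision.
student.compile(optimizer="adam", loss="mse")
student.fit(event_features, mem_discriminant,
            epochs=10, batch_size=256, validation_split=0.1)
```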


RNN for Event Classification

As a member of the $hh \rightarrow \gamma\gamma b\bar{b}$ analysis group, I'm looking for evidence of the production of Higgs boson pairs decaying to two photons and a $b$ quark-antiquark pair. I use Machine Learning techniques inspired by Natural Language Processing to explore an event-level approach to Physics analysis. I design, train and test Recurrent Neural Networks using Keras in order to classify events with variable numbers of particles.

This method allows for a holistic study of each event, by combining event-level variables with jet- and photon-specific features, without limiting the number of jets or particles under consideration. This represents a huge step forward from classic cut-based analyses.
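A minimal sketch of this kind of architecture in Keras might look as follows: a masked LSTM reads a padded, variable-length list of jets, and its summary is concatenated with fixed-size event-level features. The input shapes and names are illustrative, not the actual analysis configuration.

```python
# Sketch of an event-level classifier with variable jet multiplicity
# (illustrative shapes and names).
from tensorflow import keras
from tensorflow.keras import layers

max_jets, n_jet_feats, n_event_feats = 10, 6, 8

jet_seq = keras.Input(shape=(max_jets, n_jet_feats), name="jets")
event_vars = keras.Input(shape=(n_event_feats,), name="event_vars")

# Masking lets the LSTM ignore zero-padded entries, so events with different
# numbers of jets can be processed by the same network.
x = layers.Masking(mask_value=0.0)(jet_seq)
x = layers.LSTM(32)(x)

# Combine the jet-sequence summary with event-level (and photon) features.
combined = layers.Concatenate()([x, event_vars])
combined = layers.Dense(32, activation="relu")(combined)
output = layers.Dense(1, activation="sigmoid", name="is_signal")(combined)

model = keras.Model(inputs=[jet_seq, event_vars], outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```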


Deep Learning for Top and Boson Tagging in Boosted Topologies

During my first year at CERN, I developed a boosted boson tagger using Deep Learning for binary and multiclass classification in the $W' \rightarrow WZ$ analysis, as well as a boosted top tagger which outperformed standard substructure-variable scans. At the time, I used a feed-forward neural network from the AGILEPack library, with stacked auto-encoders and unsupervised pre-training.

Additionally, I investigated the model's ability to learn high-order correlations between tracking and calorimeter variables, and I performed variable selection studies and hyperparameter grid searches.
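AGILEPack itself is a C++ library, so the snippet below is only a conceptual stand-in: a plain feed-forward tagger in Keras wrapped in the kind of hyperparameter grid search described above, with placeholder data and illustrative grid values.

```python
# Conceptual stand-in for a feed-forward tagger plus hyperparameter grid
# search (placeholder data; grid values are illustrative).
import itertools
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

n_jets, n_substructure_vars = 5000, 12
X = np.random.normal(size=(n_jets, n_substructure_vars)).astype("float32")
y = np.random.randint(0, 2, size=(n_jets, 1)).astype("float32")  # top vs QCD label

def build_tagger(width, depth, learning_rate):
    """Build a simple feed-forward binary classifier."""
    model = keras.Sequential()
    model.add(keras.Input(shape=(n_substructure_vars,)))
    for _ in range(depth):
        model.add(layers.Dense(width, activation="relu"))
    model.add(layers.Dense(1, activation="sigmoid"))
    model.compile(optimizer=keras.optimizers.Adam(learning_rate=learning_rate),
                  loss="binary_crossentropy", metrics=["accuracy"])
    return model

grid = {"width": [32, 64], "depth": [2, 3], "learning_rate": [1e-3, 1e-4]}
best = None
for values in itertools.product(*grid.values()):
    params = dict(zip(grid.keys(), values))
    model = build_tagger(**params)
    history = model.fit(X, y, epochs=5, batch_size=128,
                        validation_split=0.2, verbose=0)
    # Keep the configuration with the best validation accuracy.
    val_acc = history.history["val_accuracy"][-1]
    if best is None or val_acc > best[0]:
        best = (val_acc, params)

print("Best configuration:", best)
```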


Machine Learning for b-Tagging

An ongoing project of mine is to contribute to the improvement of the $b$-tagging pipeline by providing scalable Machine Learning solutions.

In collaboration with colleagues from UC Irvine, Stanford and the University of Geneva, I've developed recurrent and convolutional architectures to improve upon the current impact-parameter-based tagging techniques. Drawing inspiration from NLP, I introduced the use of bidirectional recurrent units and 1D convolutional layers for tagging jets using the characteristics of their constituent tracks.
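A minimal Keras sketch of this idea is shown below: 1D convolutions build per-track representations, and a bidirectional recurrent layer summarises the whole track sequence into a jet-level tag. The track features, sequence length and layer sizes are assumptions for illustration only.

```python
# Illustrative track-based jet tagger (shapes and names assumed): 1D
# convolutions followed by a bidirectional GRU over the track sequence.
from tensorflow import keras
from tensorflow.keras import layers

max_tracks, n_track_feats = 15, 8  # e.g. impact parameters, pT fraction, ...

tracks = keras.Input(shape=(max_tracks, n_track_feats), name="tracks")

# kernel_size=1 convolutions act as shared per-track transformations; the
# bidirectional GRU then reads the ordered track sequence in both directions.
x = layers.Conv1D(32, kernel_size=1, activation="relu")(tracks)
x = layers.Conv1D(32, kernel_size=1, activation="relu")(x)
x = layers.Bidirectional(layers.GRU(25))(x)

output = layers.Dense(1, activation="sigmoid", name="is_b_jet")(x)

model = keras.Model(inputs=tracks, outputs=output)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```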

My authorship qualification task for the ATLAS collaboration also included the optimization of a lower-level tagger using Boosted Decision Trees.
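The qualification task itself was carried out within the ATLAS software stack; purely as a generic illustration of a BDT-based tagger, a scikit-learn sketch with placeholder data might look like this:

```python
# Generic BDT tagger illustration (placeholder data and feature names; the
# actual qualification task used the ATLAS software stack).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

n_jets = 5000
X = np.random.normal(size=(n_jets, 10))   # e.g. vertex and track variables
y = np.random.randint(0, 2, size=n_jets)  # b-jet vs light-jet label

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3)

bdt = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
bdt.fit(X_train, y_train)

# Evaluate the tagger's discriminating power on held-out jets.
scores = bdt.predict_proba(X_test)[:, 1]
print("ROC AUC:", roc_auc_score(y_test, scores))
```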


Tools

In order to perform these studies (and more), I use a variety of Machine Learning and data analysis tools, including Keras and AGILEPack.