Curriculum Vitae


Education

Ph.D. in Theoretical Physics at TU München
Munich, Germany
Advisor: Dr. Danny van Dyk
Title: "Applications of Light-Cone Sum Rules in Flavour Physics"
2017 - 2020

Master's Degree in Physics at Università "La Sapienza"
Rome, Italy
Grade: 110/110 with honours
2014 - 2016

Bachelor's Degree in Physics at Università degli Studi di Perugia
Perugia, Italy
Grade: 110/110 with honours
2011 - 2014



Academic Positions

Research Associate at University of Cambridge
Department of Applied Mathematics and Theoretical Physics, United Kingdom
2023 - Present

Postdoctoral Researcher at Siegen University
Department of Physics, Germany, as part of the DFG Collaborative Research Centre TRR 257 "Particle Physics Phenomenology after the Higgs Discovery"
2020 - 2023



Extended Research Stays

University of California San Diego (UCSD)
San Diego, United States
Visiting scholar
Nov - Dec 2022

Conseil Européen pour la Recherche Nucléaire (CERN)
Geneva, Switzerland
Short-term visitor
Oct 2022

Kyoto University
Kyoto, Japan
Visiting scholar
Sep - Oct 2019



Teaching and Supervision

I have taught nine courses for undergraduate, master's, and/or doctoral students (e.g. Electrodynamics, QCD and Hadrons, and Introduction to Flavour Physics).

I have (co-)supervised four students from bachelor to doctoral level.

A detailed list of these activities is available on request.



Invited Talks at International Conferences

I have given more than thirty invited talks at international conferences, workshops, and seminars. I have also organized several scientific events, such as CHARM 2023, the Young Scientists Meeting of the CRC TRR 257, and the HEP seminars in Munich, Siegen, and Cambridge. A detailed list of these activities is available on request.



Skills

Computing Skills
Programming languages: Python, C++, Mathematica (Wolfram Language)
Markup languages: LaTeX, HTML

Advanced Mathematical Skills
Applied mathematics, complex analysis, non-Euclidean geometry, Bayesian statistics, data analysis

Languages
Italian (native language), English (fluent), Spanish (fluent), German (conversant), French (basic)



Further Information

More details about my work and publications can be found in the Research section.
You can also get in touch with me using the email and social media links in the Contact section.

Research

This section is for particle physicists. In the following, I provide a concise overview of specific topics within flavour physics and present some of my research findings in this area. My research focuses on the theoretical prediction of hadronic matrix elements and flavour observables, together with their phenomenological implications in the Standard Model and beyond. For more details, you may refer to my list of publications in my INSPIRE-HEP profile.






    The Standard Model
    of Particle Physics

    The Standard Model (SM) of Particle Physics is a quantum field theory that describes the electromagnetic, weak, and strong interactions, which are the fundamental forces that govern the behaviour of subatomic particles. Since its formulation in the late 1960s, the SM has undergone an impressive number of experimental tests, making it one of the most successful theories in the history of physics. However, it is well known that the SM is not a complete theory. In fact, it does not include the gravitational force, and it does not explain the existence of dark matter and dark energy, which together make up about 95% of the Universe. Moreover, the SM does not provide a satisfactory explanation for the matter-antimatter asymmetry observed in the Universe. For these reasons, physicists are looking for a more comprehensive theory that can explain all the phenomena observed in Nature. There are two main strategies to search for New Physics (NP) beyond the SM: direct and indirect searches. On the one hand, direct searches aim to produce new particles at high-energy colliders, such as the Large Hadron Collider (LHC) at CERN, and to detect them directly. So far, no new particles have been discovered in this way at the LHC, which may indicate that they are too heavy to be produced at the energies currently available. On the other hand, indirect searches look for deviations from the SM predictions in precision measurements of known processes. Flavour physics provides an excellent framework to perform indirect searches for NP, through the rigorous study of flavour processes, i.e. processes that involve flavour-changing currents.


    Flavour physics
    and \(\,b\,\)-hadron decays

    Flavour physics is a branch of particle physics that studies the different flavours of quarks and leptons, their transitions, and their spectrum. One of the greatest achievements of flavour physics in the last decades has been to obtain stringent constraints on NP. This has been made possible by the impressive precision reached by the experimental measurements and theoretical predictions. As many flavour measurements are statistically limited, their precision will further improve in the coming years thanks to the ever-increasing amount of data collected by the experiments (LHC, Belle II, BES III, ...). In order to match the experimental precision, progress on the theoretical side is urgently needed. In fact, the theoretical uncertainties of many crucial observables are of the same order as, or even larger than, the experimental ones (see below). Note that this increased precision is not only important for the NP searches, but also for the determination of the SM fundamental parameters, such as the CKM matrix elements.

    Among the various quark transitions, the \(b\,\)-quark decays are of particular interest for three main reasons. First, the \(b\) quark is the heaviest quark that hadronises. This is important from a practical point of view because its mass is much larger than the typical scale of hadronic interactions, the QCD scale \(\Lambda_{\text{QCD}}\). Hence it is possible to perform systematic expansions in powers of \(\Lambda_{\text{QCD}}/m_b\), which is of critical importance for theoretical calculations. Second, the \(b\) quark is lighter than the \(t\) quark, and thus it always decays into quarks of a different generation. Therefore, \(b\)-quark decays are CKM-suppressed, making them sensitive to NP effects, since NP is generally not CKM-suppressed. Finally, the \(b\) quark is a third-generation quark, and many NP models have larger couplings to the third generation than to the first two. This is motivated by the fact that the third generation may play a special role in the hierarchy problem and the flavour puzzle.

    The \(b\) quark, due to confinement and its relatively long lifetime, usually hadronises. Hence, to study the properties of the \(b\) quark, one has to study the \(b\) hadrons. This introduces complications, as partons inside hadrons interact at low energies (\(\sim \Lambda_{\text{QCD}}\)), invalidating the perturbative QCD expansion. To simplify this problem, an operator product expansion (OPE) is commonly performed. OPEs are used to disentangle the short-distance and the long-distance dynamics for a given process. In \(b\,\)-hadron decays, it is possible to use an OPE to define a convenient effective field theory (EFT) in which certain degrees of freedom of the SM are integrated out, such as the Weak Effective Theory or the Heavy Quark Effective Theory. In these types of EFTs, the short-distance contributions can be computed perturbatively and are encoded in the Wilson coefficients, while the long-distance (QCD) contributions are by definition non-perturbative and are encoded in the hadronic matrix elements (MEs) of the respective effective operators. Calculating the Wilson coefficients is certainly a hard task, but it can be done using well-established procedures. As a result, these coefficients are usually known with high precision. For instance, the Wilson coefficients of the Weak Effective Theory are known at next-to-next-to-leading order in QCD. In contrast, the MEs are very difficult to calculate. As a consequence, they are commonly the main source of theoretical uncertainty in the predictions of \(b\,\)-hadron decays.
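    Schematically, and up to sign and normalisation conventions, the effective Hamiltonian for \(b \to s\) transitions takes the form

    \[
    \mathcal{H}_{\text{eff}} = -\frac{4 G_F}{\sqrt{2}}\, V_{tb} V_{ts}^* \sum_i C_i(\mu)\, \mathcal{O}_i(\mu) + \text{h.c.},
    \]

    where the Wilson coefficients \(C_i\) encode the perturbative short-distance dynamics, while the hadronic MEs of the local operators \(\mathcal{O}_i\) encode the non-perturbative long-distance dynamics.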


    Flavour changing
    currents

    Figure 1: Examples of FCCC and FCNC transitions.

    Before going into the details of the ME calculations, let me briefly introduce the concept of flavour-changing currents. As mentioned above, \(b\,\)-quarks must decay into quarks belonging to a different generation. These decays are always mediated by the \(W\) boson. The fermion currents responsible for the tree-level decays of the \(b\,\)-quark are referred to as flavour-changing charged currents (FCCC). An example of an FCCC is the \(b \to c \, \ell \, \bar{\nu}_\ell\) transition, with the corresponding Feynman diagram shown in figure 1(a). Since the neutral bosons do not change the flavour of the particles, the flavour-changing neutral currents (FCNC) only occur at loop level in the SM. An example of an FCNC is the \(b \to s \, \ell^+ \ell^-\) transition, with one of the corresponding Feynman diagrams shown in figure 1(b).

    FCCC mediated decays can be used to accurately extract certain CKM matrix elements. For instance, the \(|V_{ub}|\) and \(|V_{cb}|\) parameters can be extracted from the \(b \to u \, \ell \, \bar{\nu}_\ell\) and \(b \to c \, \ell \, \bar{\nu}_\ell\) transitions, respectively. Moreover, the same transitions can be used for indirect searches of NP. Both the \(|V_{cb}|\) and \(|V_{ub}|\) extraction and the NP searches require precise measurements and predictions. The golden decay channels to study the \(b \to \{u,c\} \, \ell \, \bar{\nu}_\ell\) transitions are \(B \to \pi \, \ell \, \bar{\nu}_\ell\) and \(B \to D^{(*)} \, \ell \, \bar{\nu}_\ell\). The BaBar, Belle (II), and LHCb experiments have achieved remarkable precision for these channels, measuring both the (differential) branching fractions and angular observables with accuracy down to the per cent level in certain cases. The theoretical predictions for observables in FCCC decays are usually affected by QCD uncertainties due to the MEs, e.g. \(\langle D\,| c\, \gamma_\mu b |B\, \rangle \). The dominant contributions to FCCC decays come from local MEs, since non-local contributions are either \(\alpha\) or \(G_F\) suppressed.
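    For instance, the local ME quoted above is conventionally parametrised in terms of two form factors, \(f_+\) and \(f_0\):

    \[
    \langle D(k)|\, \bar{c}\, \gamma^\mu b\, |B(p)\rangle
    = f_+(q^2) \left[ (p+k)^\mu - \frac{m_B^2 - m_D^2}{q^2}\, q^\mu \right]
    + f_0(q^2)\, \frac{m_B^2 - m_D^2}{q^2}\, q^\mu,
    \]

    where \(q = p - k\) is the momentum transfer. All the QCD uncertainties of the decay amplitude are then contained in these scalar functions of \(q^2\).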

    FCNC mediated decays are particularly interesting because they are loop, CKM and/or GIM suppressed in the SM. Therefore, they are very sensitive to certain NP models. For instance, a leptoquark or a \(Z'\) boson could give a tree-level contribution to the \(b \to s \, \ell^+ \ell^-\) transition, which would only be suppressed by the mass of the new heavy mediator. The golden decay channels to study the \(b \to s \, \ell^+ \ell^-\) transitions are \(B \to K \, \mu^+ \mu^-\) and \(B \to K^* \, \mu^+ \mu^-\). Even though the branching ratios of these decays are of the order of \(10^{-6}\), the LHCb and CMS experiments were able to measure the differential branching ratios and angular observables to a precision of about 10% in certain bins. This precision is expected to improve in the coming years, thanks to the increasing amount of data collected first by LHC Run 3 and then by the HL-LHC. The Belle II experiment will also play a crucial role in the study of rare \(B\,\)-meson decays, thanks to its high luminosity, excellent tracking, and particle identification capabilities. The theoretical predictions for observables in FCNC decays are even more difficult than for FCCC decays. In fact, besides the QCD uncertainties due to the local MEs, in this case also non-local MEs appear.


    \(\,b\,\)-hadron decays in 2024

    There are several tensions between the SM predictions and the measurements of \(b\,\)-hadron decay observables. Although their significance has not yet reached 5 standard deviations in a single measurement - the traditionally accepted criterion for a new discovery - they are certainly intriguing and could be a signature of NP. These tensions have been detected in both the FCCC transition \(b\to c\tau\bar\nu\) and the FCNC transition \(b\to s\mu^+\mu^-\). Concerning the FCCC processes, the averages of the measurements of the lepton flavour universality (LFU) ratios \(R_D\) and \(R_{D^*}\) exceed the SM predictions by 1.6σ and 2.6σ, respectively [HFLAV:2022pweBIB]. The combination of these two averages yields a tension of 3.3σ. Concerning the FCNC processes, the LHCb and CMS measurements [LHCb:2014cxeBIB,LHCb:2016yklBIB,LHCb:2021zwzBIB,CMS:2024syxBIB] of several observables (i.e., branching ratios and angular observables) in \(B \to K^{(*)}\mu^+\mu^-\) and \(B_s \to \phi\mu^+\mu^-\) decays deviate from the SM predictions. The significance of the \(b\to s\mu^+\mu^-\) tensions depends not only on the specific observable and bin considered, but also on the SM predictions used. Due to the difficulties in estimating the relevant non-local MEs, there is no consensus in the theory community on the SM predictions for these observables. Nevertheless, whether the tension is due to NP or to underestimated theoretical uncertainties, it is clear that the current SM predictions cannot explain the experimental data. For example, ref. [Parrott:2022zteBIB] reports a 4.2σ tension for the \(B \to K\mu^+\mu^-\) branching fraction in the large recoil region.
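    For reference, the LFU ratios mentioned above are defined as

    \[
    R_{D^{(*)}} = \frac{\mathcal{B}(B \to D^{(*)} \tau \bar{\nu}_\tau)}{\mathcal{B}(B \to D^{(*)} \ell \bar{\nu}_\ell)}, \qquad \ell = e, \mu,
    \]

    in which many hadronic uncertainties cancel between numerator and denominator, making these observables particularly clean probes of LFU violation.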

    These tensions, together with others omitted here for brevity, are commonly called the \(B\) anomalies. Understanding the \(B\) anomalies is essential for advancements in the field. In particular, the significance of the \(b\to s\mu^+\mu^-\) anomalies is high and has been confirmed by different measurements and/or experiments, and thus it is extremely unlikely that they are due to a mere statistical fluctuation. The situation is made even more interesting by the fact that these anomalies form a coherent picture that suggests the existence of NP short-distance contributions (see, e.g., ref. [Allanach:2023uxzBIB]). Similarly, the \(b\to c\tau\bar\nu\) anomalies can be explained by introducing a new heavy mediator [Iguro:2024hykBIB].

    In addition to the \(B\) anomalies, there is also a long-standing puzzle in the determination of the CKM matrix elements \(|V_{ub}|\) and \(|V_{cb}|\), as the inclusive and exclusive determinations of these parameters differ by about 3σ [HFLAV:2022pweBIB]. Even if this tension can hardly be attributed to NP, it is important to resolve the \(|V_{ub}|-|V_{cb}|\) puzzle, as most predictions of flavour observables depend on these parameters. In other words, the strength of the flavour constraints on NP depends critically on the precision of \(|V_{ub}|\) and \(|V_{cb}|\). This puzzle can only be solved with an improvement of both the measurements and the calculations of the MEs in \(b\to u\ell\bar\nu\) and \(b\to c\ell\bar\nu\) decays.


    Hadronic matrix
    elements calculations

    I have shown in the previous paragraphs that precise theoretical predictions for MEs in \(b\,\)-hadron decays are indispensable both for NP searches and for the extraction of SM flavour parameters. This is the focus of my research: improving the precision of such MEs. For certain observables - e.g. the \(B \to K^* \, \mu^+ \mu^-\) differential branching ratio - the theoretical uncertainties are larger than the experimental ones. Therefore, in order to make full use of the substantial investments and efforts made by the experimental collaborations dedicated to these measurements, advancements on the theoretical side are necessary.

    There are two QCD-based methods to compute MEs: lattice QCD (LQCD) and QCD sum rules. On the one hand, LQCD is a first-principles method that allows the calculation of MEs. The main advantage of LQCD computations is that their uncertainties can be systematically reduced, although they are time-consuming and resource-intensive. On the other hand, QCD sum rules are based on an operator product expansion and the analytical properties of correlation functions. The evaluation of QCD sum rules requires knowledge of non-perturbative inputs, such as the quark condensates or the hadron distribution amplitudes, which are difficult to determine with high precision. Nevertheless, QCD sum rules are extremely useful as they can currently shed light on matrix elements for which no or only limited LQCD results are available.
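    The starting point of a QCD sum rule is, schematically, a dispersion relation for a suitable correlation function \(\Pi\),

    \[
    \Pi(q^2) = \frac{1}{\pi} \int_{s_{\text{min}}}^{\infty} \mathrm{d}s\, \frac{\operatorname{Im} \Pi(s)}{s - q^2 - i\epsilon},
    \]

    where the left-hand side is computed with an OPE and the imaginary part on the right-hand side is expressed in terms of hadronic quantities. Matching the two sides - typically after a Borel transformation and invoking quark-hadron duality above an effective threshold - yields an estimate of the desired ME.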

    Using QCD light-cone sum rules (LCSRs) with \(B\)-meson distribution amplitudes, my collaborators and I have calculated the local MEs (i.e., the meson-to-meson form factors) in the most phenomenologically relevant \(B\) decays [Gubernari:2018wyiBIB,Bordone:2019gucBIB,Gubernari:2020eftBIB]: \(B \to \pi\), \(B \to \rho\), \(B \to K^{(*)}\), \(B \to D^{(*)}\), \(B_s \to \phi\), and \(B_s \to D_s^{(*)}\). For the first time, our calculations included sub-leading power (or twist) corrections, which have a significant impact, amounting to a ~20% shift in the MEs. In ref. [Gubernari:2020eftBIB], we have recalculated the leading non-local MEs in \(B \to K^{(*)}\ell^+\ell^-\) decays. We have found that the power corrections to these non-local MEs - also known as the soft-gluon contribution to the charm loop - are almost negligible. This contradicts the results of ref. [Khodjamirian:2010vfBIB], which found these power corrections to be two orders of magnitude larger. The origin of this difference is now understood: it is mostly due to contributions that were missing in the previous calculation and to updated hadronic inputs.

    Another research direction I am pursuing is the derivation of unitarity bounds for MEs. Unitarity bounds are model-independent constraints that can be used to further improve the precision of ME predictions. In ref. [Bordone:2019gucBIB], we have imposed these bounds and used heavy-quark effective theory to obtain very precise predictions for local MEs in \(B \to D^{(*)}\) and \(B_s \to D_s^{(*)}\) processes in the whole semileptonic region. We have also derived the first unitarity bound for the non-local MEs in \(B \to K^{(*)}\ell^+\ell^-\) decays, providing unprecedented control over the systematic uncertainties of these objects [Gubernari:2020eftBIB].
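    In their most common form - and glossing over process-specific details - these bounds rely on mapping the form factor onto the unit disk via the conformal variable \(z(q^2)\) and expanding

    \[
    f(q^2) = \frac{1}{\phi(q^2)\, B(q^2)} \sum_{n=0}^{\infty} a_n\, z^n(q^2), \qquad \sum_{n=0}^{\infty} |a_n|^2 \leq 1,
    \]

    where \(\phi\) is an outer function and \(B\) a Blaschke factor. Unitarity of the underlying correlator bounds the coefficients \(a_n\), which translates into a model-independent constraint on the shape of \(f(q^2)\).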

    I have performed pioneering calculations of the local MEs in \(B \to D^{**}\ell \bar\nu\) decays [Gubernari:2022hrqBIB,Gubernari:2023rfuBIB] and of the non-local MEs in \(\Lambda_b \to \Lambda\ell^+\ell^-\) decays [Feldmann:2023plvBIB]. These calculations are not only important because those MEs were previously unknown (or poorly known) theoretically, but also because their completion required important technical developments of LCSRs. As a matter of fact, both my numerical results and these technical developments are widely used by other collaborations.


    Predictions and
    phenomenological
    analyses for \(\,B\,\)-decays

    Once the MEs are calculated, they can be used to predict observables in \(b\,\)-hadron decays. These predictions are valuable because they can be compared with measurements to search for NP and extract the SM flavour parameters. Performing these comparisons is often technically challenging, since there are usually several measurements and predictions for the same observable that need to be taken into account and combined to obtain the most accurate result. In addition, the observables can be differential in several kinematic variables, such as the momentum transfer and the angles between the final state particles. For this reason, complex statistical analyses are often required to perform comprehensive comparisons between theory and experiment. To overcome these challenges, dedicated software tools have been developed, e.g. EOS, flavio, and HAMMER.

    I am one of the senior developers of EOS, an open-source software project primarily written in C++ with a Python interface. EOS can both predict flavour observables and perform Bayesian statistical analyses. Using EOS, we have combined the available theoretical results (including our calculations of refs. [Gubernari:2018wyiBIB,Bordone:2019gucBIB]) to obtain precise predictions for \(B \to D^{(*)}\ell \bar\nu\) and \(B_s \to D_s^{(*)}\ell \bar\nu\) decays, such as the branching ratios, the angular observables, and the LFU ratios \(R_{D^{(*)}}\) and \(R_{D_s^{(*)}}\). We have also provided an exclusive determination of \(|V_{cb}|\), which lowers the tension with the inclusive determination to 1.8σ and hence alleviates the \(|V_{ub}|-|V_{cb}|\) puzzle.
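    The logic of such a Bayesian analysis can be illustrated with a self-contained toy example. The sketch below is not EOS's actual API: the observable, its linearised dependence on a Wilson-coefficient shift \(\delta C_9\), and all numerical values are invented purely for illustration.

```python
import math

# Toy Bayesian fit of a single Wilson-coefficient shift dC9 to one
# pseudo-measurement. All numbers are hypothetical; a real analysis
# (e.g. with EOS) combines many observables and nuisance parameters.

def predict(dC9):
    # Hypothetical linearised prediction: SM central value plus a
    # shift proportional to the NP contribution dC9.
    sm_value = 1.00
    slope = -0.15
    return sm_value + slope * dC9

def log_likelihood(dC9, measured=0.85, sigma=0.05):
    # Gaussian likelihood for the pseudo-measurement.
    r = (predict(dC9) - measured) / sigma
    return -0.5 * r * r

# Flat prior on a grid; the posterior mode is found numerically.
grid = [i * 0.01 for i in range(-300, 301)]
best = max(grid, key=log_likelihood)
print(f"best-fit dC9 = {best:+.2f}")
```

    In a realistic fit one would sample the full posterior (e.g. with Markov chain Monte Carlo) rather than scan a grid, and marginalise over hadronic nuisance parameters, but the structure - prediction, likelihood, prior - is the same.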

    In ref. [Bordone:2020gaoBIB], we have used the results for the \(\bar{B}\to D^{(*)}\) and \(\bar{B}_s\to D_s^{(*)}\) local MEs to update the predictions for the hadronic decays \(\bar{B}_s^0\to D_s^{(*)+}\! \pi^-\) and \(\bar{B}^0\to D^{(*)+} K^-\). The precision of our SM predictions uncovered a substantial discrepancy with respect to the experimental measurements at the level of 4.4σ. We discuss the possible origins of this discrepancy in the article, which has triggered the interest of several research groups.

    In ref. [Gubernari:2022hxnBIB], we investigate the phenomenological consequences of the unitarity bound and of the calculation of non-local MEs in \(B \to K^{(*)}\ell^+\ell^-\) and \(B_s \to \phi\ell^+\ell^-\) decays of ref. [Gubernari:2020eftBIB]. In this work, we provide precise predictions for the differential branching ratios and angular observables in these decays. We have also performed a global analysis of the \(b\to s\mu^+\mu^-\) transition. We find a significant tension between the SM predictions and the experimental measurements for these decays, strengthening the \(b\to s\mu^+\mu^-\) anomalies.

    At the University of Cambridge, I am currently working with Prof. B. Allanach on the development of NP models that can explain the \(b\to s\mu^+\mu^-\) anomalies. This research is important to understand whether there is a viable NP explanation for the anomalies and to guide the experimental searches for NP at the LHC and future colliders. If no such NP explanation exists, the anomalies would be a sign of underestimated theoretical or experimental uncertainties, which would require a revision of the SM predictions and measurements for these decays.


    References

    1. Y.S. Amhis et al., Averages of b-hadron, c-hadron, and \(\tau\)-lepton properties as of 2021, Phys. Rev. D 107 (2023) 052008 [2206.07501].

    2. R. Aaij et al., Differential branching fractions and isospin asymmetries of \(B \to K^{(*)} \mu^+ \mu^-\) decays, JHEP 06 (2014) 133 [1403.8044].

    3. R. Aaij et al., Measurements of the S-wave fraction in \(B^{0}\rightarrow K^{+}\pi^{-}\mu^{+}\mu^{-}\) decays and the \(B^{0}\rightarrow K^{\ast}(892)^{0}\mu^{+}\mu^{-}\) differential branching fraction, JHEP 11 (2016) 047 [1606.04731].

    4. R. Aaij et al., Branching fraction measurements of the rare \(B^0_s\rightarrow\phi\mu^+\mu^-\) and \(B^0_s\rightarrow f_2^\prime(1525)\mu^+\mu^-\) decays, Phys. Rev. Lett. 127 (2021) 151801 [2105.14007].

    5. A. Hayrapetyan et al., Test of lepton flavor universality in \(B^{\pm}\to K^{\pm}\mu^+\mu^-\) and \(B^{\pm}\to K^{\pm}e^+e^-\) decays in proton-proton collisions at \(\sqrt{s} = 13\) TeV, Rept. Prog. Phys. 87 (2024) 077802 [2401.07090].

    6. W.G. Parrott, C. Bouchard, C.T.H. Davies, Standard Model predictions for \(B\to K\ell^+\ell^-\), \(B\to K\ell_1^-\ell_2^+\) and \(B\to K\nu\bar\nu\) using form factors from \(N_f=2+1+1\) lattice QCD, Phys. Rev. D 107 (2023) 014511 [2207.13371].

    7. B. Allanach, A. Mullin, Plan B: new \(Z'\) models for \(b \to s\ell^{+}\ell^{-}\) anomalies, JHEP 09 (2023) 173 [2306.08669].

    8. S. Iguro, T. Kitahara, R. Watanabe, Global fit to \(b \to c\tau\nu\) anomaly 2024 Spring breeze, (2024) [2405.06062].

    9. N. Gubernari, A. Kokulu, D. van Dyk, \(B\to P\) and \(B\to V\) Form Factors from \(B\)-Meson Light-Cone Sum Rules beyond Leading Twist, JHEP 01 (2019) 150 [1811.00983].

    10. M. Bordone, N. Gubernari, D. van Dyk, M. Jung, Heavy-Quark expansion for \({\bar{B}}_s\rightarrow D^{(*)}_s\) form factors and unitarity bounds beyond the \(SU(3)_F\) limit, Eur. Phys. J. C 80 (2020) 347 [1912.09335].

    11. N. Gubernari, D. van Dyk, J. Virto, Non-local matrix elements in \(B_{(s)}\to \{K^{(*)},\phi\}\ell^+\ell^-\), JHEP 02 (2021) 088 [2011.09813].

    12. A. Khodjamirian, T. Mannel, A.A. Pivovarov, Y.-M. Wang, Charm-loop effect in \(B \to K^{(*)} \ell^{+} \ell^{-}\) and \(B\to K^*\gamma\), JHEP 09 (2010) 089 [1006.4945].

    13. N. Gubernari, A. Khodjamirian, R. Mandal, T. Mannel, \(B \to D_1 (2420)\) and \(B \to D_1^\prime (2430)\) form factors from QCD light-cone sum rules, JHEP 05 (2022) 029 [2203.08493].

    14. N. Gubernari, A. Khodjamirian, R. Mandal, T. Mannel, \(B\to D_0^{\ast}\) and \(B_s\to D_{s0}^{\ast}\) form factors from QCD light-cone sum rules, JHEP 12 (2023) 015 [2309.10165].

    15. T. Feldmann, N. Gubernari, Non-factorisable contributions of strong-penguin operators in \(\Lambda_b \to \Lambda \ell^+\ell^-\) decays, JHEP 03 (2024) 152 [2312.14146].

    16. M. Bordone, N. Gubernari, T. Huber, M. Jung, D. van Dyk, A puzzle in \(\bar{B}_{(s)}^0 \to D_{(s)}^{(*)+} \lbrace \pi^-, K^-\rbrace\) decays and extraction of the \(f_s/f_d\) fragmentation fraction, Eur. Phys. J. C 80 (2020) 951 [2007.10338].

    17. N. Gubernari, M. Reboud, D. van Dyk, J. Virto, Improved theory predictions and global analysis of exclusive \(b \to s\mu^+\mu^-\) processes, JHEP 09 (2022) 133 [2206.03797].

    Outreach

    This section is for the general public and especially for people who do not know much about quantum and particle physics but would like to know more. In the following paragraphs, I give an accessible introduction to particle physics, starting with the basic concepts of atoms and molecules, and then moving on to the structure of the atom and the concept of antimatter. Then, I explain the importance of particle accelerators and the concept of quantum collisions. Finally, I introduce my field of research, flavour physics, and discuss its purpose and importance.

    Please contact me if you have any comments, questions, or suggestions about this section.






      Physics:
      The study of Nature

      Curiosity is one of the defining characteristics of human beings. Curiosity drives us to ask questions, seek answers, and unravel the mysteries of the universe. It is this insatiable curiosity that feeds our thirst for knowledge, fuels our scientific discoveries, and shapes the course of our history. In this relentless quest, curiosity has led us to ponder fundamental matters such as: What are we made of? How did our universe originate? What is space and time? Physics endeavours to answer these questions through the systematic study of Nature. Among the different branches of physics, I am principally interested in particle physics. The aim of this field of research is to understand the fundamental constituents of matter and the forces that govern their behaviour.


      What are we made of?

      We see and touch hundreds of objects every day. The variety of shapes, colours and sizes we observe is stunning. Even more surprising is the fact that everything is made up of 17 different types of particles. But let us take things one step at a time, introducing first the concept of atoms.

      The idea that matter is made up of basic building blocks goes back a long way. It was first proposed by the ancient Greek philosophers Leucippus of Miletus and Democritus in the fifth century BC. That is why the word atom comes from the ancient Greek word atomos, meaning indivisible. However, these were mere speculations, and the modern (scientific) understanding of the atom began to emerge in the 19th century. In 1803, John Dalton proposed his atomic theory, supported by strong experimental evidence. According to this theory, matter is made up of tiny, indivisible particles (the atoms). He also suggested that not all atoms are the same, but that there are different types with different properties.

      To date, 118 different types of atoms (or elements) have been identified. This is more or less the final number of existing elements, due to the problem of nuclear stability. The most common elements on Earth include oxygen, aluminium and iron. Atoms are extremely small compared to the objects we usually deal with in our everyday lives. In fact, the typical diameter of an atom is less than a billionth of a millimetre. To give you an idea of how small atoms are, a grain of salt contains about a quintillion atoms, or if you prefer, a million million million (=1,000,000,000,000,000,000) atoms. Please take this information with a grain of salt.

      Atoms can combine to form molecules through chemical bonds. Molecules in turn assemble into complex structures like cells, which are the basic units of life. Within cells, intricate processes take place to carry out the functions necessary for living beings. In the case of humans, billions of cells, each with specific roles and functions, come together to form tissues, organs and ultimately an entire organism.

      The discovery of the atom was undoubtedly one of the greatest achievements in the history of mankind. Thanks to this discovery, we are nowadays able to diagnose diseases, generate energy and develop new materials for various applications. However, we now know that atoms are not the fundamental building blocks of matter. In other words, they are made up of other particles which, according to our current knowledge, are fundamental. Hence, to get the ultimate answer to the question "what are we made of", we have to answer the question "what are atoms made of". But how to find out what is inside something so small?


      The dawn of
      particle physics

      Were it not so long, I would have titled this paragraph "The story of physicists and their insatiable passion for particle collisions". The story begins at the end of the 19th century, when Thomson discovered the electron at the Cavendish Laboratory of Cambridge University in 1897. A few years later, Geiger and Marsden, under the supervision of Rutherford, carried out a series of experiments to clarify the structure of the atom. The experiment that went down in history consisted of firing positively charged particles — called \(\alpha\) particles, which we now know to be helium nuclei — at a gold foil. Geiger and Marsden observed to their amazement that about one out of every ten thousand \(\alpha\) particles was reflected, i.e. deflected at an angle greater than 90°, while the others passed through the foil or were slightly deflected. The astonishment arose from the fact that the atomic model of the time, proposed by Thomson himself, did not predict such an outcome of the experiment. Indeed, in Thomson's model, atoms resembled a "plum pudding", with the positive charge evenly distributed throughout the atom and the negatively charged electrons embedded in it. According to this model, most of the \(\alpha\) particles should have passed straight through the foil with little to no deflection, as they encountered very few obstacles. Rutherford himself famously explained this result by saying, "It was as if you had fired a 15-inch shell at a piece of tissue paper and it had come back and hit you". To explain the experimental results, Rutherford proposed a new atomic model. In this new model the positive charge is concentrated in a compact nucleus at the centre of the atom, while the electrons revolve around the nucleus itself. For this reason, the Rutherford model is also called the planetary model. This model laid the foundation for the later quantum model of the atom proposed by Bohr, which is still valid today.

      Geiger and Marsden's experiment explains one of the reasons why colliding particles can be useful. By shooting particles at a fixed target or making them collide, we can understand both their "shape" and whether they are themselves made up of even smaller particles. (Shape refers to the geometry of the potentials through which the particles interact. In the experiment considered, the shape in question is that of the electromagnetic potential of the nucleus of a gold atom.) There is no other way to understand what is inside an atom, or even worse what happens inside the nucleus of an atom, as it is impossible to observe such phenomena with any optical microscope. In fact, an atom is thousands of times smaller than the wavelength of light visible to the human eye. Such experiments can therefore be seen as a natural evolution of the optical microscope concept.

      The idea of understanding the shape of particles and their internal structure through collisions is the basis of particle physics, so let me try to clarify it further with a simple example. Imagine opening a garage door. You are outside the garage and you want to know what is inside. However, the garage is dark and you cannot see anything. You can also smell gas, so you don't want to enter the garage for fear of being poisoned. Fortunately, you have just come from a fantastic tennis match (you won 6-2 6-1) and you have some tennis balls with you. You have an idea: you throw the tennis balls inside the garage and observe how they bounce back. From the way the tennis balls bounce back and the time they take to do so, you can immediately tell how full the garage is. By further analysing how the tennis balls bounce back, you can also work out the shape of the objects inside the garage. Did I manage to convince you that colliding particles is a good way to understand their properties?


      Aside: The structure of the atom and antimatter

      Before continuing, let me clarify some concepts that are fundamental to understanding the next section. As I explained above, the atom is made up of a nucleus and electrons orbiting around it. (This is a simplification, but it is sufficient for our purposes.) The nucleus contains almost all the mass of the atom, about 99.9%, while the electrons, being much lighter, contribute negligibly to the total mass, i.e. less than 0.1%. The nucleus, however, is much smaller than the atom itself, about 10,000 times smaller. To put this in perspective, if the nucleus were the size of a football, the atom would be the size of a football stadium. The nucleus is made up of protons and neutrons. Protons have a positive charge, while neutrons have no charge. Since the atom is electrically neutral, the number of protons is equal to the number of electrons. What distinguishes one element from another is only the number of protons in the nucleus. For example, hydrogen has one proton, helium two, and so on. The number of elements is finite, however, as I mentioned earlier. This is due to the fact that nuclei with a large number of protons and neutrons are unstable and decay into lighter nuclei. It is important to stress that, according to our current knowledge, protons and neutrons are not fundamental particles, but are made up of even smaller particles called quarks. Electrons, on the other hand, are considered fundamental particles.

      Let me mention something that a physicist might consider trivial, but not everyone is familiar with. Whenever we touch an object, like a table, what happens at the microscopic level is that the electrons in the atoms of our fingertips repel the electrons in the atoms of the table. This is due to the electromagnetic force: two negatively charged particles (like electrons) repel each other. As a result, we never actually "touch" anything in the sense of direct contact at the atomic level. Instead, it's the electromagnetic force between the electrons that we perceive as the sensation of touch. Furthermore, all fundamental particles are considered to be point-like, i.e. they have no size. Therefore, there is never any direct contact between particles, but only interactions through the fundamental forces (electromagnetic, weak, strong and gravitational).

      Another important concept to clarify is that of antimatter. Antimatter is like a mirror image of matter. For every type of particle in matter, there's a corresponding antiparticle in antimatter. For example, the antiparticle of an electron (a tiny negatively charged particle) is called a positron (which has a positive charge). When matter and antimatter come together, they annihilate each other, that is, they disappear in a burst of energy. This energy release is extremely powerful. According to Einstein's famous formula \(E=mc^2\), the energy released is equal to the mass of the particles multiplied by the speed of light squared. For instance, if a few grains of ordinary salt were to annihilate with a few grains of antimatter salt, they would release an amount of energy comparable to that released by the Little Boy bomb dropped on Hiroshima in 1945. (Please take this information with a grain of salt.) Luckily, our universe is essentially made up of matter, and antimatter is produced only in tiny quantities in very energetic processes. So don't worry: the next time you put salt on your pasta, you won't be risking a nuclear explosion. One might naively expect antimatter and matter to be equally abundant in the Universe, and our current theories support this expectation. What we observe is different, however, and the fact that the universe is mostly matter is one of the biggest unsolved mysteries in contemporary physics.
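      The arithmetic behind the salt comparison is a one-liner. Here is an illustrative sketch (the masses are round figures of my choosing; the TNT conversion of 1 kiloton ≈ \(4.184\times10^{12}\) J and a Little Boy yield of roughly 15 kilotons are standard reference values) of the energy released when a mass of matter annihilates with an equal mass of antimatter:

```python
# Mass-energy equivalence: annihilating a mass m of matter with an equal
# mass of antimatter releases E = 2 m c^2 (both masses are converted).
C_LIGHT = 299_792_458.0      # speed of light, m/s
TNT_KILOTON_J = 4.184e12     # energy of one kiloton of TNT, in joules

def annihilation_energy_J(mass_matter_kg):
    """Energy released when mass_matter_kg of matter meets the same mass of antimatter."""
    return 2.0 * mass_matter_kg * C_LIGHT**2

# About a third of a gram on each side is already enough to match
# the ~15 kiloton yield of Little Boy.
energy_J = annihilation_energy_J(0.35e-3)
yield_kt = energy_J / TNT_KILOTON_J
```

      Run it and you find roughly \(6\times10^{13}\) J, i.e. about 15 kilotons of TNT from well under a gram of matter on each side, which is why the "grains of salt" image, tongue-in-cheek as it is, has the right order of magnitude.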


      The advent of particle accelerators


      Schematic diagram of a synchrotron in which electrons and positrons collide.

      The first particle accelerators were built in the late 1920s, although particle physics mostly developed after the Second World War. The purpose of these machines is to create beams of particles, accelerate them to speeds close to the speed of light, and make them collide with each other. (The speed of light is about \(3\times10^8\) m/s, or more than a billion kilometres per hour.) The particles most commonly used are protons, electrons, anti-protons and positrons, although heavy nuclei are also sometimes used. Some accelerators collide beams of the same particles, e.g. electron-electron, while others collide two beams of different particles: electron-positron, electron-proton, etc. Modern accelerators are almost all synchrotrons. The world's largest and most powerful accelerator, the Large Hadron Collider (LHC) at CERN in Switzerland, is a proton-proton synchrotron. The LHC is one of the most impressive feats ever achieved by mankind. It cost around €4.75 billion to build (not including the cost of digging the 27-kilometre tunnel that houses the accelerator and the four caverns that house the various experiments) and requires an additional €5.5 billion a year to operate.

      Synchrotrons have a circular (or rather polygonal) shape and consist of two pipes with a diameter of several tens of centimetres, through each of which particle beams are passed. The particle beams in the two different pipes travel in opposite directions (see figure). Clearly, air is removed from these pipes, i.e. a vacuum is created so that the beams can travel unimpeded. In a synchrotron, beams of particles are accelerated along the two pipes by alternating electric fields, while magnetic fields guide and concentrate the beams, ensuring they maintain a circular trajectory and avoid dispersion.

      The maximum (kinetic) energy that a synchrotron can reach is determined mainly by two factors. The first is the type of particle being accelerated: the heavier the particle, the higher the energy it can reach. The second is the circumference of the synchrotron. Whenever a charged particle is bent, it emits radiation, known as synchrotron radiation, and thus loses energy; with each revolution, a particle sheds part of the energy it has gained. For this reason, it is not possible to accelerate particles indefinitely: at a certain point, the energy provided by the electric fields on each revolution exactly compensates the energy lost to synchrotron radiation. Since the energy lost to synchrotron radiation is inversely proportional to the radius of the synchrotron, synchrotrons with larger radii can reach higher energies. This explains why we need an accelerator 27 kilometres long like the LHC, and why we will need an even larger one if we want to reach higher energies.


      Quantum collisions

      Particle accelerators are not simply sophisticated microscopes for studying the shapes of potentials and the possible substructure of the particles in the beams: they are much more. To understand why, you first need to know the difference between classical and quantum collisions.

      Let us consider, for example, a billiard table. Classical mechanics tells us that when billiard balls collide with each other, the balls take different directions after the collision. It is possible to calculate the direction and speed of the balls after the collision if their direction and speed before the collision are known. You could increase the speed of the balls enormously, causing them to jump off the table or even shatter on impact, but nothing more than that can happen.

      That is pretty intuitive to understand, right? Now forget it, because collisions in quantum (relativistic) mechanics can have very different outcomes from those in classical mechanics. I already quoted Einstein's famous formula above: \(E=mc^2\). This formula implies that mass and energy are equivalent, i.e. they are two sides of the same coin. Hence, mass can be converted into energy and vice versa. Matter-antimatter annihilation is an example of the conversion of mass into energy. A high-energy particle collision is an example of the conversion of energy into mass. The mass-energy equivalence, which may seem strange at first, has surprising consequences. The energy released in a collision generates new particles, which can be completely different from the particles that collided. For instance, by colliding two electrons, it is possible to create any particle in the universe whose mass is less than or equal to the energy of the collision. So, in the quantum world, the collision of two billiard balls at sufficient speed could create a rugby ball, a cake, or even a house. Strange, right? Let me point out something about this example. While it's true that, in theory, particles and billiard balls at sufficient energy levels could potentially form macroscopic objects through various interactions and combinations, in practice this doesn't occur for two main reasons. First, it is not possible in practice to achieve the extremely high energies required to trigger such phenomena. Second, the likelihood of the products of a collision arranging themselves into recognizable macroscopic objects like rugby balls, bedside tables, or houses is incredibly low, due to the complexity of such arrangements and the dominance of statistical processes at the quantum level. Nevertheless, quantum effects are observable in collisions between subatomic particles.
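      To make the energy-to-mass bookkeeping concrete, here is a toy sketch: creating a particle-antiparticle pair in a collision requires a centre-of-mass energy of at least twice the particle's rest mass. (The masses below are rounded reference values, quoted directly in MeV, as is customary in natural units where \(E=mc^2\) lets us express masses as energies.)

```python
# Rounded particle rest masses in MeV (natural units: E = m c^2 lets us
# quote masses directly as energies).
MASSES_MEV = {
    "electron": 0.511,
    "muon": 105.7,
    "tauon": 1776.9,
    "proton": 938.3,
}

def pair_production_threshold_MeV(particle):
    """Minimum centre-of-mass energy to create a particle-antiparticle pair."""
    return 2.0 * MASSES_MEV[particle]

# Two colliding electrons, each carrying only ~0.5 MeV of rest mass, need
# well over 200 MeV of collision energy before a muon-antimuon pair can
# even appear among the products.
print(pair_production_threshold_MeV("muon"))
```

      Notice that the colliding electrons themselves weigh almost nothing: virtually all of the muons' mass comes from the kinetic energy of the collision, which is exactly the energy-into-mass conversion described above.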

      Therefore, by using particle accelerators and colliding electrons and/or protons with each other, it was possible to create the fundamental particles (or what we believe to be such) that constitute the entire visible universe. To date, 17 (types of) fundamental particles have been identified: the electron and two of its heavier siblings (the muon and the tauon), three neutrinos, six quarks, the photon, the W and Z bosons, the gluon, and the Higgs boson. These particles combine and interact with each other to form the objects we see and use in everyday life.

      I hope that I have convinced you that colliding subatomic particles is much more fun and interesting than it might seem. For instance, every collision at the LHC between two protons creates thousands of particles. This gives an idea of how complicated the work of experimental physicists is, given that around 600 million collisions per second occur at the LHC, and likewise that of theorists like me, who have to calculate the outcomes of these collisions.

      The ultimate goal of particle physics is to understand what the primary constituents of matter are and the forces that regulate the entire universe. This is done precisely by comparing the experimental measurements with the theoretical calculations. If there is agreement between the two, it means that our theory of fundamental interactions (excluding the gravitational one) is correct, otherwise the theory is incomplete and requires some modifications. The current theory of fundamental interactions is called the Standard Model, which was formulated in the late 1960s and still works successfully today. The Standard Model describes the interactions and motion of the 17 particles listed above. Trying to test the Standard Model also explains the need for more powerful particle accelerators, i.e. larger synchrotrons. In fact, by reaching higher energies it is possible to create new fundamental particles not yet discovered and therefore not included in the Standard Model. The discovery of new particles is absolutely not an end in itself, but a fundamental step forward towards the complete understanding of the universe and the formulation of the "theory of everything".


      Flavourful particles

      What I am specifically concerned with is called flavour physics. The name comes from the fact that the six different types of quarks are called flavours. Flavour physics studies the interactions and decays of quarks in detail. The aim is once again to test the Standard Model, but instead of trying to find particles at higher and higher energies, flavour physics carries out precision tests. In other words, by comparing experimental measurements of the decays of quarks of different flavours with precise theoretical predictions, it is possible to discover new particles indirectly. In fact, if the measurements find something different from our calculations, it means that there are new forces never considered before, and therefore new particles. I mainly study the beauty quark, because it is one of the heaviest and therefore most interesting quark flavours.
