Friday, December 16, 2016

Cosmic rays hint at new physics just beyond the reach of the LHC

Cosmic ray shower. Artist’s impression.
The Large Hadron Collider (LHC) – currently the world's most powerful particle accelerator – reaches a maximum collision energy of 14 TeV. But cosmic rays that collide with atoms in the upper atmosphere have been measured with collision energies about ten times as high.
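For orientation: a cosmic-ray proton of energy E hitting a nucleon at rest reaches a center-of-mass energy of roughly √(2 E m_p c²). A few lines of Python make the comparison to the LHC concrete (the proton mass is the standard value; the 10^19 eV example energy is just for illustration):

```python
import math

M_P = 0.938  # proton mass in GeV

def sqrt_s(e_lab_gev):
    """Center-of-mass energy (GeV) for a proton hitting a nucleon at rest,
    in the ultra-relativistic limit E >> m_p."""
    return math.sqrt(2.0 * e_lab_gev * M_P)

# A 10^19 eV (= 10^10 GeV) primary:
print(f"sqrt(s) = {sqrt_s(1e10) / 1e3:.0f} TeV")
```

So a 100 TeV collision corresponds to a primary of around 5×10^18 eV, far beyond what any accelerator on Earth can deliver.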

The two types of observations complement each other. At the LHC, energies are smaller, but collisions happen in a closely controlled experimental environment, directly surrounded by detectors. This is not the case for cosmic rays – their collisions reach higher energies, but the experimental uncertainties are higher.

Recent results from the Pierre Auger Observatory, at center-of-mass energies of approximately 100 TeV, are incompatible with the Standard Model of particle physics and hint at unexplained new phenomena. The statistical significance is not high, currently at 2.1 sigma (or 2.9 for a more optimistic simulation). This is approximately a one-in-100 probability to be due to random fluctuations.
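For reference, converting sigmas into tail probabilities is a one-liner; the snippet below uses the two-sided Gaussian convention, which may differ from the convention quoted in the paper:

```python
import math

def two_sided_p(sigma):
    """Two-sided Gaussian tail probability for a given significance."""
    return math.erfc(sigma / math.sqrt(2.0))

for s in (2.1, 2.9):
    print(f"{s} sigma -> p ~ {two_sided_p(s):.4f}")
```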

Cosmic rays are protons or light atomic nuclei that come to us from outer space. These particles are accelerated in galactic magnetic fields, though exactly how they reach their high speeds is often unknown. When they enter the atmosphere of planet Earth, they sooner or later hit an air molecule. This destroys the initial particle and creates a primary shower of new particles. This shower has an electromagnetic part and a part of quarks and gluons that quickly form bound states known as hadrons. These particles undergo further decays and collisions, leading to a secondary shower.

The particles of the secondary shower can be detected on Earth in large detector arrays like the Pierre Auger Observatory, located in Argentina. Pierre Auger has two types of detectors: 1) surface detectors that directly collect the particles which make it to the ground, and 2) fluorescence detectors which capture the light emitted by ionized air molecules.

The hadronic component of the shower is dominated by pions, the lightest mesons, each composed of a quark and an anti-quark. The neutral pions decay quickly, mostly into photons; the charged pions create muons which make it into the ground-based detectors.

It has been known for several years that the muon signal seems too large compared to the electromagnetic signal: the balance between the two is off. This conclusion, however, did not rest on very solid data analysis, because it depended on an estimate of the shower's total energy, which is hard to obtain if you don't measure all particles of the shower and have to extrapolate from what you do measure.

In the new paper – just published in PRL – the Pierre Auger collaboration used a different analysis method for the data, one that does not depend on the total energy calibration. They individually fit the results of detected showers by comparing them to computer-simulated events. From a previously generated sample, they pick the simulated event that best matches the fluorescence result.

Then they add two parameters to also fit the hadronic result: one parameter adjusts the energy calibration of the fluorescence signal, the other rescales the number of particles in the hadronic component. They then look for the best-fit values and find that these are systematically off the standard model prediction. As an aside, their analysis also shows that the energy does not need to be recalibrated.
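Schematically, the fit works like the toy sketch below. The linear signal model, the station numbers, and the injected rescaling values are all made up for illustration; Auger's actual likelihood analysis is far more involved:

```python
# Toy version of the two-parameter rescaling fit: R_E rescales the overall
# energy calibration, R_had rescales the hadronic (muon-rich) component.

def predicted(s_em, s_had, r_e, r_had):
    return r_e * s_em + r_e * r_had * s_had

# Fake "observed" station signals generated with R_E = 1.0, R_had = 1.3,
# i.e. 30% more hadronic signal than the simulation predicts.
sims = [(40.0, 10.0), (25.0, 8.0), (60.0, 15.0)]  # (EM, hadronic) per station
observed = [predicted(em, had, 1.0, 1.3) for em, had in sims]

def chi2(r_e, r_had):
    return sum((obs - predicted(em, had, r_e, r_had)) ** 2
               for obs, (em, had) in zip(observed, sims))

# Brute-force grid search for the best-fit pair.
best = min(((0.8 + 0.01 * i, 0.8 + 0.01 * j)
            for i in range(41) for j in range(101)),
           key=lambda p: chi2(*p))
print(f"best fit: R_E = {best[0]:.2f}, R_had = {best[1]:.2f}")
```

The fit recovers the injected values: no energy recalibration (R_E near 1), but an excess in the hadronic component (R_had well above 1), which mirrors the structure of the result described above.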

The main reason for the mismatch with the standard model predictions is that the detectors measure more muons than expected. What's up with those muons? Nobody knows, but the origin of the mystery seems to lie not in the muons themselves, but in the pions from whose decay they come.

Since the neutral pions have a very short lifetime and decay almost immediately into photons, essentially all energy that goes into neutral pions is lost for the production of muons. Besides the neutral pion there are two charged pions, and the more energy is left for these and other hadrons, the more muons are produced in the end. So the result by Pierre Auger indicates that the total energy in neutral pions is smaller than what the present simulations predict.
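The link between the neutral-pion energy fraction and the muon count can be illustrated with the textbook Heitler-Matthews toy model, N_μ ≈ (E₀/E_c)^β with β = ln n_ch / ln n_tot. The multiplicity and critical energy below are rough ballpark numbers, not simulation input:

```python
import math

def muon_count(e0_gev, f_neutral, n_tot=30, e_crit_gev=20.0):
    """Heitler-Matthews estimate: each interaction produces n_tot pions,
    a fraction f_neutral of them neutral (their energy is lost to the EM
    component); charged pions decay to muons below e_crit_gev."""
    n_charged = (1.0 - f_neutral) * n_tot
    beta = math.log(n_charged) / math.log(n_tot)
    return (e0_gev / e_crit_gev) ** beta

e0 = 1e10  # a 10^19 eV primary, in GeV
standard = muon_count(e0, 1 / 3)  # equal sharing among the three pion types
reduced = muon_count(e0, 0.25)    # less energy into neutral pions
print(f"muon count ratio: {reduced / standard:.2f}")
```

Even a modest shift of energy from neutral to charged pions, compounded over many shower generations, noticeably raises the final muon count.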

One possible explanation for this, proposed by Farrar and Allen, is that we misunderstand chiral symmetry breaking. It is the breaking of chiral symmetry that accounts for the biggest part of the masses of nucleons (not the Higgs!). The pions are the (pseudo) Goldstone bosons of that broken symmetry, which is why they are so light and ultimately why they are produced so abundantly. Pions are not exactly massless, hence "pseudo", because chiral symmetry is only approximate. The chiral phase transition is believed to lie close to the confinement transition, the transition from a medium of quarks and gluons to color-neutral hadrons. For all we know, it takes place at a temperature of approximately 150 MeV. Above that temperature chiral symmetry is "restored".
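How "pseudo" the pions are can be estimated from the Gell-Mann–Oakes–Renner relation, m_π² f_π² ≈ (m_u + m_d) |⟨q̄q⟩|, which ties the pion mass to the small quark masses that break chiral symmetry explicitly. Plugging in rough textbook numbers (all values approximate):

```python
import math

# Rough textbook inputs, in MeV (and MeV^3 for the condensate)
m_u_plus_m_d = 7.0       # sum of light quark masses
condensate = 270.0 ** 3  # |<qbar q>| ~ (270 MeV)^3
f_pi = 92.0              # pion decay constant

m_pi = math.sqrt(m_u_plus_m_d * condensate / f_pi ** 2)
print(f"m_pi ~ {m_pi:.0f} MeV")  # observed: ~135-140 MeV
```

This lands in the right ballpark of the observed pion mass, showing that the pion mass squared is indeed set by the small explicit breaking, not by the QCD scale itself.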

Chiral symmetry restoration almost certainly plays a role in the cosmic ray collisions, and a more important role than it does at the LHC. So, quite possibly this is the culprit here. But it might be something more exotic, new short-lived particles that become important at high energies and which make interaction probabilities deviate from the standard model extrapolation. Or maybe it’s just a measurement fluke that will go away with more data.

If the signal remains, however, that’s a strong motivation to build the next larger particle collider which could reach 100 TeV. Our accelerators would then be as good as the heavens.

[This post previously appeared on Forbes.]


Phillip Helbig said...

As Zeldovich pointed out, the universe is the poor man's particle accelerator. :-)

Sabine Hossenfelder said...

Ah, it just means we're all rich in ways we don't appreciate.

Uncle Al said...

"misunderstand chiral symmetry breaking" Postulate exact vacuum mirror symmetry toward fermion quarks then hadrons. Pierre Auger Observatory, baryogenesis (Sakharov parity violation), dark matter (Milgrom acceleration as Noetherian angular momentum leakage re vacuum chiral anisotropy), Chern-Simons repair of Einstein-Hilbert action.

Opposite shoes non-identically embed within trace left-footed vacuum. They vacuum free fall along non-identical minimum action trajectories. Vacuum is not exactly mirror-symmetric toward quarks then hadrons.
Eötvös experiments, Test space-time geometry with maximally divergent chiral mass geometries.

Single crystal opposite shoes are visually and chemically identical test masses in enantiomorphic space groups, doi:10.1107/S0108767303004161, Section 3ff.
Calculate mass distribution chiral divergence, CHI = 0 → 1, doi:10.1063/1.1484559 glydense.png

Alex Lumaghi said...

How does the Look Elsewhere Effect apply to this type of data as opposed to something like the LHC where you have more places to look? In my layperson's understanding, if you run 100 experiments, then you should not be surprised by a 2 sigma result in one or two of them. If you run one experiment and get a 2 sigma result, it may suggest you are on to something. Does that kind of reasoning come into play with this experiment?

Sabine Hossenfelder said...

Hi Alex,

The sigmas have nothing to do with the number of experiments directly, but with the amount of data. Since more experiments usually means more data, these are indirectly related though.

The level of significance is a test against the expected random fluctuations that may look just like a signal. If you have less data, fluctuations stand out more strikingly. (There's a name for this which has escaped me.)

Yeah, sure that kind of reasoning comes into play, which is why they calculate the sigmas. Or else, I don't understand your question. For all I can tell their significance is global and hence no look-elsewhere effect. Best,


Sabine Hossenfelder said...

Ah, now I recall. I think it's called the "law of small numbers." In a nutshell, it's why Iceland seems to stand out in so many statistics. (Few people, large fluctuations.)
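A quick simulation makes the point (the populations and the 1% rate are made up for illustration): the same underlying probability produces much larger rate fluctuations in a small population.

```python
import random
import statistics

random.seed(1)
P = 0.01  # same underlying per-person probability everywhere

def observed_rates(population, trials=100):
    """Observed incidence rates in `trials` independent simulated samples."""
    rates = []
    for _ in range(trials):
        cases = sum(1 for _ in range(population) if random.random() < P)
        rates.append(cases / population)
    return rates

spread_small = statistics.stdev(observed_rates(300))    # a small country
spread_big = statistics.stdev(observed_rates(30000))    # a large one
print(f"spread, small pop: {spread_small:.4f}  large pop: {spread_big:.4f}")
```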

Alex Lumaghi said...

Yes, I was pretty much engaged in exactly the fallacy you describe, so this does clarify it for me. Thank you for the response.

Unknown said...

"the charged pions create muons which make it into the ground-based detectors." The muons only make it to the ground because of special relativity time dilation. Could the discrepancy be explained by a failure of special relativity?

andrew said...

"The statistical significance is not high, currently at 2.1 sigma (or 2.9 for a more optimistic simulation). This is approximately a one-in-100 probability to be due to random fluctuations."

From the phrasing it isn't as clear as it might be that 2.1 sigma is about 4%. It is 2.9 that is about 1%.

But, it also bears noting that it isn't clear how the look elsewhere effect works in this context and that while those percentages are technically mathematically the case, in practice, 2 sigma anomalies are generally considered consistent with the underlying theory (given the immense corroboration of the underlying Standard Model and GR) and the rule of thumb is that three sigma anomalies in real life only end up amounting to anything about half the time.

The discrepancy between the math and the reality is partially a product of subtle look elsewhere effects that are hard to define properly since the definition of what constitutes a trial can be muddy, and partially a product of overoptimistic estimates of how low systemic errors are (in part because the unknown unknowns that contribute to systemic error aren't accounted for despite everyone's best efforts to identify them).

In a concept similar to, although not quite the same as, the look elsewhere effect, the raw statistical anomaly percentage doesn't take into account the fact that there are thousands of confirmations of what would naively seem to be the same effects. When you have 1000 positive confirmations (some at 5 sigma plus) and 1 anomaly in roughly similar tests of the same law of physics, the likelihood that the result is a real anomaly rather than a fluke looks a lot different.

Bottom line, "one in a hundred" that it is nothing, in practice, vastly understates the true likelihood that a 2.9 sigma result is a statistical fluke. A 2.9 sigma anomaly is interesting and might end up being something real, but a "one in a hundred" figure, while literally describing the math, conveys a misleading impression that an effect is almost surely real, when 2.9 sigma results inconsistent with the SM and GR later turn out to be nothing all the time.

TheBigHenry said...


It sounds like the maximum energy at which the LHC operates needs to be scaled up by about an order of magnitude, which presumably implies a similar scaling of cost for the construction upgrade. Do you think there is much hope of getting enough money to enable such a massive venture?

Beijixiong said...

There is still a "look elsewhere" effect. If the Auger group compared 100 different measurements to expectations, finding a "1 in 100 probability" discrepancy would be expected.

Arun said...

It is a rather strange result, in my opinion. Why would a discrepancy in the ratio of
π0 to π+/π− show up not at the LHC but at ten times the energy? Surely most of the interactions leading to the pion showers are QCD, and why would one expect QCD to go haywire at beyond-LHC energies?

Sabine Hossenfelder said...


No. The same time dilatation also applies to all other particles.

Sabine Hossenfelder said...


That's a rather complicated question that I'm afraid I can't fully answer. See, I often hear people say that the standard model explains all the LHC data, but then of course the LHC collides hadrons and most of the particles being measured are also hadrons, and that's all strongly coupled QCD which nobody can calculate from first principles.

The way this works is that on top of the standard model you use two functions that connect the composites to the quark/gluon content: a) the parton distribution functions and b) the fragmentation functions (the former for the input, the latter for the output). These aren't computed analytically, they're parameterizations combined with a scaling analysis, and the parameters are extracted from already existing data. You literally download them as tables (ask Google).

Having said that, the full energy of the collision is eventually redistributed to much lower energies. In the cosmic ray shower, there's more total energy in the collision, it's a different collision (on nucleus), and you have to trace it for a longer period. It's very possible that this tests different regimes of the parameterization than does the LHC.

What they do in this paper (if I understood correctly) is basically to take the standard codes from the LHC and apply them to cosmic ray showers, and somewhere along the line something goes wrong. Putting the blame on strongly coupled QCD, hence on things that nobody can calculate with pen and paper, is kind of the first thing that comes to mind. Best,


Uncle Al said...

Samuel Ting's AMS-02 experiment infers dark matter mass, only lacking more data for more sigmas and contingent ground detection (XENON, LUX, CoGeNT; CUORE and Super-K indirectly).

Ephraim Fischbach’s Fifth Force data was spurious. Pioneer anomaly, exquisite measurement, multiple theories; unequal surface temperatures caused it. OPERA experiment, superluminal muon neutrinos to high sigma, multiple theories; a loose fiberoptic clock connection caused it. US War on Cancer, 50 years ever victorious; now requiring a Moon Shot for victory.

Sometimes a cigar is only a banana. Look elsewhere, too.

Bill said...

"... just beyond the reach of the LHC."

Richtig? From 14 to 100+ TeV is a bit more than moving the goalposts, I would think.

andrew said...

While a special relativistic new physics suggested by "Unknown" seems unlikely, the notion that there is a problem with the amount of special relativistic time dilation in the model that is assumed isn't a bad one.

One way to get more muons than expected is for the muons to last longer, which mostly goes to whether the model is accurately estimating the number of hadrons above the 100 GeV energy threshold necessary to prevent the muons from decaying. At the relativistic energies involved here, a difference in kinetic energy that would greatly slow down the decay of the muons would result in almost no discernible difference in muon speed to an observer in the observatory.

After all, as you note in your answer to Arun, a systemic error in estimating the number of hadrons above the 100 GeV threshold in a never before tested alternative to measuring the total energy of the event (using a model trained on LHC events in which the total energy of the event was accurately measured) is exactly the kind of error one would expect to see with the new methodology for estimating energy scales. For example, maybe the new methodology dismissed a contribution as minor, because it doesn't contribute a huge percentage to total energy, but the ignored contribution's small contribution to total energy is concentrated primarily in events near the 100 GeV threshold while having little impact on high energy events.

And it is hard to treat the actual discrepancy as more than 30% at 2.1 sigma in any case, as there are two LHC-trained models, and the fact that they provide very different predictions makes it certain that one is more incorrect than the other, and this test shows which is the more accurate (and keep in mind we have only about 600ish events here in total that are being analyzed in multiple bins that are each smaller than that).

So, the case for new physics isn't huge.

Also, even the paper referenced for "New Physics" is really not so much arguing that Standard Model QCD is wrong as it is that we've overlooked a subtle high energy implication of Standard Model QCD as applied in operational models of it, by proposing a new high energy chiral symmetry restoration phase analogous to other phase transitions in QCD like the quark-gluon plasma and Bose-Einstein condensate phase transitions. This chiral symmetry restoration hypothesis still seems like a stretch, but even if it were right, it wouldn't really be "new physics" in the way that we commonly speak of the term, as something that actually changes the underlying equations of the Standard Model rather than merely the way that we do calculations with them as a practical matter, which, as you note, in QCD is greatly distanced from a first-principles calculation and instead uses huge data tables to get things like PDFs.

akidbelle said...

Andrew, Sabine,

if I understand something, the tables should be the same at 10 and 100 TeV (or maybe extrapolated). But nobody can compute something from first principles (the models are "trained") and then, formally speaking, we do not even know if the tables agree with QCD. Right?


Sabine Hossenfelder said...


Well, yes, the tables are extrapolated from 10 to 100 TeV, but that isn't what Arun alluded to, or at least I don't think he did. QCD is difficult at low energies, not at high energies. If the extrapolation really does break down, that would be much more dramatic. Best,


Jeff said...

"It is the breaking of chiral symmetry that accounts for the biggest part of the masses of nucleons..." - while I know it's a convenient shorthand, it makes me sad when we speak as though theory causes physical phenomena. The masses of nucleon are whatever they are, while "chiral symmetry" is a construct of the human mind intended to describe our observations. Our mental constructs don't determine the behavior of nucleons.

Sabine Hossenfelder said...

Here is another recent paper with a proposed explanation:

Strange fireball as an explanation of the muon excess in Auger data
Luis A. Anchordoqui, Haim Goldberg, Thomas J. Weiler

Federico vdP said...

in the sentence
"Since the neutral pions have a very short lifetime and decay almost immediately into photons, essentially all energy that goes into neutral pions is lost for the production of muons."
you certainly meant photons instead of muons.

Sabine Hossenfelder said...

Federico, I meant muons.