Sunday, February 19, 2017

Fake news wasn’t hard to predict – But what’s next?

In 2008, I wrote a blogpost which began with a dark vision – a presidential election led astray by fake news.

I’m not much of a prophet, but it wasn’t hard to predict. Journalism, for too long, attempted the impossible: Make people pay for news they don’t want to hear.

It worked, because news providers, by and large, shared an ethical code. Journalists aspired to tell the truth; their passion was unearthing and publicizing facts – especially those that nobody wanted to hear. And as long as the professional community held the power, they controlled access to the press – the device – and kept up the quality.

But the internet made it infinitely easy to produce and distribute news, both correct and incorrect. Fat headlines suddenly became what economists call an “intangible good.” No longer does news rely on a physical resource or a process of manufacture. News can now be created, copied, and shared by anyone, anywhere, with almost zero investment.

By the early 00s, anybody could set up a webpage and produce headlines. From there on, quality went down. News makes the most profit if it’s cheap and widely shared. Consequently, more and more outlets offer the news people want to read – that’s how the law of supply and demand is supposed to work, after all.

What we have seen so far, however, is only the beginning. Here’s what’s up next:
  • 1. Fake News Gets Organized

    An army of shadow journalists specializes in fake news, pitching their stories to alternative news outlets. These outlets will mix real and fake news. It will become increasingly hard to tell one from the other.

  • 2. Fake News Becomes Visual

    “Pics or it didn’t happen” will soon be a thing of the past. Today, it’s still difficult to forge photos and videos. But software is becoming better, cheaper, and easier to obtain, and soon it will take experts to tell real from fake.

  • 3. Fake News Gets Cozy

    Anger isn’t sustainable. In the long run, most people want good news – they want to be reassured everything’s fine. The war in Syria is over. The earthquake risk in California is low. The economy is up. The chocolate ration has been raised again.

  • 4. Corporations Throw in the Towel

    Facebook and Google and Yahoo conclude it’s too costly to assess the truth value of information passed on by their platforms, and decide it’s not their task. They’re right.

  • 5. Fake News Has Real-World Consequences

    We’ll see denial of facts leading to the deaths of thousands of people. I mean a lack of earthquake warning systems because the risk was dismissed as fear-mongering. I mean riots over terrorist attacks that never happened. I mean collapsed buildings and toxic infant formula because who cares about science. We’ll get there.

The problem that fake news poses for democratic societies began attracting academic interest a decade ago. Triggered by the sudden dominance of Google as a search engine, it entered the literature under the name “Googlearchy.”

Democracy relies on informed decision making. If the electorate doesn’t know what’s real, democratic societies can’t identify good ways to carry out the people’s will. You’d think that couldn’t be in anybody’s interest, but it is – if you can make money from misinformation.

Back then, the main worry focused on search engines as primary information providers. Someone with more prophetic skills might have predicted that social networks would come to play the central role for news distribution, but the root of the problem is the same: Algorithms are designed to deliver news which users like. That optimizes profit, but degrades the quality of news.

Economists of the Chicago School would tell you that this can’t be. People’s behavior reveals what they really want, and any regulation of the free market merely makes the fulfillment of their wants less efficient. If people read fake news, that’s what they want – the math proves it!

But no proof is better than its assumptions, and one central assumption for this conclusion is that people can’t have mutually inconsistent desires. We’re supposed to have factored in long-term consequences of today’s actions, properly future-discounted and risk-assessed. In other words, we’re supposed to know what’s good for us and our children and great-grandchildren and make rational decisions to work towards that goal.

In reality, however, we often want what’s logically impossible. Problem is, a free market, left unattended, caters predominantly to our short-term wants.

At the risk of appearing inconsistent, economists are right when they speak of revealed preferences as the tangible conclusion of our internal dialogues. It’s just that economists, being economists, like to forget that people have a second way of revealing preferences – they vote.

We use democratic decision making to ensure the long-term consequences of our actions are consistent with the short-term ones, like putting a price on carbon. One of the major flaws of current economic theory is that it treats the two systems, economic and political, as separate, when really they’re two sides of the same coin. But free markets don’t work without a way to punish forgery, lies, and empty promises.

This is especially important for intangible goods – those which can be reproduced with near-zero effort. Intangible goods, like information, need enforced copyright, or else quality becomes economically unsustainable. Hence, it will take regulation, subsidies, or both to prevent us from tumbling down into the valley of alternative facts.

In the last few months I’ve seen a lot of finger-pointing at scientists for not communicating enough or not communicating correctly, as if we were the ones to blame for fake news. But this isn’t our fault. It’s the media that has a problem – and it’s a problem scientists solved long ago.

The main reason fake news is hard to identify, and why it remains profitable to reproduce what other outlets have already covered, is that journalists – in contrast to scientists – are utterly opaque about how they work.

As a blogger, I see this happening constantly. I know that many, if not most, science writers closely follow science blogs. And the professional writers frequently report on topics previously covered by bloggers – without so much as naming their sources, let alone referencing them.

This isn’t merely personal paranoia. I know it because in several instances science writers actually told me that my blogpost about this-or-that had been very useful. Some even asked me to share links to the articles they wrote based on it. Let that sink in for a moment – they make money from my expertise, don’t give me credit, and think that this is entirely appropriate behavior. And you wonder why fake news is economically profitable?

For a scientist, that’s mind-boggling. Our currency is citations. Proper credit is pretty much all we want. Keep the money, but say my name.

I understand that journalists have to protect some sources, so don’t misunderstand me. I don’t mean they have to spill the beans about their exclusive secrets. What I mean is simply that a supposed news outlet that merely echoes what’s been reported elsewhere should be required to refer to the earlier article.

Of course this would imply that the vast majority of existing news sites would be revealed as copy-cats and lose readers. And of course it isn’t going to happen because nobody’s going to enforce it. If I saw even a remote chance of this happening, I wouldn’t have made the above predictions, would I?

What’s even more perplexing for a scientist, however, is that news outlets, to the extent that they do fact-checks, don’t tell customers that they fact-check, or what they fact-check, or how they fact-check.

Do you know, for example, which science magazines fact-check their articles? Some do, some don’t. I know for a few because I’ve been border-crossing between scientists and writers for a while. But largely it’s insider knowledge – I think it should be front-page information. Listen, Editor-in-Chief: If you fact-check, tell us.

It isn’t going to stop fake news, but I think a more open journalistic practice and publicly stated adherence to voluntary guidelines could greatly alleviate the problem. It probably makes you want to puke, but academics are good at a few things, and high community standards are one of them. And that is what journalism needs right now.

I know, this isn’t exactly the cozy, shallow, good news that shares well. But it will be a great pleasure when, in ten years, I can say: I told you so.

Friday, February 17, 2017

Black Hole Information - Still Lost

[Illustration of black hole.
Image: NASA]
According to Google, Stephen Hawking is the most famous physicist alive, and his most famous work is the black hole information paradox. If you know one thing about physics, therefore, that’s what you should know.

Before Hawking, black holes weren’t paradoxical. Yes, if you throw a book into a black hole you can’t read it anymore. That’s because what has crossed a black hole’s event horizon can no longer be reached from the outside. The event horizon is a closed surface inside of which everything, even light, is trapped. So there’s no way information can get out of the black hole; the book’s gone. That’s unfortunate, but nothing physicists sweat over. The information in the book might be out of sight, but nothing paradoxical about that.

Then came Stephen Hawking. In 1974, he showed that black holes emit radiation and this radiation doesn’t carry information. It’s entirely random, except for the distribution of particles as a function of energy, which is a Planck spectrum with temperature inversely proportional to the black hole’s mass. If the black hole emits particles, it loses mass, shrinks, and gets hotter. After enough time and enough emission, the black hole will be entirely gone, with no return of the information you put into it. The black hole has evaporated; the book can no longer be inside. So, where did the information go?
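
(For readers who like to see a formula: the temperature of this radiation is

    T_H = \frac{\hbar c^3}{8 \pi G k_B M} ,

where M is the black hole’s mass. For a black hole of one solar mass that comes out at roughly 60 nanokelvin – far colder than the cosmic microwave background.)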

You might shrug and say, “Well, it’s gone, so what? Don’t we lose information all the time?” No, we don’t. At least, not in principle. We lose information in practice all the time, yes. If you burn the book, you can no longer read what’s inside. However, fundamentally, all the information about what constituted the book is still contained in the smoke and ashes.

This is because the laws of nature, to our best current understanding, can be run both forwards and backwards – every unique initial-state corresponds to a unique end-state. There are never two initial-states that end in the same final state. The story of your burning book looks very different backwards. If you were able to very, very carefully assemble smoke and ashes in just the right way, you could unburn the book and reassemble it. It’s an exceedingly unlikely process, and you’ll never see it happening in practice. But, in principle, it could happen.
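
In the language of quantum mechanics, this reversibility is the statement that time evolution is unitary:

    |\psi_{\rm final}\rangle = U \, |\psi_{\rm initial}\rangle \,, \qquad U^\dagger U = 1 \quad\Rightarrow\quad |\psi_{\rm initial}\rangle = U^\dagger \, |\psi_{\rm final}\rangle \,.

Given the final state and the evolution operator, the initial state can – in principle – always be reconstructed.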

Not so with black holes. Whatever formed the black hole doesn't make a difference when you look at what you wind up with. In the end you only have this thermal radiation, which – in honor of its discoverer – is now called ‘Hawking radiation.’ That’s the paradox: Black hole evaporation is a process that cannot be run backwards. It is, as we say, not reversible. And that makes physicists sweat because it demonstrates they don’t understand the laws of nature.

Black hole information loss is paradoxical because it signals an internal inconsistency of our theories. When we combine – as Hawking did in his calculation – general relativity with the quantum field theories of the standard model, the result is no longer compatible with quantum theory. At a fundamental level, every interaction involving particle processes has to be reversible. Because of the non-reversibility of black hole evaporation, Hawking showed that the two theories don’t fit together.

The seemingly obvious origin of this contradiction is that the irreversible evaporation was derived without taking into account the quantum properties of space and time. For that, we would need a theory of quantum gravity, and we still don’t have one. Most physicists therefore believe that quantum gravity would remove the paradox – just how that works they still don’t know.

The difficulty with blaming quantum gravity, however, is that there isn’t anything interesting happening at the horizon – it's in a regime where general relativity should work just fine. That’s because the strength of quantum gravity should depend on the curvature of space-time, but the curvature at a black hole horizon depends inversely on the mass of the black hole. This means the larger the black hole, the smaller the expected quantum gravitational effects at the horizon.
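
Roughly speaking, the curvature radius at the horizon is set by the horizon radius itself,

    r_s = \frac{2 G M}{c^2} ,

so the heavier the black hole, the larger the horizon and the weaker the curvature there – and with it any quantum gravitational effect.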

Quantum gravitational effects would become noticeable only when the black hole has reached the Planck mass, about 10 micrograms. When the black hole has shrunken to that size, information could be released thanks to quantum gravity. But, depending on what the black hole formed from, an arbitrarily large amount of information might be stuck in the black hole until then. And when a Planck mass is all that’s left, it’s difficult to get so much information out with such little energy left to encode it.

For the last 40 years, some of the brightest minds on the planet have tried to solve this conundrum. It might seem bizarre that such an outlandish problem commands so much attention, but physicists have good reasons for this. The evaporation of black holes is the best-understood case for the interplay of quantum theory and gravity, and therefore might be the key to finding the right theory of quantum gravity. Solving the paradox would be a breakthrough and, without doubt, result in a conceptually new understanding of nature.

So far, most solution attempts for black hole information loss fall into one of four large categories, each of which has its pros and cons.

  • 1. Information is released early.

    The information starts leaking out long before the black hole has reached Planck mass. This is the presently most popular option. It is still unclear, however, how the information should be encoded in the radiation, and just how the conclusion of Hawking’s calculation is circumvented.

    The benefit of this solution is its compatibility with what we know about black hole thermodynamics. The disadvantage is that, for this to work, some kind of non-locality – a spooky action at a distance – seems inevitable. Worse still, it has recently been claimed that if information is released early, then black holes are surrounded by a highly-energetic barrier: a “firewall.” If a firewall exists, it would imply that the principle of equivalence, which underlies general relativity, is violated. Very unappealing.

  • 2. Information is kept, or it is released late.

    In this case, the information stays in the black hole until quantum gravitational effects become strong, when the black hole has reached the Planck mass. Information is then either released with the remaining energy or just kept forever in a remnant.

    The benefit of this option is that it does not require modifying either general relativity or quantum theory in regimes where we expect them to hold. They break down exactly where they are expected to break down: when space-time curvature becomes very large. The disadvantage is that some have argued it leads to another paradox: the possibility of infinitely producing black hole pairs in weak background fields – i.e., all around us. The theoretical support for this argument is thin, but it’s still widely used.

  • 3. Information is destroyed.

    Supporters of this approach just accept that information is lost when it falls into a black hole. This option was long believed to imply violations of energy conservation and hence cause another inconsistency. In recent years, however, new arguments have surfaced according to which energy might still be conserved with information loss, and this option has therefore seen a little revival. Still, by my estimate it’s the least popular solution.

    However, much like the first option, just saying that’s what one believes doesn’t make for a solution. And making this work would require a modification of quantum theory. This would have to be a modification that doesn’t lead to conflict with any of our experiments testing quantum mechanics. It’s hard to do.

  • 4. There’s no black hole.

    A black hole is never formed or information never crosses the horizon. This solution attempt pops up every now and then, but has never caught on. The advantage is that it’s obvious how to circumvent the conclusion of Hawking’s calculation. The downside is that this requires large deviations from general relativity in small curvature regimes, and it is therefore difficult to make compatible with precision tests of gravity.

There are a few other proposed solutions that don’t fall into any of these categories, but I will not – cannot! – attempt to review all of them here. In fact, there isn’t any good review on the topic – probably because the mere thought of compiling one is dreadful. The literature is vast. Black hole information loss is without doubt the most-debated paradox ever.

And it’s bound to remain so. The temperature of the black holes we can observe today is far too small for their radiation to be measurable. Hence, in the foreseeable future nobody is going to measure what happens to the information which crosses the horizon. Let me therefore make a prediction. Ten years from now, the problem will still be unsolved.

Hawking just celebrated his 75th birthday, which is a remarkable achievement in itself. 50 years ago, his doctors expected him to die soon, but he has stubbornly hung on to life. The black hole information paradox may prove to be even more stubborn. Unless a revolutionary breakthrough comes, it may outlive us all.

(I wish to apologize for not including references. If I’d start with this, I wouldn’t be done by 2020.)

[This post previously appeared on Starts With A Bang.]

Sunday, February 12, 2017

Away Note

I'm traveling next week and will be offline for some days. Blogging may be insubstantial, if existent, and comments may be stuck in the queue longer than usual. But I'm sure you'll survive without me ;)

And since you haven't seen the girls for a while, here is a recent photo. They'll be starting school this year in the fall and are very excited about it.

Thursday, February 09, 2017

New Data from the Early Universe Does Not Rule Out Holography

[img src: entdeckungen.net]
It’s string theorists’ most celebrated insight: The world is a hologram. Like everything else string theorists have come up with, it’s an untested hypothesis. But now, it’s been put to the test with a new analysis that compares a holographic early universe with its non-holographic counterpart.

Tl;dr: Results are inconclusive.

When string theorists say we live in a hologram, they don’t mean we are shadows in Plato’s cave. They mean their math says that all information about what’s inside a box can be encoded on the boundary of that box – albeit in entirely different form.

The holographic principle – if correct – means there are two different ways to describe the same reality. Unlike in Plato’s cave, however, where the shadows lack information about what caused them, with holography both descriptions are equally good.

Holography would imply that the three dimensions of space which we experience are merely one way to think of the world. If you can describe what happens in our universe by equations that use only two-dimensional surfaces, you might as well say we live in two dimensions – just that these are dimensions we don’t normally experience.

It’s a nice idea but hard to test. That’s because the two-dimensional interpretation of today’s universe isn’t normally very workable. Holography identifies two different theories with each other by a relation called “duality.” The two theories in question here are one for gravity in three dimensions of space, and a quantum field theory without gravity in one dimension less. However, whenever one of the theories is weakly coupled, the other one is strongly coupled – and computations in strongly coupled theories are hard, if not impossible.

The gravitational force in our universe is presently weakly coupled. For this reason General Relativity is the easier side of the duality. However, the situation might have been different in the early universe. Inflation – the rapid phase of expansion briefly after the big bang – is usually assumed to take place in gravity’s weakly coupled regime. But that might not be correct. If instead gravity at that early stage was strongly coupled, then a description in terms of a weakly coupled quantum field theory might be more appropriate.

This idea has been pursued by Kostas Skenderis and collaborators for several years. These researchers have developed a holographic model in which inflation is described by a lower-dimensional non-gravitational theory. In a recent paper, their predictions have been put to test with new data from the Planck mission, a high-precision measurement of the temperature fluctuations of the cosmic microwave background.


In this new study, the authors compare the way that holographic inflation and standard inflation in the concordance model – also known as ΛCDM – fit the data. The concordance model is described by six parameters. Holographic inflation has a closer connection to the underlying theory and so the power spectrum brings in one additional parameter, which makes a total of seven. After adjusting for the number of parameters, the authors find that the concordance model fits better to the data.

However, the biggest discrepancy between the predictions of holographic inflation and the concordance model arises at large scales, or low multipole moments respectively. In this regime, the predictions from holographic inflation cannot really be trusted. Therefore, the authors repeat the analysis with the low multipole moments omitted from the data. Then, the two models fit the data equally well. In some cases (depending on the choice of prior for one of the parameters) holographic inflation is indeed a better fit, but the difference is not statistically significant.
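
Just to illustrate the general logic of penalizing extra parameters – this is not the statistical analysis actually used in the paper – here is a minimal sketch using the Akaike information criterion, with made-up goodness-of-fit numbers:

    def aic(chi2, n_params):
        # Akaike information criterion: chi^2 plus a penalty of 2 per parameter.
        # Lower values indicate the preferred model.
        return chi2 + 2 * n_params

    # Placeholder values -- NOT the numbers from the Planck analysis.
    chi2_concordance = 1000.0   # concordance model (Lambda-CDM), 6 parameters
    chi2_holographic = 999.0    # holographic inflation, 7 parameters

    print("AIC concordance:", aic(chi2_concordance, 6))   # 1012.0
    print("AIC holographic:", aic(chi2_holographic, 7))   # 1013.0
    # The slightly better raw fit of the 7-parameter model does not make up
    # for the penalty of the extra parameter.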

To put this result into context it must be added that the best-understood cases of holography work in space-times with a negative cosmological constant, the Anti-de Sitter spaces. Our own universe, however, is not of this type. It has instead a positive cosmological constant, described by de-Sitter space. The use of the holographic principle in our universe is hence not strongly supported by string theory, at least not presently.

The model for holographic inflation can therefore best be understood as one that is motivated by, but not derived from, string theory. It is a phenomenological model, developed to quantify predictions and test them against data.

While the difference between the concordance model and holographic inflation which this study finds are insignificant, it is interesting that a prediction based on such an entirely different framework is able to fit the data at all. I should also add that there is a long-standing debate in the community as to whether the low multipole moments are well-described by the concordance model, or whether any of the large-scale anomalies are to be taken seriously.

In summary, I find this an interesting result because it’s an entirely different way to think of the early universe, and yet it describes the data. For the same reason, however, it’s also somewhat depressing. Clearly, we don’t presently have a good way to test all the many ideas that theorists have come up with.

Friday, February 03, 2017

Testing Quantum Foundations With Atomic Clocks

Funky clock at Aachen University.
Nobel laureate Steven Weinberg has recently drawn attention by disliking quantum mechanics. Besides an article for The New York Review of Books and a public lecture to bemoan how unsatisfactory the current situation is, he has, however, also written a technical paper:
    Lindblad Decoherence in Atomic Clocks
    Steven Weinberg
    Phys. Rev. A 94, 042117 (2016)
    arXiv:1610.02537 [quant-ph]
In this paper, Weinberg studies the use of atomic clocks for precision tests of quantum mechanics – specifically, to search for an unexpected, omnipresent type of decoherence.

Decoherence is the process that destroys quantum-ness. It happens constantly and everywhere. Each time a quantum state interacts with an environment – air, light, neutrinos, what have you – it becomes a little less quantum.

This type of decoherence explains why, in every-day life, we don’t see quantum-typical behavior, like cats being both dead and alive and similar nonsense. Trouble is, decoherence takes place only if you consider the environment a source of noise whose exact behavior is unknown. If you look at the combined system of the quantum state plus environment, that still doesn’t decohere. So how come on large scales our world is distinctly un-quantum?

It seems that, besides this usual decoherence, quantum mechanics must do something else to explain the measurement process. Decoherence merely converts a quantum state into a probabilistic (“mixed”) state. But upon measurement, this probabilistic state must suddenly change to reflect that, after observation, the state is in the measured configuration with 100% certainty. This update is also sometimes referred to as the “collapse” of the wave-function.

Whether or not decoherence solves the measurement problem then depends on your favorite interpretation of quantum mechanics. If you don’t think the wave-function, which describes the quantum state, is real but merely encodes information, then decoherence does the trick. If you do, in contrast, think the wave-function is real, then decoherence doesn’t help you understand what happens in a measurement because you still have to update probabilities.

That is so unless you are a fan of the many-worlds interpretation, which simply declares the problem nonexistent by postulating all possible measurement outcomes are equally real. It just so happens that we find ourselves in only one of these realities. I’m not a fan of many worlds because defining problems away rarely leads to progress. Weinberg finds all the many worlds “distasteful,” which also rarely leads to progress.

What would really solve the problem, however, is some type of fundamental decoherence, an actual collapse prescription basically. It’s not a particularly popular idea, but at least it is an idea, and it’s one that’s worth testing.
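
The standard way to parameterize such a fundamental decoherence – and the form Weinberg works with – is the Lindblad equation,

    \dot{\rho} = -\frac{i}{\hbar} [H, \rho] + \sum_n \left( L_n \rho L_n^\dagger - \frac{1}{2} \left\{ L_n^\dagger L_n , \rho \right\} \right) ,

where ρ is the density matrix that describes the (possibly mixed) quantum state and the operators L_n encode the decoherence. Setting all the L_n to zero gives back ordinary quantum mechanics.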

What has any of that to do with atomic clocks? Well, atomic clocks work thanks to quantum mechanics, and they work extremely precisely. And so, Weinberg’s idea is to use atomic clocks to look for evidence of fundamental decoherence.

An atomic clock converts the precise measurement of a frequency – the number of oscillations per unit of time – into a precise measurement of time. And that is where quantum mechanics comes in handy. A hundred years or so ago, physicists found that the energies of electrons which surround the atomic nucleus can take on only discrete values. This also means they can absorb and emit light only of energies that correspond to the differences between the discrete levels.

Now, as Einstein demonstrated with the photoelectric effect, the energy of light is proportional to its frequency. So, if you find light of a frequency that the atom can absorb, you must have hit one of the differences in energy levels. These differences in energy levels are (at moderate temperatures) properties of the atom and almost insensitive to external disturbances. That’s what makes atomic clocks tick so regularly.
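
In formulas, a transition between two energy levels separated by ΔE absorbs light of frequency

    \nu = \frac{\Delta E}{h} .

For the cesium-133 hyperfine transition used in standard atomic clocks, this frequency is 9,192,631,770 oscillations per second – which is, by definition, what a second is.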

So, it comes down to measuring atomic transition frequencies. Such a measurement works by tuning a laser until a cloud of atoms (usually Cesium or Rubidium) absorbs most of the light. The absorption indicates you have hit the transition frequency.

In modern atomic clocks, one employs a two-pulse scheme, known as the Ramsey method. A cloud of atoms is exposed to a first pulse, then left to drift for a second or so, and then comes a second pulse. After that, you measure how many atoms were affected by the pulses, and use a feedback loop to tune the frequency of the light to maximize the number of atoms. (Further reading: “Real Clock Tutorial” by Chad Orzel.)

If, however, between the two pulses some unexpected decoherence happens, then the frequency tuning doesn’t work as well as it does in normal quantum mechanics. And this, so goes Weinberg’s argument, would already have been noticed if such decoherence were relevant for atoms on the timescale of seconds. This way, he obtains constraints on fundamental decoherence. And, as a bonus, he proposes a new way of testing the foundations of quantum mechanics by use of the Ramsey method.
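
To get a feeling for how this works, here is a toy version of the Ramsey fringe signal – not Weinberg’s actual calculation – in which an extra decoherence with a hypothetical time constant tau washes out the contrast of the fringes:

    import numpy as np

    def ramsey_probability(detuning, T=1.0, tau=np.inf):
        # Toy model of the central Ramsey fringes: the probability to find an
        # atom in the excited state after the two pulses, as a function of the
        # laser's detuning from the transition frequency. T is the free
        # evolution time between the pulses; tau is a hypothetical decoherence
        # time. With tau = infinity one recovers ordinary quantum mechanics.
        contrast = np.exp(-T / tau)
        return 0.5 * (1.0 + contrast * np.cos(detuning * T))

    for d in np.linspace(-3.0, 3.0, 7):              # detunings in rad/s
        ideal = ramsey_probability(d)                # no extra decoherence
        damped = ramsey_probability(d, tau=2.0)      # hypothetical tau of 2 s
        print(f"detuning {d:+.1f}: ideal {ideal:.3f}, damped {damped:.3f}")

    # The smaller tau, the flatter the fringes, and the worse the feedback loop
    # can lock onto the transition frequency. That no such degradation is seen
    # is what constrains fundamental decoherence.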

It’s a neat idea. It strikes me as the kind of paper that comes about as spin-off when thinking about a problem. I find this an interesting work because my biggest frustration with quantum foundations is all the talk about what is or isn’t distasteful about this or that interpretation. For me, the real question is whether quantum mechanics – in whatever interpretation – is fundamental, or whether there is an underlying theory. And if so, how to test that.

As a phenomenologist, you won’t be surprised to hear that I think research on the foundations of quantum mechanics would benefit from more phenomenology. Or, in summary: A little less talk, a little more action please.

Wednesday, January 25, 2017

What is Physics? [Video]

I spent the holidays watching some tutorials for video animations and learned quite a few new things. The video below collects some exercise projects I did to answer a question Gloria asked the other day: “Was ist Phykik?” (What is phycics?). Embarrassingly, I doubt she learned more from my answer than how to correctly pronounce physics. It’s hard to explain stuff to 6-year-olds if you’re used to dealing with brainy adults.

Thanks to all the tutorials, however, I think this video is dramatically better than the previous one. There are still a number of issues I'm unhappy about, notably the timing, which I find hard to get right. Also, the lip-synching is poor. Not to mention that I still can’t draw and hence my cartoon child looks like a giant zombie hamster.

Listening to it again, the voiceover seems too fast to me and would have benefited from a few breaks. In summary, there’s room for improvement.

Complete transcript:

The other day, my daughter asked me “What is physics?”

She’s six. My husband and I, we’re both physicists. You’d think I had an answer. But best I managed was: Physics is what explains the very, very small and very, very large things.

There must be a better explanation, I said to myself.

The more I thought about it though, the more complicated it got. Physics isn’t only about small and large things. And nobody uses a definition to decide what belongs into the department of physics. Instead, it’s mostly history and culture that marks disciplinary boundaries. The best summary that came to my mind is “Physics is what physicists do.”

But then what do physicists do? Now that’s a question I can help you with.

First, let us see what is very small and very large.

An adult human has a size of about a meter. Add some zeros to that size and we have small planets like Earth, with a diameter of some ten thousand kilometers, and larger planets, like Saturn. Add some more zeros, and we get to solar systems, which are more conveniently measured with the time it takes light to travel through them, a few light-hours.

On even larger scales, we have galaxies, with typical sizes of a hundred-thousand light years, and galaxy clusters, and finally the whole visible universe, with an estimated size of 100 billion light years. Beyond that, there might be an even larger collection of universes which are constantly newly created by bubbling out of vacuum. It’s called the ‘multiverse’ but nobody knows if it’s real.

Physics, or more specifically cosmology, is the only discipline that currently studies what happens at such large scales. This remains so for galaxy clusters and galaxies and interstellar space, which fall into the area of astrophysics. There is an emerging field, called astrobiology, where scientists look for life elsewhere in the universe, but so far they don’t have much to study.

Once we get to the size of planets, however, much of what we observe is explained by research outside of physics. There is geology and atmospheric science and climate science. Then there are the social sciences and all the life sciences, biology and medicine and zoology and all that.

When we get to scales smaller than humans, at about a micrometer we have bacteria and cells. At a few nanometers, we have large molecular structures like our DNA, and then proteins and large molecules. Somewhere here, we cross over into the field of chemistry. If we get to even smaller scales, to the size of atoms of about an Angstrom, physics starts taking over again. First there is atomic physics, then there is nuclear physics, and then there is particle physics, which deals with quarks and electrons and photons and all that. Beyond that... nobody knows. But to the extent that it’s science at all, it’s safely in the hands of physicists.

If you go down 16 more orders of magnitude, you get to what is called the Planck length, at 10^-35 meters. That’s where quantum fluctuations of space-time become important and it might turn out elementary particles are made of strings or other strange things. But that too, is presently speculation.

One would need an enormously high energy to probe such short distances, much higher than what our particle accelerators can reach. Such energies, however, were reached at the big bang, when our universe started to expand. And so, if we look out to very, very large distances, we actually look back in time to high energies and very short distances. Particle physics and cosmology are therefore close together and not far apart.

Not everything in physics, however, is classified by distance scales. Rocks fall, water freezes, planes fly, and that’s physics too. There are two reasons for that.

First, gravity and electrodynamics are forces that span over all distance scales.

And second, the tools of physics can be used also for stuff composed of many small things that behave similarly, like solids, fluids, and gases. But really, it could be anything from a superconductor, to a gas of strings, to a fluid of galaxies. The behavior of such large numbers of similar objects is studied in fields like condensed matter physics, plasma physics, thermodynamics, and statistical mechanics.

That’s why there’s more physics in every-day life than what the breakdown by distance suggests. And that’s also why the behavior of stuff at large and small distances has many things in common. Indeed, the methods of physics can be, and have been, used to describe the growth of cities, bird flocking, or traffic flow. All of that is physics, too.

I still don’t have a good answer for what physics is. But next time I am asked, I have a video to show.

Thursday, January 19, 2017

Dark matter’s hideout just got smaller, thanks to supercomputers.

Lattice QCD. Artist’s impression.
Physicists know they are missing something. Evidence that something’s wrong has piled up for decades: Galaxies and galaxy clusters don’t behave like Einstein’s theory of general relativity predicts. The observed discrepancies can be explained either by modifying general relativity, or by the gravitational pull of some, so-far unknown type of “dark matter.”

Theoretical physicists have proposed many particles which could make up dark matter. The most popular candidates are a class called “Weakly Interacting Massive Particles” or WIMPs. They are popular because they appear in supersymmetric extensions of the standard model, and also because they have a mass and interaction strength in just the right ballpark for dark matter. There have been many experiments, however, trying to detect the elusive WIMPs, and one after the other reported negative results.

The second popular dark matter candidate is a particle called the “axion,” and the worse the situation looks for WIMPs the more popular axions are becoming. Like WIMPs, axions weren’t originally invented as dark matter candidates.

The strong nuclear force, described by Quantum ChromoDynamics (QCD), could violate a symmetry called “CP symmetry,” but it doesn’t. An interaction term that could give rise to this symmetry-violation therefore has a pre-factor – the “theta-parameter” (θ) – that is either zero or at least very, very small. That nobody knows just why the theta-parameter should be so small is known as the “strong CP problem.” It can be solved by promoting the theta-parameter to a field which relaxes to the minimum of a potential, thereby setting the coupling to the troublesome term to zero, an idea that dates back to Peccei and Quinn in 1977.
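
For the record, the term in question is, schematically,

    \mathcal{L}_\theta = \theta \, \frac{g^2}{32 \pi^2} \, G^a_{\mu\nu} \tilde{G}^{a\,\mu\nu} ,

where G is the gluon field strength and the tilde denotes its dual. Measurements of the neutron’s electric dipole moment tell us that θ must be smaller than about 10^-10.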

Much like the Higgs-field, the theta-field is then accompanied by a particle – the axion – as was pointed out by Steven Weinberg and Frank Wilczek in 1978.

The original axion was ruled out within a few years after being proposed. But theoretical physicists quickly put forward more complicated models for what they called the “hidden axion.” It’s a variant of the original axion that is more weakly interacting and hence more difficult to detect. Indeed it hasn’t been detected. But it also hasn’t been ruled out as a dark matter candidate.

Normally models with axions have two free parameters: one is the mass of the axion, the other one is called the axion decay constant (usually denoted f_a). But these two parameters aren’t actually independent of each other. The axion gets its mass by the breaking of a postulated new symmetry. A potential, generated by non-perturbative QCD effects, then determines the value of the mass.

If that sounds complicated, all you need to know about it to understand the following is that it’s indeed complicated. Non-perturbative QCD is hideously difficult. Consequently, nobody can calculate what the relation is between the axion mass and the decay constant. At least so far.

The potential which determines the particle’s mass depends on the temperature of the surrounding medium. This is generally the case, not only for the axion, it’s just a complication often omitted in the discussion of mass-generation by symmetry breaking. Using the potential, it can be shown that the mass of the axion is inversely proportional to the decay constant. The whole difficulty then lies in calculating the factor of proportionality, which is a complicated, temperature-dependent function, known as the topological susceptibility of the gluon field. So, if you could calculate the topological susceptibility, you’d know the relation between the axion mass and the coupling.
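
Written as an equation, the relation is

    m_a^2(T) \, f_a^2 = \chi(T) ,

where χ(T) is the topological susceptibility. Computing χ(T) at the temperatures relevant for the early universe is the hard part.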

This isn’t a calculation anybody presently knows how to do analytically because the strong interaction at low temperatures is, well, strong. The best chance is to do it numerically by putting the quarks on a simulated lattice and then sending the job to a supercomputer.

And even that wasn’t possible until now because the problem was too computationally intensive. But in a new paper, recently published in Nature, a group of researchers reports they have come up with a new method of simplifying the numerical calculation. This way, they succeeded in calculating the relation between the axion mass and the coupling constant.

    Calculation of the axion mass based on high-temperature lattice quantum chromodynamics
    S. Borsanyi et al
    Nature 539, 69–71 (2016)

(If you don’t have journal access, it’s not the exact same paper as this but pretty close).

This result is a great step forward in understanding the physics of the early universe. It’s a new relation which can now be included in cosmological models. As a consequence, I expect that the parameter-space in which the axion can hide will be much reduced in the coming months.

I also have to admit, however, that for a pen-on-paper physicist like me this work has a bittersweet aftertaste. It’s a remarkable achievement which wouldn’t have been possible without a clever formulation of the problem. But in the end, it’s progress fueled by technological power, by bigger and better computers. And maybe that’s where the future of our field lies, in finding better ways to feed problems to supercomputers.

Friday, January 13, 2017

What a burst! A fresh attempt to see space-time foam with gamma ray bursts.

It’s an old story: Quantum fluctuations of space-time might change the travel-time of light. Light of higher frequencies would be a little faster than that of lower frequencies. Or slower, depending on the sign of an unknown constant. Either way, the spectral colors of light would run apart, or ‘disperse’ as they say if they don’t want you to understand what they say.
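
In formulas, the simplest (linear) case is usually parameterized as

    v(E) \approx c \left( 1 \pm \frac{E}{E_{\rm QG}} \right) ,

where E_QG is the quantum gravity scale, expected to be somewhere near the Planck energy of about 10^19 GeV. Two photons with energy difference ΔE, emitted simultaneously from a source at distance D, then arrive with a relative delay of roughly

    \Delta t \approx \frac{D}{c} \, \frac{\Delta E}{E_{\rm QG}} ,

leaving aside the complications of cosmological expansion.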

Such quantum gravitational effects are minuscule, but added up over long distances they can become observable. Gamma ray bursts are therefore ideal to search for evidence of such an energy-dependent speed of light. Indeed, the energy-dependent speed of light has been sought and not been found, and that could have been the end of the story.

Of course it wasn’t because rather than giving up on the idea, the researchers who’d been working on it made their models for the spectral dispersion increasingly difficult and became more inventive when fitting them to unwilling data. Last thing I saw on the topic was a linear regression with multiple curves of freely chosen offset – sure way to fit any kind of data on straight lines of any slope – and various ad-hoc assumptions to discard data that just didn’t want to fit, such as energy cuts or changes in the slope.

These attempts were so desperate I didn’t even mention them previously because my grandma taught me if you have nothing nice to say, say nothing.

But here’s a new twist to the story, so now I have something to say, and something nice in addition.

On June 25, 2016, the Fermi Telescope recorded a truly remarkable burst. The event, GRB160625, lasted a total of 770 seconds and had three separate sub-bursts, with the second, and largest, sub-burst lasting 35 seconds (!). This has to be contrasted with the typical burst lasting a few seconds in total.

This gamma ray burst for the first time allowed researchers to clearly quantify the relative delay of the different energy channels. The analysis can be found in this paper
    A New Test of Lorentz Invariance Violation: the Spectral Lag Transition of GRB 160625B
    Jun-Jie Wei, Bin-Bin Zhang, Lang Shao, Xue-Feng Wu, Peter Mészáros
    arXiv:1612.09425 [astro-ph.HE]

Unlike type Ia supernovae, which have very regular profiles, gamma ray bursts are one of a kind and they can therefore be compared only to themselves. This makes it very difficult to tell whether or not highly energetic parts of the emission are systematically delayed because one doesn’t know when they were emitted. Until now, the analysis relied on some way of guessing the peaks in three different energy channels and (basically) assuming they were emitted simultaneously. This procedure sometimes relied on as little as one or two photons per peak. Not an analysis you should put a lot of trust in.

But the second sub-burst of GRB160625 was so bright, the researchers could break it down into 38 energy channels – and the counts were still high enough to calculate the cross-correlation from which the (most likely) time-lag can be extracted.

Here are the 38 energy channels for the second sub-burst

Fig 1 from arXiv:1612.09425


For the 38 energy channels they calculate 37 delay-times relative to the lowest energy channel, shown in the figure below. I find it a somewhat confusing convention, but in their nomenclature a positive time-lag corresponds to an earlier arrival time. The figure therefore shows that the photons of higher energy arrive earlier. The trend, however, isn’t monotonically increasing. Instead, it turns around at a few GeV.

Fig 2 from arXiv:1612.09425


The authors then discuss a simple model to fit the data. First, they assume that the emission has an intrinsic energy-dependence due to astrophysical effects which cause a positive lag. They model this with a power-law that has two free parameters: an exponent and an overall pre-factor.

Second, they assume that the effect during propagation – presumably from the space-time foam – causes a negative lag. For the propagation-delay they also make a power-law ansatz which is either linear or quadratic. This ansatz has one free parameter which is an energy scale (expected to be somewhere at the Planck energy).

In total they then have three free parameters, for which they calculate the best-fit values. The fitted curves are also shown in the image above, labeled n=1 (linear) and n=2 (quadratic). At some energy, the propagation-delay becomes more relevant than the intrinsic delay, which leads to the turn-around of the curve.
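
Schematically, the fitted time-lag as a function of energy has the form

    \Delta t(E) = \tau \left[ \left( \frac{E}{\rm keV} \right)^{\alpha} - \left( \frac{E_0}{\rm keV} \right)^{\alpha} \right] - \kappa \, \frac{E^{\,n} - E_0^{\,n}}{E_{\rm QG}^{\,n}} , \qquad n = 1, 2 ,

where E_0 is the energy of the reference channel, the first term is the intrinsic astrophysical lag with free parameters τ and α, and the second term is the propagation effect with free parameter E_QG. The pre-factor κ is not free – it contains the distance to the source by way of an integral over the cosmological expansion. This is my paraphrase; see the paper for the exact expressions.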

The best-fit value of the quantum gravity energy is 10^q GeV with q=15.66 for the linear and q=7.17 for the quadratic case. From this they extract a lower limit on the quantum gravity scale at the 1 sigma confidence level, which is 0.5 × 10^16 GeV for the linear and 1.4 × 10^7 GeV for the quadratic case. As you can see in the above figure, the data in the high energy bins has large error-bars owing to the low total count, so the evidence that there even is a drop isn’t all that great.

I still don’t buy there’s some evidence for space-time foam to find here, but I have to admit that this data finally convinces me that at least there is a systematic lag in the spectrum. That’s the nice thing I have to say.

Now to the not-so-nice. If you want to convince me that some part of the spectral distortion is due to a propagation-effect, you’ll have to show me evidence that its strength depends on the distance to the source. That is, in my opinion, the only way to make sure one doesn’t merely look at delays present already at emission. And even if you’d done that, I still wouldn’t be convinced that it has anything to do with space-time foam.

I’m skeptical of this because the theoretical backing is sketchy. Quantum fluctuations of space-time in any candidate-theory for quantum gravity do not lead to this effect. One can work with phenomenological models, in which such effects are parameterized and incorporated as new physics into the known theories. This is all well and fine. Unfortunately, in this case existing data already constrains the parameters so that the effect on the propagation of light is unmeasurably small. It’s already ruled out. Such models introduce a preferred frame and break Lorentz-invariance and there is loads of data speaking against it.

It has been claimed that the already existing constraints from Lorentz-invariance violation can be circumvented if Lorentz-invariance is not broken but instead deformed. In this case the effective field theory limit supposedly doesn’t apply. This claim is also quoted in the paper above (see end of section 3.) However, if you look at the references in question, you will not find any argument for how one manages to avoid this. Even if one can make such an argument though (I believe it’s possible, not sure why it hasn’t been done), the idea suffers from various other theoretical problems that, to make a very long story very short, make me think the quantum gravity-induced spectral lag is highly implausible.

However, leaving aside my theory-bias, this newly proposed model with two overlaid sources for the energy-dependent time-lag is simple and should be straight-forward to test. Most likely we will soon see another paper evaluating how well the model fits other bursts on record. So stay tuned, something’s happening here.

Sunday, January 08, 2017

Stephen Hawking turns 75. Congratulations! Here’s what to celebrate.

If people know anything about physics, it’s the guy in a wheelchair who speaks with a computer. Google “most famous scientist alive” and the answer is “Stephen Hawking.” But if you ask a physicist, what exactly is he famous for?

Hawking became “officially famous” with his 1988 book “A Brief History of Time.” Among physicists, however, he’s more renowned for the singularity theorems. In his 1960s work together with Roger Penrose, Hawking proved that singularities form under quite general conditions in General Relativity, and they developed a mathematical framework to determine when these conditions are met.

Before Hawking and Penrose’s work, physicists had hoped that the singularities which appeared in certain solutions to General Relativity were mathematical curiosities of little relevance for physical reality. But the two showed that this was not so, that, to the very contrary, it’s hard to avoid singularities in General Relativity.

Since this work, the singularities in General Relativity are understood to signal the breakdown of the theory in regions of high energy-densities. In 1973, together with George Ellis, Hawking published the book “The Large Scale Structure of Space-Time” in which this mathematical treatment is laid out in detail. Still today it’s one of the most relevant references in the field.

Only a year later, in 1974, Hawking published a seminal paper in which he demonstrated that black holes give off thermal radiation, now referred to as “Hawking radiation.” This evaporation of black holes results in the black hole information loss paradox which is still unsolved today. Hawking’s work demonstrated clearly that the combination of General Relativity with the quantum field theories of the standard model spells trouble. Like the singularity theorems, it’s a result that doesn’t merely indicate, but proves, that we need a theory of quantum gravity in order to consistently describe nature.

While the 1974 paper was predated by Bekenstein’s finding that black holes resemble thermodynamical systems, Hawking’s derivation was the starting point for countless later revelations. Thanks to it, physicists understand today that black holes are a melting pot for many different fields of physics – besides general relativity and quantum field theory, there is thermodynamics and statistical mechanics, and quantum information and quantum gravity. Let’s not forget astrophysics, and also mix in a good dose of philosophy. In 2017, “black hole physics” could be a subdiscipline in its own right – and maybe it should be. We owe much of this to Stephen Hawking.

In the 1980s, Hawking worked with Jim Hartle on the no-boundary proposal according to which our universe started in a time-less state. It’s an appealing idea whose time hasn’t yet come, but I believe this might change within the next decade or so.

After this, Hawking tried several times to solve the riddle of black hole information loss that he himself had posed, most recently in early 2016. While his more recent work has been met with interest in the community, it hasn’t been hugely impactful – it attracts significantly more attention from journalists than from physicists.

As a physicist myself, I frequently get questions about Stephen Hawking: “What’s he doing these days?” – I don’t know. “Have you ever met him?” – He slept right through it. “Do you also work on the stuff that he works on?” – I try to avoid it. “Will he win a Nobel Prize?” – Ah. Good question.

Hawking’s shot at the Nobel Prize is Hawking radiation. The astrophysical black holes which we can presently observe have a temperature way too small to be measured in the foreseeable future. But since the temperature increases for smaller mass, lighter black holes are hotter, and could allow us to measure Hawking radiation.

Black holes of sufficiently small masses could have formed from density fluctuations in the early universe and are therefore referred to as “primordial black holes.” However, none of them have been seen, and we have tight observational constraints on their existence from a variety of data. It isn’t yet entirely excluded that they are around, but I consider it extremely unlikely that we’ll observe one of these within my lifetime.

As far as the Nobel is concerned, this leaves Hawking radiation in gravitational analogues. In this case, one uses a fluid to mimic a curved space-time background. The mathematical formulation of this system is (in certain approximations) identical to that of an actual black hole, and consequently the gravitational analogues should also emit Hawking radiation. Indeed, Jeff Steinhauer claims that he has measured this radiation.

At the time of writing, it’s still somewhat controversial whether Steinhauer has measured what he thinks he has. But I have little doubt that sooner or later this will be settled – the math is clear: The radiation should be there. It might take some more experimental tinkering, but I’m confident sooner or later it’ll be measured.

Sometimes I hear people complain: “But it’s only an analogy.” I don’t understand this objection. Mathematically it’s the same. That in the one case the background is an actually curved space-time and in the other case it’s an effectively curved space-time created by a flowing fluid doesn’t matter for the calculation. In either situation, measuring the radiation would demonstrate the effect is real.

However, I don’t think that measuring Hawking radiation in an analogue gravity system would be sufficient to convince the Nobel committee Hawking deserves the prize. For that, the finding would have to have important implications beyond confirming a 40-year-old computation.

One way this could happen, for example, would be if the properties of such condensed matter systems could be exploited as quantum computers. This isn’t as crazy as it sounds. Thanks to work built on Hawking’s 1974 paper we know that black holes are both extremely good at storing information and extremely efficient at distributing it. If that could be exploited in quantum computing based on gravitational analogues, then I think Hawking would be in line for a Nobel. But that’s a big “if.” So don’t bet on it.

Besides his scientific work, Hawking has been and still is a master of science communication. In 1988, “A Brief History of Time” was a daring book about abstract ideas in a fringe area of theoretical physics. Hawking, to everybody’s surprise, proved that the public has an interest in esoteric problems like what happens if you fall into a black hole, what happened at the Big Bang, or whether god had any choice when he created the laws of nature.

Since 1988, the popular science landscape has changed dramatically. There are more books about theoretical physics than ever before and they are more widely read than ever before. I believe that Stephen Hawking played a big role in encouraging other scientists to write about their own research for the public. It certainly was an inspiration for me.

So, Happy Birthday, Stephen, and thank you.

Tuesday, January 03, 2017

The Bullet Cluster as Evidence against Dark Matter

Once upon a time, at the far end of the universe, two galaxy clusters collided. Their head-on encounter tore apart the galaxies and left behind two reconfigured heaps of stars and gas, separating again and moving apart from each other, destiny unknown.

Four billion years later, a curious group of water-based humanoid life-forms tries to make sense of the galaxies’ collision. They point their telescopes at the clusters’ relics and admire their odd shape. They call it the “Bullet Cluster.”

In the below image of the Bullet Cluster you see three types of data overlaid. First, there are the stars and galaxies in the optical regime. (Can you spot the two foreground objects?) Then there are the regions colored red which show the distribution of hot gas, inferred from X-ray measurements. And the blue-colored regions show the space-time curvature, inferred from gravitational lensing which deforms the shape of galaxies behind the cluster.

The Bullet Cluster.
[Img Src: APOD. Credits: NASA]


The Bullet Cluster comes to play an important role in the humanoids’ understanding of the universe. Already a generation earlier, they had noticed that their explanation for the gravitational pull of matter did not match observations. The outer stars of many galaxies, they saw, moved faster than expected, meaning that the gravitational pull was stronger than what their theories could account for. Galaxies which combined in clusters, too, were moving too fast, indicating more pull than expected. The humanoids concluded that their theory, according to which gravity was due to space-time curvature, had to be modified.

Some of them, however, argued it wasn’t gravity they had gotten wrong. They thought there was instead an additional type of unseen “dark matter” that interacted so weakly it wouldn’t have any consequences besides the additional gravitational pull. They even tried to catch the elusive particles, but without success. Experiment after experiment reported null results. Decades passed. And yet, they claimed, the dark matter particles might just be even more weakly interacting. They built larger experiments to catch them.

Dark matter was a convenient invention. It could be distributed in just the right amounts wherever necessary and that way the data of every galaxy and galaxy cluster could be custom-fit. But while dark matter worked well to fit the data, it failed to explain how regular the modification of the gravitational pull seemed to be. On the other hand, a modification of gravity was difficult to work with, especially for handling the dynamics of the early universe, which was much easier to explain with particle dark matter.

To move on, the curious scientists had to tell apart their two hypotheses: Modified gravity or particle dark matter? They needed an observation able to rule out one of these ideas, a smoking gun signal – the Bullet Cluster.

The theory of particle dark matter had become known as the “concordance model” (also: ΛCDM). It heavily relied on computer simulations which were optimized so as to match the observed structures in the universe. From these simulations, the scientists could tell the frequency by which galaxy clusters should collide and the typical relative speed at which that should happen.

From the X-ray observations, the scientists inferred that the collision of the two clusters must have taken place at a relative speed of approximately 3000 km/s. But such high collision speeds almost never occurred in the computer simulations based on particle dark matter. The scientists estimated the probability for a Bullet-Cluster-like collision to be about one in ten billion, and concluded that seeing such a collision at all is incompatible with the concordance model. And that’s how the Bullet Cluster became strong evidence in favor of modified gravity.

However, a few years later some inventive humanoids had optimized the dark-matter-based computer simulations and arrived at a more optimistic estimate: a probability of 4.6×10⁻⁴ for seeing something like the Bullet Cluster. Shortly afterwards they revised the probability again, to 6.4×10⁻⁶.
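To get a feeling for what such per-pair probabilities mean, here is a toy calculation. Only the probabilities are the ones quoted above; the number of well-observed massive cluster pairs is a made-up placeholder, so treat the output as an illustration of the logic, not as a result:

```python
# Toy calculation: chance of catching at least one Bullet-Cluster-like collision,
# given a per-pair probability p from dark-matter simulations and N observed pairs.
# N is a made-up placeholder; only the p values are the ones quoted in the text.

def p_at_least_one(p_per_pair, n_pairs):
    """Probability of >= 1 Bullet-like system among n_pairs independent cluster pairs."""
    return 1.0 - (1.0 - p_per_pair) ** n_pairs

N = 100  # hypothetical number of well-observed massive cluster pairs

for p in (1e-10, 4.6e-4, 6.4e-6):
    print(f"p per pair = {p:.1e}  ->  P(>=1 in {N} pairs) = {p_at_least_one(p, N):.2e}")
```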

Either way, the Bullet Cluster remained a stunningly unlikely event to happen in the theory of particle dark matter. It was, in contrast, easy to accommodate in theories of modified gravity, in which collisions with high relative velocity occur much more frequently.

It might sound like a story from a parallel universe – but it’s true. The Bullet Cluster isn’t the incontrovertible evidence for particle dark matter that you have been told it is. It’s possible to explain the Bullet Cluster with models of modified gravity. And it’s difficult to explain it with particle dark matter.

How come we so rarely read about the difficulties the Bullet Cluster poses for particle dark matter? It’s because the pop-sci media likes nothing better than a simple explanation that comes with an image that has “scientific consensus” written all over it. Isn’t it obvious the visible stuff is separated from the center of the gravitational pull?

But modifying gravity works by introducing additional fields that are coupled to gravity. There’s no reason that, in a dynamical system, these fields have to be focused at the same place where the normal matter is. Indeed, one would expect that modified gravity too should have a path dependence that leads to such a delocalization as is observed in this, and other, cluster collisions. And never mind that when they pointed at the image of the Bullet Cluster nobody told you how rarely such an event occurs in models with particle dark matter.

No, the real challenge for modified gravity isn’t the Bullet Cluster. The real challenge is to get the early universe right, to explain the particle abundances and the temperature fluctuations in the cosmic microwave background. The Bullet Cluster is merely a red-blue herring that circulates on social media as a shut-up argument. It’s a simple explanation. But simple explanations are almost always wrong.

Monday, January 02, 2017

How to use an "argument from authority"

I spent the holidays playing with video animation software. As a side-effect, I produced this little video.



If you'd rather read than listen, here's the complete voiceover:

It has become a popular defense of science deniers to yell “argument from authority” when someone quotes an expert’s opinion. Unfortunately, the argument from authority is often used incorrectly.

What is an “argument from authority”?

An “argument from authority” is a conclusion drawn not by evaluating the evidence itself, but by evaluating an opinion about that evidence. It is also sometimes called an “appeal to authority”.

Consider Bob. Bob wants to know what follows from A. To find out, he has a bag full of knowledge. The perfect argument would be if Bob starts with A and then uses his knowledge to get to B to C to D and so on until he arrives at Z. But reality is never perfect.

Let’s say Bob wants to know what’s the logarithm of 350,000. In reality he can’t find anything useful in his bag of knowledge to answer that question. So instead he calls his friend, the Pope. The Pope says “The log is 4.8.” So, Bob concludes, the log of 350,000 is 4.8 because the Pope said so.

That’s an argument from authority – and you have good reasons to question its validity.

But unlike other logical fallacies, an argument from authority isn’t necessarily wrong. It’s just that, without further information about the authority that has been consulted, you don’t know how good the argument is.

Suppose Bob hadn’t asked the Pope what’s the log of 350,000 but had instead asked his calculator. The calculator says it’s approximately 5.544.
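If you want to double-check the calculator with yet another calculator, the arithmetic from the example is a one-liner:

```python
import math

# The number from Bob's example: the base-10 logarithm of 350,000.
print(math.log10(350_000))  # ~5.544, the calculator's answer (not the Pope's 4.8)
```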

We don’t usually call this an argument from authority. But in terms of knowledge evaluation it’s the same logical structure as exporting an opinion to a trusted friend. It’s just that in this case the authority is your calculator and it’s widely known to be an expert in calculation. Indeed, it’s known to be pretty much infallible.

You believe that your friend the calculator is correct not because you’ve tried to verify every result it comes up with. You believe it’s correct because you trust all the engineers and scientists who have produced it and who also use calculators themselves.

Indeed, most of us would probably trust a calculator more than our own calculations, or that of the Pope. And there is a good reason for that – we have a lot of prior knowledge about whose opinion on this matter is reliable. And that is also relevant knowledge.

Therefore, an argument from authority can be better than an argument lacking authority if you take into account evidence for the authority’s expertise in the subject area.

Logical fallacies were widely used by the Greeks in their philosophical discourse. They were discussing problems like “Can a circle be squared?” But many of today’s problems are of an entirely different kind, and the Greek rules aren’t always helpful.

The problems we face today can be extremely complex, like the question “What’s the origin of climate change?” “Is it a good idea to kill off mosquitoes to eradicate malaria?” or “Is dark matter made of particles?” Most of us simply don’t have all the necessary evidence and knowledge to arrive at a conclusion. We also often don’t have the time to collect the necessary evidence and knowledge.

And when a primary evaluation isn’t possible, the smart thing to do is a secondary evaluation. For this, you don’t try to answer the question itself, but you try to answer the question “Where do I best get an answer to this question?” That is, you ask an authority.

We do this all the time: You see a doctor to have him check out that strange rash. You ask your mother how to stuff the turkey. And when the repairman says your car needs a new crankshaft sensor, you don’t yell “argument from authority.” And you shouldn’t, because you’ve smartly exported your primary evaluation of evidence to a secondary system that, you are quite confident, will actually evaluate the evidence *better* than you yourself could do.

But… the secondary evidence you need is how knowledgeable the authority is on the topic in question. The more trustworthy the authority, the more reliable the information.

This also means that if you reject an argument from authority you claim that the authority isn’t trustworthy. You can do that. But here is where things most often go wrong.

The person who doesn’t want to accept the opinion of scientific experts implicitly claims that their own knowledge is more trustworthy. Without explicitly saying so, they claim that science doesn’t work, or that certain experts cannot be trusted – and that they themselves can do better. That is a claim which can be made. But science has an extremely good track record in producing correct conclusions. Claiming that it’s faulty therefore carries a heavy burden of proof.

So. To correctly reject an argument from authority, you have to explain why the authority’s knowledge is not trustworthy on the question under consideration.

But what should you do if someone dismisses scientific findings by claiming an argument from authority?

I think we should have a name for such a mistaken use of the term argument from authority. We could call it the fallacy of the “omitted knowledge prior.” This means it’s a mistake not to take into account evidence for the reliability of knowledge, including one’s own knowledge. You, your calculator, and the Pope aren’t equally reliable when it comes to evaluating logarithms. And that counts for something.
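For the statistically minded, the “knowledge prior” can be made explicit with a minimal Bayes update. This is only a sketch; the reliability numbers below are invented for illustration and are not measured properties of anyone or anything:

```python
# Minimal sketch of the "knowledge prior": how much to believe a claim depends
# on who asserts it. All reliability numbers below are invented for illustration.

def posterior(prior, reliability):
    """P(claim true | source asserts it), for a source that asserts true claims
    with probability `reliability` and false claims with probability 1 - reliability."""
    p_asserted = reliability * prior + (1.0 - reliability) * (1.0 - prior)
    return reliability * prior / p_asserted

prior = 0.2  # your credence in the stated value before asking anyone

for source, reliability in [("a random guess", 0.5),
                            ("the Pope on logarithms", 0.5),
                            ("a pocket calculator", 0.9999)]:
    print(f"{source:>25}: P(correct | asserted) = {posterior(prior, reliability):.4f}")
```

With a flat-ish prior, your updated belief is essentially set by the source’s reliability, which is the whole point of taking that prior knowledge into account.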

Sunday, January 01, 2017

The 2017 Edge Annual Question: Which Scientific Term or Concept Ought To Be More Widely Known?

My first thought when I heard the 2017 Edge Annual Question was “Wasn’t that last year's question?” It wasn’t. But it’s almost identical to the 2011 question, “What scientific concept would improve everybody’s cognitive toolkit.” That’s ok, I guess, the internet has an estimated memory of 2 days, so after 5 years it’s reasonable to assume nobody will remember their improved toolkit.

After that first thought, the reply that came to my mind was “Effective Field Theory,” immediately followed by “But Sean Carroll will cover that.” He didn’t, he went instead for “Bayes's Theorem.” But Lisa Randall went for “Effective Theory.”

I then considered, in that order, “Free Will,” “Emergence,” and “Determinism,” only to discard them again because each of these would have required me to first explain effective field theory. You find “Emergence” explained by Garrett Lisi, and determinism and free will (or its absence, respectively) are taken on by Jerry A. Coyne, whom I don’t know, but whose essay I entirely agree with. My argument would have been almost identical; you can read my blogpost about free will here.

Next I reasoned that this question calls for a broader answer, so I thought of “uncertainty” and then science itself, but decided that had been said often enough. Lawrence Krauss went for uncertainty. You find Scientific Realism represented by Rebecca Newberger Goldstein, and the scientist by Stuart Firestein.

I then briefly considered social and cognitive biases, but was pretty convinced these would be well-represented by people who know more about sociology than me. Then I despaired for a bit over my unoriginality.

Back to my own terrain, I decided the one thing that everybody should know about physics is the principle of least action. The name hides its broader implications though, so I instead went for “Optimization.” A good move, because Janna Levin went for “The Principle of Least Action.”

I haven’t read all essays, but it’ll be a nice way to start the new year by browsing them. Happy New Year everybody!

Sunday, December 25, 2016

Physics is good for your health

Book sandwich
[Img src: strangehistory.net]
Yes, physics is good for your health. And that’s not only because it’s good to know that peeing on high power lines is a bad idea. It’s also because, if they wheel you to the hospital, physics is your best friend. Without physics, there’d be no X-rays and no magnetic resonance imaging. There’d be no ultrasound and no spectroscopy, no optical fiber imaging and no laser surgery. There wouldn’t even be centrifuges.

But physics is good for your health in another way – as the resort of sanity.

Human society may have entered a post-factual era, but the laws of nature don’t give a shit. Planet Earth is a crazy place, full of crazy people, getting crazier by the minute. But the universe still expands, atoms still decay, electric currents still take the path of least resistance. Electrons don’t care if you believe in them and supernovae don’t want your money. And that’s the beauty of knowledge discovery: It’s always waiting for you. Stupid policy decisions can limit our collective benefit from science, but the individual benefit is up to each of us.

In recent years I’ve found it impossible to escape the “mindfulness” movement. Its followers preach that focusing on the present moment will ease your mental tension. I don’t know about you, but most days focusing on the present moment is the last thing I want. I’ve taken a lot of breaths and most of them were pretty unremarkable – I’d much rather think about something more interesting.

And physics is there for you: Find peace of mind in Hubble images of young nebulae or galaxy clusters billions of light years away. Gauge the importance of human affairs by contemplating the enormous energies released in black hole mergers. Remember how lucky we are that our planet is warmed but not roasted by the Sun, then watch some videos of recent solar eruptions. Reflect on the long history of our own galaxy, seeded by tiny density fluctuations whose imprint we still see today in the cosmic microwave background.

Or stretch your imagination and try to figure out what happens when you fall into a black hole, catch light like Einstein, or meditate over the big questions: Does time exist? Is the future determined? What, if anything, happened before the big bang? And if there are infinitely many copies of you in the multiverse, does that mean you are immortal?

This isn’t to say the here and now doesn’t matter. But if you need to recharge, physics can be a welcome break from human insanity.

And if everything else fails, there’s always the 2nd law of thermodynamics to remind us: All this will pass.

Wednesday, December 21, 2016

Reasoning in Physics

I’m just back from a workshop about “Reasoning in Physics” at the Center for Advanced Studies in Munich. I went because it seemed a good idea to improve my reasoning, but as I sat there, something entirely different was on my mind: How did I get there? How did I, with my avowed dislike of all things -ism and -ology, end up in a room full of philosophers, people who weren’t discussing physics, but the philosophical underpinning of physicists’ arguments? Or, as it were, the absence of such underpinnings.

The straightforward answer is that they invited me, or invited me back, I should say, since this was my third time visiting the Munich philosophers. Indeed, they invited me to stay somewhat longer for a collaborative project, but I’ve successfully blamed the kids for my inability to reply with either yes or no.

So I sat there, in one of these awkwardly quiet rooms where everyone will hear your stomach gargle, trying to will my stomach not to gargle and instead listen to the first talk. It was Jeremy Butterfield, speaking about a paper which I commented on here. Butterfield has been praised to me as one of the four good physics philosophers, but I’d never met him. The praise was deserved – he turned out to be very insightful and, dare I say, reasonable.

The talks of the first day focused on multiple multiverse measures (meta meta), inflation (still eternal), Bayesian inference (a priori plausible), anthropic reasoning (as observed), and arguments from mediocrity and typicality which were typically mediocre. Among other things, I noticed with consternation that the doomsday argument is still being discussed in certain circles. This consterns me because, as I explained a decade ago, it’s an unsound abuse of probability calculus. You can’t randomly distribute events that are causally related. It’s mathematical nonsense, end of story. But it’s hard to kill a story if people have fun discussing it. Should “constern” be a verb? Discuss.

In a talk by Mathias Frisch I learned of a claim by Huw Price that time-symmetry in quantum mechanics implies retro-causality. It seems the kind of thing that I should have known about but didn’t, so I put the paper on the reading list and hope that next week I’ll have read it last year.

The next day started with two talks about analogue systems of which I missed one because I went running in the morning without my glasses and, well, you know what they say about women and their orientation skills. But since analogue gravity is a topic I’ve been working on for a couple of years now, I’ve had some time to collect thoughts about it.

Analogue systems are physical systems whose observables can, in a mathematically precise way, be mapped to – usually very different – observables of another system. The best known example is sound waves in certain kinds of fluids, which behave exactly like light does in the vicinity of a black hole. The philosophers presented a logical scheme to transfer knowledge gained from observational tests of one system to the other system. But to me analogue systems are much more than a new way to test hypotheses. They’re fundamentally redefining what physicists mean by doing science.

Presently we develop a theory, express it in mathematical language, and compare the theory’s predictions with data. But if you can directly test whether observations on one system correctly correspond to that of another, why bother with a theory that predicts either? All you need is the map between the systems. This isn’t a speculation – it’s what physicists already do with quantum simulations: They specifically design one system to learn how another, entirely different system, will behave. This is usually done to circumvent mathematically intractable problems, but in extrapolation it might just make theories and theorists superfluous.

Then followed a very interesting talk by Peter Mattig, who reported from the DFG research program “Epistemology of the LHC.” They have, now for the third time, surveyed both theoretical and experimental particle physicists to track researchers’ attitudes to physics beyond the standard model. The survey results, however, will only get published in January, so I presently can’t tell you more than that. But once the paper is available you’ll read about it on this blog.

The next talk was by Radin Dardashti who warned us ahead that he’d be speaking about work in progress. I very much liked Radin’s talk at last year’s workshop, and this one didn’t disappoint either. In his new work, he is trying to make precise the notion of “theory space” (in the general sense, not restricted to qfts).

I think it’s a brilliant idea because there are many things that we know about theories but that aren’t about any particular theory, ie we know something about theory space, but we never formalize this knowledge. The most obvious example may be that theories in physics tend to be nice and smooth and well-behaved. They can be extrapolated. They have differentiable potentials. They can be expanded. There isn’t a priori any reason why that should be so; it’s just a lesson we have learned through history. I believe that quantifying meta-theoretical knowledge like this could play a useful role in theory development. I also believe Radin has a bright future ahead.

The final session on Tuesday afternoon was the most physicsy one.

My own talk about the role of arguments from naturalness was followed by a rather puzzling contribution by two young philosophers. They claimed that quantum gravity doesn’t have to be UV-complete, that is, it doesn’t have to be a theory that remains consistent up to arbitrarily high energies.

It’s right of course that quantum gravity doesn’t have to be UV-complete, but it’s kinda like saying a plane doesn’t have to fly. If you don’t mind driving, then why put wings on it? If you don’t mind UV-incompleteness, then why quantize gravity?

This isn’t to say that there’s no use in thinking about approximations to quantum gravity which aren’t UV-complete and, in particular, trying to find ways to test them. But these are means to an end, and the end is still UV-completion. Now we can discuss whether it’s a good idea to start with the end rather than the means, but that’s a different story and shall be told another time.

I think this talk confused me because the argument wasn’t wrong, but for a practicing researcher in the field the consideration is remarkably irrelevant. Our first concern is to find a promising problem to work on, and that the combination of quantum field theory and general relativity isn’t UV complete is the most promising problem I know of.

The last talk was by Michael Krämer about recent developments in modelling particle dark matter. In astrophysics – like in particle-physics – the trend is to go away from top-down models and work with slimmer “simplified” models. I think it’s a good trend because the top-down constructions didn’t lead us anywhere. But removing the top-down guidance must be accompanied by new criteria, some new principle of non-empirical theory-selection, which I’m still waiting to see. Otherwise we’ll just endlessly produce models of questionable relevance.

I’m not sure whether a few days with a group of philosophers have improved my reasoning – be my judge. But the workshop helped me see the reason I’ve recently drifted towards philosophy: I’m frustrated by the lack of self-reflection among theoretical physicists. In the foundations of physics, everybody’s running at high speed without getting anywhere, and yet they never stop to ask what might possibly be going wrong. Indeed, most of them will insist nothing’s wrong to begin with. The philosophers are offering the conceptual clarity that I find missing in my own field.

I guess I’ll be back.

Monday, December 19, 2016

Book Review, “Why Quark Rhymes With Pork” by David Mermin

Why Quark Rhymes with Pork: And Other Scientific Diversions
By N. David Mermin
Cambridge University Press (January 2016)

The content of many non-fiction books can be summarized as “the blurb spread thinly,” but that’s an accusation which cannot be leveled at David Mermin’s new essay collection Why Quark Rhymes With Pork. The best summary I could therefore come up with is “things David Mermin is interested in,” or at least was interested in some time during the last 30 years.

This isn’t as undescriptive as it seems. Mermin is Horace White Professor of Physics Emeritus at Cornell University, and a well-known US-American condensed matter physicist, active in science communication, famous for his dissatisfaction with the Copenhagen interpretation and an obsession with properly punctuating equations. And that’s also what his essays are about: quantum mechanics, academia, condensed matter physicists, writing in general, and obsessive punctuation in particular. Why Quark Rhymes With Pork collects all of Mermin’s Reference Frame columns published in Physics Today from 1988 to 2009, updated with postscripts, plus 13 previously unpublished essays.

The earliest of Mermin’s Reference Frame columns stem from the age of handwritten transparencies and predate the arXiv, the Superconducting Superdisaster, and the “science wars” of the 1990s. I read these first essays with the same delighted horror evoked by my grandma’s tales of slide-rules and logarithmic tables, until I realized that we’re still discussing today the same questions as Mermin did 20 years ago: Why do we submit papers to journals for peer review instead of reviewing them independently of journal publication? Have we learned anything profound in the last half century? What do you do when you give a talk and have mustard on your ear? Why is the sociology of science so utterly disconnected from the practice of science? Does anybody actually read PRL? And, of course, the mother of all questions: How to properly pronounce “quark”?

The later essays in the book mostly focus on the quantum world, just what is and isn’t wrong with it, and include the most insightful (and yet brief) expositions of quantum computing that I have come across. The reader also hears again from Professor Mozart, a semi-fictional character that Mermin introduced in his Reference Frame columns. Several of the previously unpublished pieces are summaries of lectures, birthday speeches, and obituaries.

Even though some of Mermin’s essays are accessible for the uninitiated, most of them are likely incomprehensible without some background knowledge in physics, either because he presumes technical knowledge or because the subject of his writing must remain entirely obscure. The very first essay might make a good example. It channels Mermin’s outrage over “Lagrangeans,” and even though written with both humor and purpose, it’s a spelling that I doubt non-physicists will perceive as properly offensive. Likewise, a 12-verse poem on the standard model or elaborations on how to embed equations into text will find their audience mostly among physicists.

My only prior contact with Mermin’s writing was a Reference Frame in 2009, in which Mermin laid out his favorite interpretation of quantum mechanics, QBism, a topic also pursued in several of this book’s chapters. Proposed by Carl Caves, Chris Fuchs, and Rüdiger Schack, QBism views quantum mechanics as the observers’ rule-book for updating information about the world. In his 2009 column, Mermin argues that it is a “bad habit” to believe in the reality of the quantum state. “I hope you will agree,” he writes, “that you are not a continuous field of operators on an infinite-dimensional Hilbert space.”

I left a comment to this column, lamenting that Mermin’s argument is “polemic” and “uninsightful,” an offhand complaint that Physics Today published a few months later. Mermin replied that his column was “an amateurish attempt” to contribute to the philosophy of science and quantum foundations. But while reading Why Quark Rhymes With Pork, I found his amateurism to be a benefit: In contrast to professional attempts to contribute to the philosophy of science (or linguistics, or sociology, or scholarly publishing) Mermin’s writing is mostly comprehensible. I’m thus happy to leave further complaints to philosophers (or linguists, or sociologists).

Why Quark Rhymes With Pork is a book I’d never have bought. But having read it, I think you should read it too. Because I’d rather not still discuss the same questions 20 years from now.

And the only correct way to pronounce quark is of course the German way as “qvark.”

[This book review appeared in the November 2016 issue of Physics Today.]

Friday, December 16, 2016

Cosmic rays hint at new physics just beyond the reach of the LHC

Cosmic ray shower. Artist’s impression.
[Img Src]
The Large Hadron Collider (LHC) – the world’s presently most powerful particle accelerator – reaches a maximum collision energy of 14 TeV. But cosmic rays that collide with atoms in the upper atmosphere have been measured with collision energies about ten times as high.

The two types of observations complement each other. At the LHC, energies are smaller, but collisions happen in a closely controlled experimental environment, directly surrounded by detectors. This is not the case for cosmic rays – their collisions reach higher energies, but the experimental uncertainties are higher.

Recent results from the Pierre Auger cosmic ray observatory, at center-of-mass energies of approximately 100 TeV, are incompatible with the Standard Model of particle physics and hint at unexplained new phenomena. The statistical significance is not high, currently at 2.1 sigma (or 2.9 sigma for a more optimistic simulation). That corresponds to approximately a one-in-100 probability that the result is due to random fluctuations.

Cosmic ray showers are created by protons or light atomic nuclei that come from outer space. These particles are accelerated in galactic magnetic fields, though exactly how they reach their high speeds is often unknown. When they enter the atmosphere of planet Earth, they sooner or later hit an air molecule. This destroys the initial particle and creates a primary shower of new particles. This shower has an electromagnetic part and a part of quarks and gluons that quickly form bound states known as hadrons. These particles undergo further decays and collisions, leading to a secondary shower.

The particles of the secondary shower can be detected on Earth in large detector arrays like Pierre Auger, which is located in Argentina. Pierre Auger has two types of detectors: 1) detectors that directly collect the particles which make it to the ground, and 2) fluorescence detectors which capture the light emitted by ionized air molecules.

The hadronic component of the shower is dominated by pions, which are the lightest mesons and composed of a quark and an anti-quark. The neutral pions decay quickly, mostly into photons; the charged pions create muons which make it into the ground-based detectors.

It has been known for several years that the muon signal seems too large compared to the electromagnetic signal – the balance between the two is off. These analyses, however, were not very solid, because they depended on an estimate of the total energy, which is hard to obtain if you don’t measure all particles of the shower and have to extrapolate from what you do measure.

In the new paper – just published in PRL – the Pierre Auger collaboration used a different analysis method for the data, one that does not depend on the total energy calibration. They individually fit the results of detected showers by comparing them to computer-simulated events. From a previously generated sample, they pick the simulated event that best matches the fluorescence result.

Then they add two parameters to also fit the hadronic result: one parameter adjusts the energy calibration of the fluorescence signal, the other rescales the number of particles in the hadronic component. Then they look for the best-fit values and find that these are systematically off the standard model prediction. As an aside, their analysis also shows that the energy does not need to be recalibrated.
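In code, the logic of that two-parameter fit is simple. The following is only a schematic sketch with made-up stand-in numbers and a simplified rescaling ansatz, not the collaboration’s actual analysis:

```python
import numpy as np
from scipy.optimize import minimize

# Schematic version of the two-parameter fit described above: take the simulated
# shower that best matches the fluorescence profile, then rescale (1) its overall
# energy via R_E and (2) its hadronic (muonic) component via R_had to match the
# ground signal. All numbers are made-up stand-ins, not Auger data.

sim_em  = np.array([10.0, 8.0, 5.0, 2.0])   # simulated electromagnetic signal (arb. units)
sim_had = np.array([ 3.0, 2.5, 1.8, 1.0])   # simulated hadronic (muon) signal
data    = np.array([14.0, 11.3, 7.7, 3.2])  # "measured" ground signal
sigma   = 0.5 * np.ones_like(data)          # assumed uncertainties

def chi2(params):
    r_e, r_had = params
    model = r_e * (sim_em + r_had * sim_had)  # simplified rescaling ansatz
    return np.sum(((data - model) / sigma) ** 2)

best = minimize(chi2, x0=[1.0, 1.0])
print("R_E   =", round(best.x[0], 2))   # ~1 would mean: no energy recalibration needed
print("R_had =", round(best.x[1], 2))   # >1 would mean: more muons than simulated
```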

The main reason for the mismatch with the standard model predictions is that the detectors measure more muons than expected. What’s up with those muons? Nobody knows, but the origin of the mystery seems not in the muons themselves, but in the pions from whose decay they come.

Since the neutral pions have a very short lifetime and decay almost immediately into photons, essentially all energy that goes into neutral pions is lost for the production of muons. Besides the neutral pion there are two charged pions, and the more energy is left for these and other hadrons, the more muons are produced in the end. So the result by Pierre Auger indicates that the total energy in neutral pions is smaller than what the present simulations predict.
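A toy cascade in the spirit of the Heitler-Matthews model makes the point quantitatively: the smaller the energy fraction that goes into neutral pions per interaction, the more muons arrive at the ground. The multiplicities and critical energy below are round illustrative numbers, not tuned to any hadronic interaction model:

```python
# Toy hadronic cascade (Heitler-Matthews spirit): each interaction produces n_tot
# pions, a fraction f_charged of them charged. Neutral pions decay to photons right
# away (their energy is lost to the electromagnetic component); charged pions keep
# interacting until they fall below a critical energy, then decay to muons.
# All numbers are round illustrative choices.

def n_muons(e_primary, e_critical=30.0, n_tot=30, f_charged=2.0 / 3.0):
    n_charged_per_interaction = f_charged * n_tot
    e_per_particle = e_primary
    n_charged = 1.0                             # start with one hadronic primary
    while e_per_particle > e_critical:          # pions re-interact above E_critical
        n_charged *= n_charged_per_interaction  # only charged pions continue the cascade
        e_per_particle /= n_tot                 # energy is shared among all n_tot secondaries
    return n_charged                            # below E_critical each charged pion gives one muon

e0 = 1e8  # primary energy in GeV (10^17 eV), for illustration

print("2/3 of pions charged:", n_muons(e0, f_charged=2.0 / 3.0))
print("3/4 of pions charged:", n_muons(e0, f_charged=3.0 / 4.0))  # fewer neutral pions, more muons
```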

One possible explanation for this, which has been proposed by Farrar and Allen, is that we misunderstand chiral symmetry breaking. It is the breaking of chiral symmetry that accounts for the biggest part of the masses of nucleons (not the Higgs!). The pions are the (pseudo) Goldstone bosons of that broken symmetry, which is why they are so light and ultimately why they are produced so abundantly. Pions are not exactly massless, and thus “pseudo”, because chiral symmetry is only approximate. The chiral phase transition is believed to be close to the confinement transition, that being the transition from a medium of quarks and gluons to color-neutral hadrons. For all we know, it takes place at a temperature of approximately 150 MeV. Above that temperature chiral symmetry is “restored”.

Chiral symmetry restoration almost certainly plays a role in the cosmic ray collisions, and a more important role than it does at the LHC. So, quite possibly this is the culprit here. But it might be something more exotic, new short-lived particles that become important at high energies and which make interaction probabilities deviate from the standard model extrapolation. Or maybe it’s just a measurement fluke that will go away with more data.

If the signal remains, however, that’s a strong motivation to build the next larger particle collider which could reach 100 TeV. Our accelerators would then be as good as the heavens.


[This post previously appeared on Forbes.]