Monday, August 29, 2016

Dear Dr. B: How come we never hear of a force that the Higgs boson carries?

    “Dear Dr. Hossenfelder,

    First, I love your blog. You provide a great insight into the world of physics for us laymen. I have read in popular science books that the bosons are the ‘force carriers.’ For example the photon carries the electromagnetic force, the gluon, the strong force, etc. How come we never hear of a force that the Higgs boson carries?

    Ramiro Rodriguez
Dear Ramiro,

The short answer is that you never hear of a force that the Higgs boson carries because it doesn’t carry one. The longer answer is that not all bosons are alike. This of course raises the question of just how the Higgs-boson is different, so let me explain.

The standard model of particle physics is based on gauge symmetries. This basically means that the laws of nature have to remain invariant under transformations in certain internal spaces, and these transformations can change from one place to the next and one moment to the next. They are what physicists call “local” symmetries, as opposed to “global” symmetries whose transformations don’t change in space or time.

Amazingly enough, the requirement of gauge symmetry automatically explains how particles interact. It works like this. You start with fermions, which are particles of half-integer spin, like electrons, muons, quarks and so on. And you require that the fermions’ behavior must respect a gauge symmetry, which is classified by a symmetry group. Then you ask what equations you can possibly get that do this.

Since the fermions can move around, the equations that describe what they do must contain derivatives both in space and in time. This causes a problem, because if you want to know how the fermions’ motion changes from one place to the next you’d also have to know what the gauge transformation does from one place to the next, otherwise you can’t tell apart the change in the fermions from the change in the gauge transformation. But if you’d need to know that transformation, then the equations wouldn’t be invariant.

From this you learn that the only way the fermions can respect the gauge symmetry is if you introduce additional fields – the gauge fields – which exactly cancel the contribution from the space-time dependence of the gauge transformation. In the standard model the gauge fields all have spin 1, which means they are bosons. That's because to cancel the terms that came from the space-time derivative, the fields need to have the same transformation behavior as the derivative, which is that of a vector, hence spin 1.
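To see how the cancellation works, here is the textbook form of the argument for the simplest case, a U(1) symmetry (a standard illustration added for concreteness, not a quote from the original). Under a local transformation the fermion field picks up a position-dependent phase, and the plain derivative then produces an extra, unwanted term:

$$\psi \to e^{i\alpha(x)}\psi, \qquad \partial_\mu \psi \to e^{i\alpha(x)}\big(\partial_\mu \psi + i(\partial_\mu\alpha)\psi\big).$$

Introducing a gauge field that transforms as $A_\mu \to A_\mu - \tfrac{1}{g}\partial_\mu\alpha$ and replacing the derivative by the covariant derivative

$$D_\mu = \partial_\mu + i g A_\mu \quad\Rightarrow\quad D_\mu\psi \to e^{i\alpha(x)} D_\mu\psi$$

removes the unwanted term, and the vector field $A_\mu$ is the spin-1 gauge boson.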

To really follow this chain of arguments – from the assumption of gauge symmetry to the presence of gauge-bosons – requires several years’ worth of lectures, but the upshot is that the bosons which exchange the forces aren’t added by hand to the standard model, they are a consequence of symmetry requirements. You don’t get to pick the gauge-bosons, neither their number nor their behavior – their properties are determined by the symmetry.

In the standard model, there are 12 such force-carrying bosons: the photon (γ), the W+, W-, Z, and 8 gluons. They belong to three gauge symmetries, U(1), SU(2) and SU(3). Whether a fermion does or doesn’t interact with a gauge-boson depends on whether the fermion is “gauged” under the respective symmetry, ie transforms under it. Only the quarks, for example, are gauged under the SU(3) symmetry of the strong interaction, hence only the quarks couple to gluons and participate in that interaction. The bosons introduced this way are sometimes specifically referred to as “gauge-bosons” to indicate their origin.

The Higgs-boson in contrast is not introduced by a symmetry requirement. It has an entirely different function, which is to break a symmetry (the electroweak one) and thereby give mass to particles. The Higgs doesn’t have spin 1 (like the gauge-bosons) but spin 0. Indeed, it is the only presently known elementary particle with spin zero. Sheldon Glashow has charmingly referred to the Higgs as the “flush toilet” of the standard model – it’s there for a purpose, not because we like the smell.

The distinction between fermions and bosons can be removed by postulating an exchange symmetry between these two types of particles, known as supersymmetry. It works basically by generalizing the concept of a space-time direction to not merely be bosonic, but also fermionic, so that there is now a derivative that behaves like a fermion.

In the supersymmetric extension of the standard model there are then partner particles to all already known particles, denoted either by adding an “s” before the particle’s name if it’s a boson (selectron, stop quark, and so on) or adding “ino” after the particle’s name if it’s a fermion (Wino, photino, and so on). There is then also the Higgsino, which is the partner particle of the Higgs and has spin 1/2. It is gauged under the standard model symmetries, hence participates in the interactions, but it is still not itself a consequence of a gauge symmetry.

In the standard model most of the bosons are also force-carriers, but bosons and force-carriers just aren’t the same category. To use a crude analogy, just because most of the men you know (most of the bosons in the standard model) have short hair (are force-carriers) doesn’t mean that to be a man (to be a boson) you must have short hair (exchange a force). Bosons are defined by having integer spin, as opposed to the half-integer spin that fermions have, and not by their ability to exchange interactions.

In summary the answer to your question is that certain types of bosons – the gauge bosons – are a consequence of symmetry requirements from which it follows that these bosons do exchange forces. The Higgs isn’t one of them.

Thanks for an interesting question!

Peter Higgs receiving the Nobel Prize from the King of Sweden.
[Img Credits: REUTERS/Claudio Bresciani/TT News Agency]



Previous Dear-Dr-B’s that you might also enjoy:

Wednesday, August 24, 2016

What if the universe was like a pile of laundry?

    What if the universe was like a pile of laundry?

    Have one.

    See this laundry pile? Looks just like our universe.

    No?

    Here, have another.

    See it now? It’s got three dimensions and all.

    But look again.

    The shirts and towels, they’re really crinkled and interlocked two-dimensional surfaces.

    Wait.

    It’s one-dimensional yarn, knotted up tightly.

    You ok?

    Have another.

    I see it clearly now. It’s everything at once, one-two-three dimensional. Just depends on how closely you look at it.

    Amazing, don’t you think? What if our universe was just like that?


Universal Laundry Pile.
[Img Src: Clipartkid]

It doesn’t sound like a sober thought, but it’s got math behind it, so physicists think there might be something to it. Indeed the math piled up lately. They call it “dimensional reduction,” the idea that space on short distances has fewer than three dimensions – and it might help physicists to quantize gravity.

We’ve gotten used to space with additional dimensions, rolled up so small we can’t observe them. But how do you get rid of dimensions instead? To understand how it works we first have to clarify what we mean by “dimension.”

We normally think about dimensions of space by picturing lines which spread from a point. How quickly the lines dilute with the distance from the point tells us the “Hausdorff dimension” of a space. The faster the lines diverge from each other with distance, the larger the Hausdorff dimension. If you speak through a pipe, for example, sound waves spread less and your voice carries farther. The pipe hence has a lower Hausdorff dimension than our normal 3-dimensional office cubicles. It’s the Hausdorff dimension that we colloquially refer to as just dimension.

For dimensional reduction, however, it is not the Hausdorff dimension which is relevant, but instead the “spectral dimension,” which is a slightly different concept. We can calculate it by first getting rid of the “time” in “space-time” and making it into space (period). We then place a random walker at one point and measure the probability that it returns to the same point during its walk. The smaller the average return probability, the higher the probability the walker gets lost, and the higher the number of spectral dimensions.
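For illustration, here is a minimal numerical sketch of that procedure on an ordinary lattice (my own toy example, not taken from any of the papers discussed below), using the standard scaling of the return probability, P(t) ~ t^(-d_s/2):

```python
# Toy estimate of the spectral dimension of a d-dimensional lattice:
# simulate random walks, measure the probability P(t) of being back at the
# starting point after t steps, and read off d_s from P(t) ~ t^(-d_s/2).
import numpy as np

def return_probability(dim, steps, walkers=200_000, seed=0):
    """Fraction of walkers that sit at the origin again after `steps` steps."""
    rng = np.random.default_rng(seed)
    axes = rng.integers(0, dim, size=(walkers, steps))   # which axis to step along
    signs = rng.choice([-1, 1], size=(walkers, steps))   # which direction
    pos = np.zeros((walkers, dim), dtype=np.int64)
    for t in range(steps):
        pos[np.arange(walkers), axes[:, t]] += signs[:, t]
    return np.mean(np.all(pos == 0, axis=1))

# spectral dimension from the slope of log P(t) between two (even) walk lengths
for dim in (1, 2, 3):
    t1, t2 = 20, 40
    p1, p2 = return_probability(dim, t1), return_probability(dim, t2)
    d_s = -2 * (np.log(p2) - np.log(p1)) / (np.log(t2) - np.log(t1))
    print(f"lattice dimension {dim}: estimated spectral dimension ~ {d_s:.2f}")
```

On a classical lattice this just reproduces the ordinary dimension (up to finite-time corrections); the point of the quantum-gravity calculations below is that the analogous return probability in a quantum space-time scales differently for short walks.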

Normally, for a non-quantum space, both notions of dimension are identical. However, add quantum mechanics and the spectral dimension at short distances goes down from four to two. The return probability for short walks becomes larger than expected, and the walker is less likely to get lost – this is what physicists mean by “dimensional reduction.”

The spectral dimension is not necessarily an integer; it can take on any value. This value starts at 4 when quantum effects can be neglected, and decreases when the walker’s sensitivity to quantum effects at shortest distances increases. Physicists therefore also like to say that the spectral dimension “runs,” meaning its value depends on the resolution at which space-time is probed.

Dimensional reduction is an attractive idea because quantizing gravity is considerably easier in lower dimensions where the infinities that plague traditional attempts to quantize gravity go away. A theory with a reduced number of dimensions at shortest distances therefore has much higher chances to remain consistent and so to provide a meaningful theory for the quantum nature of space and time. Not so surprisingly thus, among physicists, dimensional reduction has received quite some attention lately.

This strange property of quantum-spaces was first found in Causal Dynamical Triangulation (hep-th/0505113), an approach to quantum gravity that relies on approximating curved spaces by triangular patches. In this work, the researchers did a numerical simulation of a random walk in such a triangulated quantum-space, and found that the spectral dimension goes down from four to two. Or actually to 1.80 ± 0.25 if you want to know precisely.

Instead of doing numerical simulations, it is also possible to study the spectral dimension mathematically, which has since been done in various other approaches. For this, physicists exploit that the behavior of the random walk is governed by a differential equation – the diffusion equation – which depends on the curvature of space. In quantum gravity, the curvature has quantum fluctuations, and then it’s instead its average value which enters the diffusion equation. From the diffusion equation one then calculates the return probability for the random walk.
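Schematically, the relation these calculations use is the standard one between the heat kernel of the diffusion equation, the return probability, and the spectral dimension (a textbook relation, stated here for concreteness rather than taken from any particular paper):

$$\partial_\sigma K(x,x';\sigma) = \Delta_x K(x,x';\sigma), \qquad P(\sigma) = K(x,x;\sigma), \qquad d_s(\sigma) = -2\,\frac{d\ln P(\sigma)}{d\ln\sigma}.$$

On flat $d$-dimensional space $P(\sigma) = (4\pi\sigma)^{-d/2}$, so $d_s = d$; in the quantum-gravity calculations the averaged geometry modifies $P(\sigma)$ at small diffusion times $\sigma$, which is what makes the spectral dimension run.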

This way, physicists have inferred the spectral dimension also in Asymptotically Safe Gravity (hep-th/0508202), an approach to quantum gravity which relies on the resolution-dependence (the “running”) of quantum field theories. And they found the same drop from four to two spectral dimensions.

Another indication comes from Loop Quantum Gravity, where the scaling of the area operator with length changes at short distances. In this case it is somewhat questionable whether the notion of curvature makes sense at all on short distances. But ignoring this, one can construct the diffusion equation and finds that the spectral dimension drops from four to two (0812.2214).

And then there is Horava-Lifshitz gravity, yet another modification of gravity which some believe helps with quantizing it. Here too, dimensional reduction has been found (0902.3657).

It is difficult to visualize what is happening with the dimensionality of space if it goes down continuously, rather than in discrete steps as in the example with the laundry pile. Maybe a good way to picture it, as Calcagni, Eichhorn and Saueressig suggest, is to think of the quantum fluctuations of space-time hindering a particle’s random walk, thereby slowing it down. It wouldn’t have to be that way. Quantum fluctuations could also kick the particle around wildly, thereby increasing the spectral dimension rather than decreasing it. But that’s not what the math tells us.

One shouldn’t take this picture too seriously though, because we’re talking about a random walk in space, not space-time, and so it’s not a real physical process. Turning time into space might seem strange, but it is a common mathematical simplification which is often used for calculations in quantum theory. Still, it makes it difficult to interpret what is happening physically.

I find it intriguing that several different approaches to quantum gravity share a behavior like this. Maybe it is a general property of quantum space-time. But then, there are many different types of random walks, and while these different approaches to quantum gravity share a similar scaling behavior for the spectral dimension, they differ in the type of random walk that produces this scaling (1304.7247). So maybe the similarities are only superficial.

And of course this idea has no observational evidence speaking for it. Maybe never will. But one day, I’m sure, all the math will click into place and everything will make perfect sense. Meanwhile, have another.

[This article first appeared on Starts With A Bang under the title Dimensional Reduction: The Key To Physics' Greatest Mystery?]

Friday, August 19, 2016

Away Note

I'll be in Stockholm next week for a program on Black Holes and Emergent Spacetime, so please be prepared for some service interruptions.

Monday, August 15, 2016

The Philosophy of Modern Cosmology (srsly)

Model of Inflation.
img src: umich.edu
I wrote my recent post on the “Unbearable Lightness of Philosophy” to introduce a paper summary, but it got somewhat out of hand. I don’t want to withhold the actual body of my summary though. The paper in question is


Before we start I have to warn you that the paper speaks a lot about realism and underdetermination, and I couldn’t figure out what exactly the authors mean with these words. Sure, I looked them up, but that didn’t help because there doesn’t seem to be an agreement on what the words mean. It’s philosophy after all.

Personally, I subscribe to a philosophy I’d like to call agnostic instrumentalism, which means I think science is useful and I don’t care what else you want to say about it – anything from realism to solipsism to Carroll’s “poetic naturalism” is fine by me. In newspeak, I’m a whateverist – now go away and let me science.

The authors of the paper, in contrast, position themselves as follows:
“We will first state our allegiance to scientific realism… We take scientific realism to be the doctrine that most of the statements of the mature scientific theories that we accept are true, or approximately true, whether the statement is about observable or unobservable states of affairs.”
But rather than explaining what this means, the authors next admit that this definition contains “vague words,” and apologize that they “will leave this general defense to more competent philosophers.” Interesting approach. A physics-paper in this style would say: “This is a research article about General Relativity which has something to do with curvature of space and all that. This is just vague words, but we’ll leave a general defense to more competent physicists.”

In any case, it turns out that it doesn’t matter much for the rest of the paper exactly what realism means to the authors – it’s a great paper also for an instrumentalist because it’s long enough so that, rolled up, it’s good to slap flies. The focus on scientific realism seems somewhat superfluous, but I notice that the paper is to appear in “The Routledge Handbook of Scientific Realism” which might explain it.

It also didn’t become clear to me what the authors mean by underdetermination. Vaguely speaking, they seem to mean that a theory is underdetermined if it contains elements unnecessary to explain existing data (which is also what Wikipedia offers by way of definition). But the question what’s necessary to explain data isn’t a simple yes-or-no question – it’s a question that needs a quantitative analysis.

In theory development we always have a tension between simplicity (fewer assumptions) and precision (better fit) because more parameters normally allow for better fits. Hence we use statistical measures to find out in which case a better fit justifies a more complicated model. I don’t know how one can claim that a model is “underdetermined” without such quantitative analysis.

The authors of the paper for the most part avoid the need to quantify underdetermination by using sociological markers, ie they treat models as underdetermined if cosmologists haven’t yet agreed on the model in question. I guess that’s the best they could have done, but it’s not a basis on which one can discuss what will remain underdetermined. The authors for example seem to implicitly believe that evidence for a theory at high energies can only come from processes at such high energies, but that isn’t so – one can also use high precision measurements at low energies (at least in principle). In the end it comes down, again, to quantifying which model is the best fit.

With this advance warning, let me tell you the three main philosophical issues which the authors discuss.

1. Underdetermination of topology.

Einstein’s field equations are local differential equations which describe how energy-densities curve space-time. This means these equations describe how space changes from one place to the next and from one moment to the next, but they do not fix the overall connectivity – the topology – of space-time*.

A sheet of paper is a simple example. It’s flat and it has no holes. If you roll it up and make a cylinder, the paper is still flat, but now it has a hole. You could find out about this without reference to the embedding space by drawing a circle onto the cylinder and around its perimeter, so that it can’t be contracted to zero length while staying on the cylinder’s surface. This could never happen on a flat sheet. And yet, if you look at any one point of the cylinder and its surrounding, it is indistinguishable from a flat sheet. The flat sheet and the cylinder are locally identical – but they are globally different.

General Relativity thus can’t tell you the topology of space-time. But physicists don’t normally worry much about this because you can parameterize the differences between topologies, compute observables, and then compare the results to data. Topology is, in that respect, no different than any other assumption of a cosmological model. Cosmologists can look, and have looked, for evidence of non-trivial space-time connectivity in the CMB data, but they haven’t found anything that would indicate our universe wraps around itself. At least so far.

In the paper, the authors point out an argument raised by someone else (Manchak) which claims that different topologies can’t be distinguished almost everywhere. I haven’t read the paper in question, but this claim is almost certainly correct. The reason is that while topology is a global property, you can change it on arbitrarily small scales. All you have to do is punch a hole into that sheet of paper, and whoops, it’s got a new topology. Or if you want something without boundaries, then identify two points with each other. Indeed you could sprinkle space-time with arbitrarily many tiny wormholes and in that way create the most abstruse topological properties (and, most likely, lots of causal paradoxes).

The topology of the universe is hence, like the topology of the human body, a matter of resolution. On distances visible to the eye you can count the holes in the human body on the fingers of your hand. On shorter distances though you’re all pores and ion channels, and on subatomic distances you’re pretty much just holes. So, asking what’s the topology of a physical surface only makes sense when one specifies at which distance scale one is probing this (possibly higher-dimensional) surface.

I thus don’t think any physicist will be surprised by the philosophers’ finding that cosmology severely underdetermines global topology. What the paper fails to discuss though is the scale-dependence of that conclusion. Hence, I would like to know: Is it still true that the topology will remain underdetermined on cosmological scales? And to what extent, and under which circumstances, can the short-distance topology have long-distance consequences, as eg suggested by the ER=EPR idea? What effect would this have on the separation of scales in effective field theory?

2. Underdetermination of models of inflation.

The currently most widely accepted model for the universe assumes the existence of a scalar field – the “inflaton” – and a potential for this field – the “inflation potential” – in which the field moves towards a minimum. While the field is getting there, space is exponentially stretched. At the end of inflation, the field’s energy is dumped into the production of particles of the standard model and dark matter.

This mechanism was invented to solve various finetuning problems that cosmology otherwise has, notably that the universe seems to be almost flat (the “flatness problem”), that the cosmic microwave background has the almost-same temperature in all directions except for tiny fluctuations (the “horizon problem”), and that we haven’t seen any funky things like magnetic monopoles or domain walls that tend to be plentiful at the energy scale of grand unification (the “monopole problem”).
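To give one concrete example of what the finetuning consists of (a standard textbook estimate, added here for illustration and not taken from the paper under discussion): in Friedmann cosmology the deviation from spatial flatness evolves as

$$\Omega(a) - 1 = \frac{k}{a^2 H^2},$$

which grows proportional to $a^2$ during radiation domination and proportional to $a$ during matter domination. For the universe to be as close to flat as we observe today, $|\Omega - 1|$ at very early times must have been tiny, of order $10^{-60}$ if one extrapolates back to the Planck era. Inflation drives $\Omega$ towards 1 dynamically instead of assuming such a special initial value.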

Trouble is, there’s loads of inflation potentials that one can cook up, and most of them can’t be distinguished with current data. Moreover, one can invent more than one inflation field, which adds to the variety of models. So, clearly, the inflation models are severely underdetermined.

I’m not really sure why this overabundance of potentials is interesting for philosophers. This isn’t so much philosophy as sociology – that the models are underdetermined is why physicists get them published, and if there was enough data to extract a potential that would be the end of their fun. Whether there will ever be enough data to tell them apart, only time will tell. Some potentials have already been ruled out with incoming data, so I am hopeful.

The questions that I wish philosophers would take on are different ones. To begin with, I’d like to know which of the problems that inflation supposedly solves are actual problems. It only makes sense to complain about finetuning if one has a probability distribution. In this, the finetuning problem in cosmology is distinctly different from the finetuning problems in the standard model, because in cosmology one can plausibly argue there is a probability distribution – it’s that of fluctuations of the quantum fields which seed the initial conditions.

So, I believe that the horizon problem is a well-defined problem, assuming quantum theory remains valid close by the Planck scale. I’m not so sure, however, about the flatness problem and the monopole problem. I don’t see what’s wrong with just assuming the initial value for the curvature is tiny (finetuned), and I don’t know why I should care about monopoles given that we don’t know grand unification is more than a fantasy.

Then, of course, the current data indicates that the inflation potential too must be finetuned which, as Steinhardt has aptly complained, means that inflation doesn’t really solve the problem it was meant to solve. But to make that statement one would have to compare the severity of finetuning, and how does one do that? Can one even make sense of this question? Where are the philosophers if one needs them?

Finally, I have a more general conceptual problem that falls into the category of underdetermination, which is to which extent the achievements of inflation are actually independent of each other. Assume, for example, you have a theory that solves the horizon problem. Under which circumstances does it also solve the flatness problem and gives the right tilt for the spectral index? I suspect that the assumptions for this do not require the full mechanism of inflation with potential and all, and almost certainly not a very specific type of potential. Hence I would like to know what’s the minimal theory that explains the observations, and which assumptions are really necessary.

3. Underdetermination in the multiverse.

Many models for inflation create not only one universe, but infinitely many of them, a whole “multiverse”. In the other universes, fundamental constants – or maybe even the laws of nature themselves – can be different. How do you make predictions in a multiverse? You can’t, really. But you can make statements about probabilities, about how likely it is that we find ourselves in this universe with these particles and not any other.

To make statements about the probability of the occurrence of certain universes in the multiverse one needs a probability distribution or a measure (in the space of all multiverses or their parameters respectively). Such a measure should also take into account anthropic considerations, since there are some universes which are almost certainly inhospitable for life, for example because they don’t allow the formation of large structures.

In their paper, the authors point out that the combination of a universe ensemble and a measure is underdetermined by observations we can make in our universe. It’s underdetermined in the same way that if I give you a bag of marbles and say the most likely pick is red, you can’t tell what’s in the bag.

I think physicists are well aware of this ambiguity, but unfortunately the philosophers don’t address why physicists ignore it. Physicists ignore it because they believe that one day they can deduce the theory that gives rise to the multiverse and the measure on it. To make their point, the philosophers would have had to demonstrate that this deduction is impossible. I think it is, but I’d rather leave the case to philosophers.

For the agnostic instrumentalist like me a different question is more interesting, which is whether one stands to gain anything from taking a “shut-up-and-calculate” attitude to the multiverse, even if one distinctly dislikes it. Quantum mechanics too uses unobservable entities, and that formalism – however much you detest it – works very well. It really adds something new, regardless of whether or not you believe the wave-function is “real” in some sense. As far as the multiverse is concerned, I am not sure about this. So why bother with it?

Consider the best-case multiverse outcome: Physicists will eventually find a measure on some multiverse according to which the parameters we have measured are the most likely ones. Hurray. Now forget about the interpretation and think of this calculation as a black box: You put math in on one side and out comes a set of “best” parameters on the other side. You could always reformulate such a calculation as an optimization problem which allows one to calculate the correct parameters. So, independent of the thorny question of what’s real, what do I gain from thinking about measures on the multiverse rather than just looking for an optimization procedure straight away?

Yes, there are cases – like bubble collisions in eternal inflation – that would serve as independent confirmation for the existence of another universe. But no evidence for that has been found. So for me the question remains: under which circumstances is doing calculations in the multiverse an advantage rather than unnecessary mathematical baggage?

I think this paper makes a good example for the difference between philosophers’ and physicists’ interests which I wrote about in my previous post. It was a good (if somewhat long) read and it gave me something to think about, though I will need some time to recover from all the -isms.

* Note added: The word connectivity in this sentence is a loose stand-in for those who do not know the technical term “topology.” It does not refer to the technical term “connectivity.”

Friday, August 12, 2016

The Unbearable Lightness of Philosophy

Philosophy isn’t useful for practicing physicists. On that, I am with Steven Weinberg and Lawrence Krauss who have expressed similar opinions. But I think it’s an unfortunate situation because physicists – especially those who work on the foundations of physics – could use help from philosophers.

Massimo Pigliucci, a professor of philosophy at CUNY-City College, has ingeniously addressed physicists’ complaints about the uselessness of philosophy by declaring that “the business of philosophy is not to advance science.” Philosophy, hence, isn’t just useless, but it’s useless on purpose. I applaud. At least that means it has a purpose.

But I shouldn’t let Massimo Pigliucci speak for his whole discipline.

I’ve been told that, as far as physics is concerned, there are presently three good philosophers roaming Earth: David Albert, Jeremy Butterfield, and Tim Maudlin. It won’t surprise you to hear that I have some issues to pick with each of these gentlemen, but mostly they seem reasonable indeed. I would even like to nominate a fourth Good Philosopher, Steven Weinstein from UoW, with whom even I haven’t yet managed to disagree.

The good Maudlin, for example, had an excellent essay last year on PBS NOVA, in which he argued that “Physics needs Philosophy.” I really liked his argument until he wrote that “Philosophers obsess over subtle ambiguities of language,” which pretty much sums up all that physicists hate about philosophy.

If you want to know “what follows from what,” as Maudlin writes, you have to convert language into mathematics and thereby remove the ambiguities. Unfortunately, philosophers never seem to take that step, hence physicists’ complaints that it’s just words. Or, as Arthur Koestler put it, “the systematic abuse of a terminology specially invented for that purpose.”

Maybe, I admit, it shouldn’t be the philosophers’ job to spell out how to remove the ambiguities in language. Maybe that should already be the job of physicists. But regardless of whom you want to assign the task of reaching across the line, presently little crosses it. Few practicing physicists today care what philosophers do or think.

And as someone who has tried to write about topics on the intersection of both fields, I can report that this disciplinary segregation is meanwhile institutionalized: The physics journals won’t publish on the topic because it’s too much philosophy, and the philosophy journals won’t publish because it’s too much physics.

In a recent piece on Aeon, Pigliucci elaborates on the demarcation problem, how to tell science from pseudoscience. He seems to think this problem is what underlies some physicists’ worries about string theory and the multiverse, worries that were topic of a workshop that both he and I attended last year.

But he got it wrong. While I know lots of physicists critical of string theory for one reason or the other, none of them would go so far as to declare it pseudoscience. No, the demarcation problem that physicists worry about isn’t that between science and pseudoscience. It’s that between science and philosophy. It is not without irony that Pigliucci in his essay conflates the two fields. Or maybe the purpose of his essay was an attempt to revive the “string wars,” in which case, wake me when it’s over.

To me, the part of philosophy that is relevant to physics is what I’d like to call “pre-science” – sharpening questions sufficiently so that they can eventually be addressed by scientific means. Maudlin in his above mentioned essay expressed a very similar point of view.

Philosophers in that area are necessarily ahead of scientists. But they also never get the credit for actually answering a question, because for that they’ll first have to hand it over to scientists. Like a psychologist, thus, the philosopher of physics succeeds by eventually making themselves superfluous. It seems a thankless job. There’s a reason I preferred studying physics instead.

Many of the “bad philosophers” are those who aren’t quick enough to notice that a question they are thinking about has been taken over by scientists. That this failure to notice can evidently persist, in some cases, for decades is another institutionalized problem that originates in the lack of communication between both fields.

Hence, I wish there were more philosophers willing to make it their business to advance science and to communicate across the boundaries. Maybe physicists would complain less that philosophy is useless if it wasn’t useless.

Saturday, August 06, 2016

The LHC “nightmare scenario” has come true.

The recently deceased diphoton
bump. Img Src: Matt Strassler.

I finished high school in 1995. It was the year the top quark was discovered, a prediction dating back to 1973. As I read the articles in the news, I was fascinated by the mathematics that allowed physicists to reconstruct the structure of elementary matter. It wouldn’t have been difficult to predict in 1995 that I’d go on to do a PhD in theoretical high energy physics.

Little did I realize that for more than 20 years the provisional-looking standard model would remain the undefeated world champion of accuracy, irritatingly successful in its arbitrariness and yet impossible to surpass. We added neutrino masses in the late 1990s, but this idea dates back to the 1950s. The prediction of the Higgs, discovered in 2012, originated in the early 1960s. And while the poor standard model has been discounted as “ugly” by everyone from Stephen Hawking to Michio Kaku to Paul Davies, it’s still the best we can do.

Since I entered physics, I’ve seen grand unified models proposed and falsified. I’ve seen loads of dark matter candidates not being found, followed by a ritual parameter adjustment to explain the lack of detection. I’ve seen supersymmetric particles being “predicted” with constantly increasing masses, from some GeV to some 100 GeV to LHC energies of some TeV. And now that the LHC hasn’t seen any superpartners either, particle physicists are more than willing to once again move the goalposts.

During my professional career, all I have seen is failure. A failure of particle physicists to uncover a more powerful mathematical framework to improve upon the theories we already have. Yes, failure is part of science – it’s frustrating, but not worrisome. What worries me much more is our failure to learn from failure. Rather than trying something new, we’ve been trying the same thing over and over again, expecting different results.

When I look at the data what I see is that our reliance on gauge-symmetry and the attempt at unification, the use of naturalness as guidance, and the trust in beauty and simplicity aren’t working. The cosmological constant isn’t natural. The Higgs mass isn’t natural. The standard model isn’t pretty, and the concordance model isn’t simple. Grand unification failed. It failed again. And yet we haven’t drawn any consequences from this: Particle physicists are still playing today by the same rules as in 1973.

For the last ten years you’ve been told that the LHC must see some new physics besides the Higgs because otherwise nature isn’t “natural” – a technical term invented to describe the degree of numerical coincidence of a theory. I’ve been laughed at when I explained that I don’t buy into naturalness because it’s a philosophical criterion, not a scientific one. But on that matter I got the last laugh: Nature, it turns out, doesn’t like to be told what’s presumably natural.

The idea of naturalness that has been preached for so long is plainly not compatible with the LHC data, regardless of what else will be found in the data yet to come. And now that naturalness is in the way of moving predictions for so-far undiscovered particles – yet again! – to higher energies, particle physicists, opportunistic as always, are suddenly more than willing to discard naturalness to justify the next larger collider.

Now that the diphoton bump is gone, we’ve entered what has become known as the “nightmare scenario” for the LHC: The Higgs and nothing else. Many particle physicists thought of this as the worst possible outcome. It has left them without guidance, lost in a thicket of rapidly multiplying models. Without some new physics, they have nothing to work with that they haven’t already had for 50 years, no new input that can tell them in which direction to look for the ultimate goal of unification and/or quantum gravity.

That the LHC hasn’t seen evidence for new physics is to me a clear signal that we’ve been doing something wrong, that our experience from constructing the standard model is no longer a promising direction to continue. We’ve maneuvered ourselves into a dead end by relying on aesthetic guidance to decide which experiments are the most promising. I hope that this latest null result will send a clear message that you can’t trust the judgement of scientists whose future funding depends on their continued optimism.

Things can only get better.

[This post previously appeared in a longer version on Starts With A Bang.]

Tuesday, August 02, 2016

Math blind

[Img Src: LifeScience]
Why must school children suffer through so much math which they will never need in their life? That’s one of these questions which I see opinion pieces about every couple of months. Most of them go back to a person by the name of Andrew Hacker whose complaint is that:
“Every other subject is about something. Poetry is about something. Even most modern art is about something. Math is about nothing. Math describes much of the world but is all about itself, and it has the most fantastic conundrums. But it is not about the world.”

Yes, mathematics is an entirely self-referential language. That’s the very reason why it’s so useful. Complaining that math isn’t about some thing is like complaining that paint isn’t an image – and even Hacker concedes that math can be used to describe much of the world. For most scientists the discussion stops at this point. The verdict in my filter bubble is unanimous: mathematics is the language of nature, and if schools teach one thing, that’s what they should teach.

I agree with that of course. And yet, the argument that math is the language of nature preaches to the converted. For the rest it’s meaningless rhetoric, countered by the argument that schools should teach what’s necessary: necessary to fill in a tax return, calculate a mortgage rate, or maybe estimate how many bricks you need to build a wall along the US-Mexican border.

School curriculums have to be modernized every now and then, no doubt about this. But the goal cannot be to reduce a subject of education based on the reasoning that it’s difficult. Math is the base of scientific literacy. You need math to understand risk assessments, to read statistics, and to understand graphs. You need math to understand modern science and tell it from pseudoscience. Much of the profusion of quack medicine like quantum healing or homeopathy is due to people’s inability to grasp even the basics of the underlying theories (or their failure to notice the absence thereof). For that you’d need, guess what, math.

But most importantly, you need math to understand what it even means to understand. The only real truths are mathematical truths, and so proving theorems is the only way to learn how to lead watertight arguments. That doesn’t mean that math teaches you how to lead successful arguments, in the sense of convincing someone. But it teaches you how to lead correct arguments. And that skill should be worth something, even if Hacker might complain that the arguments are about nothing.

I thought of this recently when my daughters had their school enrollment checkup.

One of the twins, Lara, doesn’t have stereo vision. We know this because she’s had regular eye exams, and while she sees well on both eyes separately, she doesn’t see anything on the 3d test card. I’ve explained to her why it’s important she wears her eye-cover and I try to coax her into doing some muscle building exercises. But she doesn’t understand.

And how could she? She’s never seen 3d. She doesn’t know what she doesn’t see. And it’s not an obvious disability: Lara tells distances by size and context. She knows that birds are small and cars are large and hence small cars are far away. For all she can tell, she sees just as well as everybody else. There are few instances when stereo-vision really makes a difference, one of them is catching a ball. But at 5 years she’s just as clumsy as all the other kids.

Being math-blind too is not an obvious disability. You can lead a pleasant life without mathematics because it’s possible to fill in the lack of knowledge with heuristics and anecdotes. And yet, without math, you’ll never see reality for what it is – you’ll lead your life in the fudgy realm of maybe-truths.

Lara doesn’t know triangulation and she doesn’t know vector spaces, and when I give her examples for what she’s missing, she’ll just put on this blank look that children reserve for incomprehensible adult talk, listen politely, and then reply “Today I built a moon rocket in kindergarten.”

I hear an echo of my 5 year old’s voice in these essays about the value of math education. It’s trying to tell someone they are missing part of the picture, and getting a reply like “I have never used the quadratic formula in my personal life.” Fine then, but totally irrelevant. Rather than factoring polynomials, let’s teach kids differential equations or network growth, which is arguably more useful to understand the world.

Math isn’t going away. On the very contrary it’s bound to dramatically increase in significance as the social sciences become more quantitative. We need that precision to make informed decisions and to avoid reinventing the wheel over and over again. And like schools teach the basics of political theory so that children understand the use of democracy, they must teach mathematics so that they understand the use of quantitative forecasts, uncertainties, and, most of all, to recognize the boundary between fact and opinion.

Sunday, July 24, 2016

Can we please agree what we mean by “Big Bang”?


Can you answer the following question?

At the Big Bang the observable universe had the size of:
    A) A point (no size).
    B) A grapefruit.
    C) 168 meters.

The right answer would be “all of the above.” And that’s not because I can’t tell a point from a grapefruit, it’s because physicists can’t agree what they mean by Big Bang!

For someone in quantum gravity, the Big Bang is the initial singularity that occurs in General Relativity when the current expansion of the universe is extrapolated back to the beginning of time. At the Big Bang, then, the universe had size zero and an infinite energy density. Nobody believes this to be a physically meaningful event. We interpret it as a mathematical artifact which merely signals the breakdown of General Relativity.

If you ask a particle physicist, they’ll therefore sensibly put the Big Bang at the time where the density of matter was at the Planck scale – about 80 orders of magnitude higher than the density of a neutron star. That’s where General Relativity breaks down; it doesn’t make sense to extrapolate back farther than this. At this Big Bang, space and time were subject to significant quantum fluctuations and it’s questionable that even speaking of size makes sense, since that would require a well-defined notion of distance.
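For a rough check of that number (my own back-of-the-envelope estimate, not from the post): the Planck density is

$$\rho_{\rm Planck} = \frac{c^5}{\hbar G^2} \approx 5\times 10^{96}\ {\rm kg/m^3},$$

while a neutron star has a density of roughly $5\times 10^{17}\ {\rm kg/m^3}$, so the ratio is indeed close to $10^{79}$, about 80 orders of magnitude.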

Cosmologists tend to be even more conservative. The currently most widely used model for the evolution of the universe posits that briefly after the Planck epoch an exponential expansion, known as inflation, took place. At the end of inflation, so the assumption, the energy of the field which drives the exponential expansion is dumped into particles of the standard model. Cosmologists like to put the Big Bang at the end of inflation because inflation itself hasn’t been observationally confirmed. But they can’t agree how long inflation lasted, and so the estimates for the size of the universe range between a grapefruit and a football field.

Finally, if you ask someone in science communication, they’ll throw up their hands in despair and then explain that the Big Bang isn’t an event but a theory for the evolution of the universe. Wikipedia engages in the same obfuscation – if you look up “Big Bang” you get instead an explanation for “Big Bang theory,” leaving you to wonder what it’s a theory of.

I admit it’s not a problem that bugs physicists a lot because they don’t normally debate the meaning of words. They’ll write down whatever equations they use, and this prevents further verbal confusion. Of course the rest of the world should also work this way, by first writing down definitions before entering unnecessary arguments.

While I am waiting for mathematical enlightenment to catch on, I find this state of affairs terribly annoying. I recently had an argument on twitter about whether or not the LHC “recreates the Big Bang,” as the popular press likes to claim. It doesn’t. But it’s hard to make a point if no two references agree on what the Big Bang is to begin with, not to mention that it was neither big nor did it bang. If biologists adopted physicists’ standards, they’d refer to infants as blastocysts, and if you complained about it they’d explain both are phases of pregnancy theory.

I find this nomenclature unfortunate because it raises the impression we understand far less about the early universe than we do. If physicists can’t agree whether the universe at the Big Bang had the size of the White House or of a point, would you give them 5 billion dollars to slam things into each other? Maybe they’ll accidentally open a portal to a parallel universe where the US Presidential candidates are Donald Duck and Brigitta MacBridge.

Historically, the term “Big Bang” was coined by Fred Hoyle, a staunch believer in steady state cosmology. He used the phrase to make fun of Lemaitre, who, in 1927, had found a solution to Einstein’s field equations according to which the universe wasn’t eternally constant in time. Lemaitre showed, for the first time, that matter caused space to expand, which implied that the universe must have had an initial moment from which it started expanding. They didn’t then worry about exactly when the Big Bang would have been – back then they worried whether cosmology was science at all.

But we’re not in the 1940s any more, and precise science deserves precise terminology. Maybe we should rename the different stages of the early universe into “Big Bang,” “Big Bing” and “Big Bong.” This idea has much potential by allowing further refinement to “Big Bång,” “Big Bîng” or “Big Böng.” I’m sure Hoyle would approve. Then he would laugh and quote Niels Bohr, “Never express yourself more clearly than you are able to think.”

You can count me in the Planck epoch camp.

Monday, July 18, 2016

Can black holes tunnel to white holes?

Tl;dr: Yes, but it’s unlikely.

If black holes attract your attention, white holes might blow your mind.

A white hole is a time-reversed black hole, an anti-collapse. While a black hole contains a region from which nothing can escape, a white hole contains a region to which nothing can fall in. Since the time-reversal of a solution of General Relativity is another solution, we know that white holes exist mathematically. But are they real?

Black holes were originally believed to merely be of mathematical interest, solutions that exist but cannot come into being in the natural world. As physicists understood more about General Relativity, however, the exact opposite turned out to be the case: It is hard to avoid black holes. They generically form from matter that collapses under its own gravitational pull. Today it is widely accepted that the black hole solutions of General Relativity describe to high accuracy astrophysical objects which we observe in the real universe.

The simplest black hole solutions in General Relativity are the Schwarzschild-solutions, or their generalizations to rotating and electrically charged black holes. These solutions however are not physically realistic because they are entirely time-independent, which means such black holes must have existed forever. Schwarzschild black holes, since they are time-reversal invariant, also necessarily come together with a white hole. Realistic black holes, on the contrary, which are formed from collapsing matter, do not have to be paired with white holes.

(Aside: Karl Schwarzschild was German. Schwarz means black, Schild means shield. Probably a family crest. It’s got nothing to do with children.)

But there are many things we don’t understand about black holes, most prominently how they handle information of the matter that falls in. Solving the black hole information loss problem requires that information finds a way out of the black hole, and this could be done for example by flipping a black hole over to a white hole. In this case the collapse would not complete, and instead the black hole would burst, releasing all that it had previously swallowed.

It’s an intriguing and simple option. This black-to-white-hole transition has been discussed in the literature for some while, recently by Rovelli and Vidotto in the Planck star idea. It’s also the subject of a paper from last week by Barcelo and Carballo-Rubio.

Is this a plausible solution to the black hole information loss problem?

It is certainly possible to join part of the black hole solution with part of the white hole solution. But doing this brings some problems.

The first problem is that at the junction the matter must get a kick that transfers it from one state into the other. This kick cannot be achieved by any known physics – we know this from the singularity theorems. There isn’t anything in the known physics that can prevent a black hole from collapsing entirely once the horizon is formed. Whatever makes this kick hence needs to violate one of the energy conditions; it must be new physics.

Something like this could happen in a region with quantum gravitational effects. But this region is normally confined to deep inside the black hole. A transition to a white hole could therefore happen, but only if the black hole is very small, for example because it has evaporated for a long time.

But this isn’t the only problem.

Before we think about the stability of black holes, let us think about a simpler question. Why doesn’t dough unmix into eggs and flour and sugar neatly separated? Because that would require an entropy decrease. The unmixing can happen, but it’s exceedingly unlikely, hence we never see it.

A black hole too has entropy. It has indeed enormous entropy. It saturates the possible entropy that can be contained within a closed surface. If matter collapses to a black hole, that’s a very likely process to happen. Consequently, if you time-reverse this collapse, you get an exceedingly unlikely process. This solution exists, but it’s not going to happen unless the black hole is extremely tiny, close by the Planck scale.
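For reference, the entropy in question is the Bekenstein–Hawking entropy, which grows with the horizon area (a standard formula, added here for concreteness):

$$S_{\rm BH} = \frac{k_B c^3 A}{4 G \hbar} = k_B\,\frac{A}{4\,\ell_P^2}.$$

A solar-mass black hole carries an entropy of order $10^{77}\,k_B$, vastly more than the matter that collapsed to form it, which is why the time-reversed process amounts to such an enormous entropy decrease.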

It is possible that the white hole which a black hole supposedly turns into is not the exact time-reverse, but instead another solution that further increases entropy. But in that case I don’t know where this solution comes from. And even so I would suspect that the kick required at the junction must be extremely finetuned. And either way, it’s not a problem I’ve seen addressed in the literature. (If anybody knows a reference, please let me know.)

In a paper written for the 2016 Awards for Essays on Gravitation, Haggard and Rovelli make an argument in favor of their idea, but instead they just highlight the problem with it. They claim that small quantum fluctuations around the semi-classical limit which is General Relativity can add up over time, eventually resulting in large deviations. Yes, this can happen. But the probability that this happens is tiny, otherwise the semi-classical limit wouldn’t be the semi-classical limit.

The most likely thing to happen instead is that quantum fluctuations average out to give back the semi-classical limit. Hence, no white-hole transition. For the black-to-white-hole transition one would need quantum fluctuations to conspire together in just the right way. That’s possible. But it’s exceedingly unlikely.

In the other recent paper the authors find a surprisingly large transition rate for black to white holes. But they use a highly symmetrized configuration with very few degrees of freedom. This must vastly overestimate the probability for transition. It’s an interesting mathematical example, but it has very little to do with real black holes out there.

In summary: That black holes transition to white holes and in this way release information is an idea appealing because of its simplicity. But I remain unconvinced because I am missing a good argument demonstrating that such a process is likely to happen.

Tuesday, July 12, 2016

Pulsars could probe black hole horizons

The first antenna of MeerKAT,
a SKA precursor in South Africa.
[Image Source.]

It’s hard to see black holes – after all, their defining feature is that they swallow light. But it’s also hard to discourage scientists from trying to shed light on mysteries. In a recent paper, a group of researchers from Long Island University and Virginia Tech have proposed a new way to probe the near-horizon region of black holes and, potentially, quantum gravitational effects.

    Shining Light on Quantum Gravity with Pulsar-Black Hole Binaries
    John Estes, Michael Kavic, Matthew Lippert, John H. Simonetti
    arXiv:1607.00018 [hep-th]

The idea is simple and yet promising: Search for a binary system in which a pulsar and a black hole orbit around each other, then analyze the pulsar signal for unusual fluctuations.

A pulsar is a rapidly rotating neutron star that emits a focused beam of electromagnetic radiation. This beam goes in the direction of the poles of the magnetic field, and is normally not aligned with the neutron star’s axis of rotation. The beam therefore spins with a regular period like a lighthouse beacon. If Earth is located within the beam’s reach, our telescopes receive a pulse every time the beam points in our direction.

Pulsar timing can be extremely precise. We know some pulsars that have been flashing for decades every couple of milliseconds to a precision of a few microseconds. This high regularity allows astrophysicists to search for signals which might affect the timing. Fluctuations of space-time itself, for example, would increase the pulsar-timing uncertainty, a method that has been used to derive constraints on the stochastic gravitational wave background. And if a pulsar is in a binary system with a black hole, the pulsar’s signal might scrape by the black hole and thus encode information about the horizon which we can catch on Earth.
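As a toy illustration of the timing idea (my own sketch with made-up numbers, not the authors’ analysis): one fits a simple timing model to the measured pulse arrival times and looks for excess scatter, or structure, in the residuals.

```python
# Toy pulsar-timing sketch: compare timing residuals for pure measurement noise
# with an added slowly accumulating ("red") noise component, standing in for
# whatever extra fluctuations one is hunting for. All numbers are invented.
import numpy as np

rng = np.random.default_rng(1)
P = 0.005                       # assumed pulse period: 5 ms
t = np.arange(1000) * 60.0      # one time-of-arrival per minute, in seconds
toa_white = t + rng.normal(0.0, 1e-6, t.size)                    # ~1 microsecond jitter
toa_red = toa_white + np.cumsum(rng.normal(0.0, 2e-7, t.size))   # extra drifting noise

def rms_residual(toa, period):
    """Timing residuals after fitting a simple (epoch + period) model."""
    n = np.round(toa / period)        # integer pulse number of each arrival
    res = toa - n * period            # raw residual against the assumed period
    n0 = n - n.mean()
    res -= np.polyval(np.polyfit(n0, res, 1), n0)   # absorb small period/epoch errors
    return np.std(res)

print(f"white noise only : {rms_residual(toa_white, P)*1e6:.2f} microseconds")
print(f"with extra noise : {rms_residual(toa_red, P)*1e6:.2f} microseconds")
```

Real pulsar-timing analyses of course use far more elaborate models (spin-down, orbital motion, dispersion), but the logic is the same: residuals that the timing model cannot absorb carry the signal.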


No such pulsar-black hole binaries are known to date. But upcoming experiments like eLISA and the Square Kilometer Array (SKA) will almost certainly detect new pulsars. In their paper, the authors estimate that SKA might observe up to 100 new pulsar-black hole binaries, and they put the probability that a newly discovered system would have a suitable orientation at roughly one in a hundred. If they are right, the SKA would have a good chance to find a promising binary.

Much of the paper is dedicated to arguing that the timing accuracy of such a binary pulsar could carry information about quantum gravitational effects. This is not impossible but speculative. Quantum gravitational effects are normally expected to be strong towards the black hole singularity, ie well inside the black hole and hidden from observation. Naïve dimensional estimates reveal that quantum gravity should be unobservably small in the horizon area.

However, this argument has recently been questioned in the aftermath of the firewall controversy surrounding black holes, because one solution to the black hole firewall paradox is that quantum gravitational effects can stretch over much longer distances than the dimensional estimates lead one to expect. Steve Giddings has long been a proponent of such long-distance fluctuations, and scenarios like black hole fuzzballs, or Dvali’s Bose-Einstein Computers also lead to horizon-scale deviations from general relativity. It is hence something that one should definitely look for.

Previous proposals to test the near-horizon geometry were based on measurements of gravitational waves from merger events or the black hole shadow, each of which could reveal deviations from general relativity. However, so far these were quite general ideas lacking quantitative estimates. To my knowledge, this paper is the first to demonstrate that it’s technologically feasible.

Michael Kavic, one of the authors of this paper, will attend our September conference on “Experimental Search for Quantum Gravity.” We’re still planning to live-stream the talks, so stay tuned and you’ll get a chance to listen in.

Monday, July 04, 2016

Why the LHC is such a disappointment: A delusion by name “naturalness”

Naturalness, according to physicists.

Before the LHC turned on, theoretical physicists had high hopes the collisions would reveal new physics besides the Higgs. The chances of that happening get smaller by the day. The possibility still exists, but the absence of new physics so far has already taught us an important lesson: Nature isn’t natural. At least not according to theoretical physicists.

The reason that many in the community expected new physics at the LHC was the criterion of naturalness. Naturalness, in general, is the requirement that a theory should not contain dimensionless numbers that are either very large or very small. If a theory does contain such numbers, theorists complain that they are “finetuned” and regard the theory as contrived and hand-made, not to say ugly.

Technical naturalness (originally proposed by ‘t Hooft) is a formalized version of naturalness which is applied in the context of effective field theories in particular. Since you can convert any number much larger than one into a number much smaller than one by taking its inverse, it’s sufficient to consider small numbers in the following. A theory is technically natural if all suspiciously small numbers are protected by a symmetry. The standard model is technically natural, except for the mass of the Higgs.

The Higgs is the only (fundamental) scalar we know and, unlike all the other particles, its mass receives quantum corrections of the order of the cutoff of the theory. The cutoff is assumed to be close to the Planck energy – which means the estimated mass is some 17 orders of magnitude larger than the observed mass. This too-large mass of the Higgs could be remedied simply by subtracting a similarly large term. This term however would have to be delicately chosen so that it almost, but not exactly, cancels the huge Planck-scale contribution. It would hence require finetuning.
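
To see how delicate that cancellation would have to be, here is a rough order-of-magnitude sketch (my own, with loop factors and signs ignored): with the cutoff at the Planck energy, the subtracted term has to cancel the quantum correction to the mass square to roughly one part in 10^34 to leave the observed 125 GeV.

    # Order-of-magnitude sketch of the required finetuning (loop factors, signs ignored).
    cutoff = 1.2e19      # assumed cutoff: Planck energy in GeV
    m_higgs = 125.0      # observed Higgs mass in GeV

    correction = cutoff ** 2    # quantum correction to the mass square, ~1.4e38 GeV^2
    observed = m_higgs ** 2     # observed mass square, ~1.6e4 GeV^2

    print(f"cancellation required to one part in {correction / observed:.0e}")   # ~1e34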

In the framework of effective field theories, a theory that is not natural is one that requires a lot of finetuning at high energies to get the theory at low energies to work out correctly. The degree of finetuning can be, and has been, quantified in various measures of naturalness. Finetuning is thought of as unacceptable because the theory at high energy is presumed to be more fundamental. The physics we find at low energies, so the argument goes, should not be highly sensitive to the choices we make for that more fundamental theory.

Until a few years ago, most high energy particle theorists therefore would have told you that the apparent need to finetune the Higgs mass means that new physics must appear near the energy scale where the Higgs is produced. The new physics, for example supersymmetry, would avoid the finetuning.

There’s a standard tale they have about the use of naturalness arguments, which goes somewhat like this:

1) The electron mass isn’t natural in classical electrodynamics, and if one wants to avoid finetuning this means new physics has to appear at around 70 MeV. Indeed, new physics appears even earlier in the form of the positron, rendering the electron mass technically natural.

2) The difference between the masses of the neutral and charged pion is not natural because it’s suspiciously small. To prevent finetuning one estimates new physics must appear around 700 MeV, and indeed it shows up in the form of the rho meson. (A back-of-the-envelope version of these first two estimates follows below.)

3) The lack of flavor changing neutral currents in the standard model means that a parameter which could a priori have been anything must be very small. To avoid finetuning, the existence of the charm quark is required. And indeed, the charm quark shows up in the estimated energy range.
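
Here is the promised back-of-the-envelope version of the first two estimates (standard textbook reasoning, not tied to any particular paper): requiring the classical electromagnetic self-energy of the electron to stay below its mass puts new physics below roughly m_e/α, and attributing the charged-neutral pion mass-square difference to an electromagnetic contribution of order (3α/4π)Λ² points to a scale of several hundred MeV.

    # Back-of-the-envelope naturalness estimates (standard textbook versions).
    from math import pi, sqrt

    alpha = 1 / 137.036      # fine-structure constant
    m_e = 0.511              # electron mass, MeV
    m_pi_plus = 139.57       # charged pion mass, MeV
    m_pi_zero = 134.98       # neutral pion mass, MeV

    # 1) Electron: the classical self-energy, of order alpha * Lambda, stays below m_e
    #    only if the cutoff Lambda lies below about m_e / alpha.
    print(f"electron estimate: new physics below ~{m_e / alpha:.0f} MeV")   # ~70 MeV

    # 2) Pion: attribute m_pi+^2 - m_pi0^2 to a contribution of order
    #    (3 alpha / 4 pi) * Lambda^2 and solve for Lambda.
    delta_m2 = m_pi_plus**2 - m_pi_zero**2
    cutoff = sqrt(4 * pi * delta_m2 / (3 * alpha))
    print(f"pion estimate    : new physics below ~{cutoff:.0f} MeV")   # ~850 MeV, cf. the rho at ~775 MeV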

Of these three examples, only the last one was an actual prediction (Glashow, Iliopoulos, and Maiani, 1970). To my knowledge this is the only prediction that technical naturalness has ever given rise to – the other two examples are post-dictions.

Not exactly a great score card.

But well, given that the standard model – in hindsight – obeys this principle, it seems reasonable enough to extrapolate it to the Higgs mass. Or does it? Seeing that the cosmological constant, the only other known example where the Planck mass comes in, isn’t natural either, I am not very convinced.

A much larger problem with naturalness is that it’s a circular argument and thus a merely aesthetic criterion. Or, if you prefer, a philosophical criterion. You cannot make a statement about the likelihood of an occurrence without a probability distribution. And that distribution already necessitates a choice.

In the currently used naturalness arguments, the probability distribution is assumed to be uniform (or at least approximately uniform) over a range that can be normalized to one by dividing by suitable powers of the cutoff. Any other type of distribution, say, one that is sharply peaked around small values, would require the introduction of such a small value in the distribution already. But such a small value justifies itself by the probability distribution just like a number close to one justifies itself by its probability distribution.

Naturalness, hence, becomes a chicken-and-egg problem: Put in the number one, get out the number one. Put in 0.00004, get out 0.00004. The only way to break that circle is to just postulate that some number is somehow better than all other numbers.
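
A minimal numerical illustration of this circularity (my own, with made-up numbers): whether a parameter of, say, 0.00004 counts as improbable depends entirely on the distribution you assume for it.

    # Minimal illustration (made-up numbers): the "improbability" of a small parameter
    # depends entirely on the assumed probability distribution.
    from math import log10

    x = 4e-5   # some dimensionless parameter

    # Uniform prior on (0, 1]: probability of drawing a value this small or smaller.
    print(f"uniform on (0,1]        : P(value <= x) = {x:.0e}")   # looks finetuned

    # Log-uniform prior on [1e-30, 1]: same question, very different verdict.
    lower = 1e-30
    p_log = (log10(x) - log10(lower)) / (0 - log10(lower))
    print(f"log-uniform on [1e-30,1]: P(value <= x) = {p_log:.2f}")   # looks unremarkable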

The number one is indeed a special number in that it’s the unit element of the multiplication group. One can try to exploit this to come up with a mechanism that prefers a uniform distribution with an approximate width of one by introducing a probability distribution on the space of probability distributions, leading to a recursion relation. But that just leaves one to explain why that particular mechanism.

Another way to see that this can’t solve the problem is that any such mechanism will depend on the basis in the space of functions. E.g., you could try to single out a probability distribution by asking that it’s the same as its Fourier transform. But the Fourier transform is just one of infinitely many basis transformations in the space of functions. So again, why exactly this one?

Or you could try to introduce a probability distribution on the space of transformations among bases of probability distributions, and so on. Indeed, I’ve played around with this for a while. But in the end you are always left with an ambiguity: either you have to choose the distribution, or the basis, or the transformation. It’s just pushing around the bump under the carpet.

The basic reason there’s no solution to this conundrum is that you’d need another theory for the probability distribution, and that theory per assumption isn’t part of the theory for which you want the distribution. (It’s similar to the issue with the meta-law for time-varying fundamental constants, in case you’re familiar with this argument.)

In any case, whether you buy my conclusion or not, it should give you pause that high energy theorists don’t ever address the question where the probability distribution comes from. Suppose there indeed was a UV-complete theory of everything that predicted all the parameters in the standard model. Why then would you expect the parameters to be stochastically distributed to begin with?

This lacking probability distribution, however, isn’t my main issue with naturalness. Let’s just postulate that the distribution is uniform and admit it’s an aesthetic criterion, alrighty then. My main issue with naturalness is that it’s a fundamentally nonsensical criterion.

Any theory that we can conceive of which describes nature correctly must necessarily contain hand-picked assumptions which we have chosen “just” to fit observations. If that wasn’t so, all we’d have left to pick assumptions would be mathematical consistency, and we’d end up in Tegmark’s mathematical universe. In the mathematical universe then, we’d no longer have to choose a consistent theory, ok. But we’d instead have to figure out where we are, and that’s the same question in green.

All our theories contain lots of assumptions like Hilbert spaces and Lie algebras and Hausdorff measures and so on. For none of these is there any explanation other than “it works.” In the space of all possible mathematics, the selection of this particular math is infinitely fine-tuned already – and it has to be, for otherwise we’d be lost again in Tegmark space.

The mere idea that we can justify the choice of assumptions for our theories in any other way than requiring them to reproduce observations is logical mush. The existing naturalness arguments single out a particular type of assumption – parameters that take on numerical values – but what’s worse about this hand-selected assumption than any other hand-selected assumption?

This is not to say that naturalness is always a useless criterion. It can be applied in cases where one knows the probability distribution, for example for the typical distances between stars or the typical quantum fluctuations in the early universe, etc. I also suspect that it is possible to find an argument for the naturalness of the standard model that does not necessitate postulating a probability distribution, but I am not aware of one.

It’s somewhat of a mystery to me why naturalness has become so popular in theoretical high energy physics. I’m happy to see it go out of the window now. Keep your eyes open in the next couple of years and you’ll witness that turning point in the history of science when theoretical physicists stopped dictating to nature what’s supposedly natural.

Friday, June 24, 2016

Where can new physics hide?

Also an acronym for “Not Even Wrong.”

The year is 2016, and physicists are restless. Four years ago, the LHC confirmed the Higgs-boson, the last outstanding prediction of the standard model. The chances were good, so they thought, that the LHC would also discover other new particles – naturalness seemed to demand it. But their hopes were disappointed.

The standard model and general relativity do a great job, but physicists know this can’t be it. Or at least they think they know: the theories are incomplete, not only disagreeable, staring each other in the face without talking, but inadmissibly wrong, giving rise to paradoxes with no known cure. There has to be more to find, somewhere. But where?

The hiding places for novel phenomena are getting smaller. But physicists haven’t yet exhausted their options. Here are the most promising areas where they currently search:

1. Weak Coupling

Particle collisions at high energies, like those reached at the LHC, can produce all existing particles up to the energy that the colliding particles had. How many of the new particles are produced, however, depends on how strongly they couple to the particles that were brought to collision (for the LHC that’s protons, or rather their constituents, quarks and gluons). A particle that couples very weakly might be produced so rarely that it could have gone unnoticed so far.

Physicists have proposed many new particles which fall into this category because weakly interacting stuff generally looks a lot like dark matter. Most notably there are the weakly interacting massive particles (WIMPs), sterile neutrinos (neutrinos which don’t couple to the known leptons), and axions (proposed to solve the strong CP problem, and also a dark matter candidate).

These particles are being looked for both by direct detection measurements – monitoring large tanks in underground mines for rare interactions – and by looking out for unexplained astrophysical processes that could make for an indirect signal.

2. High Energies

If the particles are not of the weakly interacting type, we would have noticed them already, unless their mass is beyond the energy that we have reached so far with particle colliders. In this category we find all the supersymmetric partner particles, which are much heavier than the standard model particles because supersymmetry is broken. Excitations of particles that exist in models with compactified extra dimensions could also hide at high energies. These excitations are similar to higher harmonics of a string and show up at certain discrete energy levels which depend on the size of the extra dimension.
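
For illustration (a sketch of my own, with an arbitrarily chosen size of the extra dimension, not a prediction): in the simplest case the excitations of a particle of mass m0 appear at masses m_n = sqrt(m0^2 + (n*hbar*c/R)^2), so the spacing of the levels is set by the inverse size R of the extra dimension.

    # Sketch of a Kaluza-Klein tower for one compactified extra dimension
    # (R is an arbitrary assumption, chosen here to put the first level in LHC range).
    from math import sqrt

    hbar_c = 0.1973e-15   # hbar * c in GeV * m
    R = 1e-19             # assumed size of the extra dimension in meters
    m0 = 0.0              # mass of the ground-state particle in GeV

    for n in range(1, 4):
        m_n = sqrt(m0**2 + (n * hbar_c / R)**2)
        print(f"level n={n}: m ~ {m_n:.0f} GeV")   # ~2000, 3900, 5900 GeV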

Strictly speaking, it isn’t the mass that is relevant to the question whether a particle can be discovered, but the energy necessary to produce the particle, which includes binding energy. An interaction like the strong nuclear force, for example, displays “confinement,” which means that it takes a lot of energy to tear quarks apart even though their masses are not all that large. Hence, quarks could have constituents – often called “preons” – with an interaction – dubbed “technicolor” – similar to the strong nuclear force. The most obvious models of technicolor ran into conflict with data decades ago. The idea, however, isn’t entirely dead, and though the surviving models aren’t presently particularly popular, some variants are still viable.

These phenomena are being looked for at the LHC and also in highly energetic cosmic ray showers.

3. High Precision

High precision tests of standard model processes are complementary to high energy measurements. They can be sensitive to the tiniest effects stemming from virtual particles with energies too high to be produced at colliders, but which still make a contribution at lower energies due to quantum effects. Examples are proton decay, neutron-antineutron oscillation, the muon g-2, the neutron electric dipole moment, and kaon oscillations. There are existing experiments for all of these, searching for deviations from the standard model, and the precision of these measurements is constantly increasing.

A somewhat different high precision test is the search for neutrinoless double-beta decay which would demonstrate that neutrinos are Majorana-particles, an entirely new type of particle. (When it comes to fundamental particles that is. Majorana particles have recently been produced as emergent excitations in condensed matter systems.)

4. Long ago

In the early universe, matter was much denser and hotter than we can ever hope to achieve in our particle colliders. Hence, signatures left over from this time can deliver a bounty of new insights. The temperature fluctuations in the cosmic microwave background (B-modes and non-Gaussianities) may be able to test scenarios of inflation or its alternatives (like phase transitions from a non-geometric phase), whether our universe had a big bounce instead of a big bang, and – with some optimism – even whether gravity was quantized back then.

5. Far away

Some signatures of new physics appear at long distances rather than at short ones. An outstanding question, for example, is the shape of the universe. Is it really infinitely large, or does it close back onto itself? And if it does, how? One can study these questions by looking for repeating patterns in the temperature fluctuations of the cosmic microwave background (CMB). If we live in a multiverse, it might occasionally happen that two universes collide, and this too would leave a signal in the CMB.

New insights might also hide in some of the well-known problems with the cosmological concordance model, such as the galaxy cusps that come out too pronounced in simulations, or the dwarf galaxies that come out too numerous to fit observations. It is widely believed that these problems are numerical issues or due to a lack of understanding of astrophysical processes, and not pointers to something fundamentally new. But who knows?

Another novel phenomenon that would become noticeable at long distances is a fifth force, which would lead to subtle deviations from general relativity. This might have all kinds of effects, from violations of the equivalence principle to a time-dependence of dark energy. Hence, there are experiments testing the equivalence principle and the constancy of dark energy to ever higher precision.

6. Right here

Not all experiments are huge and expensive. While tabletop discoveries have become increasingly unlikely simply because we’ve pretty much tried all that could be done, there are still areas where small-scale lab experiments reach into unknown territory. This is the case notably in the foundations of quantum mechanics, where nanoscale devices, single-photon sources and detectors, and increasingly sophisticated noise-control techniques have enabled previously impossible experiments. Maybe one day we’ll be able to solve the dispute over the “correct” interpretation of quantum mechanics simply by measuring which one is right.

So, physics isn’t over yet. It has become more difficult to test new fundamental theories, but we are pushing the limits in many currently running experiments.

[This post previously appeared on Starts With a Bang.]

Wissenschaft auf Abwegen

On Monday I was in Regensburg, where I gave a public lecture on the topic “Wissenschaft auf Abwegen” (science gone astray) for a series titled “Was ist Wirklich?” (what is real?). The whole thing is now on YouTube. The video consists of about 30 minutes of talk, followed by an hour of discussion. All in German. Only for true fans ;)

Saturday, June 18, 2016

New study finds no sign of entanglement with other universes

Somewhere in the multiverse
you’re having a good day.
The German Autobahn is famous for its lack of speed limits, and yet the greatest speed limit of all comes from a German: Nothing, Albert Einstein taught us, is allowed to travel faster than light. This doesn’t prevent our ideas from racing, but sometimes it prevents us from ticketing them.

If we live in an eternally inflating multiverse that contains a vast number of universes, then the other universes recede from us faster than light. We are hence “causally disconnected” from the rest of the multiverse, separated from the other universes by the ongoing exponential expansion of space, unable to ever make a measurement that could confirm their existence. It is this causal disconnect that has led multiverse critics to complain the idea isn’t within the realm of science.

There are however some situations in which a multiverse can give rise to observable consequences. One is that our universe might in the past have collided with another universe, which would have left a tell-tale signature in the cosmic microwave background. Unfortunately, no evidence for this has been found.

Another proposal for how to test the multiverse is to exploit the subtle non-locality that quantum mechanics gives rise to. If we live in an ensemble of universes, and these universes started out in an entangled quantum state, then we might be able to today detect relics of their past entanglement.

This idea was made concrete by Richard Holman, Laura Mersini-Houghton, and Tomo Takahashi ten years ago. In their model (hep-th/0611223, hep-th/0612142), the original entanglement present among universes in the landscape decays and effectively leaves a correction to the potential that gives rise to inflation in our universe. This corrected potential in turn affects observables that we can measure today.

The particular way in which Holman and Mersini-Houghton include entanglement in the landscape isn’t by any means derived from first principles. It is a phenomenological construction that implicitly makes many assumptions about the way quantum effects are realized on the landscape. But, hey, it’s a model that makes predictions, and in theoretical high energy physics today that’s something to be grateful for.

They predicted back then that such an entanglement-corrected cosmology would in particular affect the physics on very large scales, giving rise to a modulation of the power spectrum that makes the cold spot a more likely appearance, a suppression of the power at large angular scales, and an alignment in the directions in which large structures move – the so-called “dark flow.” The tentative evidence for a dark flow, reported in 2008, had gone by 2013. But this disagreement with the data didn’t do much to dent the popularity of the model in the press.

In a recent paper, William Kinney from the University at Buffalo put to test the multiverse-entanglement with the most recent cosmological data:
    Limits on Entanglement Effects in the String Landscape from Planck and BICEP/Keck Data
    William H. Kinney
    arXiv:1606.00672 [astro-ph.CO]
The brief summary is that not only has he found no evidence for the entanglement-modification, he has also ruled out the formerly proposed model for two general types of inflationary potentials. The first, a generic exponential inflation, is by itself incompatible with the data, and adding the entanglement correction doesn’t help to make it fit. The second, Starobinsky inflation, is by itself a good fit to the data, but the entanglement correction spoils the fit.

Much to my puzzlement, his analysis also shows that some of the predictions of the original model (such as the modulation of the power spectrum) weren’t predictions to begin with, because Kinney found in his calculation that there are choices of parameters for which these effects don’t appear at all.

Leaving aside that this sheds a rather odd light on the original predictions, it’s not even clear exactly what has been ruled out here. What Kinney’s analysis does is to exclude a particular form of the effective potential for inflation (the one with the entanglement modification). This potential is, in the model by Holman and Mersini-Houghton, a function of the original potential (the one without the entanglement correction). Rather than ruling out the entanglement-modification, I can hence interpret this result to mean that the original potential just wasn’t the right one.

Or, in other words, how am I to know that one can’t find some other potential that will fit the data after adding the entanglement correction? The only difficulty I see in this would be to ensure that the uncorrected potential still leads to eternal inflation.

As if to top off the story of an unfalsifiable idea that made predictions which weren’t, one of the authors who proposed the entanglement model, Laura Mersini-Houghton, is apparently quite unhappy with Kinney’s paper and is trying to use an intellectual property claim to get it removed from the arXiv (see comments for details). I will resist the temptation to comment on the matter and simply direct you to the Wikipedia entry on the Streisand Effect. Dear Internet, please do your job.

For better or worse, I have in the last years been dragged into a discussion about what is and isn’t science, which has forced me to think more about the multiverse than I and my infinitely many copies believe is good for their sanity. After this latter episode, the status is that I side with Joe Silk who captured it well: “[O]ne can always find inflationary models to explain whatever phenomenon is represented by the flavour of the month.”

Monday, June 13, 2016

String phenomenology of the somewhat different kind

[Cat’s cradle. Image Source.]
Ten years ago, I didn’t take the “string wars” seriously. To begin with, referring to such an esoteric conflict as a “war” seems disrespectful to the millions caught in actual wars. In comparison to their suffering, it’s hard to take an academic dispute seriously.

Leaving aside my discomfort with the nomenclature, the focus on string theory struck me as odd. String theory as a research area stands out in hep-th and gr-qc merely because of the large number of followers, not by the supposedly controversial research practices. For anybody working in the field it is apparent that string theorists don’t differ in their single-minded focus from physicists in other disciplines. Overspecialization is a common disease of academia, but one that necessarily goes along with division of labor, and often it is an efficient route to fast progress.

No, I thought back then, string theory wasn’t the disease, it was merely a symptom. The underlying disease was one that would surely soon be recognized and addressed: Theoreticians – as scientists whose most-used equipment is their own brain – must be careful to avoid systematic bias introduced by their apparatuses. In other words, scientific communities, and especially those which lack timely feedback by data, need guidelines to avoid social and cognitive biases.

This is so obvious that it came as a surprise to me when, in 2006, everybody was piling on Lee Smolin for pointing out what everybody knew anyway: that string theorists, lacking experimental feedback for decades, had drifted off into a math bubble of questionable relevance for the description of nature. It’s somewhat ironic that, from my personal experience, the situation is actually worse in Loop Quantum Gravity, an approach pioneered, among others, by Lee Smolin. At least the math used by string theorists seems to be good for something. The same cannot be said about LQG.

Ten years later, it is clear that I was wrong in thinking that just drawing attention to the problem would seed a solution. Not only has the situation not improved, it has worsened. We now have some theoretical physicists who argue that we should alter the scientific method so that the success of a theory can be assessed by means other than empirical evidence. This idea, which has sprung up in the philosophy community, isn’t all that bad in principle. In practice, however, it will merely serve to exacerbate social streamlining: If theorists can draw on criteria other than the ability of a theory to explain observations, the first criterion they’ll take into account is aesthetic value, and the second is popularity with their colleagues. Nothing good can come out of this.

And nothing good has come out of it, nothing has changed. The string wars clearly were more interesting for sociologists than they were for physicists. In the last couple of months several articles have appeared which comment on various aspects of this episode, which I’ve read and want to briefly summarize for you.

First, there is
    Collective Belief, Kuhn, and the String Theory Community
    Weatherall, James Owen and Gilbert, Margaret
    philsci-archive:11413
This paper is a very Smolin-centric discussion of whether string theorists are exceptional in their group beliefs. The authors argue that, no, actually string theorists just behave like normal humans and “these features seem unusual to Smolin not because they are actually unusual, but because he occupies an unusual position from which to observe them.” He is unusual, the authors explain, for having worked on string theory, but then deciding to not continue in the field.

It makes sense, the authors write, that people whose well-being to some extent depends on the acceptance by the group will adapt to the group:
“Expressing a contrary view – bucking the consensus – is an offense against the other members of the community… So, irrespective of their personal beliefs, there are pressures on individual scientists to speak in certain ways. Moreover, insofar as individuals are psychologically disposed to avoid cognitive dissonance, the obligation to speak in certain ways can affect one’s personal beliefs so as to bring them into line with the consensus, further suppressing dissent from within the group.”
Furthermore:
“As parties to a joint commitment, members of the string theory community are obligated to act as mouthpieces of their collective belief.”
I actually thought we have known this since 1895, when Le Bon published his “Study of the Popular Mind.”

The authors of the paper then point out that it’s normal for members of a scientific community to not jump ship at the slightest indication of conflicting evidence because often such evidence turns out to be misleading. It didn’t become clear to me what evidence they might be referring to; supposedly it’s non-empirical.

They further argue that a certain disregard for what is happening outside one’s own research area is also normal: “Science is successful in part because of a distinctive kind of focused, collaborative research,” and due to their commitment to the agenda “participants can be expected to resist change with respect to the framework of collective beliefs.”

This is all reasonable enough. Unfortunately, the authors entirely miss the main point, the very reason for the whole debate. The question isn’t whether string theorists’ behavior is that of normal humans – I don’t think that was ever in doubt – but whether that “normal human behavior” is beneficial for science. Scientific research requires, in a very specific sense, non-human behavior. It’s not normal for individuals to disregard subjective assessments and to not pay attention to social pressure. And yet, that is exactly what good science would require.

The second paper is
This paper is basically a summary of the string wars that focuses on the question whether or not string theory can be considered science. This “demarcation problem” is a topic that philosophers and sociologists love to discuss, but to me it really isn’t particularly interesting how you classify some research area; to me the question is whether it’s good for something. This is a question which should be decided by the community, but as long as decision making is influenced by social pressures and cognitive biases I can’t trust the community’s judgement.

The article has a lot of fun quotations from very convinced string theorists, for example by David Gross: “String theory is full of qualitative predictions, such as the production of black holes at the LHC.” I’m not sure what the difference is between a qualitative prediction and no prediction, but either way it’s certainly not a prediction that was very successful. Also nice is John Schwarz claiming that “supersymmetry is the major prediction of string theory that could appear at accessible energies” and that “some of these superpartners should be observable at the LHC.” Lots of coulds and shoulds that didn’t quite pan out.

While the article gives a good overview of the opinions about string theory that were voiced during the 2006 controversy, the authors themselves clearly don’t know the topic they are writing about very well. A particularly odd statement that highlights their skewed perspective is: “String theory currently enjoys a privileged status by virtue of being the dominant paradigm within theoretical physics.”

I find it quite annoying how frequently I encounter this extrapolation from a particular research area – may that be string theory, supersymmetry, or multiverse cosmology – to all of physics. The vast majority of physicists work in fields like quantum optics, photonics, hadronic and nuclear physics, statistical mechanics, atomic physics, solid state physics, low-temperature physics, plasma physics, astrophysics, condensed matter physics, and so on. They have nothing whatsoever to do with string theory, and certainly would be very surprised to hear that it’s “the dominant paradigm.”

In any case, you might find this paper useful if you didn’t follow the discussion 10 years ago.

Finally, there is this paper

The title of the paper doesn’t explicitly refer to string theory, but most of it is also a discussion of the demarcation problem on the example of arXiv trackbacks. (I suspect this paper is a spin-off of the previous paper.)

ArXiv trackbacks, in case you didn’t know, are links to blogposts that show up on some papers’ arXiv pages when the blogpost has referred to the paper. Exactly which blogs get trackbacks, and who decides whether they do, is one of the arXiv’s best-kept secrets. Peter Woit’s blog, infamously, doesn’t show up in the arXiv trackbacks, for the rather spurious reason that he supposedly doesn’t count as an “active researcher.” The paper tells the full 2006 story with lots of quotes from bloggers you are probably familiar with.

The arXiv recently conducted a user survey, among other things about the trackback feature, which makes me think they might have some updates planned.

On the question of who counts as a crackpot, the paper (unsurprisingly) doesn’t come to a conclusion other than noting that scientists deal with the issue by stating “we know one when we see one.” I don’t think there can be any other definition than that. To me the notion of “crackpot” is an excellent example of an emergent feature – it’s a demarcation that the community creates during its operation. Any attempt to come up with a definition from first principles is hence doomed to fail.

The rest of the paper is a general discussion of the role of blogs in science communication, but I didn’t find it particularly insightful. The author comes to the (correct) conclusion that blog content turned out not to have such a short life-time as many feared, but otherwise basically just notes that there are as many ways to use blogs as there are bloggers. But then if you are reading this, you already knew that.

One of the main benefits that I see in blogs isn’t mentioned in the paper at all, which is that blogs support communication between scientific communities that are only loosely connected. In my own research area, I read the papers, hear the seminars, and go to conferences, and I therefore know pretty well what is going on – with or without blogs. But I use blogs to keep up to date in adjacent fields, like cosmology, astrophysics and, to a lesser extent, condensed matter physics and quantum optics. For this purpose I find blogs considerably more useful than popular science news, because the latter often don’t provide a useful amount of detail and commentary, not to mention that they all tend to latch onto the same three papers that made big unsubstantiated claims.

Don’t worry, I haven’t suddenly become obsessed with string theory. I’ve read through these sociology papers mainly because I cannot not write a few paragraphs about the topic in my book. But I promise that’s it from me about string theory for some while.

Update: Peter Woit has some comments on the trackback issue.