Sunday, November 12, 2017

Away Note

I am overseas the coming week, giving a seminar at Perimeter Institute on Tuesday, a colloq in Toronto on Wednesday, and on Thursday I am scheduled to “make sense of mind-blowing physics” with Natalie Wolchover in New York. The latter event, I am told, has a live webcast starting at 6:30 pm Eastern, so dial in if you fancy seeing my new haircut. (Short again.)

Please be warned that things on this blog will go very slowly while I am away. On this occasion I want to remind you that I have comment moderation turned on. This means comments will not appear until I manually approve them. I usually check the queue at least once per day.


(The above image is the announcement for the New York event. Find the seven layout blunders.)

Friday, November 10, 2017

Naturalness is dead. Long live naturalness.

I was elated when I saw that Gian Francesco Giudice announced the “Dawn of the Post-Naturalness Era,” as the title of his recent paper promises. The naturalness craze in particle physics, I thought, might finally come to an end; data had brought reason back to Earth after all.

But disillusionment followed swiftly when I read the paper.

Gian Francesco Giudice is a theoretical physicist at CERN. He is maybe not the most prominent member of his species, but he has been extremely influential in establishing “naturalness” as a criterion to select worthwhile theories of particle physics. Together with Riccardo Barbieri, Giudice wrote one of the pioneering papers on how to quantify naturalness, thereby significantly contributing to the belief that it is a scientific criterion. To date the paper has been cited more than 1000 times.

Giudice was also the first person I interviewed for my upcoming book about the relevance of arguments from beauty in particle physics. It became clear to me quickly, however, that he does not think naturalness is an argument from beauty. Instead, Giudice, like many in the field, believes the criterion is mathematically well-defined. When I saw his new paper, I hoped he’d come around to see the mistake. But I was overly optimistic.

As Giudice makes pretty clear in the paper, he still thinks that “naturalness is a well-defined concept.” I have previously explained why that is wrong, or rather why, if you make naturalness well-defined, it becomes meaningless. A quick walk through the argument goes as follows.

Naturalness in quantum field theories – i.e., theories like the standard model of particle physics – means that a theory at low energies does not sensitively depend on the choice of parameters at high energies. I often hear people say this means that “the high-energy physics decouples.” But note that changing the parameters of a theory is not a physical process. The parameters are whatever they are.

The processes that are physically possible at high energies decouple whenever effective field theories work, pretty much by definition of what it means to have an effective theory. But this is not the decoupling that naturalness relies on. To quantify naturalness you move around between theories in an abstract theory space. This is very similar to moving around in the landscape of the multiverse. Indeed, it is probably not a coincidence that both ideas became popular around the same time, in the mid 1990s.

If you now want to quantify how sensitively a theory at low energy depends on the choice of parameters at high energies, you first have to define the probability for making such choices. This means you need a probability distribution on theory space. Yes, it’s the exact same problem you also have for inflation and in the multiverse.

In most papers on naturalness, however, the probability distribution is left unspecified, which implicitly means one chooses a uniform distribution over an interval of about length 1. The typical justification for this is that once you factor out all dimensionful parameters, you should only have numbers of order 1 left. It is with this assumption that naturalness becomes meaningless, because you have now simply postulated that numbers of order 1 are better than other numbers.

You wanted to avoid arbitrary choices, but in the end you had to make an arbitrary choice. This turns the whole idea ad absurdum.
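
To make concrete what “quantifying naturalness” involves, here is a minimal Python sketch of the kind of sensitivity measure that goes back to Barbieri and Giudice: the logarithmic derivative of a low-energy observable with respect to a high-energy parameter. The toy observable, the parameter values, and the “order 1” verdict are my own illustrative assumptions, not taken from Giudice’s paper.

```python
import numpy as np

def sensitivity(observable, params, i, eps=1e-6):
    """Estimate |d ln O / d ln p_i| with a central finite difference in log-space."""
    p_up, p_dn = params.copy(), params.copy()
    p_up[i] *= (1 + eps)
    p_dn[i] *= (1 - eps)
    return abs(np.log(observable(p_up) / observable(p_dn)) / (2 * eps))

# Toy observable: a low-energy mass^2 obtained from a near-cancellation of
# two large high-energy contributions (a caricature of the Higgs mass problem).
def m_low_sq(p):
    bare, correction = p
    return bare - correction

params = np.array([1.000e4, 0.999e4])   # tuned against each other to about 0.1%
deltas = [sensitivity(m_low_sq, params, i) for i in range(len(params))]
print(deltas)   # both ~1e3; by the usual "order 1" yardstick this counts as "unnatural"
```

Note that declaring a sensitivity of a thousand “unnatural” already presupposes that the parameters should not be tuned against each other more finely than numbers of order 1 allow, which is exactly the hand-picked probability assumption at issue here.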

That you have to hand-select a probability distribution to make naturalness well-defined used to be well-known. One of the early papers on the topic clearly states:
“The “theoretical license” at one’s discretion when making this choice [for the probability distribution] necessarily introduces an element of arbitrariness to the construction.” 
Anderson and Castano, Phys. Lett. B 347:300-308 (1995)

Giudice too mentions “statistical comparisons” on theory space, so I am sure he is aware of the need to define the distribution. He also writes, however, that “naturalness is an inescapable consequence of the ingredients generally used to construct effective field theories.” But of course it is not. If it were, why make it an additional requirement?

(At this point usually someone starts quoting the decoupling theorem. In case you are that person let me say that a) no one has used mass-dependent regularization schemes since the 1980s for good reasons, and b) not only is it questionable to assume perturbative renormalizability, we actually know that gravity isn’t perturbatively renormalizable. In other words, it’s an irrelevant objection, so please let me go on.)

In his paper, Giudice further claims that “naturalness has been a good guiding principle,” which is a strange thing to say about a principle that has led to merely one successful prediction but at least three failed predictions, more if you count other numerical coincidences that physicists obsess about, like the WIMP miracle or gauge coupling unification. The tale of the “good guiding principle” is one of the peculiar myths that get passed around in communities until everyone believes them.

Having said that, Giudice’s paper also contains some good points. He suggests, for example, that the use of symmetry principles in the foundations of physics might have outlived its usefulness. Symmetries might just be emergent at low energies. This is a fairly old idea which goes back at least to the 1980s, but it’s still considered outlandish by most particle physicists. (I discuss it in my book, too.)

Giudice furthermore points out that in case your high energy physics mixes with the low energy physics (commonly referred to as “UV/IR mixing”) it’s not clear what naturalness even means. Since this mixing is believed to be a common feature of non-commutative geometries and quite possibly quantum gravity in general, I have picked people’s brains on this for some years. But I only got shoulder shrugs, and I am none the wiser today. Giudice in his paper also doesn’t have much to say about the consequences other than that it is “a big source of confusion,” on which I totally agree.

But the conclusion that Giudice comes to at the end of his paper seems to be the exact opposite of mine.

I believe what is needed for progress in the foundations of physics is more mathematical rigor. Obsessing about ill-defined criteria like naturalness that don’t even make good working hypotheses isn’t helpful. And it would serve particle physicists well to identify their previous mistakes in order to avoid repeating them. I dearly hope they will not just replace one beauty-criterion by another.

Giudice on the other hand thinks that “we need pure unbridled speculation, driven by imagination and vision.” Which sounds great, except that theoretical particle physics has not exactly suffered from a dearth of speculation. Instead, it has suffered from a lack of sound logic.

Be that as it may, I found the paper insightful in many regards. I certainly agree that this is a time of crisis, but also that this crisis is an opportunity to change for the better. Giudice’s paper is very timely. It is also only moderately technical, so I encourage you to give it a read yourself.

Monday, November 06, 2017

How Popper killed Particle Physics

Popper, upside-down.
Image: Wikipedia.
Popper is dead. Has been since 1994, to be precise. But his philosophy, that a scientific idea needs to be falsifiable, is dead too.

And luckily so, because it was utterly impractical. In practice, scientists can’t falsify theories. That’s because any theory can be amended in hindsight so that it fits new data. Don’t roll your eyes – updating your knowledge in response to new information is an entirely sound scientific procedure.

So, no, you can’t falsify theories. Never could. You could still fit planetary orbits with a quadrillion epicycles or invent a luminiferous aether that just exactly mimics special relativity. Of course no one in their right mind does that. That’s because repeatedly fixed theories become hideously difficult, not to mention hideous, period. What happens instead of falsification is that scientists transition to simpler explanations.

To be fair, I think Popper in his later years backpedaled from his early theses. But many physicists not only still believe in Popper, they also opportunistically misinterpret the original Popper.

Even in his worst moments Popper never said a theory is scientific just because it’s falsifiable. That’s Popper upside-down and clearly nonsense. Unfortunately, upside-down Popper now drives theory-development, both in cosmology and in high energy physics.

It’s not hard to come up with theories that are falsifiable but not scientific. By scientific I mean the theory has a reasonable chance of accurately describing nature. (Strictly speaking it’s not an either/or criterion until one quantifies “reasonable chance” but it will suffice for the present purpose.)

I may predict, for example, that Donald Trump will be shot by an elderly lady before his first term is over. That’s compatible with present knowledge and totally falsifiable. But the chances it’s correct are basically zero, and that makes it a prophecy, not a scientific theory.

The idea that falsifiability is sufficient to make a theory scientific is an argument I hear frequently from amateur physicists. “But you can test it!” they insist. Then they explain how their theory reworks the quantum or what have you. And post their insights in all-caps on my timeline. Indeed, as I am writing this, a comment comes in: “A good idea need only be testable,” says Uncle Al. Sorry, Uncle, but that’s rubbish.

You’d think that scientists know better. But two years ago I sat in a talk by Professor Lisa Randall who spoke about how dark matter killed the dinosaurs. Srsly. This was when I realized the very same mistake befalls professional particle physicists. Upside-down Popper is a widespread malaise.

Randall, you see, has a theory for particle dark matter with some interaction that allows the dark matter to clump within galaxies and form disks similar to normal matter. Our solar system, so the idea goes, periodically passes through the dark matter disk, which then causes extinction events. Or something like that.

Frankly I can’t recall the details, but they’re not so relevant. I’m just telling you this because someone asked “Why these dark matter particles? Why this interaction?” To which Randall’s answer was (I paraphrase) I don’t know but you can test it.

I don’t mean to pick on her specifically, it just so happens that this talk was the moment I understood what’s wrong with the argument. Falsifiability alone doesn’t make a theory scientific.

If the only argument that speaks for your idea is that it’s compatible with present data and makes a testable prediction, that’s not enough. My idea that Trump will get shot is totally compatible with all we presently know. And it does make a testable prediction. But it will not enter the annals of science, and why is that? Because you can effortlessly produce some million similar prophecies.

In the foundations of physics, compatibility with existing data is a high bar to jump, or so they want you to believe. That’s because if you cook up a new theory you first have to reproduce all achievements of the already established theories. This bar you will not jump unless you actually understand the present theories, which is why it’s safe to ignore the all-caps insights on my timeline.

But you can learn how to jump the bar. Granted, it will take you a decade. But after this you know all the contemporary techniques to mass-produce “theories” that are compatible with the established theories and make eternally amendable predictions for future experiments. In my upcoming book, I refer to these techniques as “the hidden rules of physics.”

These hidden rules tell you how to add particles to the standard model and then make it difficult to measure them, or add fields to general relativity and then explain why we can’t see them, and so on. Once you know how to do that, you’ll jump the bar every time. All you have to do then is twiddle the details so that your predictions are just about to become measurable in the next, say, 5 years. And if the predictions don’t work out, you’ll fiddle again.

And that’s what most theorists and phenomenologists in high energy physics live from today.

There are so many of these made-up theories now that the chances any one of them is correct are basically zero. There are infinitely many “hidden sectors” of particles and fields that you can invent and then couple so lightly that you can’t measure them or make them so heavy that you need a larger collider to produce them. The quality criteria are incredibly low, getting lower by the day. It’s a race to the bottom. And the bottom might be at asymptotically minus infinity.

This overproduction of worthless predictions is the theoreticians’ version of p-value hacking. To get away with it, you just never tell anyone how many models you tried that didn’t work as desired. You fumble things together until everything looks nice and then the community will approve. It’ll get published. You can give talks about it. That’s because you have met the current quality standard.  You see this happen both in particle physics and in cosmology and, more recently, also in quantum gravity.
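
For readers who want the analogy with p-value hacking spelled out, here is a small simulation, not from the original post: if you quietly try many “models” on pure noise and report only the one that looks best, something apparently significant shows up almost for free. The number of tries and the 0.05 threshold are illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def best_p_of_many(n_tries, n_samples=50):
    """Try n_tries 'models' on pure-noise data; report only the smallest p-value."""
    pvals = [stats.ttest_1samp(rng.normal(size=n_samples), 0.0).pvalue
             for _ in range(n_tries)]
    return min(pvals)

# Fraction of meta-experiments in which the silently-selected best result looks
# "significant" at p < 0.05, even though every single model is pure noise.
for n_tries in (1, 5, 20):
    hits = np.mean([best_p_of_many(n_tries) < 0.05 for _ in range(1000)])
    print(n_tries, round(hits, 2))   # about 0.05, 0.23, 0.64
```

The arithmetic doesn’t care whether “models” means statistical tests or hidden-sector variants: the more you try and discard without telling anyone, the less the surviving one tells you.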

This nonsense has been going on for so long, no one sees anything wrong with it. And note how very similar this is to the dismal situation in psychology and other life sciences, where abusing statistics had become so common it was just normal practice. How long will it take for theoretical physicists to admit they have problems too?

Some of you may recall the book of philosopher Richard Dawid who claimed that the absence of alternatives speaks for string theory. This argument is wrong of course. To begin with, there are alternatives to string theory, it’s just that Richard conveniently doesn’t discuss them. But what’s more important is that there could be many alternatives that we do not know of. Richard bases his arguments on Bayesian reasoning, and in this case the unknown number of unknown alternatives renders his no-alternative argument unusable.

But a variant of this argument illuminates what speaks against, rather than for, a theory. Let me call it the “Too Many Alternatives Argument.”

In this argument you don’t want to show that the probability for one particular theory is large, but that the probability for any particular theory is small. You can do this even though you still don’t know the total number of alternatives because you know there are at least as many alternatives as the ones that were published. This probabilistic estimate will tell you that the more alternatives have been found, the smaller the chances that any one of them is correct.

Really you don’t need Bayesian mysticism to see the logic, but it makes it sound more sciency. The point is that the easier it is to come up with predictions the lower their predictive value.
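
In case anyone wants the Bayesian mysticism spelled out anyway, here is a minimal sketch. The flat prior, the probability reserved for “some model nobody has written down yet,” and the assumption that every model fits existing data equally well are all illustrative choices of mine, not a calculation from the literature.

```python
# Spreading prior belief over the published alternatives (plus whatever hasn't
# been written down) caps the probability that any single published model is correct.
def posterior_single_model(n_published, p_unwritten=0.5):
    prior_each = (1.0 - p_unwritten) / n_published          # flat prior over published models
    evidence = n_published * prior_each + p_unwritten       # all fit existing data equally well
    return prior_each / evidence

for n in (10, 100, 1000):
    print(n, posterior_single_model(n))   # 0.05, 0.005, 0.0005
```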

Duh, you say. I hear you. How come particle physicists think this is good scientific practice? It’s because of upside-down Popper! They make falsifiable predictions – and they believe that’s enough.

Yes, I know. I’m well on the way to making myself the most-hated person in high energy physics. It’s no fun. But look, even psychologists have addressed their problems by introducing better quality criteria. If they can do it, so can we.

At least I hope we can.

Thursday, November 02, 2017

Book Review: Max Tegmark “Our Mathematical Universe”

Our Mathematical Universe: My Quest for the Ultimate Nature of Reality
Knopf (January 2014)

Max Tegmark just published his second book, “Life 3.0.” I gracefully declined reviewing it, seeing that three years weren’t sufficient to finish his first book. But thus reminded of my shortfall, I made another attempt and finally got to the end. So here’s a late review or, if you haven’t made it through in three years either, a summary.

Tegmark is a cosmologist at MIT and his first book, “Our Mathematical Universe,” is about the idea that the world is not merely described by mathematics, but actually made of mathematics.

I told you ten years ago why this is nonsense and haven’t changed my mind since. It was therefore pretty clear I wouldn’t be fond of Max’s message.

But. Well. People like Max don’t grow on trees. I have much sympathy for his free-range ideas and also, even though I’ve met him several times, I never really figured out what he was tenured for. Probably not the mathematical universe. Once upon a time, I was sure, he must have done actual physics.

Indeed, as the book reveals, Tegmark did CMB analysis before everyone else did it. This solid scientific ground is also where he begins his story: With engaging explanations of contemporary cosmology, the evolution of the universe, general relativity, and all that. He then moves on to inflation, eternal inflation and the multiverse, to quantum mechanics in general and the many worlds interpretation in particular. After this, he comes to the book’s main theme, the mathematical universe hypothesis. At that point we’re at page 250 or so.

Tegmark writes well. He uses helpful analogies and sprinkles some personal anecdotes which makes the topic more digestible. The book also has a lot of figures, about half of which are helpful. I believe I have seen most of them on his slides.

Throughout the book, Tegmark is careful to point out where he leaves behind established science and crosses over into speculation. However, by extrapolating from the biased sample of people-he-spends-time-with, Tegmark seems to have come to believe the multiverse is much more accepted than is the case. Still, it is certainly a topic that is much discussed and worth writing about.

But even though Tegmark’s story flows nicely, I got stuck over and over again. The problem isn’t that the book is badly written. The problem is that, to paraphrase John Mellencamp, the book goes on long after the thrill of reading is gone.

Already in the first parts of the book, Tegmark displays an unfortunate tendency to clutter his arguments with dispensable asides. I got the impression he is so excited about writing that, while at it, he just also has to mention this other thing that he once worked on, and that great idea he had which didn’t work, and why that didn’t work, and how that connects with yet something else. And did I mention that? By the way, let me add this. Which is related to that. And a good friend of mine thinks so. But I don’t think so. And so on.

And then, just when you think the worst is over, Tegmark goes on to tell you what he thinks about alien life and consciousness and asteroid impacts and nuclear war and artificial intelligence.

To me, his writing exhibits a familiar dilemma. If you’ve spent years thinking about a topic, the major challenge isn’t deciding what to tell the reader. It’s deciding what not to tell them. And while some readers may welcome Tegmark’s excursions, I suspect that many of them will have trouble seeing the connections that he, without any doubt, sees so clearly.

As to the content. The major problems with Max’s idea that the universe is made of mathematics rather than merely described by mathematics are:
  1. The hypothesis is ill-defined without explaining what “is real” means. I therefore don’t know what the point is of even talking about it.

  2. Leaving this aside, Max erroneously thinks it’s the simplest explanation for why mathematics is so useful, and hence supported by Ockham’s razor (though he doesn’t explicitly say so). The argument is that if reality is merely described by mathematics rather than actually made of mathematics, then one needs an additional criterion to define what makes some things real and others not.

    But that argument is logically wrong. Saying that the universe is accurately described by mathematics makes no assumption about whether it “really is” mathematics (scare quotes to remind you that that’s ill-defined). It is unnecessary to specify whether the universe is mathematics or is something more, evidenced by scientists never bothering with such a specification. Ockham’s razor thus speaks against the mathematical universe.

  3. He claims that a theory which is devoid of “human baggage” must be formulated in mathematics. I challenge you to prove this, preferably without using human baggage. If that was too meta: Just because we don’t know anything better than math to describe nature doesn’t mean there is nothing.

  4. Max also erroneously thinks, or at least claims in the book, that the mathematical universe hypothesis is testable. Because, so he writes, it predicts that we will continue to find mathematical descriptions for natural phenomena.

    But of course if there was something for which we do not manage to find a mathematical description, that would never prove the mathematical universe wrong. After all, it might merely mean we were too dumb to figure out the math. Now that I think of it, maybe our failure to quantize gravity falsifies the mathematical universe.

There are various further statements in the book that I can’t make sense of. For example, I have no idea what an “element” of a mathematical structure is. I only know elements of sets. I also don’t understand why Tegmark believes accepting that our universe is a mathematical structure means that differential equations no longer need initial conditions. Or so he seems to say. Even more perplexing, he argues that the multiverse explains why the constants of nature seem fine-tuned for the existence of life. This is a misunderstanding of both fine-tuning and the anthropic principle.

There. I’ve done it again. I set out with the best intention to say nice things, but all that comes out is “wrong, wrong, wrong.”

To work off my guilt, I’ll now have to buy his new book too. Check back in three years.