Thursday, December 31, 2015

Book review: “Beyond the Galaxy” by Ethan Siegel

Beyond the Galaxy: How Humanity Looked Beyond Our Milky Way and Discovered the Entire Universe
By Ethan Siegel
World Scientific Publishing Co (December 9, 2015)

Ethan Siegel’s book is an introduction to modern cosmology that delivers all the facts without the equations. Like Ethan’s collection “Starts With a Bang,” it is well explained and accessible to the reader without any prior knowledge of physics. But this accessibility doesn’t come without effort. This isn’t a book for the strolling pedestrian who likes being dazzled by the wonders of modern science; it’s a book for the inquirer who wants to turn around everything behind the display-window of science news.

“Beyond the Galaxy” tells the history of the universe and covers the basics of the relevant measurement techniques. It explains the big bang theory and inflation, the formation of matter in the early universe, dark matter, dark energy, and briefly mentions the multiverse. Siegel elaborates on the cosmic microwave background and what we have learned from it, baryon acoustic oscillations, and supernovae redshift. For the most part, the book sticks closely to well-established physics and stays away from speculation, except when it comes to the possible explanations for dark matter and dark energy.

Having said what the book contains, let me spell out what it doesn’t contain. This is not a book about astrophysics. You will not find elaborate discussions of all the known astrophysical objects and their physical processes. This is also not a book about particle physics. Ethan does not include dark matter direct detection experiments, and while some particle physics necessarily enters the discussion of matter formation, he sticks with the very essentials. It is also not a history book. Though Ethan does a good job giving the reader a sense of the timeline of discoveries, this is clearly not the focus of his interest.

Ethan might not be the most lyrical writer ever, but his explanations are infallibly clear and comprehensible. The book is accompanied by numerous illustrations that are mostly helpful, though some of them contain more information than is explained in the text.

In short, Ethan’s book is the missing link between cosmology textbooks and popular science articles. It will ease your transition if you are attempting one, or, if that is not your intention, it will serve to tie together the patchy knowledge that news articles often leave us with. It is the ideal starting point if you want to get serious about digging into cosmology, or if you are just dissatisfied by the vagueness of much contemporary science writing. It is, in one word, a sciency book.

[Disclaimer: Free review copy, plus I write for Ethan once per month.]

Wednesday, December 30, 2015

How does a lightsaber work? Here is my best guess.

A lightsaber works by emitting a stream of magnetic monopoles. Magnetic monopoles are heavy particles that source magnetic fields. They are so far undiscovered, but many physicists believe they are real due to theoretical arguments. For string theorist Joe Polchinski, for example, “the existence of magnetic monopoles seems like one of the safest bets that one can make about physics not yet seen.” Magnetic monopoles are so heavy, however, that they cannot be produced by any known processes in the universe – a minor technological complication that I will come back to below.




Depending on the speed at which the monopoles are emitted, they will either escape or return to the saber’s hilt, which has the opposite magnetic charge. You could of course just blast your opponent with the monopoles, but that would be rather boring. The point of a lightsaber isn’t to merely kill your enemies, but to kill them with style.



So you are emitting this stream of monopoles. Since the hilt carries the opposite magnetic charge, the monopoles drag magnetic field lines along behind them. Next you eject some electrically charged particles – electrons or ions – into this field with an initial angular velocity. These will spiral around the magnetic field lines and, due to the circular motion, emit synchrotron radiation, which is why you can see the blade.

Due to the emission of light and the occasional collision with air molecules, the electrically charged particles slow down and eventually escape the magnetic field. That doesn’t sound really healthy, so you might want to make sure that their kinetic energy isn’t too high. To still get an emission spectrum with a significant contribution in the visible range, you then need a huge magnetic field. Which can’t really be healthy either, but at least it falls off inversely with the distance from the blade.
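Just how huge? Here is a back-of-the-envelope sketch (my own toy estimate, obviously not part of any lightsaber specification): solve the standard synchrotron critical-frequency formula, ν_c = 3γ²eB/(4πm_e), for the field strength B that puts the emission of mildly relativistic electrons into the visible range.

    # Toy estimate: magnetic field needed for mildly relativistic electrons
    # to emit synchrotron radiation at visible frequencies.
    # Standard formula nu_c = 3 * gamma^2 * e * B / (4 * pi * m_e), solved for B.
    # The Lorentz factor gamma = 2 is a made-up choice, to keep the kinetic energy low.
    import math

    e = 1.602e-19    # electron charge [C]
    m_e = 9.109e-31  # electron mass [kg]
    nu = 5.5e14      # green light [Hz]
    gamma = 2.0      # modest Lorentz factor

    B = 4 * math.pi * m_e * nu / (3 * gamma**2 * e)
    print(f"B ~ {B:.0f} Tesla")  # ~3000 T

Several thousand Tesla – for comparison, the strongest continuous man-made magnetic fields are a few tens of Tesla. So yes: huge.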

Letting the monopoles escape has the advantage that you don’t have to devise a complicated mechanism to make sure they actually return to the hilt. It has the disadvantage though that one fighter’s monopoles can be sucked up by the other’s saber, if that saber has the opposite charge. Can the blades pass through each other? Well, if they both have the same charges, they repel. You couldn’t easily pass them through each other, but they would probably distort each other to some extent. How much depends on the strength of the magnetic field that keeps the electrons trapped.


Finally, there is the question of how to produce the magnetic monopoles to begin with. For this, you need a pocket-sized accelerator that generates collision energies at the Planck scale. The most commonly used method for this is a Kyber crystal. This also means that you need to know string theory to accurately calculate how a lightsaber operates. May the Force be with you.

[For more speculation, see also Is a Real Lightsaber Possible? by Don Lincoln.]

Tuesday, December 29, 2015

Book review: “Seven brief lessons on physics” by Carlo Rovelli

Seven Brief Lessons on Physics
By Carlo Rovelli
Allen Lane (September 24, 2015)

Carlo Rovelli’s book is a collection of essays about the fundamental laws of physics as we presently know them, and the road that lies ahead. General Relativity, quantum mechanics, particle physics, cosmology, quantum gravity, the arrow of time, and consciousness are the topics that he touches upon in this slim, pocket-sized, 79-page collection.

Rovelli is one of the founders of the research program of Loop Quantum Gravity, an approach to understanding the quantum nature of space and time. His “Seven brief lessons on physics” are short on scientific detail, but excel in capturing the fascination of the subject and its relevance for understanding our universe, our existence, and ourselves. In laying out the big questions driving physicists’ quest for a better understanding of nature, Rovelli makes it clear how often abstract contemporary research is intimately connected with the ancient desire to find our place in this world.

As a scientist, I would like to complain about numerous slight inaccuracies, but I forgive them since they are admittedly not essential to the message Rovelli is conveying, that is, the value of knowledge for the sake of knowledge itself. The book is more a work of art and philosophy than of science; it’s the work of a public intellectual reaching out to the masses. I applaud Carlo for not dumbing down his writing, for not being afraid of using multi-syllable words and constructing nested sentences; it’s a pleasure to read. He seems to spend too much time on the beach playing with snail-shells though.

I might have recommended the book as a Christmas present for your relatives who never quite seem to understand why anyone would spend their life pondering the arrow of time, but I was too busy pondering the arrow of time to finish the book before Christmas.

I would recommend this book to anyone who wants to understand how fundamental questions in physics tie together with the mystery of our own existence, or maybe just wants a reminder of what got them into this field decades ago.

[Disclaimer: I got the book as gift from the author.]

Sunday, December 27, 2015

Dear Dr B: Is string theory science?

This question was asked by Scientific American, hovering over an article by Davide Castelvecchi.

They should have asked Ethan Siegel. Because a few days ago he strayed from the path of awesome news about the universe to inform his readership that “String Theory is not Science.” Unlike Davide however, Ethan has not yet learned the fine art of not expressing opinions that marks the true science writer. And so Ethan dismayed Peter Woit, Lubos Motl, and me in one sweep. That’s a noteworthy achievement, Ethan!

Upon my inquiry (essentially a polite version of “wtf?”) Ethan clarified that he meant string theory has no scientific evidence speaking for it and changed the title to “Why String Theory Is Not A Scientific Theory.” (See URL for original title.)

Now, Ethan is wrong in believing that string theory doesn’t have evidence speaking for it, and I’ll come to this in a minute. But the main reason for his misleading title, even after the correction, is a self-induced problem of US science communicators. In reaction to the often-raised creationist claim that Darwinian natural selection is “just a theory,” they have bent over backwards trying to convince the public that scientists use the word “theory” to mean an explanation that has been confirmed by evidence to high accuracy. Unfortunately, that’s not how scientists actually use the word – they never have, and probably never will.

Scientists don’t name their research programs following certain rules. Instead, which expression sticks is mostly coincidence. Brans-Dicke theory, Scalar-Tensor theory, terror management theory, or recapitulation theory are but a few examples of “theories” that have little or no evidence speaking in their favor. Maybe that shouldn’t be so. Maybe “theory” should be a title reserved only for explanations widely accepted in the scientific community. But looking up definitions before assigning names isn’t how language works. Peanuts also aren’t nuts (they are legumes), and neither are cashews (they are seeds). But, really, who gives a damn?

Speaking of nuts, the sensible reaction to the “just a theory” claim is not to conjure up rules according to which scientists allegedly use one word or the other, but to point out that any consistent explanation is better than a collection of 2000-year-old fairy tales that are neither internally consistent nor consistent with observation, and thus an entirely useless waste of time.

And science really is all about finding useful explanations for observations, where “useful” means that they increase our understanding of the world around us and/or allow us to shape nature to our benefits. To find these useful explanations, scientists employ the often-quoted method of proposing hypotheses and subsequently testing them. The role of theory development in this is to identify the hypotheses which are most promising and thus deserve being put to test.

This pre-selection of hypotheses is a step often left out in descriptions of the scientific method, but it is highly relevant, and its relevance has only increased in the last decades. We cannot possibly test all randomly produced hypotheses – we have neither the time nor the resources. All fields of science therefore have tight quality controls for which hypotheses are worth paying attention to. The more costly the experimental testing of new hypotheses becomes, the more relevant this pre-selection gets. And it is in this step that non-empirical theory assessment enters.

Non-empirical theory assessment was the topic of the workshop that Davide Castelvecchi’s SciAm article reported on. (For more information about the workshop, see also Natalie Wolchover’s summary in Quanta, and my summary on Starts with a Bang.) Non-empirical theory assessment is the use of criteria that scientists draw upon to judge the promise of a theory before it can be put to experimental test.

This isn’t new. Theoretical physicists have always used non-empirical assessment. What is new is that in foundational physics it has remained the only assessment for decades, which hugely inflates the potential impact of even the smallest mistakes. As long as we have frequent empirical assessment, faulty non-empirical assessment cannot lead theorists far astray. But take away the empirical test, and non-empirical assessment requires utmost objectivity in judgement, or we will end up in a completely wrong place.

Richard Dawid, one of the organizers of the Munich workshop, has, in a recent book, summarized some non-empirical criteria that practitioners list in favor of string theory. It is an interesting book, but of little practical use because it doesn’t also assess other theories (so the scientist complains about the philosopher).

String theory arguably has empirical evidence speaking for it because it is compatible with the theories that we know, the standard model and general relativity. The problem though is that, as far as the evidence is concerned, string theory so far isn’t any better than the existing theories. There isn’t a single piece of data that string theory explains which the standard model or general relativity doesn’t explain.

The reasons many theoretical physicists prefer string theory over the existing theories are purely non-empirical. They consider it a better theory because it unifies all known interactions in a common framework and is believed to solve consistency problems in the existing theories, like the black hole information loss problem and the formation of singularities in general relativity. Whether it is actually correct as a unified theory of all interactions is still unknown. And short of a uniqueness proof, no non-empirical argument will change anything about this.

What is known however is that string theory is intimately related to quantum field theories and gravity, both of which are well-confirmed by evidence. This is why many physicists are convinced that string theory too has some use in the description of nature, even if that use eventually may not be to describe the quantum structure of space and time. And so, in the last decade string theory has come to be regarded less as a “final theory” and more as a mathematical framework to address questions that are difficult or impossible to answer with quantum field theory or general relativity. It has yet to prove its use on this account.

Speculation in theory development is a necessary part of the scientific method. If a theory isn’t developed to explain already existing data, there is always a lag between the hypotheses and their tests. String theory is just another such speculation, and it is thereby a normal part of science. I have never met a physicist who claimed that string theory isn’t science. This is a statement I have only come across by people who are not familiar with the field – which is why Ethan’s recent blogpost puzzled me greatly.

No, the question that separates the community is not whether string theory is science. The controversial question is how long is too long to wait for data supporting a theory? Are 30 years too long? Does it make any sense to demand payoff after a certain time?

It doesn’t make any sense to me to force theorists to abandon a research project because experimental test is slow to come by. It seems natural that in the process of knowledge discovery it becomes increasingly harder to find evidence for new theories. What one should do in this case though is not admit defeat on the experimental front and focus solely on the theory, but instead increase efforts to find new evidence that could guide the development of the theory. That, and the non-empirical criteria should be regularly scrutinized to prevent scientists from discarding hypotheses for the wrong reasons.

I am not sure who is responsible for this needlessly provocative title of the SciAm piece, just that it’s most likely not the author, because the same article previously appeared in Nature News with the somewhat more reasonable title “Feuding physicists turn to philosophy for help.” There was, however, not much feud at the workshop, because it was mainly populated by string theory proponents and multiverse opponents, who nodded to each other’s talks. The main feud, as always, will be carried out in the blogosphere...

Tl;dr: Yes, string theory is science. No, this doesn’t mean we know it’s a correct description of nature.

Thursday, December 24, 2015

Is light a wave or a particle?

2015 was the International Year of Light. In May, I came across this video by the Max Planck Society, in which some random people on the street in Munich were asked whether light is a wave or a particle. Most of them answered in German, but here is a rough translation of their replies:
    “Uh, physics. It's been a long time. I guess it’s... a particle. — Particle. — Particle. — A particle. — A particle. — Light is... a particle. — I had physics through 12th grade. We discussed this for a whole year. But I still don’t know. — A wave. — A wave? — Is this a trick question? — It’s both! Wave-particle duality. You should know that. — The duality of light. — It acts as both. — It’s hard to quantify what it is. It’s energy. — I am fascinated that nature has paradoxes. That one finds out through physics that not everything can be computed.”
So I thought some explanation is in order:



This is the first time I’ve tried the new green screen. As you can see, it has indeed solved my eye-erasure problem. (And to the experts: I hope you’ll forgive my sloppiness in specifying the U(1) gauge group.)

On that occasion, I also want to wish you all Happy Holidays!


Like what you find on my blog? I want to kindly draw your attention to the donate button in the top right corner :o)

Saturday, December 19, 2015

Ask Dr B: Is the multiverse science? Is the multiverse real?

Kay zum Felde asked:
“Is the multiverse science? How can we test it?”
I added “Is the multiverse real?” after Google offered it as autocomplete:


Dear Kay,

This is a timely question, one that has been much on my mind in the last years. Some influential theoretical physicists – like Brian Greene, Lenny Susskind, Sean Carroll, and Max Tegmark – argue that the appearance of multiverses in various contemporary theories signals that we have entered a new era of science. This idea however has been met with fierce opposition by others – like George Ellis, Joe Silk, Paul Steinhardt, and Paul Davies – who criticize the lack of testability.

If the multiverse idea is right, and we live in one of many – maybe infinitely many – different universes, then some of our fundamental questions about nature might never be answered with certainty. We might merely be able to make statements about how likely we are to inhabit a universe with some particular laws of nature. Or maybe we cannot even calculate this probability, but just have to accept that some things are as they are, with no possibility to find deeper answers.

What bugs the multiverse opponents most about this explanation – or rather lack of explanation – is that succumbing to the multiverse paradigm feels like admitting defeat in our quest for understanding nature. They seem to be afraid that merely considering the multiverse an option discourages further inquiries, inquiries that might lead to better answers.

I think the multiverse isn’t remotely as radical an idea as it has been portrayed, and that some aspects of it might turn out to be useful. But before I go on, let me first clarify what we are talking about.

What is the multiverse?

The multiverse is a collection of universes, one of which is ours. The other universes might be very different from the one we find ourselves in. There are various types of multiverses that theoretical physicists believe are logical consequences of their theories. The best known ones are:
  • The string theory landscape
    String theory doesn’t uniquely predict which particles, fields, and parameters a universe contains. If one believes that string theory is the final theory, and there is nothing more to say than that, then we have no way to explain why we observe one particular universe. To make the final theory claim consistent with the lack of predictability, one therefore has to accept that any possible universe has the same right to existence as ours. Consequently, we live in a multiverse.

  • Eternal inflation
    In some currently very popular models for the early universe, our universe is just a small patch of a larger space. As a result of a quantum fluctuation, the initially rapid expansion – known as “inflation” – slows down in the region around us and galaxies can form. But outside our universe inflation continues, and randomly occurring quantum fluctuations go on to spawn off other universes – eternally. If one believes that this theory is correct and that we understand how the quantum vacuum couples to gravity, then, so the argument goes, the other universes are as real as ours.

  • Many worlds interpretation
    In the Copenhagen interpretation of quantum mechanics the act of measurement is ad hoc. It is simply postulated that measurement “collapses” the wave-function from a state with quantum properties (such as being in two places at once) to a distinct state (at only one place). This postulate agrees with all observations, but it is regarded as unappealing by many (including myself). One way to avoid this postulate is to instead posit that the wave-function never collapses. Instead it ‘branches’ into different universes, one for each possible measurement outcome – a whole multiverse of measurement outcomes.

  • The Mathematical Universe
    The Mathematical Universe is Max Tegmark’s brainchild, in which he takes the final theory claim to its extreme. Any theory that describes only our universe requires the selection of some mathematics among all possible mathematics. But if a theory is a final theory, there is no way to justify any particular selection, because any selection would require another theory to explain it. And so, the only final theory there can be is one in which all of mathematics exists somewhere in the multiverse.
This list might give the impression that the multiverse is a new finding, but that isn’t so. New is only the interpretation. Since every theory requires observational input to fix parameters or pick axioms, every theory leads to a multiverse. Without sufficient observational input any theory becomes ambiguous – it gives rise to a multiverse.

Take Newtonian gravity: Is there a universe for each value of Newton’s constant? Or General Relativity: Do all solutions to the field equations exist? And Loop Quantum Gravity, like string theory, has an infinite number of solutions with different parameters, hence its own multiverse. It’s just that Loop Quantum Gravity never tried to be a theory of everything, so nobody worries about this.

What is new about the multiverse idea is that some physicists are no longer content with having a theory that describes observation. They now have additional requirements for a good theory: for example, that it contain no ad hoc prescriptions like collapsing wave-functions; no small, large, or in fact any unexplained numbers; and initial conditions that are likely according to some currently accepted probability distribution.

Is the multiverse science?

Science is what describes our observations of nature. But this is the goal and not necessarily the case for each step along the way. And so, taking multiverses seriously, rather than treating them as the mathematical artifact that I think they are, might eventually lead to new insights. The real controversy about the multiverses is how likely it is that new insights will emerge from this approach eventually.

Maybe the best example for how multiverses might become scientific is eternal inflation. It has been argued that the different universes might not be entirely disconnected, but can collide, thereby leaving observable signatures in the cosmic microwave background. Another example for testability comes from Mersini-Houghton and Holman, who have looked into potentially observable consequences of entanglement between different universes. And in a rather mind-bending recent work, Garriga, Vilenkin and Zhang have argued that the multiverse might give rise to a distribution of small black holes in our universe, which also has consequences that could become observable in the future.

As to probability distributions on the string theory landscape, I don’t see any conceptual problem with that. If someone could, based on a few assumptions, come up with a probability measure according to which the universe we observe is the most likely one, that would for me be a valid computation of the standard model parameters. The problem is of course to come up with such a measure.

Similar things could be said about all other multiverses. They don’t presently seem very useful to describe nature. But pursuing the idea might eventually give rise to observable consequences and further insights.

We have known since the dawn of quantum mechanics that it’s wrong to require all mathematical structures of a theory to directly correspond to observables – wave-functions are the best counterexample. How willing physicists are to accept non-observable ingredients of a theory as necessary depends on their trust in the theory and on their hope that it might give rise to deeper insights. But there isn’t a priori anything unscientific about a theory that contains elements that are unobservable.

So is the multiverse science? It is an extreme speculation, and opinions differ widely on how promising it is as a route to deeper understanding. But speculations are a normal part of theory development, and the multiverse is scientific as long as physicists strive to eventually derive observable consequences.

Is the multiverse real?

The multiverse has some brain-bursting consequences. For example, that everything that can happen does happen, and it happens an infinite number of times. There are thus infinitely many copies of you, somewhere out there, doing their own thing, or doing exactly the same as you. What does that mean? I have no clue. But it makes for an interesting dinner conversation through the second bottle of wine.

Is it real? I think it’s a mistake to think of “being real” as a binary variable, a property that an object either has or has not. Reality has many different layers, and how real we perceive something depends on how immediate our inference of the object from sensory input is.

A dog peeing on your leg has a very simple and direct relation to your sensory input that does not require much decoding. You would almost certainly consider it real. By contrast, evidence for the quark model contained in a large array of data on a screen is a very indirect sensory input that requires a great deal of decoding. How real you consider quarks thus depends on your knowledge of, and trust in, the theory and the data. Or trust in the scientists dealing with the theory and the data, as it were. For most physicists the theory underlying the quark model has proved reliable and accurate to such high precision that they consider quarks as real as the peeing dog.

But the longer the chain of inference, and the less trust you have in the theories used for inference, the less real objects become. In this layered reality the multiverse is currently at the outer fringes. It’s as unreal as something can be without being plain fantasy. For some practitioners who greatly trust their theories, the multiverse might appear almost as real as the universe we observe. But for most of us these theories are wild speculations and consequently we have little trust in this inference.

So is the multiverse real? It is “less real” than everything else physicists have deduced from their theories – so far.

Wednesday, December 16, 2015

No, you don’t need general relativity to ride a hoverboard.

Image credit: Technologistlaboratory.
This morning, someone sent me a link to a piece that appeared on WIRED, according to which you can’t “ride a hoverboard without Einstein’s theory of General Relativity.”

The hoverboards in question here are the currently fashionable two-wheeled motorized boards that are driven by shifting your weight. I haven’t tried one, but it sure looks like fun.

I would have ignored this article as your average internet nonsense, but it turns out the WIRED piece is written by someone by the name of Rhett Allain who, according to the website, “is an Associate Professor of Physics at Southeastern Louisiana University.” Which makes me fear that some readers might actually believe what he wrote. Because he has “professor” in his title, certainly he must know the physics.

Now, the claim of the article is correct in the sense that if you took the laws of physics and removed general relativity then there would be no galaxy formation, no planet Earth, no people, and certainly no hoverboards. I don’t think though that Allain had such a philosophical argument in mind. Besides, on this ground you could equally well argue that you can’t throw a pebble without general relativity because there wouldn’t be any pebbles.

What Allain argues instead is that you somehow need the effects of gravity to be the same as those of acceleration, and that this sounds a little like general relativity, therefore you need general relativity.

You should find this claim immediately suspicious because if you know one thing about general relativity it’s that it’s hard to test. If you couldn’t “ride a hoverboard without Einstein’s theory of General Relativity,” then why bother with light deflection and gravitational lensing to prove that the theory is correct? Must be a giant conspiracy of scientists wasting taxpayers’ money I presume.

Image Credit: Jared Mecham
Another reason to be suspicious about the correctness of this argument is the author’s explanation that special relativity is special because “Well, before Einstein, everyone thought reference frames were relative.” I am hoping this was just a typographical error, but just to avoid any confusion: before Einstein time was absolute. It’s called special relativity because according to Einstein, time too is relative.

But to come back to the issue with gravity. What you need to ride a hoverboard is to balance the inertial force caused by the board’s acceleration with another force, for which you have pretty much only gravity available. If the board accelerates and pushes your feet forward (friction required), you had better bend forward to shift your center of mass, because otherwise you’ll fall flat on your back. Bend forward too much and you fall on your nose, because gravity. Don’t bend enough and you’ll fall backwards, because inertia. To keep standing, you need to balance these forces.

This is basic mechanics and has nothing to do with General Relativity. That one of the forces is gravity is irrelevant to the requirement that you have to balance them to not fall. And even if you take into account that it’s gravity, Newtonian gravity is entirely sufficient. Nor does it have anything to do with hoverboards in particular. You can also see people standing on a train bend forward when the train accelerates, because otherwise they’d topple like dominoes. You don’t need to bend when sitting because the seat back balances the force for you.

What’s different about general relativity is that it explains that gravity is not a force but a property of space-time. That is, it deviates from Newtonian gravity. These deviations are ridiculously small corrections though, and you don’t need to take them into account for your average Joe on the hoverboard – unless possibly Joe is a neutron star.

The key ingredient to general relativity is the equivalence principle, a simplified version of which states that the gravitational mass is equal to the inertial mass. This is my best guess of what Allain was alluding to. But you don’t need the equivalence principle to balance forces. The equivalence principle just tells you exactly how the forces are balanced. In this case it would tell you the angle you have to aim at to not fall.
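In case you want the number: here is the toy calculation (my own sketch, with a made-up acceleration value) for the lean angle at which gravity balances the inertial force, tan θ = a/g.

    # Toy balance calculation -- Newtonian mechanics, no general relativity needed.
    # The board acceleration below is a made-up illustrative value.
    import math

    g = 9.81  # gravitational acceleration [m/s^2]
    a = 2.0   # assumed hoverboard acceleration [m/s^2]

    # Balancing gravity against the inertial force gives tan(theta) = a/g,
    # where theta is the forward lean angle measured from the vertical.
    theta = math.degrees(math.atan(a / g))
    print(f"Lean forward by about {theta:.0f} degrees")  # ~12 degrees

Nothing in this little calculation knows about curved space-time – it’s Newtonian mechanics through and through.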

In summary: The correct statement would have been “You can’t ride a hoverboard without balancing forces.” If you lean forward too much and write about General Relativity without knowing how it works, you’ll fall flat on your nose.

Friday, December 11, 2015

Quantum gravity could be observable in the oscillation frequency of heavy quantum states

Observable consequences of quantum gravity were long thought inaccessible to experiment. But we theorists might have underestimated our experimental colleagues. Technology has now advanced so much that macroscopic objects, weighing as much as a billionth of a gram, can be coaxed to behave as quantum objects. A billionth of a gram might not sound like much, but it is huge compared to the elementary particles that quantum physics normally is all about. It might indeed be enough to become sensitive to quantum gravitational effects.

One of the most general predictions of quantum gravity is that it induces a limit to the resolution of structures. This limit is at an exceedingly tiny distance, the Planck length of about 10⁻³³ cm. There is no way we can directly probe it. However, theoretically the presence of such a minimal length scale leads to a modification of quantum field theory. This is generally thought of as an effective description of quantum gravitational effects.
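For the record, the Planck length follows from combining the fundamental constants ħ, G, and c – a two-line check with standard SI values:

    # The Planck length: l_P = sqrt(hbar * G / c^3)
    import math

    hbar = 1.0546e-34  # reduced Planck constant [J s]
    G = 6.674e-11      # Newton's constant [m^3 kg^-1 s^-2]
    c = 2.998e8        # speed of light [m/s]

    l_P = math.sqrt(hbar * G / c**3)
    print(f"l_P = {l_P:.2e} m = {l_P * 100:.1e} cm")  # ~1.6e-35 m, i.e. ~1.6e-33 cm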

These models with a minimal length scale come in three types. One in which Poincaré-invariance, the symmetry of Special Relativity, is broken by the introduction of a preferred frame. One in which Poincaré-symmetry is deformed for freely propagating particles. And one in which it is deformed, but only for virtual particles.

The first two types of these models make predictions that have already been ruled out. The third one is the most plausible model because it leaves Special Relativity intact in all observables – the deformation only enters in intermediate steps. But for this reason, this type of model is also extremely hard to test. I worked on this ten years ago, but eventually got so frustrated that I abandoned the topic: Whatever observable I computed, it was dozens of orders of magnitude below measurement precision.

A recent paper by Alessio Belenchia et al now showed me that I might have given up too early. If one asks how such a modification of quantum mechanics affects the motion of heavy quantum mechanical oscillators, Planck-scale sensitivity is only a few orders of magnitude away.
    Tests of Quantum Gravity induced non-locality via opto-mechanical quantum oscillators
    Alessio Belenchia, Dionigi M. T. Benincasa, Stefano Liberati, Francesco Marin, Francesco Marino, Antonello Ortolan
    arXiv:1512.02083 [gr-qc]

The title of their paper refers to “non-locality” because the modification due to a minimal length leads to higher-order derivative terms in the Lagrangian. In fact, there have to be terms up to infinite order. This is a very tame type of non-locality, because it is confined to Planck-scale distances. How strong the modification is, however, also depends on the mass of the object. So if you can get a quite massive object to display quantum behavior, then you can increase your sensitivity to effects that might be indicative of quantum gravity.
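Schematically (my notation, not the one of the paper), such a modification replaces the wave operator by a function of it, expanded in powers of the Planck length:

    \Box \;\longrightarrow\; f(\Box) \;=\; \Box \,+\, \ell_P^2\,\Box^2 \,+\, \ell_P^4\,\Box^3 \,+\, \dots

The series cannot be truncated at any finite order – that is what “terms up to infinite order” means – and since every correction is suppressed by powers of the Planck length, the non-locality stays confined to Planck-scale distances.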

This has been tried before. A bad example was this attempt, which implicitly used models of either the first or second type, which are already ruled out by experiment. A more recent and much more promising attempt was this proposal. However, they wanted to test a model that is not very plausible on theoretical grounds, so their test is of limited interest. As I mentioned in my blogpost however, it was a remarkable proposal because it was the first demonstration that sensitivity to Planck-scale effects can now be reached.

The new paper uses a system that is pretty much the same as that in the previous proposal. It’s a small disk of silicon, weighing a nanogram or so, that is trapped in an electromagnetic potential and cooled down to some mK. In this trap, the disk oscillates at a frequency that depends on the mass and the potential. This is a pure quantum effect – it is observable and it has been observed.

Belenchia et al calculate how this oscillation would be modified if the non-local correction terms were present, and find that the oscillation is no longer simply harmonic but becomes more complicated (see figure). They then estimate the size of the effect and come to the conclusion that, while it is challenging, existing technology is only a few orders of magnitude away from reaching Planck-scale precision.

The motion of the mean-value, x, of the oscillator's position in the potential as a function of time, t. The black curve shows the motion without quantum gravitational effects, the red curve shows the motion with quantum gravitational effects (greatly enlarged for visibility). The experiment relies on measuring the difference.
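To get a feeling for what “no longer simply harmonic” means, here is a toy model of my own – the cubic correction below is a generic stand-in, not the actual non-local terms of Belenchia et al – that integrates an oscillator with and without a tiny correction and measures the accumulating difference:

    # Toy model: harmonic oscillator with a tiny anharmonic correction.
    # The x^3 term is a generic stand-in, NOT the non-local terms of Belenchia et al.
    import numpy as np
    from scipy.integrate import solve_ivp

    omega = 1.0  # oscillator frequency
    eps = 1e-3   # strength of the toy correction

    def rhs(t, y, eps):
        x, v = y
        return [v, -omega**2 * x - eps * x**3]

    t = np.linspace(0, 200, 4000)
    unperturbed = solve_ivp(rhs, (0, 200), [1.0, 0.0], args=(0.0,), t_eval=t, rtol=1e-10)
    perturbed = solve_ivp(rhs, (0, 200), [1.0, 0.0], args=(eps,), t_eval=t, rtol=1e-10)

    # Even a tiny correction slightly shifts the oscillation frequency, so the
    # deviation between the two trajectories grows over many periods.
    print("max deviation:", np.abs(perturbed.y[0] - unperturbed.y[0]).max())

The experimental strategy is analogous: let the oscillator run for many periods and look for an accumulated deviation from purely harmonic motion.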
I find this a very exciting development because both the phenomenological model that is being tested here and the experimental precision seems plausible to me. I have recently had some second and third thoughts about the model under question (it’s complicated) and believe that it has some serious shortcomings, but I don’t think that these matter in the limit considered here.

It is very likely that we will see more proposals for testing quantum gravity with heavy quantum-mechanical probes, because once sensitivity reaches a certain parameter range, there suddenly tend to be loads of opportunities. At this point I have become tentatively optimistic that we might indeed be able to measure quantum gravitational effects within, say, the next two decades. I am almost tempted to start working on this again...

Saturday, December 05, 2015

What Fermilab’s Holometer Experiment teaches us about Quantum Gravity.

Tl;dr: Nothing. It teaches us nothing. It just wasted time and money.

The Holometer experiment at Fermilab just published the results of their search for holographic space-time foam. They didn’t find any evidence for noise that could be indicative of quantum gravity.

The idea of the experiment was to find correlations in quantum gravitational fluctuations of space-time by using two very sensitive interferometers and comparing their measurements. Quantum gravitational fluctuations are exceedingly tiny, and in all existing models they are far too small to be picked up by interferometers. But the head of the experiment, Craig Hogan, argued that, if the holographic principle is valid, then the fluctuations should be large enough to be detectable by the experiment.

The holographic principle is the idea that everything that happens in a volume can be encoded on the volume’s surface. Many physicists believe that the principle is realized in nature. If that were so, it would indeed imply that fluctuations have correlations. But these correlations are not of the type that the experiment could test for. They are far too subtle to be measurable in this way.

In physics, all theories have to be expressed in form of a consistent mathematical description. Mathematical consistency is an extremely strong constraint when combined with the requirement that the theory also has to agree with all observations we already have. There is very little that can be changed in the existing theories that a) leads to new effects and b) does not spoil the compatibility with existing data. It’s not an easy job.

Hogan didn’t have a theory. It’s not that I am just grumpy – he said so himself: “It's a slight cheat because I don't have a theory,” as quoted by Michael Moyer in a 2012 Scientific American article.

From what I have extracted from Hogan’s papers on the arxiv, he tried twice to construct a theory that would capture his idea of holographic noise. The first violated Lorentz-invariance and was thus already ruled out by other data. The second violated basic properties of quantum mechanics and was thus already ruled out too. In the end he seems to have given up on finding a theory. Indeed, it’s not an easy job.

Searching for a prediction based on a hunch rather than a theory makes it exceedingly unlikely that something will be found. That is because there is no proof that the effect is even consistent with already existing data – a consistency that is difficult to achieve. But Hogan isn’t a no-one; he is head of Fermilab’s Center for Particle Astrophysics. I assume he got funding for his experiment by short-circuiting peer review. A proposal for such an experiment would never have passed peer review – it simply doesn’t live up to today’s quality standards in physics.

I wasn’t the only one perplexed about this experiment becoming reality. Hogan relates the following anecdote: “Lenny [Susskind] has an idea of how the holographic principle works, and this isn’t it. He’s pretty sure that we’re not going to see anything. We were at a conference last year, and he said that he would slit his throat if we saw this effect.” This is a quote from another Scientific American article. Oh, yes, Hogan definitely got plenty of press coverage for his idea.

Ok, so maybe I am grumpy. That’s because there are hundreds of people working on developing testable models for quantum gravitational effects, each of whom could tell you about more promising experiments than this. It’s a research area by the name of quantum gravity phenomenology. The whole point of quantum gravity phenomenology is to make sure that new experiments test promising ranges of parameter space, rather than just wasting money.

I might have kept my grumpiness to myself, but then the Fermilab Press release informed me that “Hogan is already putting forth a new model of holographic structure that would require similar instruments of the same sensitivity, but different configurations sensitive to the rotation of space. The Holometer, he said, will serve as a template for an entirely new field of experimental science.”

An entirely new field of experimental science, based on models that either don’t exist or are ruled out already and that, when put to test, morph into new ideas that require higher sensitivity. That scared me so much I thought somebody has to spell it out: I sincerely hope that Fermilab won’t pump any more money into this unless the idea goes through rigorous peer review. It isn’t just annoying. It’s a slap in the face of many hard-working physicists whose proposals for experiments are of much higher quality but who don’t get funding.

At the very least, if you have a model for what you test, you can rule out the model. With the Holometer you can’t even rule out anything, because there is no theory and no model that would be tested with it. So what we have learned is nothing. I can only hope that at least this episode draws some attention to the necessity of having a mathematically consistent model. It’s not an easy job. But it has to be done.

The only good news here is that Lenny Susskind isn’t going to slit his throat.

Thursday, December 03, 2015

Peer Review and its Discontents [slide show]

I have made a slide-show of my Monday talk at the Munin conference and managed to squeeze a one-hour lecture into 23 minutes. Don't expect too much – nothing happens in this video, it's just me mumbling over the slides (no singing either ;)). I was also recorded on Monday, but if you prefer the version with me walking around and talking for 60 minutes you'll have to wait a few days until the recording goes online.



I am very much interested in finding a practical solution to these problems. If you have proposals to make, please get in touch with me or leave a comment.

Tuesday, December 01, 2015

Hawking radiation is not produced at the black hole horizon.

Stephen Hawking’s “Brief History of Time” was one of the first popular science books I read, and I hated it. I hated it because I didn’t understand it. My frustration with this book is a big part of the reason I’m a physicist today – at least I know who to blame.

I don’t hate the book any more – admittedly Hawking did a remarkable job of sparking public interest in the fundamental questions raised by black hole physics. But every once in a while I still want to punch the damned book. Not because I didn’t understand it, but because it convinced so many other people they did understand it.

In his book, Hawking painted a neat picture for black hole evaporation that is now widely used. According to this picture, black holes evaporate because pairs of virtual particles nearby the horizon are ripped apart by tidal forces. One of the particles gets caught behind the horizon and falls in, the other escapes. The result is a steady emission of particles from the black hole horizon. It’s simple, it’s intuitive, and it’s wrong.

Hawking’s is an illustrative picture, but nothing more than that. In reality – you will not be surprised to hear – the situation is more complicated.

The pairs of particles – to the extent that it makes sense to speak of particles at all – are not sharply localized. They are instead blurred out over a distance comparable to the black hole radius. The pairs do not start out as points, but as diffuse clouds smeared all around the black hole, and they only begin to separate when the escapee has retreated from the horizon a distance comparable to the black hole’s radius. This simple image that Hawking provided for the non-specialist is not backed up by the mathematics. It contains an element of the truth, but take it too seriously and it becomes highly misleading.

That this image isn’t accurate is not a new insight – it’s been known since the late 1970s that Hawking radiation is not produced in the immediate vicinity of the horizon. Already in Birrell and Davies’ textbook it is clearly spelled out that taking the particles from the far vicinity of the black hole and tracing them back to the horizon – thereby increasing (“blueshifting”) their frequency – does not deliver the accurate description in the horizon area. The two parts of the Hawking-pairs blur into each other in the horizon area, and to meaningfully speak of particles one should instead use a different, local, notion of particles. Better even, one should stick to calculating actually observable quantities like the stress-energy tensor.

That the particle pairs are not created in the immediate vicinity of the horizon was necessary to solve a conundrum that bothered physicists back then. The temperature of the black hole radiation is very small, but that is its value far away from the black hole. For this radiation to have been able to escape, it must have started out with an enormous energy close to the black hole horizon. But if such an enormous energy were located there, then an infalling observer should notice and burn to ashes. This however violates the equivalence principle, according to which the infalling observer shouldn’t notice anything unusual upon crossing the horizon.
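To put a number to “very small”: for a solar-mass black hole, the standard formula for the Hawking temperature measured far away gives (a quick estimate of my own):

    # Hawking temperature far from the hole: T = hbar * c^3 / (8 * pi * G * M * k_B)
    import math

    hbar = 1.0546e-34  # reduced Planck constant [J s]
    c = 2.998e8        # speed of light [m/s]
    G = 6.674e-11      # Newton's constant [m^3 kg^-1 s^-2]
    kB = 1.381e-23     # Boltzmann constant [J/K]
    M_sun = 1.989e30   # solar mass [kg]

    T = hbar * c**3 / (8 * math.pi * G * M_sun * kB)
    print(f"T = {T:.1e} K")  # ~6e-8 K, far colder than the cosmic microwave background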

This problem is resolved by taking into account that tracing back the outgoing radiation to the horizon does not give a physically meaningful result. If one instead calculates the stress-energy in the vicinity of the horizon, one finds that it is small and remains small even upon horizon crossing. It is so small that an observer would only be able to tell the difference to flat space on distances comparable to the black hole radius (which is also the curvature scale). Everything fits nicely, and no disagreement with the equivalence principle comes about.

[I know this sounds very similar to the firewall problem that has been discussed more recently but it’s a different issue. The firewall problem comes about because if one requires the outgoing particles to carry information, then the correlation with the ingoing particles gets destroyed. This prevents a suitable cancellation in the near-horizon area. Again however one can criticize this conclusion by complaining that in the original “firewall paper” the stress-energy wasn’t calculated. I don’t think this is the origin of the problem, but other people do.]

The actual reason that black holes emit particles, the one that is backed up by mathematics, is that different observers have different notions of particles.

We are used to a particle either being there or not being there, but this is only the case so long as we move relative to each other at constant velocity. If an observer is accelerated, his definition of what a particle is changes. What looks like empty space for an observer at constant velocity suddenly seems to contain particles for an accelerated observer. This effect, named after Bill Unruh – who discovered it almost simultaneously with Hawking’s finding that black holes emit radiation – is exceedingly tiny for accelerations we experience in daily life, thus we never notice it.
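How tiny? For an acceleration equal to Earth’s surface gravity, the standard Unruh temperature formula T = ħa/(2πck_B) comes out to (again my own quick numbers):

    # Unruh temperature for an everyday acceleration: T = hbar * a / (2 * pi * c * k_B)
    import math

    hbar = 1.0546e-34  # reduced Planck constant [J s]
    c = 2.998e8        # speed of light [m/s]
    kB = 1.381e-23     # Boltzmann constant [J/K]
    a = 9.81           # Earth's surface gravity [m/s^2]

    T = hbar * a / (2 * math.pi * c * kB)
    print(f"T = {T:.1e} K")  # ~4e-20 K -- no wonder we never notice it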

The Unruh effect is very closely related to the Hawking effect by which black holes evaporate. Matter that collapses to a black hole creates a dynamical space-time that gives rise to an acceleration between observers in the past and in the future. The result is that the space-time around the collapsing matter, that did not contain particles before the black hole was formed, contains thermal radiation in the late stages of collapse. This Hawking-radiation that is emitted from the black hole is the same as the vacuum that initially surrounded the collapsing matter.

That, really, is the origin of particle emission from black holes: what is a “particle” depends on the observer. Not quite as simple, but dramatically more accurate.

The image provided by Hawking with the virtual particle pairs close by the horizon has been so stunningly successful that now even some physicists believe it is what really happens. The knowledge that blueshifting the radiation from infinity back to the horizon gives a grossly wrong stress-energy seems to have gotten buried in the literature. Unfortunately, misunderstanding the relation between the flux of Hawking-particles in the far distance and in the vicinity of the black hole leads one to erroneously conclude that the flux is much larger than it is. Getting this relation wrong is for example the reason why Mersini-Houghton came to falsely conclude that black holes don’t exist.

It seems about time someone reminds the community of this. And here comes Steve Giddings.

Steve Giddings is the nonlocal hero of George Musser’s new book “Spooky Action at a Distance.” For the past two decades or so he’s been on a mission to convince his colleagues that nonlocality is necessary to resolve the black hole information loss problem. I spent a year in Santa Barbara a few doors down the corridor from Steve, but I liked his papers better when he was still on the idea that black hole remnants keep the information. Be that as it may, Steve knows black holes inside and out, and he has a new note on the arxiv that discusses the question where Hawking radiation originates.

In his paper, Steve collects the existing arguments why we know the pairs of the Hawking radiation are not created in the vicinity of the horizon, and he adds some new arguments. He estimates the effective area from which Hawking-radiation is emitted and finds it to be a sphere with a radius considerably larger than the black hole. He also estimates the width of wave-packets of Hawking radiation and shows that it is much larger than the separation of the wave-packet’s center from the horizon. This nicely fits with some earlier work of his that demonstrated that the partner particles do not separate from each other until after they have left the vicinity of the black hole.

All this supports the conclusion that Hawking particles are not created in the near vicinity of the horizon, but instead come from a region surrounding the black hole with a few times the black hole’s radius.

Steve’s paper has an amusing acknowledgement in which he thanks Don Marolf for confirming that some of their colleagues indeed believe that Hawking radiation is created close by the horizon. I can understand this. When I first noticed this misunderstanding I also couldn’t quite believe it. I kept pointing towards Birrell-Davies but nobody was listening. In the end I almost thought I was the one who got it wrong. So, I for sure am very glad about Steve’s paper because now, rather than citing a 40 year old textbook, I can just cite his paper.

If Hawking’s book taught me one thing, it’s that sticky visual metaphors can be a curse as much as they can be a blessing.