Wednesday, April 27, 2016

If you fall into a black hole

If you fall into a black hole, you’ll die. That much is pretty certain. But what happens before that?

The gravitational pull of a black hole depends on its mass. At a fixed distance from the center, it isn’t any stronger or weaker than that of a star with the same mass. The difference is that, since a black hole doesn’t have a surface, the gravitational pull can continue to increase as you approach the center.

The gravitational pull itself isn’t the problem; the problem is the change in the pull, the tidal force. It will stretch any extended object in a process with the technical name “spaghettification.” That’s what will eventually kill you. Whether this happens before or after you cross the horizon depends, again, on the mass of the black hole. The larger the mass, the smaller the space-time curvature at the horizon, and the smaller the tidal force.

Leaving aside lots of hot gas and swirling particles, you have a good chance of surviving the crossing of the horizon of a supermassive black hole, like the one at the center of our galaxy. You would, however, probably be torn apart before crossing the horizon of a solar-mass black hole.
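To put rough numbers on this, one can compare the Newtonian tidal acceleration across a human body at the horizon of a solar-mass and a supermassive black hole. This is only a back-of-the-envelope sketch – the body length and the mass of the supermassive hole are illustrative values, and a proper treatment would be general-relativistic – but it captures the scaling:

```python
# Newtonian estimate of the tidal acceleration across a ~2 m tall body
# at the horizon: a_tidal ~ 2*G*M*l/r_s^3 with r_s = 2*G*M/c^2.
# Since r_s grows with M, the tidal force at the horizon falls as 1/M^2.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_sun = 1.989e30   # solar mass, kg

def tidal_at_horizon(mass_kg, body_length=2.0):
    """Tidal acceleration (m/s^2) across body_length at r = r_s."""
    r_s = 2 * G * mass_kg / c**2
    return 2 * G * mass_kg * body_length / r_s**3

print(f"solar mass:   {tidal_at_horizon(M_sun):.1e} m/s^2")       # ~1e10, lethal
print(f"supermassive: {tidal_at_horizon(4e6 * M_sun):.1e} m/s^2")  # ~1e-3, harmless
```

The first number is about a billion times Earth’s surface gravity; the second is less than what you feel in an elevator.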

It takes you a finite time to reach the horizon of a black hole. For an outside observer, however, you seem to be moving slower and slower and will never quite reach the black hole, due to the (technically infinitely large) gravitational redshift. If you take into account that black holes evaporate, it doesn’t quite take forever, and your friends will eventually see you vanish. It might just take some 10^67 years.
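For scale, the textbook estimate of the evaporation lifetime – a sketch that ignores greybody factors and the particle content of the Hawking radiation – gives:

```python
# Hawking evaporation time, t ~ 5120*pi*G^2*M^3/(hbar*c^4),
# evaluated for a black hole of one solar mass.
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8        # m/s
hbar = 1.055e-34   # J s
M_sun = 1.989e30   # kg
year = 3.156e7     # s

def evaporation_time_years(mass_kg):
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4) / year

print(f"{evaporation_time_years(M_sun):.1e} years")  # ~1e67 years
```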

In an article that recently appeared on “Quick And Dirty Tips” (featured by SciAm), Everyday Einstein Sabrina Stierwalt explains:
“As you approach a black hole, you do not notice a change in time as you experience it, but from an outsider’s perspective, time appears to slow down and eventually crawl to a stop for you [...] So who is right? This discrepancy, and whose reality is ultimately correct, is a highly contested area of current physics research.”
No, it isn’t. The two observers give different descriptions of the process of falling into a black hole because they use different time coordinates. There is no contradiction between the conclusions they draw. The outside observer’s story is an infinitely stretched version of the infalling observer’s story, covering only the part before the horizon crossing. Nobody contests this.

I suspect this confusion was caused by the idea of black hole complementarity, which is indeed a highly contested area of current physics research. According to black hole complementarity, the information that falls into a black hole both goes in and comes out. This is in contradiction with quantum mechanics, which forbids making exact copies of a state. The idea of black hole complementarity is that nobody can ever make a measurement to document the forbidden copying and hence it isn’t a real inconsistency. Making such measurements is typically impossible because the infalling observer only has a limited amount of time before hitting the singularity.

Black hole complementarity is actually a pretty philosophical idea.

Now, the black hole firewall issue points out that black hole complementarity is inconsistent. Even if you can’t measure that a copy has been made, pushing the infalling information in the outgoing radiation changes the vacuum state in the horizon vicinity to a state which is no longer empty: that’s the firewall.

Be that as it may, even in black hole complementarity the infalling observer still falls in, and crosses the horizon at a finite time.

The real question that drives much current research is how the information comes out of the black hole before it has completely evaporated. It’s a topic which has been discussed for more than 40 years now, and there is little sign that theorists will agree on a solution. And why would they? Leaving aside fluid analogies, there is no experimental evidence for what happens with black hole information, and there is hence no reason for theorists to converge on any one option.

The theory assessment in this research area is purely non-empirical, to use an expression by philosopher Richard Dawid. That’s why I think that, if we ever want to see progress on the foundations of physics, we have to think very carefully about the non-empirical criteria that we use.

Anyway, the lesson here is: Everyday Einstein’s Quick and Dirty Tips is not a recommended travel guide for black holes.

Wednesday, April 20, 2016

Dear Dr B: Why is Lorentz-invariance in conflict with discreteness?

Can we build up space-time from discrete entities?
“Could you elaborate (even) more on […] the exact tension between Lorentz invariance and attempts for discretisation?”
Dear Noa:

Discretization is a common procedure to deal with infinities. Since quantum mechanics relates large energies to short (wave)lengths, introducing a shortest possible distance corresponds to cutting off momentum integrals. This can remove infinities that come in at large momenta (or, as physicists say, “in the UV”).

Such hard cut-off procedures were quite common in the early days of quantum field theory. They have since been replaced with more sophisticated regularization procedures, but these don’t work for quantum gravity. Hence it is tempting to use discretization to get rid of the infinities that plague quantum gravity.

Lorentz-invariance is the symmetry of Special Relativity; it tells us how observables transform from one reference frame to another. Certain types of observables, called “scalars,” don’t change at all. In general, observables do change, but they do so under a well-defined procedure, that is, by the application of Lorentz-transformations. We call these “covariant.” Or at least we should; most often, invariance is conflated with covariance in the literature.

(To be precise, Lorentz-covariance isn’t the full symmetry of Special Relativity because there are also translations in space and time that should maintain the laws of nature. If you add these, you get Poincaré-invariance. But the translations aren’t so relevant for our purposes.)

Lorentz-transformations acting on distances and times lead to the phenomena of Lorentz-contraction and time dilation. That means observers moving relative to each other measure different lengths and time intervals. As long as there aren’t any interactions, this has no consequences. But once you have objects that can interact, relativistic contraction has measurable consequences.

Heavy ions, for example, which are collided in facilities like RHIC or the LHC, are accelerated to almost the speed of light, which results in a significant length contraction in the beam direction, and a corresponding increase in density. This relativistic squeeze has to be taken into account to correctly compute observables. It isn’t merely an apparent distortion, it’s a real effect.
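The size of the effect is easy to estimate: the contraction factor equals the Lorentz factor γ = E/(mc²) of the beam particles. The beam energies below are approximate values, quoted here only for illustration:

```python
# Lorentz factor gamma = E/(m*c^2) for a nucleon in a heavy-ion beam;
# the ion is length-contracted by this factor in the beam direction.
m_nucleon_GeV = 0.938  # nucleon rest energy, GeV

def gamma_from_energy(E_per_nucleon_GeV):
    return E_per_nucleon_GeV / m_nucleon_GeV

gamma_rhic = gamma_from_energy(100)    # RHIC Au+Au, ~100 GeV per nucleon
gamma_lhc  = gamma_from_energy(2760)   # LHC Pb+Pb, ~2.76 TeV per nucleon

# a spherical nucleus looks like a pancake, thinner by the factor gamma
print(f"RHIC: gamma ~ {gamma_rhic:.0f}")  # ~107
print(f"LHC:  gamma ~ {gamma_lhc:.0f}")   # ~2940
```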

Now consider you have a regular cubic lattice which is at rest relative to you. If Alice comes by in a spaceship at high velocity, what does she see? She doesn’t see a cubic lattice – she sees a lattice that is squeezed in one direction due to Lorentz-contraction. Which of you is right? You’re both right. It’s just that the lattice isn’t invariant under the Lorentz-transformation, and neither are any interactions with it.

The lattice can therefore be used to define a preferred frame, that is, a particular reference frame which isn’t like any other frame, violating observer independence. The easiest way to do this would be to use the frame in which the spacing is regular, i.e., your rest frame. If you compute any observables that take into account interactions with the lattice, the result will now explicitly depend on the motion relative to the lattice. Condensed matter systems are thus generally not Lorentz-invariant.

A Lorentz-contraction can convert any distance, no matter how large, into another distance, no matter how short. Similarly, it can blue-shift long wavelengths to short wavelengths, and hence can make small momenta arbitrarily large. This however runs into conflict with the idea of cutting off momentum integrals. For this reason approaches to quantum gravity that rely on discretization or analogies to condensed matter systems are difficult to reconcile with Lorentz-invariance.
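The relativistic Doppler formula makes this concrete: an observer approaching a wave head-on at speed βc measures λ′ = λ√((1−β)/(1+β)), which goes to zero as β → 1, so no fixed minimal wavelength survives boosting. A minimal sketch:

```python
# Relativistic Doppler blueshift for head-on approach at speed beta*c:
# lambda' = lambda * sqrt((1 - beta)/(1 + beta)).
import math

def blueshifted(lam, beta):
    return lam * math.sqrt((1 - beta) / (1 + beta))

lam = 1.0  # start with a 1-meter wavelength
for beta in (0.9, 0.99, 0.999999):
    print(beta, blueshifted(lam, beta))
# the measured wavelength shrinks without bound as beta -> 1
```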

So what, you may say, let’s just throw out Lorentz-invariance then. Let us just take a tiny lattice spacing so that we won’t see the effects. Unfortunately, it isn’t that easy. Violations of Lorentz-invariance, even if tiny, spill over into all kinds of observables even at low energies.

A good example is vacuum Cherenkov radiation, that is, the spontaneous emission of a photon by an electron. This effect is normally – i.e., when Lorentz-invariance is respected – forbidden due to energy-momentum conservation. It can only take place in a medium which has components that can recoil. But Lorentz-invariance violation would allow electrons to radiate off photons even in empty space. No such effect has been seen, and this leads to very strong bounds on Lorentz-invariance violation.

And this isn’t the only bound. There are literally dozens of particle interactions that have been checked for Lorentz-invariance violating contributions with absolutely no evidence showing up. Hence, we know that Lorentz-invariance, if not exact, is respected by nature to extremely high precision. And this is very hard to achieve in a model that relies on a discretization.

Having said that, I must point out that not every quantity with the dimension of a length actually transforms as a distance. Thus, the existence of a fundamental length scale is not a priori in conflict with Lorentz-invariance. The best example is maybe the Planck length itself. It has units of a length, but it is defined from constants of nature that are themselves frame-independent, and so it doesn’t transform as a distance. For the same reason string theory is perfectly compatible with Lorentz-invariance even though it contains a fundamental length scale.
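One can see this directly from how the Planck length is built:

```python
# The Planck length l_P = sqrt(hbar*G/c^3) is composed entirely of
# frame-independent constants, so it is a Lorentz scalar: it has units
# of a length but does not Lorentz-contract.
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
hbar = 1.055e-34  # J s

l_planck = math.sqrt(hbar * G / c**3)
print(f"{l_planck:.2e} m")  # ~1.6e-35 m
```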

The tension between discreteness and Lorentz-invariance appears whenever you have objects that transform like distances, or like areas, or like spatial volumes. The Causal Set approach is therefore an exception to the problems with discreteness (to my knowledge the only exception). The reason is that Causal Sets are a randomly distributed collection of (unconnected!) points with a four-density that is constant on average. The random distribution prevents the problems of regular lattices. And since points and four-volumes are both Lorentz-invariant, no preferred frame is introduced.

It is remarkable just how difficult Lorentz-invariance makes it to reconcile general relativity with quantum field theory. The fact that no violations of Lorentz-invariance have been found, and the insight that discreteness therefore seems an ill-fated approach, have significantly contributed to the conviction of string theorists that they are working on the only right approach. Needless to say, some people would disagree, Carlo Rovelli and Garrett Lisi probably among them.

Either way, the absence of Lorentz-invariance violations is one of the prime examples that I draw upon to demonstrate that it is possible to constrain theory development in quantum gravity with existing data. Everyone who still works on discrete approaches must now make really sure to demonstrate there is no conflict with observation.

Thanks for an interesting question!

Wednesday, April 13, 2016

Dark matter might connect galaxies through wormholes

Tl;dr: A new paper shows that one of the most popular types of dark matter – the axion – could make wormholes possible if strong electromagnetic fields, like those found around supermassive black holes, are present. It remains unclear how such wormholes would form and whether they would be stable.
Wormhole dress.
Source: Shenova.

Wouldn’t you sometimes like to vanish into a hole and crawl out in another galaxy? It might not be as impossible as it seems. General relativity has long been known to allow for “wormholes” that are short connections between seemingly very distant places. Unfortunately, these wormholes are unstable and cannot be traversed unless filled by “exotic matter,” which must have negative energy density to keep the hole from closing. And no matter that we have ever seen has this property.

The universe, however, contains a lot of matter that we have never seen, which might give you hope. We observe this “dark matter” only through its gravitational pull, but this is enough to tell that it behaves pretty much like regular matter. Dark matter too is thus not exotic enough to help with stabilizing wormholes. Or so we thought.

In a recent paper, Konstantinos Dimopoulos from the “Consortium for Fundamental Physics” at Lancaster University points out that dark matter might be able to mimic the behavior of exotic matter when caught in strong electromagnetic fields:
    Active galaxies may harbour wormholes if dark matter is axionic
    By Konstantinos Dimopoulos
    arXiv:1603.04671 [astro-ph.HE]
Axions are one of the most popular candidates for dark matter. The particles themselves are very light, but they form a condensate in the early universe that should still be around today, giving rise to the observed dark matter distribution. Like all other dark matter candidates, axions have been searched for but so far not detected.

In his paper, Dimopoulos points out that, due to their peculiar coupling to electromagnetic fields, axions can acquire an apparent mass which makes a negative contribution to their energy. This effect isn’t so unusual – it is similar to the way that fermions obtain masses by coupling to the Higgs or that scalar fields can obtain effective masses by coupling to electromagnetic fields. In other words, it’s not totally unheard of.

Dimopoulos then estimates how strong an electromagnetic field is necessary to turn axions into exotic matter and finds that around supermassive black holes the conditions would just be right. Hence, he concludes, axionic dark matter might keep wormholes open and traversable.

In his present work, however, Dimopoulos has not done a fully relativistic computation. He considers the axions in the background of the black hole, but not the coupled solution of axions plus black hole. The analysis so far also does not check whether the wormhole would indeed be stable, or if it would instead blow off the matter that is supposed to stabilize it. And finally, it leaves open the question of how the wormhole would form. It is one thing to discuss configurations that are mathematically possible, but it’s another thing entirely to demonstrate that they can actually come into being in our universe.

So it’s an interesting idea, but it will take a little more to convince me that this is possible.

And in case you warmed up to the idea of getting out of this galaxy, let me remind you that the closest supermassive black hole is still 26,000 light years away.

Note added: As mentioned by a commenter (see below) the argument in the paper might be incorrect. I asked the author for comment, but no reply so far.
Another note: The author says he has revised and replaced the paper, and that the conclusions are not affected.

Thursday, April 07, 2016

10 Essentials of Quantum Mechanics

Vortices in a Bose-Einstein condensate.
Source: NIST.

Trying to score at next week’s dinner party? Here’s how to intimidate your boss by fluently speaking quantum.

1. Everything is quantum

It’s not like some things are quantum mechanical and other things are not. Everything obeys the same laws of quantum mechanics – it’s just that quantum effects of large objects are very hard to notice. This is why quantum mechanics was a latecomer in theoretical physics: It wasn’t until physicists had to explain why electrons sit on shells around the atomic nucleus that quantum mechanics became necessary to make accurate predictions.

2. Quantization doesn’t necessarily imply discreteness

“Quanta” are discrete chunks, but not everything becomes chunky on short scales. Electromagnetic waves are made of quanta called “photons,” so the waves can be thought of as discretized. And electron shells around the atomic nucleus can only have certain discrete radii. But other particle properties do not become discrete even in a quantum theory. The position of electrons in the conduction band of a metal, for example, is not discrete – the electron can occupy any place within the band. And the energy values of the photons that make up electromagnetic waves are not discrete either. For this reason, quantizing gravity – should we finally succeed at it – does not necessarily mean that space and time have to be made discrete.

3. Entanglement is not the same as superposition

A quantum superposition is the ability of a system to be in two different states at the same time, and yet, when measured, one always finds one particular state, never a superposition. Entanglement on the other hand is a correlation between parts of a system – something entirely different. Superpositions are not fundamental: Whether a state is or isn’t a superposition depends on what you want to measure. A state can for example be in a superposition of positions and not in a superposition of momenta – so the whole concept is ambiguous. Entanglement on the other hand is unambiguous: It is an intrinsic property of each system and the so-far best known measure of a system’s quantum-ness. (For more details, read “What is the difference between entanglement and superposition?”)
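The distinction can be made quantitative for two qubits: a pure state with amplitudes (a00, a01, a10, a11) is unentangled exactly when a00·a11 − a01·a10 = 0. A small sketch with standard textbook states:

```python
# A two-qubit pure state factorizes into a product of one-qubit states
# iff the 2x2 matrix of its amplitudes has zero determinant.
import math

def is_product(a00, a01, a10, a11, tol=1e-12):
    return abs(a00 * a11 - a01 * a10) < tol

s = 1 / math.sqrt(2)
bell = (s, 0.0, 0.0, s)      # (|00> + |11>)/sqrt(2)
plus = (0.5, 0.5, 0.5, 0.5)  # |+>|+> = product of two superpositions

print(is_product(*bell))  # False: entangled
print(is_product(*plus))  # True: a superposition in the 0/1 basis,
                          # yet it carries no entanglement at all
```

The second state shows the point of the text: whether a state is a superposition depends on the basis, but whether it is entangled does not.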

4. There is no spooky action at a distance

Nowhere in quantum mechanics is information ever transmitted non-locally, so that it jumps over a stretch of space without having to go through all places in between. Entanglement is itself non-local, but it doesn’t do any action – it is a correlation that is not connected to non-local transfer of information or any other observable. This was a great source of confusion in the early days of quantum mechanics, but we know today that the theory can be made perfectly compatible with Einstein’s theory of Special Relativity, in which information cannot be transferred faster than the speed of light.

5. It’s an active research area

It’s not like quantum mechanics is yesterday’s news. True, the theory originated more than a century ago. But many aspects of it became testable only with modern technology. Quantum optics, quantum information, quantum computing, quantum cryptography, quantum thermodynamics, and quantum metrology are all recently formed and presently very active research areas. With the new technology, interest in the foundations of quantum mechanics has also been reignited.

6. Einstein didn’t deny it

Contrary to popular opinion, Einstein was not a quantum mechanics denier. He couldn’t possibly be – the theory was so successful early on that no serious scientist could dismiss it. Einstein instead argued that the theory was incomplete, and believed the inherent randomness of quantum processes must have a deeper explanation. It was not that he thought the randomness was wrong, he just thought that this wasn’t the end of the story. For an excellent clarification of Einstein’s views on quantum mechanics, I recommend George Musser’s article “What Einstein Really Thought about Quantum Mechanics” (paywalled, sorry).

7. It’s all about uncertainty

The central postulate of quantum mechanics is that there are pairs of observables that cannot simultaneously be measured, like for example the position and momentum of a particle. These pairs are called “conjugate variables,” and the impossibility to measure both their values precisely is what makes all the difference between a quantized and a non-quantized theory. In quantum mechanics, this uncertainty is fundamental, not due to experimental shortcomings.
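For the minimum-uncertainty case, a Gaussian wave packet, the trade-off is explicit: a position spread σ implies a momentum spread ħ/(2σ), so the product is pinned at ħ/2. A small sketch:

```python
# Gaussian wave packet: position spread sigma_x implies a momentum
# spread hbar/(2*sigma_x), saturating Heisenberg's bound
# delta_x * delta_p >= hbar/2. Squeezing one spread widens the other.
hbar = 1.055e-34  # J s

def momentum_spread(sigma_x):
    return hbar / (2 * sigma_x)

for sigma_x in (1e-10, 1e-12):  # localize to an atom, then 100x tighter
    print(sigma_x, momentum_spread(sigma_x), sigma_x * momentum_spread(sigma_x))
# the product in the last column is always hbar/2, whatever sigma_x is
```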

8. Quantum effects are not necessarily small...

We do not normally observe quantum effects over long distances because the necessary correlations are very fragile. Treat them carefully enough, however, and quantum effects can persist over long distances. Photons have, for example, been entangled over separations of several hundred kilometers. And in Bose-Einstein condensates, up to several million atoms have been brought into one coherent quantum state. Some researchers even believe that dark matter has quantum effects which span through whole galaxies.

9. ...but they dominate the small scales

In quantum mechanics, every particle is also a wave and every wave is also a particle. The effects of quantum mechanics become very pronounced once one observes a particle on distances that are comparable to the associated wavelength. This is why atomic and subatomic physics cannot be understood without quantum mechanics, whereas planetary orbits are entirely unaffected by quantum behavior.
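The comparison is easy to make with the de Broglie wavelength λ = h/(mv). The speeds below are rough illustrative values:

```python
# de Broglie wavelength lambda = h/(m*v). Comparable to the size of an
# atom for a bound electron, but absurdly tiny for a planet -- which is
# why planetary orbits show no quantum behavior.
h = 6.626e-34  # Planck's constant, J s

def de_broglie(mass_kg, speed_m_s):
    return h / (mass_kg * speed_m_s)

lam_electron = de_broglie(9.11e-31, 2.2e6)  # electron at ~atomic speed
lam_earth    = de_broglie(5.97e24, 3.0e4)   # Earth in its orbit

print(f"electron: {lam_electron:.1e} m")  # ~3e-10 m, about an atom's size
print(f"Earth:    {lam_earth:.1e} m")     # ~4e-63 m, utterly negligible
```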

10. Schrödinger’s cat is dead. Or alive. But not both.

It was not well-understood in the early days of quantum mechanics, but the quantum behavior of macroscopic objects decays very rapidly. This “decoherence” is due to constant interactions with the environment which are, in relatively warm and dense places like those necessary for life, impossible to avoid. Bringing large objects into superpositions of two different states is therefore extremely difficult and the superposition fades rapidly.

The heaviest object that has so far been brought into a superposition of locations is a carbon-60 molecule, and it has been proposed to do this experiment also for viruses or even heavier creatures like bacteria. Thus, the paradox that Schrödinger’s cat once raised – the transfer of a quantum superposition (the decaying atom) to a large object (the cat) – has been resolved. We now understand that while small things like atoms can exist in superpositions for extended amounts of time, a large object would settle extremely rapidly in one particular state. That’s why we never see cats that are both dead and alive.

[This post previously appeared on Starts With A Bang.]

Sunday, April 03, 2016

New link between quantum computing and black hole may solve information loss problem

[image source: IBT]

If you leave the city limits of Established Knowledge and pass the Fields of Extrapolation, you enter the Forest of Speculations. As you get deeper into the forest, larger and larger trees impinge on the road, strangely deformed, knotted onto themselves, bent over backwards. They eventually grow so close that they block out the sunlight. It must be somewhere here, just before you cross over from speculation to insanity, that Gia Dvali looks for new ideas and drags them into the sunlight.

Dvali’s newest idea is that every black hole is a quantum computer. And not just any quantum computer, but a quantum computer made of a Bose-Einstein condensate that self-tunes to the quantum critical point. In one sweep, he has combined everything that is cool in physics at the moment.

This link between black holes and Bose-Einstein condensates is based on simple premises. Dvali set out to find some stuff that would share properties with black holes, notably the relation between entropy and mass (BH entropy), the decrease in entropy during evaporation (Page time), and the ability to scramble information quickly (scrambling time). What he found was that certain condensates do exactly this.
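The first of these relations, the Bekenstein-Hawking entropy, fixes the target any condensate model must hit: S/k_B = 4πGM²/(ħc), growing with the square of the mass. A quick numeric sketch:

```python
# Bekenstein-Hawking entropy S/k_B = 4*pi*G*M^2/(hbar*c):
# proportional to the horizon area, i.e. to the mass squared.
import math

G = 6.674e-11      # m^3 kg^-1 s^-2
c = 2.998e8       # m/s
hbar = 1.055e-34  # J s
M_sun = 1.989e30  # kg

def bh_entropy(mass_kg):
    """Dimensionless black hole entropy S/k_B."""
    return 4 * math.pi * G * mass_kg**2 / (hbar * c)

print(f"{bh_entropy(M_sun):.1e}")  # ~1e77 for one solar mass
print(bh_entropy(2 * M_sun) / bh_entropy(M_sun))  # doubling M quadruples S
```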

Consequently he went and conjectured that this is more than a coincidence, and that black holes themselves are condensates – condensates of gravitons, whose quantum criticality allows the fast scrambling. The gravitons equip black holes with quantum hair on horizon scale, and hence provide a solution to the black hole information loss problem by first storing information and then slowly leaking it out.

Bose-Einstein condensates on the other hand contain long-range quantum effects that make them good candidates for quantum computers. The individual q-bits that have been proposed for use in these condensates are normally correlated atoms trapped in optical lattices. Based on his analogy with black holes, however, Dvali suggests using a different type of state for information storage, which would optimize the storage capacity.

I had the opportunity to speak with Immanuel Bloch from the Max Planck Institute for Quantum Optics about Dvali’s idea, and I learned that while it seems possible to create a self-tuned condensate to mimic the black hole, addressing the states that Dvali has identified is difficult and, at least presently, not practical. You can read more about this in my recent Aeon essay.

But really, you may ask, what isn’t a quantum computer? Doesn’t anything that changes in time according to the equations of quantum mechanics process information and compute something? Doesn’t every piece of chalk execute the laws of nature and evaluate its own fate, doing a computation that is somehow quantum?

That’s right. But when physicists speak of quantum computers, they mean a particularly powerful collection of entangled states, assemblies that allow one to hold and manipulate much more information than a largely classical state. It’s this property of quantum computers specifically that Dvali claims black holes must also possess. The chalk just won’t do.

If what Dvali says is correct, a real black hole out there in space doesn’t compute anything in particular. It merely stores the information of what fell in and spits it back out again. But a better understanding of how to initialize a state might allow us one day – give it some hundred years – to make use of nature’s ability to distribute information enormously quickly.

The relevant question is of course, can you test that it’s true?

I first heard of Dvali’s idea at a conference I attended last year in July. In his talk, Dvali spoke about possible observational evidence for the quantum hair due to modifications of orbits near the black hole. At least that’s my dim recollection almost a year later. He showed some preliminary results of this, but the paper hasn’t been published and the slides aren’t online. Instead, together with some collaborators, he published a paper arguing that the idea is compatible with the Hawking, Perry, Strominger proposal to solve the black hole information loss problem, which also relies on black hole hair.

In November, then, I heard another talk by Stefan Hofmann, who had also worked on some aspects of the idea that black holes are Bose-Einstein condensates. He told the audience that one might see a modification in the gravitational wave signal of black hole merger ringdowns – which have since indeed been detected. Again, though, there is no paper.

So I am tentatively hopeful that we can look for evidence of this idea in the near future, but so far there aren’t any predictions. I have a proposal of my own to add for observational consequences of this approach, which is to look at the scattering cross-section of the graviton condensate with photons in the wavelength regime of the horizon size (i.e., radio waves). I don’t have time to really work on this, but if you’re looking for a one-year project in quantum gravity phenomenology, this one seems interesting.

Dvali’s idea has some loose ends, of course. Notably, it isn’t clear how the condensate escapes collapse – at least not to me, nor to anyone I talked to. The general argument is that for the condensate the semi-classical limit is a bad approximation, and thus the singularity theorems are rather meaningless. While that might be, it’s too vague for my comfort. The idea also seems superficially similar to the fuzzball proposal, and it would be good to know the relation or differences.

After these words of caution, let me add that this link between condensed matter, quantum information, and black holes isn’t as crazy as it seems at first. In recent years, a lot of research has piled up that tightens the connections between these fields. Indeed, a recent paper by Brown et al hypothesizes that black holes are not only the most efficient storage devices but indeed the fastest computers.

It’s amazing just how much we have learned from a single solution to Einstein’s field equations, and not even a particularly difficult one. “Black hole physics” really should be a research field in its own right.

Friday, April 01, 2016

3-Billion-Year-Old Math Problem Solved by Prodigy Fetus

For Berta’s mother, the first kick already made clear that her daughter was extraordinary: “This wasn’t just any odd kick, it was a p-wave cross-correlation seismogram.” But this pregnancy exceeded even the most enthusiastic mother’s expectations. Still three months shy of her due date, fetus Berta just published her first paper in the renowned mathematics journal “Reviews in Topology.” And it isn’t just any odd cohomological invariance that she has taken on, but one of the thorniest problems known to mathematicians.

Like most of the big mathematical puzzles, this one is easy to understand, and yet even the greatest minds on the planet have so far been unsuccessful in proving it. Consider you have a box of arbitrary dimension, filled with randomly spaced polyhedra that touch on exactly three surfaces each. Now you take them out of the box, remove one surface, turn the box by 90 degrees around Donald Trump’s belly button, and then put the polyhedra back into the box. Put in simple terms, this immediately raises the question: “Who cares?”

Berta’s proof demonstrates that the proposition is correct. Her work, which has been lauded by colleagues as a “masterwork of incomprehensibility” and “a lucid dream caught in equations,” draws upon recent research in fields ranging from differential geometry to category theory to volcanology. The complete proof adds up to 5000 pages. “It’s quite something,” says her mother, who was nicknamed “next Einstein’s mom” on Berta’s reddit AMA last week. “We hope the paper will be peer reviewed by the time she gets her driver’s license.”

Monday, March 28, 2016

Dear Dr. B: What are the requirements for a successful theory of quantum gravity?

“I've often heard you say that we don't have a theory of quantum gravity yet. What would be the requirements, the conditions, for quantum gravity to earn the label of 'a theory' ?

I am particularly interested in the nuances on the difference between satisfying current theories (GR&QM) and satisfying existing experimental data. Because a theory often entails an interpretation whereas a piece of experimental evidence or observation can be regarded as correct 'an sich'.

That aside from satisfying the need for new predictions, etc.

Thank you,

Best Regards,

Noa Drake”

Dear Noa,

I want to answer your question in two parts. First: What does it take for a hypothesis to earn the label “theory” in physics? And second: What are the requirements for a theory of quantum gravity in particular?

What does it take for a hypothesis to earn the label “theory” in physics?

Like almost all nomenclature in physics – except the names of new heavy elements – the label “theory” is not awarded by some agreed-upon regulation, but emerges from usage in the community – or doesn’t. Contrary to what some science popularizers want the public to believe, scientists do not use the word “theory” in a very precise way. Some names stick, others don’t, and trying to change a name already in use is often futile.

The best way to capture what physicists mean by “theory” is that it describes an identification between mathematical structures and observables. The theory is the map between the math-world and the real world. A “model” on the other hand is something slightly different: it’s the stand-in for the real world that is being mapped with the help of the theory. For example, the standard model is the math-thing which is mapped by quantum field theory to the real world. The cosmological concordance model is mapped by the theory of general relativity to the real world. And so on.

But of course not everybody agrees. Frank Wilczek and Sean Carroll for example want to rename the standard model to “core theory.” David Gross argues that string theory isn’t a theory, but actually a “framework.” And Paul Steinhardt insists on calling the model of inflation a “paradigm.” I have a theory that physicists like being disagreeable.

Sticking with my own nomenclature, what it takes to make a theory in physics is 1) a mathematically consistent formulation – at least in some well-controlled approximation, 2) an unambiguous identification of observables, and 3) agreement with all available data relevant in the range in which the theory applies.

These are high demands, and the difficulty of meeting them is almost always underestimated by those who don’t work in the field. Physics is a very advanced discipline and the existing theories have been confirmed to extremely high precision. It is therefore very hard to make any changes that improve the existing theories rather than screwing them up altogether.

What are the requirements for a theory of quantum gravity in particular?

The combination of the standard model and general relativity is not mathematically consistent at energies beyond the Planck scale, which is why we know that a theory of quantum gravity is necessary. A successful theory of quantum gravity must achieve mathematical consistency at all energies, or – if it is not a final theory – at least well beyond the Planck scale.
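To put a number on “the Planck scale”: the Planck energy follows from combining the fundamental constants ħ, c, and G. A quick back-of-the-envelope estimate (the code is just for orientation, using standard CODATA values):

```python
import math

# Fundamental constants (SI units, CODATA values)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
G = 6.67430e-11         # Newton's constant, m^3 kg^-1 s^-2

# Planck energy: E_P = sqrt(hbar * c^5 / G)
E_P_joule = math.sqrt(hbar * c**5 / G)
E_P_GeV = E_P_joule / 1.602176634e-10  # 1 GeV expressed in Joules

print(f"Planck energy: {E_P_GeV:.2e} GeV")  # about 1.22e19 GeV
```

That is some 15 orders of magnitude above the collision energies of the LHC, which is why direct tests of quantum gravity are so hard to come by.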

If you quantize gravity like the other interactions, the theory you end up with – perturbatively quantized gravity – breaks down at high energies; it produces nonsensical answers. In physics parlance, high energies are often referred to as “the ultra-violet” or “the UV” for short, and the missing theory is hence the “UV-completion” of perturbatively quantized gravity.

At the energies that we have tested so far, quantum gravity must reproduce general relativity with a suitable coupling to the standard model. Strictly speaking it doesn’t have to reproduce these models themselves, but only the data that we have measured. But since there is such a lot of data at low energies, and we already know this data is described by the standard model and general relativity, we don’t try to reproduce each and every observation. Instead we just try to recover the already known theories in the low-energy approximation.

That the theory of quantum gravity must remove inconsistencies in the combination of the standard model and general relativity means in particular it must solve the black hole information loss problem. It also means that it must produce meaningful answers for the interaction probabilities of particles at energies beyond the Planck scale. It is furthermore generally believed that quantum gravity will avoid the formation of space-time singularities, though this isn’t strictly speaking necessary for mathematical consistency.

These requirements are very strong and incredibly hard to meet. There are presently only a few serious candidates for quantum gravity: string theory, loop quantum gravity, asymptotically safe gravity, causal dynamical triangulation, and, somewhat down the line, causal sets and a collection of emergent gravity ideas.

Among those candidates, string theory and asymptotically safe gravity have a well-established compatibility with general relativity and the standard model. Of these two, string theory is favored by the vast majority of physicists in the field, primarily because it has given rise to more insights and contains more internal connections. Whenever I ask someone what they think about asymptotically safe gravity, they tell me it would be “depressing” or “disappointing.” I know, it sounds more like psychology than physics.

Having said that, let me mention for completeness that, based on purely logical reasoning, it isn’t necessary to find a UV-completion for perturbatively quantized gravity. Instead of quantizing gravity at high energies, you can ‘unquantize’ matter at high energies, which also solves the problem. Of all existing attempts to remove the inconsistencies that arise when combining the standard model with general relativity, this is possibly the most unpopular option.

I do not think that the data we have so far plus the requirement of mathematical consistency will allow us to derive one unique theory. This means that without additional data physicists have no reason to ever converge on any one approach to quantum gravity.

Thank you for an interesting question!

Wednesday, March 23, 2016

Hey Bill Nye, Please stop talking nonsense about quantum mechanics.

Bill Nye, also known as The Science Guy, is a popular science communicator in the USA. He has appeared regularly on TV and, together with Corey Powell, produced two books. On Twitter, he has gathered 2.8 million followers, by which he ranks somewhere between Brian Cox and Neil deGrasse Tyson. This morning, a video of Bill Nye explaining quantum entanglement was pointed out to me:

The video seems to be part of a series in which he answers questions from his fans. Here we have a young man by the name of Tom from Western Australia calling in. The transcript starts as follows:
Tom: Hi, Bill. Tom, from Western Australia. If quantum entanglement or quantum spookiness can allow us to transmit information instantaneously, that is faster than the speed of light, how do you think this could, dare I say it, change the world?

Bill Nye: Tom, I love you man. Thanks for the tip of the hat there, the turn of phrase. Will quantum entanglement change the world? If this turns out to be a real thing, well, or if we can take advantage of it, it seems to me the first thing that will change is computing. We’ll be able to make computers that work extraordinarily fast. But it carries with it, for me, this belief that we’ll be able to go back in time; that we’ll be able to harness energy somehow from black holes and other astrophysical phenomenon that we observe in the cosmos but not so readily here on earth. We’ll see. Tom, in Western Australia, maybe you’ll be the physicist that figures quantum entanglement out at its next level and create practical applications. But for now, I’m not counting on it to change the world.
I thought I must have slept through Easter and it’s already April 1st. I replayed this like 5 times. But it didn’t get any better. So what else can I do but take to my blog in the futile attempt to bring sanity back to earth?

Dear Tom,

This is an interesting question which allows one to engage in some lovely science fiction speculation, but first let us be clear that quantum entanglement does not allow one to transmit information faster than the speed of light. Entanglement is a non-local correlation that forces particles to share properties, potentially over long distances. But there is no way to send information through this link because the particles are quantum mechanical and their properties are randomly distributed.

Quantum entanglement is a real thing, we know this already. This has been demonstrated in countless experiments, and while multi-particle correlations are an active research area, the basic phenomenon is well-understood. But entanglement does not imply a spooky “action” at a distance – this is a misleading historical phrase which lives on in science communication just because it has a nice ring to it. Nothing ever acts between the entangled particles – they are merely correlated. That entanglement might allow faster-than-light communication was a confusion in the 1950s, but it’s long been understood that quantum mechanics is perfectly compatible with Einstein’s theory of Special Relativity in which information cannot be transmitted faster than the speed of light.

No, it really can’t. Sorry about that. Yes, I too would love to send messages to the other side of the universe without having to wait some billion years for a reply. But for all we presently know about the laws of nature, it’s not possible.

Entanglement is the relevant ingredient in building quantum computers, and these could indeed dramatically speed up information processing and storage capacities, hence the effort that is being made to build one. But this has nothing to do with exchanging information faster than light, it merely relies on the number of different states that quantum particles can be brought into, which is huge compared to those of normal computers. (Which also work only thanks to quantum mechanics, but normal computers don’t use quantum states for information processing.)
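The “number of different states” can be made concrete with a toy count (my illustration, not part of the video): a register of n qubits is described by 2^n complex amplitudes, all of which participate in a quantum computation, whereas a classical n-bit register holds just one of its 2^n possible values at a time.

```python
# A register of n qubits is described by 2**n complex amplitudes;
# a classical n-bit register stores exactly one of its 2**n values.
def amplitudes(n_qubits: int) -> int:
    return 2 ** n_qubits

for n in (10, 50, 300):
    print(f"{n} qubits -> {amplitudes(n):.2e} amplitudes")

# At around 300 qubits the number of amplitudes already exceeds the
# estimated number of atoms in the observable universe (~1e80).
```

None of this involves signals between the qubits, let alone faster-than-light ones; the speed-up comes entirely from how the state space grows.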

Now let us forget about the real world for a moment, and imagine what we could do if it was possible to send information faster than the speed of light, even though this is to our best present knowledge not possible. Maybe this is what your question really was?

The short answer is that you are likely to screw up reality altogether. Once you can send information faster than the speed of light, you can also send it back in time. If you can send information back in time, you can create inconsistent histories, that is, you can create various different pasts, a problem commonly known as “grandfather paradox:” What happens if you travel back in time and kill your grandpa? Will Marty McFly be born if he doesn’t get his mom to dance with his dad? Exactly this problem.

Multiple histories, or quantum mechanical parallel worlds, are a commonly used scenario in the science fiction literature and movie industry, and they make for some mind-bending fun. For a critical take on how these ideas hold up to real science, I can recommend Xaq Rzetelny’s awesome article “Trek at 50: The quest for a unifying theory of time travel in Star Trek.”

I have no fucking clue what Bill thinks this has to do with harnessing energy from black holes, but I hope this won’t discourage you from signing up for a physics degree.

Dear Bill,

Every day I get emails from people who want to convince me that they have found a way to create a wormhole, harness vacuum energy, travel back in time, or that they know how to connect the conscious mind with the quantum, whatever that means. They often argue with quotes from papers or textbooks which they have badly misunderstood. But they no longer have to do this. Now they can quote Bill The Science Guy who said that quantum entanglement would allow us to harness energy from black holes and to travel back in time.

Maybe you were joking and I didn’t get it. But if it’s a joke, let me tell you that nobody in my newsfeed seems to have found it funny.

Seriously, man, fix that. Sincerely,


Sunday, March 20, 2016

Can we get some sympathy for the nerdy loners please?


“What is she doing?” – “She is sitting there.” – “Ye-es. But what is she do-ing?”

“She isn’t doing anything. She is just. Sitting there.”

“How long do we wait?” – “We wait until the clock is 29 and 10.”

I’m sitting there because I have a problem. The problem isn’t that I have children – children who, despite my best efforts, still can’t read the clock. I am sitting there because I have a problem with a differential equation. Actually, several of them.

You’d think two non-stop nagging kids would have cured me of getting eaten up by equations. But they’ve just made me better at zoning out. Hooked on a suitably interesting problem – it’s inevitably something-with-physics – I am basically incommunicado, sometimes for weeks at a time.

Not like that’s news. 20 years ago I was your stereotypical nerd. The student in an oversized hoodie, with glasses and an always overdue haircut. No matter where I went, I dragged around a huge backpack full of books – just in case I had to look up something about that problem I was on. Nobody was surprised I ended up with a PhD in theoretical physics.

I’ve since swapped the hoodies for mommy-wear that doesn’t make it quite as easy for toddlers to hide food in it. I’ve found a way to tie up the mess that is my hair. And I’ve learned to make conversation. Though my attempts at small-talk inevitably seem to start with “I recently read...”

But despite my efforts to hide it, I’m afraid I’m still your stereotypical nerd.

I often get asked if it’s difficult to be one of the few women in a field dominated by men. Yes, sometimes. But leaving aside the inevitable awkwardness that comes with hearing your own voice stand out an octave above everyone else’s, theoretical physics has always been my intellectual home, the go-to place when in need of likeminded people. The stories about the lone genius waiting to be hit by an apple, they didn’t turn me off, they were my aspiration. I just wanted to be left alone solving problems. And for the biggest part I have been left alone.

There’s a price to pay, of course, for wanting to be left alone. Which is that you might be left alone.

Ágnes Mócsy is the exact opposite of your stereotypical nerd. She’s as intelligent as she is artsy, and she moves with ease between communities. She seems infinitely energetic and is a wonderful woman, warm and welcoming, cool and clever. In recent years, Ágnes has become very engaged in the good cause of supporting minorities in physics. She has gone about it as you expect of a scientist, with numbers and facts, with data and references, giving lectures and educating her colleagues. I admire her initiative.

I had to say some nice things about Ágnes first because next comes some criticism.

The other day she wrote a piece for Huffpo hitting on the supposed myth of the lonely genius.

I will agree that genius is a word as useless as it is overused. Nobody really knows what it means, and it has an unfortunate ring of “genetics” to it. That’s unfortunate because a recent study has found evidence that women shy away from fields that are believed to require inborn talent rather than hard work. Then there’s another study which demonstrated that students are more likely to associate “genius” with male professors than with female and black professors. And Ágnes is right of course when she says that most of us in physics aren’t geniuses, whatever exactly you think it means, so why use a label that is neither descriptive nor helpful?

I’d sign a petition to trashcan “genius,” together with “next Einstein.”

Then Ágnes makes a case that the loner in physics is as much a myth as the genius. You won’t be surprised to hear I disagree.

True, scientists always build on others’ work, and once they’ve built something, they must tell their colleagues about it. Communication isn’t only a necessary part of research, it’s also the best way to make sure you’re not fooling yourself. That talking to other people about your problems can be useful is a lesson I first had to learn, but even I eventually learned it.

Still, there is a stage of research that remains lonely. That phase in which you don’t really know just what you know, when you have an idea but can’t put it into words, a problem so diffuse you’re not sure what the problem is.

Fields Medalist Michael Atiyah (who I now don’t dare to call a genius because you might think I want to discourage girls from studying math) put it this way in a recent interview with Siobhan Roberts for Quanta Magazine:
“Dreams happen during the daytime, they happen at night. You can call them a vision or intuition. But basically they’re a state of mind—without words, pictures, formulas or statements. It’s “pre” all that. It’s pre-Plato. It’s a very primordial feeling. And again, if you try to grasp it, it always dies. So when you wake up in the morning, some vague residue lingers, the ghost of an idea. You try to remember what it was and you only get half of it right, and maybe that’s the best you can do.”
Tell me how that’s not lonely work.

As I am raising two girls, I am all too aware of occupational stereotypes. Like many academics, my husband and I are fighting the pink/blue divide, the gender segregation that starts already in kindergarten. I don’t want my daughters to think following their interests isn’t socially appropriate because some professions aren’t for women.

I am therefore all in favor of initiatives targeting girls with science toys and educational games, because of course I hope that’s where my kids’ interests are. Also, I get to play with the stuff myself. (I recently bought a microscope that attaches to the phone because I thought the girls might want to have a close look at some leaves. Instead my husband used it to inspect our gauze curtains and proceeded to use them as a diffraction grating. I’m still waiting to get my microscope and laser pointer back.)

But while I hope my children will go on to become scientists, I first and foremost want them to find out which profession they will be most happy with, whether that means physicist or midwife. And I don’t want young women to get talked into something they aren’t genuinely into, just because the statistics say there should be more women in physics. I don’t want them to be misled by marketing physics as something it is not.

So let’s tell it like it is.

Physics isn’t all teamwork and communication skills, it’s not all collaboration and conferences, it’s not all chalk and talk. That’s some of it, but physics is also a lot of reading and a lot of thinking – and sometimes it’s lonely.

There are stages in your research in which you will hit on a problem that no one can help you with. Because that’s what research is all about – finding and solving problems that no one has solved before. And sometimes you will get stuck, annoyed about yourself, frustrated about your own inability to make sense of these equations. You will feel stupid and you will feel lonely and you will feel like nobody can understand you – because nobody can understand you.

That’s physics too.

Science only stands to benefit from more diversity. Different cultural and social backgrounds, different experiences and different personality traits serve to broaden our perspectives and may lead to new approaches to old problems. But attracting new customers shouldn’t scare away the regulars. We have use for the nerdy loners too.

Having reached almost 40 years of age, I’ve survived long enough to no longer care if people think I’m not normal. Not normal for leaving the party early, not normal for scribbling notes on my arm, not normal for spontaneously bursting into lectures about Lorentz-invariance violating operators.

Luckily, I am married to a man who not only has much understanding for my problems, but also seems to have textbooks on each and every obscure subfield of physics. There’s a reason he’s in the acknowledgements of almost all of my papers.

I hope that you, too, find a niche in life where you fit in. And if you want to be left alone, don’t let anyone tell you there is no place for loners in this world any more.

“29 and 10. That’s 39.”

She can’t yet read the clock. But she’s good at math.

Tuesday, March 15, 2016

Researchers propose experiment to measure the gravitational force of milli-gram objects, reaching almost into the quantum realm.

Neutrinos, gravitational waves, light deflection on the sun – the history of physics is full of phenomena once believed immeasurably small but now yesterday’s news. And on the list of impossible things turned possible, quantum gravity might be next.

Quantum gravitational effects have widely been believed inaccessible to experiment because enormously high energy densities are required to make them comparable in size to other quantum effects. This argument however neglects that quantum effects of gravity can also become relevant for massive objects in quantum superpositions. Once we are able to measure the gravitational pull of an object that is in a superposition of two different places, we can determine whether the gravitational field is in a quantum superposition as well.

This neat idea has two problematic aspects. First, since gravity is very weak, measuring gravitational fields of small objects is extremely difficult. And second, bringing massive objects into quantum states is hard because the states rapidly decohere due to interaction with the environment. However, technological advances on both aspects of the problem have been stunning during the last decade.

In two previous posts we discussed some examples of massive quantum oscillators that can create location superpositions of objects as heavy as a nano-gram. The objects under consideration here are typically small disks made of silicon that are bombarded with laser light while trapped between two mirrors. A nano-gram might not sound like much, but compared to the masses of elementary particles that’s enormous.

Meanwhile, progress on the other aspect of the problem - measuring tiny gravitational fields – has also been remarkable. Currently, the smallest mass whose gravitational pull has been measured is about 90g. But a recent proposal by the group of Markus Aspelmeyer in Vienna lays out a method for measuring the gravitational force of masses as small as a few milli-gram.
    A micromechanical proof-of-principle experiment for measuring the gravitational force of milligram masses
    Jonas Schmöle, Mathias Dragosits, Hans Hepach, Markus Aspelmeyer
    arXiv:1602.07539 [physics.ins-det]

Their proposal relies on a relatively new field of technology that employs micro-mechanical devices, which basically means you make your whole measurement apparatus as small as you can, piling single atoms on atoms. This trend, made possible only by the nanotechnology required to design these devices, allows measurements of unprecedented precision.

The smallest force that has so far been measured with nano-devices is around a zepto-Newton (zepto is 10^-21). That’s not yet the world record in tiny-force measurements, which is currently held by a group in Berkeley and lies at about a yocto-Newton (that’s 10^-24). But the huge benefit of the nano-devices is that you can get them close to the probe, whereas the record-holding experiment relies on precisely tracking the motion of a cloud of atoms in a trap. Not only does the cloud-tracking make it difficult to scale up the mass without ruining precision; the necessity to trap the particles also means that it’s difficult to get the source of the force-field close to the probe. Micro-mechanical devices, in contrast, do not have these limitations and thus lend themselves better to the task of measuring the gravitational force exerted by quantum systems.
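A rough Newtonian estimate shows why milligram masses come within reach of these sensitivities. Take two 1-milligram masses at a separation of 1 mm (illustrative numbers of my choosing, not the parameters of the actual proposal):

```python
G = 6.674e-11    # Newton's constant, m^3 kg^-1 s^-2

m_source = 1e-6  # 1 milligram, in kg
m_test = 1e-6    # 1 milligram, in kg
r = 1e-3         # assumed separation: 1 mm

# Newton's law of gravitation
F = G * m_source * m_test / r**2
print(f"F = {F:.1e} N")  # roughly 7e-17 N

# That is about four orders of magnitude above the zepto-Newton
# (1e-21 N) sensitivity of the nano-devices mentioned in the text.
```

The actual experiment is of course much harder than this back-of-the-envelope suggests, since the tiny static force has to be picked out from a much noisier environment.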

The Aspelmeyer group sketches their experiment as shown in the figure below

[From arXiv:1602.07539]

The blue circles are the masses whose gravitational interaction one wants to measure, with the source mass on the right and the test mass on the left. The test mass is attached to the micro-mechanical oscillator, whereas the source mass is driven by another oscillator at a frequency close to the system’s resonance frequency. The gravitational pull between the two masses transfers the oscillation of the source mass to the test mass, where it can be picked up by the detector.
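Why drive the source mass near resonance? For a damped oscillator of mass m, resonance frequency ω₀ and damping γ, driven by a periodic force F₀ cos(ωt), the steady-state amplitude is (standard textbook formula, not taken from the paper):

```latex
x_0(\omega) = \frac{F_0/m}{\sqrt{\left(\omega_0^2 - \omega^2\right)^2 + \gamma^2 \omega^2}}
```

For small damping this amplitude is sharply peaked at ω ≈ ω₀, so even a minute gravitational driving force builds up a detectable oscillation over many cycles.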

In their paper, the experimentalists argue that it should be possible by this method to measure the gravitational force of a source mass not heavier than a few milli-grams. And that’s the conservative estimate. With better detector efficiency even that limit could be improved on.

There are still a few orders of magnitude between a milli-gram and a nano-gram, which is the current maximum mass for which quantum superpositions have been achieved. But in typical estimates for quantum gravitational effects you end up at least 30 orders of magnitude away from measurement precision. Now we are talking about five orders of magnitude – and that in a field with rapid technological developments for which there is no fundamental limit in sight.

What is most remarkable about this development is that this proposal relies on technology that until a few years ago literally nobody in quantum gravity ever talked about. It’s not even that the technological development has been faster than anticipated, it’s a possibility that plainly wasn’t on the radar. Now there is a Nobel Prize waiting here, for the first experimental measurement of quantum gravitational effects.

And as the Prize comes within reach, competition will speed up the pace. So stay tuned, I am sure we will hear more about this soon.

Wednesday, March 09, 2016

A new era of science

Here in basic research we all preach the gospel of serendipity. Breakthroughs cannot be planned, insights not be forced, geniuses not be bred. We tell ourselves – and everybody willing to listen – that predicting the outcome of a research project is more difficult than doing the research in the first place. And half of all discoveries are made while tinkering with something else anyway. Now please join me for the chorus, and let us repeat once again that the World Wide Web was invented at CERN – while studying elementary particles.

But in theoretical physics the age of serendipitous discovery is nearing its end. You don’t tinker with a 27 km collider and don’t coincidentally detect gravitational waves while looking for a better way to toast bread. Modern experiments succeed by careful planning over the course of decades. They rely on collaborations of thousands of people and cost billions of dollars. While we always try to include multipurpose detectors hoping to catch unexpected signals, there is no doubt that our machines are built for very specific purposes.

And the selection is harsh. For every detector that gets funding, three others don’t. For every satellite mission that goes into orbit, five others never get off the ground. Modern physics isn’t about serendipitous discoveries – it’s about risk/benefit analyses and impact assessments. It’s about enhanced design, horizontal integration, and progressive growth strategies. Breakthroughs cannot be planned, but you sure can call in a committee meeting to evaluate their ROI and disruptive potential.

There is no doubt that scientific research takes up resources. It requires both time and money, which is really just a proxy for energy. And as our knowledge increases, new discoveries have become more difficult, requiring us to pool funding and create large international collaborations.

This process is most pronounced in basic research in physics – cosmology and particle physics – because in this area we deal with the smallest and the most distant objects in the universe. Things that are hard to see, basically. But the trend towards Big Science can be witnessed also in other disciplines’ billion-dollar investments like the Human Genome Project, the Human Brain Project, or the National Ecological Observatory Network. “It's analogous to our LHC,” says Ash Ballantyne, a bioclimatologist at the University of Montana in Missoula, who has never heard of physics envy and doesn’t want to be reminded of it either.

These plus-sized projects will keep a whole generation of scientists busy - and the future will bring more of this, not less. This increasing cost of experiments in frontier research has slowly, but inevitably, changed the way we do science. And it is fundamentally redefining the role of theory development. Yes, we are entering a new era of science – whether we like that or not.

Again, this change is most apparent in basic research in physics. The community’s assessment of a theory’s promise must be drawn upon to justify investment in an experimental test of that theory. Hence the increased scrutiny that theory-assessment has received recently. In the end it comes down to the question of where we should put our money.

We often act like knowledge discovery is a luxury. We act like it’s something societies can support optionally, to the extent that they feel like funding it. We act like it’s something that will continue, somehow, anyway. The situation, however, is much scarier than that.

At every level of knowledge we have the capability to exploit only a finite amount of resources. To unlock new resources, we have to invest the ones we have into discovering new knowledge and developing new technologies. The newly unlocked resources can then be used for further exploration. And so on.

It has worked so far. But at any level in this game, we might fail. We might not succeed in using the resources we have smartly enough to upgrade to the next level. If we don’t invest sufficiently into knowledge discovery, or invest into the wrong things, we might get stuck – and might end up unable to proceed beyond a certain level of technology. Forever.

And so, when I look at the papers on hep-th and gr-qc, I don’t think about the next 3 years or 5 years, as my funding agency wants me to. I think about the next 3000 or 5000 years. Which of this research holds the promise of discovering knowledge necessary to get to the next level? The bigger and more costly experiments become, the larger the responsibility of theorists who claim that testing a theory will uncover worthwhile new insights. Do we live up to this responsibility?

I don’t think we do. Worse, I think we can’t because funding pressures force theoreticians to overemphasize the promise of their own research. The necessity of marketing is now a reality of science. Our assessment of research agendas is inevitably biased and non-objective. For most of the papers I see on hep-th and gr-qc, I think people work on these topics simply because they can. They can get this research published and they can get it funded. It tells you all about academia and very little about the promise of a theory.

While our colleagues in experiment have entered a new era of science, we theorists are still stuck in the 20th century. We still believe our task is to fight for our own ideas, when we should instead be working together on identifying those experiments most likely to advance our societies. We still pretend that science is somehow self-correcting because a failed experiment will force us to discard a hypothesis – and we ignore the troubling fact that there are only so many experiments we can do, ever. We had better place our bets very carefully because we won’t be able to bet arbitrarily often.

The reality of life is that nothing is infinite. Time, energy, manpower – all of this is limited. The bigger science projects become, the more carefully we have to direct our investments. Yes, it’s a new era of science. Are we ready?

Wednesday, March 02, 2016

Dear Dr. B: What is the difference between entanglement and superposition?

The only photo in existence
that shows me in high heels.

This is an excellent question which you didn’t ask. I’ll answer it anyway because confusing entangled states with superpositions is a very common mistake. And an unfortunate one: without knowing the difference between entanglement and superposition the most interesting phenomena of quantum mechanics remain impossible to understand – so listen closely, or you’ll forever remain stuck in the 19th century.

Let us start by decoding the word “superposition.” Physicists work with equations, the solutions of which describe the system they are interested in. That might be, for example, an electromagnetic wave going through a double slit. If you manage to solve the equations for that system, you can then calculate what you will observe on the screen.

A “superposition” is simply a sum of two solutions, possibly with constant factors in front of the terms. Now, some equations, like those of quantum mechanics, have the nice property that the sum of two solutions is also a solution, where each solution corresponds to a different setup of your experiment. But that superpositions of solutions are also solutions has nothing to do with quantum mechanics specifically. You can also, for example, superpose electromagnetic waves – solutions to the sourceless Maxwell equations – and the superposition is again a solution to Maxwell’s equations. So to begin with, when we are dealing with quantum states, we should more carefully speak of “quantum superpositions.”
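In symbols: the Schrödinger equation is linear, so if ψ₁ and ψ₂ are solutions, then any combination with constant coefficients a and b is also a solution:

```latex
i\hbar\,\partial_t \psi_{1,2} = \hat{H}\,\psi_{1,2}
\;\;\Rightarrow\;\;
i\hbar\,\partial_t \left(a\,\psi_1 + b\,\psi_2\right) = \hat{H}\left(a\,\psi_1 + b\,\psi_2\right).
```

Exactly the same argument goes through for the sourceless Maxwell equations, which is why superposition by itself is not a specifically quantum phenomenon.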

Quantum superpositions are different from non-quantum superpositions in that they are valid solutions to the equations of quantum mechanics, yet they are never themselves what you measure. That’s the whole mystery of the measurement process: the “collapse” of a superposition of solutions to a single solution.

Take for example a lonely photon that goes through a double slit. It is a superposition of two states that each describe a wave emerging from one of the slits. Yet, if you measure the photon on the screen, it’s always in one single point. The superposition of solutions in quantum mechanics tells you merely the probability for measuring the photon at one specific point which, for the double-slit, reproduces the interference pattern of the waves.
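To see the difference between adding amplitudes and adding probabilities, here is a little numpy sketch of the double slit. The geometry and all numbers are made up for illustration; the slits are treated as idealized point sources:

```python
import numpy as np

# Toy double-slit model: two slits act as point sources of waves with
# wavelength lam, separated by d; we compute the amplitude at position x
# on a screen at distance L. All numbers are illustrative.
lam, d, L = 0.5, 5.0, 1000.0
x = np.linspace(-100, 100, 2001)

# Path lengths from each slit to the screen point x
r1 = np.sqrt(L**2 + (x - d / 2) ** 2)
r2 = np.sqrt(L**2 + (x + d / 2) ** 2)

# Each slit alone contributes an amplitude; the superposition is their sum
psi1 = np.exp(2j * np.pi * r1 / lam)
psi2 = np.exp(2j * np.pi * r2 / lam)

# Probability of detecting the photon: |psi1 + psi2|^2 shows interference,
# while |psi1|^2 + |psi2|^2 (adding the two options classically) does not.
p_super = np.abs(psi1 + psi2) ** 2
p_mixed = np.abs(psi1) ** 2 + np.abs(psi2) ** 2

print(p_super.max(), p_super.min())  # oscillates between ~4 and ~0
print(p_mixed.max(), p_mixed.min())  # flat at 2
```

The squared sum of the amplitudes carries the cross term that produces the stripes on the screen; the sum of the squared amplitudes doesn’t.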

But I cheated...

Because what you think of as a quantum superposition depends on what you want to measure. A state might be a superposition for one measurement, but not for another. Indeed the whole expression “quantum superposition” is entirely meaningless without saying what is being superposed. A photon can be in a superposition of many different positions, and yet not be in a superposition of momenta. So is it or is it not a superposition? That’s entirely due to your choice of observable – even before you have observed anything.

All this is just to say that whether a particle is or isn’t in a superposition is ambiguous. You can always make its superposition go away by just wanting it to go away and changing the notation. Or, slightly more technical, you can always remove a superposition of basis states just by defining the superposition as a new basis state. It is for this reason somewhat unfortunate that superpositions – the cat being both dead and alive – often serve as examples for quantum-ness. You could equally well say the cat is in one state of dead-and-aliveness, not in a superposition of two states one of which is dead and one alive.
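A few lines suffice to illustrate this basis ambiguity. This is my own toy example, not from any particular experiment: the very same state has two nonzero coefficients in one basis and a single one in another.

```python
import numpy as np

up = np.array([1.0, 0.0])      # spin-up along z
down = np.array([0.0, 1.0])    # spin-down along z

# An equal superposition of up and down along z ...
psi = (up + down) / np.sqrt(2)

# ... is itself one of the basis states of the x-basis:
plus_x = (up + down) / np.sqrt(2)
minus_x = (up - down) / np.sqrt(2)

# Coefficients in the z-basis: two nonzero terms -> "a superposition"
print(psi @ up, psi @ down)         # 0.707..., 0.707...

# Coefficients in the x-basis: one nonzero term -> "not a superposition"
print(psi @ plus_x, psi @ minus_x)  # 1.0, 0.0
```

Nothing about the state changed between the two printouts; only the question we asked of it did.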

Now to entanglement.

Entanglement is a correlation between different parts of a system. The simplest case is a correlation between particles, but really you can entangle all kinds of things and properties of things. You find out whether a system has entanglement by dividing it up into two subsystems. Then you consider both systems separately. If the two subsystems are entangled, then looking at them separately will inevitably lose information. In physics speak, you “trace out” one subsystem and are left with a mixed state for the other subsystem.

The best known example is a pair of particles, each with either spin +1 or -1. You don’t know which particle has which spin, but you do know that the sum of both has to be zero. So if you have your particles in two separate boxes, you have a state that is either +1 in the left box and -1 in the right box, or -1 in the left box and +1 in the right box.

Now divide the system up in two subsystems that are the two boxes, and throw away one of them. What do you know about the remaining box? Well, all you know is that it’s either +1 or -1, and you have lost the information that was contained in the link between the two boxes, the one that said “If this is +1, then this must be -1, and the other way round.” That information is gone for good. If you crunch the numbers, you find that correlations between quantum states can be stronger than correlations between non-quantum states could ever be. It is the existence of these strong correlations that tests of Bell’s theorem have looked for – and confirmed.
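You can check the information loss yourself with a few lines of numpy. This sketch encodes the two-box example as a two-qubit state (my own toy encoding, with index 0 for spin +1 and index 1 for spin -1), traces out the right box, and computes the purity of what’s left:

```python
import numpy as np

up = np.array([1.0, 0.0])    # spin +1
down = np.array([0.0, 1.0])  # spin -1

# Entangled state: (+1 in left, -1 in right) plus (-1 in left, +1 in right)
bell = (np.kron(up, down) + np.kron(down, up)) / np.sqrt(2)

# Non-entangled comparison: left box in a superposition, right box fixed
product = np.kron((up + down) / np.sqrt(2), up)

def reduced_left(state):
    """Trace out the right box, returning the left box's density matrix."""
    rho = np.outer(state, state.conj()).reshape(2, 2, 2, 2)
    return np.trace(rho, axis1=1, axis2=3)

# Purity Tr(rho^2): 1 for a pure state, 1/2 for a maximally mixed qubit
for name, s in [("entangled", bell), ("product", product)]:
    r = reduced_left(s)
    print(name, np.trace(r @ r).real)
# entangled -> 0.5 (information was in the link, now lost)
# product   -> 1.0 (nothing was lost by throwing away the other box)
```

A superposition in one box alone survives the tracing-out unscathed; only the entangled state degrades into a mixed state when its partner is discarded.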

Most importantly, whether a system has entanglement between two subsystems is a yes or no question. You cannot create entanglement by a choice of observable, and you can’t make it go away either. It is really entanglement – the spooky action at a distance – that is the embodiment of quantum-ness, and not the dead-and-aliveness of superpositions.

[For a more technical explanation, I can recommend these notes by Robert Helling, who used to blog but now has kids.]

Tuesday, March 01, 2016

Tim Gowers and I have something in common. Unfortunately it’s not our math skills.

Heavy paper.
What would you say if a man with British accent cold-calls you one evening to offer money because he likes your blog?

I said no.

In my world – the world of academic paper-war – we don’t just get money for our work. What we get is permission to administrate somebody else’s money according to the attached 80-page guidelines (note the change in section 15b that affects taxation of 10 year deductibles). Restrictions on the use of funds are abundant and invite applicants to rest their foreheads on cold surfaces.

The German Research Foundation for example, will – if you are very lucky – grant you money for a scientific meeting. But you’re not allowed to buy food with it. Because, you must know, real scientists don’t eat. And to thank you for organizing the meeting you don’t yourself get paid – that wouldn’t be an allowed use of funds. No, they thank you by requesting further reports and forms.

At least you can sometimes get money for scientific meetings. But convincing a funding agency to pay a bill for public outreach or open access initiatives is like getting a toddler to eat broccoli: No matter how convincingly you argue it’s in their own interest, you end up eating it yourself. And since writing proposals sucks, I mean, sucks up time, at some point I gave up trying to make a case that this blog is unpaid public outreach that you'd think research foundations should be supportive of. I just write – and on occasion I carefully rest my forehead on cold surfaces.

Then came the time I was running low on income – unemployed between two temporary contracts – and decided to pitch a story to a magazine. I was lucky and landed an assignment instantly. And so, for the first time in my life, I turned in work to a deadline, wrote an invoice, and got paid in return. I. Made. Money. Writing. It was a revelation. Unfortunately, my published masterwork is now hidden behind a paywall. I am not happy about this, you are not happy about this, and the man with the British accent wasn’t happy about it either. Thus his offer.

But I said no.

Because all I could see was time wasted trying to justify proper means of spending someone else’s money on suitable purposes that might be, for example, a conference fee that finances the first class ticket of the attending Nobel Prize winner. That, you see, is an allowed way of spending money in academia.

My cold-caller was undeterred and called again a week later to inquire whether I had changed my mind. I was visiting my mom, and mom, always the voice of reason, told me to just take the damn money. But I didn’t.

I don’t like being reminded of money. Money is evil. Money corrupts. I only pay with sanitized plastic. I swipe a card through a machine and get handed groceries in return – that’s not money, that’s magic. I look at my bank account statements so rarely I didn’t notice for three years I accidentally paid a gym membership fee in a country I don’t even live in. In case my finances turn belly-up I assume the bank will call and yell at me. Which, now that I think of it, seems unlikely because I moved at least a dozen times since opening my account. And I’m not good at updating addresses either. I did call the gym though and yelled at them – I got my money back.

Then the British man told me he also supports Tim Gowers’ new journal. “G-O-W-ers?” I asked. Yes, that Tim. That would be the math guy responsible for the equations in my G+ feed.

Tim Gowers. [Not sure whose photo, but not mine]
Tim Gowers, of course, also writes a blog. Besides that, he’s won the 1998 Fields Medal which makes him officially a genius. I sent him an email inquiring about our common friend. Tim wrote back he reads my blog. He reads my blog! A genius reads my blog! I mean, another genius – besides my mom who gets toddlers to eat broccoli.

Thusly, I thought, if it’s good enough for Gowers, it’s probably good enough for me. So I said yes. And, after some more weeks of consideration, sent my bank account details to the British man. You have to be careful with that kind of thing, says my mom.

That was last year in December. Then I forgot about the whole story and returned to my differential equations.

Tim, meanwhile, got busy setting up the webpage for his new journal “Discrete Analysis” which covers the emerging fields related to additive combinatorics (not to be confused with addictive combinatorics, more commonly known as Sudoku). His open-access initiative has attracted some attention because the journal’s site doesn’t itself host the articles it publishes – it merely links to files which are stored on the arXiv. The arXiv is an open-access server in operation since the early 1990s. It allows researchers in physics, math, and related disciplines to upload and share articles that have not, or not yet, been peer-reviewed and published. “Discrete Analysis” adds the peer-review, with minimal effort and minimal expenses.

Tim’s isn’t the first such “arxiv-overlay” journal – I myself published last year in another overlay-journal called SIGMA – but it is still a new development that is eyed with some skepticism. By relying on the arXiv to store files, the overlays render server costs somebody else’s problem. That’s convenient but doesn’t make the problem go away. Another issue is that the arXiv itself already moderates submissions, a process that the overlay journals have no control over.

Either way, it is a trend that I welcome because overlays offer scientists what they need from journals without the strings and costs attached by commercial publishers. It is, most importantly, an opportunity for the community to reclaim the conditions under which their research is shared, and also to innovate the format as they please:

“I wanted it to be better than a normal journal in important respects,” says Tim, “If you visit the website, you will notice that each article gives you an option to click on the words ‘Editorial introduction.’ If you do so, then up comes a description of the article (not on a new webpage, I hasten to add), which sets it in some kind of context and helps you to judge whether you want to find out more by going to the arXiv and reading it.”

But even overlay journals don’t operate at zero cost. The website of “Discrete Analysis” was designed by Scholastica’s team, and their platform will also handle the journal’s publication process. They charge $10 per submission and there are a couple of other expenses that the editorial board has to cover, such as services necessary to issue article DOIs. Tim wants to avoid handing on the journal expenses to the authors. Which brings in, among others, the support from my caller with the British accent.

In the two months that passed since I last heard from him, I found out that 10 years ago someone proved there is no non-trivial solution to the equations I was trying to solve. Well, at least that explains why I couldn’t find one. The two-day cursing retreat I hence scheduled was interrupted by a message from The British Man. Did the money arrive?, he wanted to know. Thus forced to check my bank account, I found that not only had his money not arrived, I had also never received any salary for my new job.

This gives me an excuse to lecture you on another pitfall of academic funding. Even after you have filed five copies of various tax-documents and sent the birth dates of the University President and Vice-president to an institution that handles your grant for another institution and is supposed to wire it to a third institution which handles it for your institution, the money might get lost along the way – and frequently does.

In this case they simply forgot to put me on the payroll. Luckily, the issue could be resolved quickly, and the next day the wire transfer from Great Britain arrived too. Good thing, because, as mommy guilt reminded me, this bank account pays for the girls’ daycare and lunch. My writer friends won’t be surprised to hear, however, that I also discovered several payments for my freelance work had not come through. When I grow up, I hope someone tells me how life works. /lecture

Tim Gowers invited submissions for “Discrete Analysis” starting last September, and the website of the new journal launched today; you can read his own blogpost here. For the community, the key question is now whether arxiv-overlay journals like Tim’s will be able to gain a status similar to that of traditional journals. The only way to find out is to try.

Public outreach in general, and science blogging in particular, is vital for the communication of science, both within our communities and to the public. And so are open access initiatives. Even though they are essential to advance research and integrate it into our society, funding agencies have been slow to accept these services as part of their mission.

While we wait for academia to finally digest the invention of the world wide web, it is encouraging to see that some think forward. And so, I am happy today to acknowledge this blog is now supported by the caller with the British accent, Ilyas Khan of Cambridge Quantum Computing. Ilyas has quietly supported a number of scientific endeavors. Although he is best known for enabling Wittgenstein's Nachlass to become openly and freely accessible by funding the project that was implemented by Trinity College Cambridge, he is also a sponsor of Tim Gowers' new journal Discrete Analysis.

Friday, February 26, 2016

"Rate your Supervisor" comes to High Energy Physics

A new website called the "HEP Postdoc Project" allows postdocs in high energy physics to rate their supervisors in categories like "friendliness," "expertise," and "accessibility."

I normally ignore emails that more or less explicitly ask me to advertise sites on my blog, but decided to make an exception for this one. It seems to be a hand-made project run by a small number of anonymous postdocs who want to help their fellows find good supervisors. And it’s a community that I care much about.

While I appreciate the initiative, I have to admit being generally unenthusiastic about anonymous ratings on point scales. Having had the pleasure of reading through an estimated several thousand recommendation letters, I have found that an assessment of skills is only useful if you know the person it comes from.

Much of this is cultural. A letter from a Russian prof that says this student isn't entirely bad at math might mean the student is up next for the Fields Medal. On the other hand, letters from North Americans tend to exclusively contain positive statements, and the way to read them is to search for qualities that were not listed.

But leaving aside the cultural stereotypes, more important are personal differences in the way people express themselves and use point scales, even if they are given a description for each rating (and that is missing on the website). We occasionally used 5-point rating scales in committees. You then notice quickly that some people tend to clump everyone in the middle range, while others are more comfortable using the high and low scores. Then again others either give a high rating or refuse to have any opinion. To get a meaningful aggregate, you can’t just take an average, you need to know roughly how each committee member uses the scale. (Which will require endless hours of butt-flattening meetings. Trust me, I’d be happy being done with clicking on a star scale.)
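A minimal sketch of what I mean, with entirely made-up numbers: standardize each rater’s scores (a z-score per rater) before averaging, so that a clumper’s small differences count as much as a full-range rater’s big ones.

```python
from statistics import mean, stdev

# Hypothetical data: two raters score the same three candidates on a
# 5-point scale. Rater A clumps everyone in the middle; rater B uses
# the full range.
scores = {
    "A": {"X": 3.4, "Y": 3.0, "Z": 3.2},
    "B": {"X": 1.0, "Y": 5.0, "Z": 3.0},
}

def standardized(ratings):
    """Rescale one rater's scores to mean 0 and spread 1 (a z-score)."""
    vals = list(ratings.values())
    m, s = mean(vals), stdev(vals)
    return {k: (v - m) / s for k, v in ratings.items()}

z = {rater: standardized(sc) for rater, sc in scores.items()}
for c in ["X", "Y", "Z"]:
    raw = mean(scores[r][c] for r in scores)
    adj = mean(z[r][c] for r in scores)
    print(c, round(raw, 2), round(adj, 2))
# The raw average makes Y the clear winner, driven entirely by rater B.
# The standardized averages come out (nearly) equal: once you account for
# how each rater uses the scale, the committee is actually split.
```

That the toy committee ends up tied is of course an artifact of my chosen numbers, but it shows how much the verdict can hinge on scale use rather than on opinion.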

You could object that any type of online rating suffers from these problems and yet they seem to serve some purpose. That's right of course, so this isn't to say they're entirely useless. Thus I am sharing this link thinking it's better than nothing. And at the very least you can have some fun browsing through the list to see who got the lowest marks ;)

Wednesday, February 24, 2016

10 Years BackRe(action)

Yes, today marks the 10th anniversary of my first post on this blog.

I started blogging while I was in Santa Barbara, in a tiny fifth-floor office that slightly swayed with the occasional earthquakes. I meant to write about postdoc-life in California, but ended up instead writing mostly about my research interests. Because, well, that's what I'm interested in. Sorry, California.

Those were the years of the String Wars and of Black Holes at the LHC. And since my writing was on target, traffic to this blog increased rapidly -- a somewhat surprising and occasionally disturbing experience.

Over the years, I repeatedly tried to share the work of regularly feeding this blog, but noticed it's more effort trying to convince others to write than to just write myself. And no, it's not zero effort. In an attempt to improve my Germenglish, I have read Strunk's "Elements of Style" forwards and backwards, along with several books titled "Writing Well" (which were written really well!), and I hope you benefit from it. For me, the outcome has been that now I can't read my older blogposts without crying over my own clumsy writing. Also, there's link-rot. But if you have some tolerance for awkward English and missing images, there's 10 years worth of archives totalling more than 1500 entries waiting in the side-bar.

The content of this blog has slightly changed over the years. Notably, I don't share links here any more. For this, I use instead my twitter and facebook accounts, which you can follow to get reading recommendations and the briefer commentaries. But since I can't stand cluttered pages, this blog is still ad-free and I don't make money with it. So if you like my writing, please have a close look at the donate-button in the top-right corner.

In the 10 years that have passed, this blog moved with me through the time-zones, from California to Canada, from Canada to Sweden, and from Sweden eventually back to Germany. It witnessed my wedding and my pregnancy and my daughters turning from babies to toddlers to Kindergartners. And the journey goes on. As some of you know already, I'm writing a book (or at least I'm supposed to be writing a book), so stay tuned, there's more to come.

I want to thank all of you for reading along, especially the commenters. I know that some of you have been around since the first days, and you have become part of my extended family. You have taught me a lot, about life and about science and about English grammar.

A special thank you goes to those of you who have sent me donations since I put up the button a few months ago. It is a great encouragement for me to continue.

Monday, February 22, 2016

Too many anti-neutrinos: Evidence builds for new anomaly

Bump ahead.
Tl;dr: A third experiment has reported an unexplained bump in the spectrum of reactor-produced anti-neutrinos. Speculations for the cause of the signal so far focus on incomplete nuclear fission models.

Neutrinos are the least understood of the known elementary particles, and they just presented physicists with a new puzzle. While monitoring the neutrino flux from nearby nuclear power plants, three different experiments have measured an unexpected bump around 5 MeV. First reported by the Double Chooz experiment in 2014, the excess was originally not statistically significant:
5 MeV bump as seen by Double Chooz. Image source: arXiv:1406.7763
Last year, a second experiment, RENO, reported an excess but did not assign a measure of significance. However, the bump is clearly visible in their data:
5 MeV bump as seen by RENO. Image source: arXiv:1511.05849
The newest bump is from the Daya Bay collaboration and was just published in PRL:

5 MeV bump as seen by Daya Bay. Image source: arXiv:1508.04233

They give the excess a local significance of 4.1 σ – a probability of less than one in ten thousand for the signal being due to pure chance.

This is a remarkable significance for a particle that interacts so feebly, and an impressive illustration of how much detector technology has improved. Originally, the neutrino’s interaction was thought to be so weak that to measure it at all it seemed necessary to place detectors next to the most potent neutrino source known – a nuclear bomb explosion.

And this is exactly what Frederick Reines and Clyde Cowan set out to do. In 1951, they devised “Project Poltergeist” to detect the neutrino emission from a nuclear bomb: “Anyone untutored in the effects of nuclear explosions would be deterred by the challenge of conducting an experiment so close to the bomb,” wrote Reines, “but we knew otherwise from experience and pressed on.” And their audacious proposal was approved swiftly: “Life was much simpler in those days—no lengthy proposals or complex review committees,” recalls Reines.

Shortly after their proposal was approved, however, the two men found a better experimental design and instead placed a larger detector close to a nuclear power plant. But the controlled splitting of nuclei in a power plant takes much longer to produce the same number of neutrinos as a nuclear bomb blast, and patience was required of Reines and Cowan. Their patience eventually paid off: They were awarded the 1995 Nobel Prize in physics for the first successful detection of neutrinos – a full 65 years after the particles were first predicted.

Another Nobel Prize for neutrinos was handed out just last year, this one commemorating the neutrino’s ability to “oscillate,” that is to change between different neutrino types as they travel. But, as the recent measurements demonstrate, neutrinos still have surprises in store.

Good news first, the new experiments have confirmed the neutrino oscillations. On short baselines like that of Daya Bay – a few kilometers – some of the electron anti-neutrinos that are emitted during nuclear fission change into other neutrino types and arrive at the detector in reduced numbers. The wavelength of the oscillation depends on the energy – higher energy means a longer wavelength. Thus, a detector placed at a fixed distance from the emission point will see a different energy distribution of particles than that at emission.

The emitted energy spectrum can be deduced from the composition of the reactor core – a known mixture of Uranium and Plutonium, each in two different isotopes. After the initial split, these isotopes leave behind a bunch of radioactive nuclei which then decay further. The math is messy, but not hugely complicated. With nuclear fission and decay models as input, the experimentalists can then extract from their data the change in the energy-distribution due to neutrino oscillation. And the parameters of the oscillation that they have observed fit those of other experiments.

Now to the bad news. The fits of the oscillation parameters to the energy spectrum do not take into account the overall number of particles. And when they look at the overall number, the Daya Bay experiment, like other reactor neutrino experiments before, falls about 6% short of expectation. And then there is the other oddity: the energy spectrum has a marked bump that does not agree with the predictions based on nuclear models. There are too many neutrinos in the energy range of 5 MeV.

There are four possible origins for this discrepancy: Detection, travel, production, and misunderstood background. Let us look at them one after the other.

Detection: The three experiments all use the same type of detector, a liquid scintillator with Gadolinium target. Neutrino-nucleus cross-sections are badly understood because neutrinos interact so weakly and very little data is available. However, the experimentalists calibrate their detectors with other radioactive sources in near vicinity, and no bumps have been seen in these reference measurements. This strongly speaks against detector shortcomings as an explanation.

Travel: An overall lack of particles could be explained with oscillation into a so-far undiscovered new type of ‘sterile’ neutrino. However, such an oscillation cannot account for a bump in the spectrum. This could thus at best be a partial explanation, though an intriguing one.

Production: The missing neutrinos and the bump in the spectrum are inferred relative to the expected neutrino flux from the power plant. To calculate the emission spectrum, the physicists rely on nuclear models. The isotopes in the power plant’s core are among the best studied nuclei ever, but still this is a likely source of error. Most research studies of radioactive nuclei investigate them in small numbers, whereas in a reactor a huge number of different nuclei are able to interact with each other. A few proposals have been put forward that mostly focus on the decay of Rubidium and Yttrium isotopes because these make the main contribution to the high energy tail of the spectrum. But so far none of the proposed explanations has been entirely convincing.

Background: Daya Bay and RENO both state that the signal is correlated with the reactor power which makes it implausible that it’s a background effect. There aren’t many details in the paper about the time-dependence of the emission though. It would seem possible to me that reactor power depends on the time of the day or on the season, both of which could also be correlated with background. But this admittedly seems like a long shot.

Thus, at the moment the most conservative explanation is a lacking understanding of processes taking place in the nuclear power plant. It presently seems very unlikely to me that there is fundamentally new physics involved in this – if the signal is real to begin with. It looks convincing to me, but I asked fellow blogger Tommaso Dorigo for his thoughts: “Their signal looks a bit shaky to me - it is very dependent on the modeling of the spectrum and the p-value is unimpressive, given that there is no reason to single out the 5 MeV region a priori. I bet it's a modeling issue.”

Whatever the origin of the reactor antineutrino anomaly, it will require further experiments. As Anna Hayes, a nuclear theorist at Los Alamos National Laboratory, told Fermilab’s Symmetry Magazine: “Nobody expected that from neutrino physics. They uncovered something that nuclear physics was unaware of for 40 years.”