Friday, July 25, 2014

Can black holes bounce to white holes?

Fast track to wisdom: Sure, but who cares if they can? We want to know if they do.

Black holes are defined by the presence of an event horizon, which is the boundary of a region from which nothing can escape, ever. The word black hole is also often used for something that looks for a long time very similar to a black hole and traps light, not eternally but only temporarily. Such space-times are said to have an “apparent horizon.” That they are not, strictly speaking, black holes was the origin of the recent Stephen Hawking quote according to which black holes may not exist, by which he meant they might have only an apparent horizon instead of an eternal event horizon.

A white hole is an upside-down version of a black hole: it has an event horizon that bounds a region which nothing can ever enter. Static black hole solutions, describing unrealistic black holes that have existed forever and will continue to exist forever, are actually a combination of a black hole and a white hole.
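For concreteness, the standard example of such a static solution is the Schwarzschild metric,

ds^2 = -\left(1 - \frac{2GM}{c^2 r}\right) c^2 dt^2 + \left(1 - \frac{2GM}{c^2 r}\right)^{-1} dr^2 + r^2 d\Omega^2,

with the horizon at r = 2GM/c^2. In its maximal (Kruskal) extension this one metric contains an exterior region, a black hole interior, a white hole interior, and a second exterior region.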

The horizon itself is a global construct; locally it is entirely unremarkable and regular. You would not notice crossing the horizon. The classical black hole solution does, however, contain a singularity in the center. This singularity is usually interpreted as the breakdown of classical general relativity and is expected to be removed by the yet-to-be-found theory of quantum gravity.

You do not, however, need quantum gravity to construct singularity-free black hole space-times. Hawking and Ellis’ singularity theorems prove that singularities must form from certain matter configurations, provided the matter is normal matter that cannot develop negative pressure and/or density. All you have to do to get rid of the singularity is invent some funny type of matter that refuses to be squeezed arbitrarily. This is not possible with any type of matter we know, so it just pushes the bump around under the carpet: now, rather than having to explain quantum effects of gravity, you have to explain where the funny matter comes from. It is normally interpreted not as matter but as a quantum gravitational contribution to the stress-energy tensor, but either way it’s basically the physicist’s way of using a kitten photo to cover the hole in the wall.
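The “normal matter” requirement enters the singularity theorems through energy conditions, roughly of the form

T_{\mu\nu} k^\mu k^\nu \geq 0 \quad \text{for all null vectors } k^\mu,

that is, the effective energy density seen along any light ray must be non-negative. Singularity-free black hole solutions evade the theorems by violating conditions of this type somewhere in the core.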

Singularity-free black hole solutions have been constructed for almost as long as the black hole solution has been known – people have always been disturbed by the singularity. Using matter other than the normal kind has allowed the construction of wormhole solutions as well as black holes that turn into white holes and allow an exit into a second space-time region. Now if a black hole is really a black hole with an event horizon, then the second space-time region is causally disconnected from the first. If the black hole has only an apparent horizon, then this does not have to be so, and the white hole then is not really a white hole either, it just looks like one.

The latter solution is quite popular in quantum gravity. It basically describes matter collapsing, forming an apparent horizon and a strong quantum gravity region inside but no singularity, then evaporating and returning to an almost flat space-time. There are various ways to construct these space-times. The details differ, but the corresponding causal diagrams all look basically the same.

This recent paper for example used a collapsing shell turning into an expanding shell. The title “Singularity free gravitational collapse in an effective dynamical quantum spacetime” basically says it all. Note how the resulting causal diagram (left in figure below) looks pretty much the same as the one Lee and I constructed based on general considerations in our 2009 paper (middle in figure below), which again looks pretty much the same as the one that Ashtekar and Bojowald discussed in 2005 (right in figure below), and I could go on and add a dozen more papers discussing similar causal diagrams. (Note that the shaded regions do not mean the same in each figure.)



One needs a concrete ansatz for the matter of course to be able to calculate anything. The general structure of the causal diagram is good for classification purposes, but not useful for quantitative reasoning, for example about the evaporation.

Haggard and Rovelli have recently added to this discussion with a new paper about black holes bouncing to white holes.

    Black hole fireworks: quantum-gravity effects outside the horizon spark black to white hole tunneling
    Hal M. Haggard, Carlo Rovelli
    arXiv: 1407.0989

Ron Cowen at Nature News announced this as a new idea, and while the paper does contain new ideas, that black holes may turn into white holes is in and by itself not new. And so, some clarification follows.

Haggard and Rovelli’s paper contains two ideas that are connected by an argument, but not by a calculation, so I want to discuss them separately. Before we start it is important to note that their argument does not take into account Hawking radiation. The whole process is supposed to happen already without outgoing radiation. For this reason the situation is completely time-reversal invariant, which makes it significantly easier to construct a metric. It is also easier to arrive at a result that has nothing to do with reality.

So, the one thing that is new in the Haggard and Rovelli paper is that they construct a space-time diagram, describing a black hole turning into a white hole, both with apparent horizons, and do so by a cutting procedure rather than by altering the equation of state of the matter. As source they use a collapsing shell that is supposed to bounce. This cutting procedure is fine in principle, even though it is not often used. The problem is that you end up with a metric that exists as a solution to some source, but you then have to calculate what the source has to do in order to give you the metric. This however is not done in the paper. I want to offer you a guess though as to what source would be necessary to create their metric.

The cutting that is done in the paper takes a part of the black hole metric (describing the outside of the shell) with an arm extending into the horizon region, then squeezes this arm together so that it shrinks in radial extension and no longer extends into the regime below the Schwarzschild radius, which is normally behind the horizon. This squeezed part of the black hole metric is then matched to empty space, describing the inside of the shell. See the image below.

Figure 4 from arXiv: 1407.0989

They do not specify what happens to the shell after it has reached the end of the region that was cut, explaining that one would need quantum gravity for this. The result is glued together with the time-reversed case, and so they get a metric that forms an apparent horizon and bounces at a radius where one normally would not expect quantum gravitational effects. (This works towards making the so far quite vague idea of Planck stars, which we discussed here, more concrete.)

The cutting and squeezing basically means that the high curvature region from inside the horizon was moved to a larger radius, and the only way this makes sense is if it happens together with the shell. So I think effectively they take the shell from a small radius and match the small radius to a large radius while keeping the density fixed (they keep the curvature). This looks to me like they blow up the total mass of the shell, but keep in mind this is my interpretation, not theirs. If that were so, however, then it makes sense that the horizon forms at a larger radius if the shell collapses while its mass increases. This raises the question though why the heck the mass of the shell should increase and where that energy is supposed to come from.

This brings me to the second argument in the paper, which is supposed to explain why it is plausible to expect this kind of behavior. Let me first point out that it is a bold claim that quantum gravity effects kick in outside the horizon of a (large) black hole. Standard lore has it that quantum gravity only leads to large corrections to the classical metric if the curvature is large (in the Planckian regime). This happens only after horizon crossing (as long as the mass of the black hole is larger than the Planck mass). But once the horizon is formed, the only way to make matter bounce so that it can come out of the horizon necessitates violations of causality and/or locality (keep in mind their black hole is not evaporating!) that extend into small curvature regions. This is inherently troublesome because now one has to explain why we don’t see quantum gravity effects all over the place.
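For a quick estimate of where the curvature actually becomes large, in units G = c = 1: the curvature of the Schwarzschild metric, as measured by the Kretschmann scalar, is

R_{\alpha\beta\gamma\delta} R^{\alpha\beta\gamma\delta} = \frac{48 M^2}{r^6},

which at the horizon r = 2M is of order 1/M^4, tiny compared to the Planckian value for any black hole much heavier than the Planck mass. In Planck units the curvature only becomes Planckian at r of order M^{1/3}, far inside the horizon.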

The way they argue this could happen is that small, Planck-size, higher-order corrections to the metric can build up over time. In this case it is not solely the curvature that is relevant for an estimate of the effect, but also the duration of the buildup. So far, so good. My first problem is that I can’t see what their estimate of the long-term effects of such a small correction has to do with quantum gravity. I could read the whole estimate as being one for black hole solutions in higher-order gravity, quantum not required. If it were a quantum fluctuation, I would expect the average solution to remain the classical one and the cases in which the fluctuations build up to be possible but highly improbable. In fact they seem to have something like this in mind, just that for some reason they come to the conclusion that the transition to the solution in which the initially small fluctuation builds up becomes more likely over time rather than less likely.

What one would need to do to estimate the transition probability is to work out some product of wave-functions describing the background metric close by and far away from the classical average, but nothing like this is contained in the paper. (Carlo told me though, it’s in the making.) It remains to be shown that the process of all the matter of the shell suddenly tunneling outside the horizon and expanding again is more likely to happen than the slow evaporation due to Hawking radiation which is essentially also a tunnel process (though not one of the metric, just of the matter moving in the metric background). And all this leaves aside that the state should decohere and not just happily build up quantum fluctuations for the lifetime of the universe or so.

By now I’ve probably lost most readers so let me just sum up. The space-time that Haggard and Rovelli have constructed exists as a mathematical possibility, and I do not actually doubt that the tunnel process is possible in principle, provided that they get rid of the additional energy that has appeared from somewhere (this is taken care of automatically by the time-reversal). But this alone does not tell us whether this space-time can exist as a real possibility: we do not know whether this process can happen with large probability (close to one) in the time before the shell reaches the Schwarzschild radius (of the classical solution).

I have remained skeptical, despite Carlo’s infinite patience in explaining their argument to me. But if they are right and what they claim is correct, then this would indeed solve both the black hole information loss problem and the firewall conundrum. So stay tuned...

Sunday, July 20, 2014

I saw the future [Video] Making of.

You wanted me to smile. I did my best :p



With all the cropping and overlays my computer worked on the video mixdown for a full 12 hours, and that at a miserable resolution. Amazingly, the video looks better after uploading it to YouTube. Whatever compression YouTube is using, it has nicely smoothed out some ugly pixelations that I couldn't get rid of.

The worst part of the video making is that my software screws up the audio timing upon export. Try as I might, the lip movements never quite seem to be in sync, even if they look perfectly fine before export. I am not sure exactly what causes the problem. One issue is that the timing of my camera seems to be slightly inaccurate. If I record a video with the audio running in the background and later add the same audio on a second track, the video runs too fast by about 100 ms over 3 minutes. That's already enough to notice the delay, and it makes the editing really cumbersome. Another contributing factor seems to be simple errors in the data processing. The audio sometimes runs behind and then, with an ugly click, jumps back into place.

Another issue with the video is that, well, I don't have a video camera. I have a DSLR photo camera with a video option, but that has its limits. It does not for example automatically refocus during recording and it doesn't have a movable display either. That's a major problem since it means I can't focus the camera on myself. So I use a mop that I place in front of the blue screen, focus the camera on that, hit record, and then try to put myself in place of the mop. Needless to say, that doesn't always work, especially if I move around. This means my videos are crappy to begin with. They don't exactly get better with several imports and exports and rescalings and background removals and so on.

Oh yeah, and then the blue screen. After I noticed last time that pink is really a bad color for background removal because skin tones are pink, not to mention lipstick, I asked Google. The internet in its eternal wisdom recommended a saturated blue rather than the turquoise I had thought of, and so I got myself a few meters of the cheapest royal blue fabric I could find online. When I replaced the background I turned into a zombie, and thus I was reminded that I have blue eyes. For this reason I have replaced the background with something similar to the original color. And my eyes look bluer than they actually are.

This brings me to the audio. After I had to admit that my so-called songs sound plainly crappy, I bought and read a very recommendable book called "Mixing Audio" by Roey Izhaki. Since then I know words like multiband compressor and reverb tail. The audio mix still isn't particularly good, but at least it's better and since nobody else will do it, I'll go and congratulate myself on this awesomely punchy bass-kick loop which you'll only really appreciate if you download the mp3 and turn the volume up to max. Also note how the high frequency plings come out crystal clear after I figured out what an equalizer is good for.

My vocal recording and processing has reached its limits. There's only so much one can do without a studio environment. My microphone picks up all kinds of noise, from the cars passing by and the computer fan to the neighbor's washing machine and the church bells. I basically can't do recordings in one stretch, I have to repeat everything a few times and pick the best pieces. I've tried noise-removal tools, but the results sound terrible to me and, worse, they are not reproducible, which is a problem since I have to patch pieces together. So instead I push the vocals through several high-pass filters to get rid of the background noise. This leaves my voice sounding thinner than it is, so then I add some low-frequency reverb and a little chorus and it comes out sounding mostly fine.

I have given up on de-essing presets, they always leave me with a lisp on top of my German accent. Since I don't actually have a lot of vocals to deal with, I just treat all the 's' by hand in the final clip, and that sounds okay, at least to my ears.

Oh yeah, and I promise I'll not attempt again to hit an F#3, that was not a good idea. My voicebox clearly wasn't meant to support anything below B3. Which is strange, as I evidently speak mostly in a frequency range so low that it is plainly unstable on my vocal cords. I do fairly well with everything between the middle and high C and have developed the rather strange habit of singing second and third voices to myself when I get stuck on some calculation. I had the decency to remove the whole choir in the final version though ;)

Hope you enjoy this little excursion into the future. Altogether it was fun to make. And see, I even managed a smile, especially for you :o)

Saturday, July 19, 2014

What is a theory, What is a model?

During my first semester I coincidentally found out that the guy who often sat next to me, one of the better students, believed the Earth was only 15,000 years old. Once on the topic, he produced stacks of colorful leaflets which featured lots of names, decorated with academic titles, claiming that scientific evidence supports the scripture. I laughed at him, initially thinking he was joking, but he turned out to be dead serious and I was clearly going to roast in hell for all future eternity.

If it hadn’t been for that strange encounter, I would summarily dismiss the US debates about creationism as a bizarre cultural reaction to lack of intellectual stimulation. But seeing that indoctrination can survive a physics and math education, and knowing the amount of time one can waste using reason against belief, I have a lot of sympathy for the fight of my US colleagues.

One of the main educational efforts I have seen is to explain what the word “theory” means to scientists. We are told that a “theory” isn’t just any odd story that somebody made up and told to his 12 friends, but that scientists use the word “theory” to mean an empirically well-established framework to describe observations.

That’s nice, but unfortunately not true. Maybe that is how scientists should use the word “theory”, but language doesn’t follow definitions: Cashews aren’t nuts, avocados aren’t vegetables, black isn’t a color. And a theory sometimes isn’t a theory.

The word “theory” has a common root with “theater” and originally seems to have meant “contemplation” or generally a “way to look at something,” which is quite close to the use of the word in today’s common language. Scientists adopted the word, but not in any regular way. It’s not like we vote on what gets called a theory and what doesn’t. So I’ll not attempt to give you a definition that nobody uses in practice, but just try an explanation that I think comes close to practice.

Physicists use the word theory for a well worked-out framework to describe the real world. The theory is basically a map between a model, that is a simplified stand-in for a real-world system, and reality. In physics, models are mathematical, and the theory is the dictionary to translate mathematical structures into observable quantities.


Exactly what counts as “well worked-out” is somewhat subjective, but as I said one doesn’t start with the definition. Instead, a framework that gets adopted by a big part of the community slowly comes to deserve the title of a “theory”. Most importantly, that means the theory has to fulfil the scientific standards of the field. If something is called a theory, it basically means scientists trust its quality.

One should not confuse the theory with the model. The model is what actually describes whatever part of the world you want to study by help of your theory.

General Relativity for example is a theory. It does not in and by itself describe anything we observe. For this, we have to first make several assumptions about symmetries and matter content to then arrive at a model, the metric that describes space-time, from which observables can be calculated. Quantum field theory, to use another example, is a general calculation tool. To use it to describe the real world, you first have to specify what type of particles you have, what symmetries, and what process you want to look at; this gives you for example the standard model of particle physics. Quantum mechanics is a theory that doesn’t carry the name theory. A concrete model would for example be that of the Hydrogen atom, and so on. String theory has been such a convincing framework for so many that it has risen to the status of a “theory” without there being any empirical evidence.
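To put equations to the first example, in units c = 1: the theory is Einstein’s field equations,

G_{\mu\nu} = 8 \pi G \, T_{\mu\nu},

and a model is what you get after committing to assumptions about symmetries and matter content, say homogeneity and isotropy with a perfect fluid, which reduces the metric to the Friedmann-Robertson-Walker form

ds^2 = -dt^2 + a(t)^2 \left( \frac{dr^2}{1 - k r^2} + r^2 d\Omega^2 \right),

from which observables like the expansion rate can then be calculated.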

A model doesn't necessarily have to be about describing the real world. To get a better understanding of a theory, it is often helpful to examine very simplified models even though one knows these do not describe reality. Such models are called “toy models”. Examples are neutrino oscillations with only two flavors (even though we know there are at least three), gravity in 2 spatial dimensions (even though we know there are at least three), and φ⁴ theory - where we reach the limits of my terminology, because according to what I said previously it should be a φ⁴ model (it falls into the domain of quantum field theory).

Phenomenological models (the things I work with) are models explicitly constructed to describe a certain property or observation (the “phenomenon”). They often use a theory that is known not to be fundamental. One never talks about phenomenological theories because the whole point of doing phenomenology is the model that makes contact to the real world. A phenomenological model usually serves one of two purposes: It is either a preliminary description of existing data or a preliminary prediction for not-yet existing data, both with the purpose of leading the way to a fully-fledged theory.

One does not necessarily need a model together with the theory to make predictions. Some theories have consequences that are true for all models and are said to be “model-independent”. Though if one wants to test them experimentally, one has to use a concrete model again. Tests of violations of Bell’s inequality may be an example. Entanglement is a general property of quantum mechanics, straight from the axioms of the theory, yet to test it in a certain setting one has to specify a model again. The existence of extra dimensions in string theory may serve as another example of a model-independent prediction.

One doesn’t have to tell this to physicists, but the value of having a model defined in the language of mathematics is that one uses calculation, logical conclusions, to arrive at numerical values for observables (typically dependent on some parameters) from the basic assumptions of the model. That is, it’s a way to limit the risk of fooling oneself and getting lost in verbal acrobatics. I recently read an interesting and occasionally amusing essay by a mathematician-turned-biologist who tries to explain to his colleagues what the point of constructing models is:
“Any mathematical model, no matter how complicated, consists of a set of assumptions, from which are deduced a set of conclusions. The technical machinery specific to each flavor of model is concerned with deducing the latter from the former. This deduction comes with a guarantee, which, unlike other guarantees, can never be invalidated. Provided the model is correct, if you accept its assumptions, you must as a matter of logic also accept its conclusions.”
Well said.

After I realized the guy next to me in physics class wasn’t joking about his creationist beliefs, he went to great lengths explaining that carbon-dating is a conspiracy. I went to great lengths making sure to henceforth place my butt safely far away from him. It is beyond me how one can study a natural science and still interpret the Bible literally. Though I have a theory about this…

Saturday, July 12, 2014

Post-empirical science is an oxymoron.

Image illustrating a phenomenologist after reading a philosopher go on about empiricism.

3:AM has an interview with philosopher Richard Dawid who argues that physics, or at least parts of it, are about to enter an era of post-empirical science. By this he means that “theory confirmation” in physics will increasingly be sought by means other than observational evidence because it has become very hard to experimentally test new theories. He argues that the scientific method must be updated to adapt to this development.

The interview is a mixture of statements that everybody must agree on, followed by subtle linguistic shifts that turn these statements into much stronger claims. The most obvious of these shifts is that Dawid flips repeatedly between “theory confirmation” and “theory assessment”.

Theoretical physicists do of course assess their theories by means other than fitting data. Mathematical consistency clearly leads the list, followed by semi-objective criteria like simplicity or naturalness, and other mostly subjective criteria like elegance, beauty, and the popularity of people working on the topic. These criteria are used for assessment because some of them have proven useful for arriving at theories that are empirically successful. Other criteria are used because they have proven useful for arriving at a tenured position.

Theory confirmation on the other hand doesn’t exist. The expression is sometimes used in a sloppy way to mean that a theory has been useful to explain many observations. But you never confirm a theory. You just have theories that are more, and others that are less useful. The whole purpose of the natural sciences is to find those theories that are maximally useful to describe the world around us.

This brings me to the other shift that Dawid makes in his string (ha-ha-ha) of words, which is that he alters the meaning of “science” as he goes. To see what I mean we have to make a short linguistic excursion.

The German word for science (“Wissenschaft”) is much closer to the original Latin meaning, “scientia” as “knowledge”. Science, in German, includes the social and the natural sciences, computer science, mathematics, and even the arts and humanities. There is for example the science of religion (Religionswissenschaft), the science of art (Kunstwissenschaft), the science of literature, and so on. Science in German is basically everything you can study at a university, and as far as I am concerned mathematics is of course a science. However, in stark contrast to this, the common English use of the word “science” refers exclusively to the natural sciences and does typically not even include mathematics. To avoid conflating these two different meanings, I will explicitly refer to the natural sciences as such.

Dawid sets out talking about the natural sciences, but then strings (ha-ha-ha) his argument along on the “insights” that string theory has led to and the internal consistency that gives string theorists confidence their theory is a correct description of nature. This “non-empirical theory assessment”, while important, can however only be a means to the end of an eventual empirical assessment. Without making contact to observation a theory isn’t useful to describe the natural world, not part of the natural sciences, and not physics. These “insights” that Dawid speaks of are thus not assessments that can ever validate an idea as being good to describe nature, and a theory based only on non-empirical assessment does not belong in the natural sciences.

Did that hurt? I hope it did. Because I am pretty sick and tired of people selling semi-mathematical speculations as theoretical physics and blocking jobs with their so-called theories of nothing specifically that lead nowhere in particular. And that while looking down on those who work on phenomenological models because those phenomenologists, they’re not speaking Real Truth, they’re not among the believers, and their models are, as one string theorist once so charmingly explained to me “way out there”.

Yeah, phenomenology is out there where science is done. Too many of those who call themselves theoretical physicists today seem to have forgotten that physics is all about building models. It’s not about proving convergence criteria in some Hilbert-space or classifying the topology of solutions of some equation in an arbitrary number of dimensions. Physics is not about finding Real Truth. Physics is about describing the world. That’s why I became a physicist – because I want to understand the world that we live in. And Dawid is certainly not helping to prevent more theoretical physicists from getting lost in math and philosophy when he attempts to validate their behavior by claiming the scientific method has to be updated.

The scientific method is a misnomer. There really isn’t such a thing as a scientific method. Science operates as an adaptive system, much like natural selection. Ideas are produced, their usefulness is assessed, and the result of this assessment is fed back into the system, leading to selection and gradual improvement of these ideas.

What is normally referred to as the “scientific method” are certain institutionalized procedures that scientists use because they have been shown to be efficient at finding the most promising ideas quickly. That includes peer review, double-blind studies, criteria for statistical significance, mathematical rigor, etc. The procedures, and how stringent (ha-ha-ha) they are, are somewhat field-dependent. Non-empirical theory assessment has been used in theoretical physics for a long time. But these procedures are not set in stone, they’re there as long as they seem to work, and the scientific method certainly does not have to be changed. (I would even argue it can’t be changed.)

The question that we should ask instead, the question I think Dawid should have asked, is whether more non-empirical assessment is useful at the present moment. This is a relevant question because it requires one to ask “useful for what”? As I clarified above, I myself mean “useful to describe the real world”. I don’t know what “use” Dawid is after. Maybe he just wants to sell his book, that’s some use indeed.

It is not a simple question to answer how much theory assessment is good and how much is too much, or for how long one should pursue a theory trying to make contact to observation before giving up. I don’t have answers to this, and I don’t see that Dawid has.

Some argue that string theory has been assessed too much already, and that more than enough money has been invested into it. Maybe that is so, but I think the problem is not that too much effort has been put into non-empirical assessment, but that too little effort has been put into pursuing the possibility of empirical test. It’s not a question of absolute weight on any side, it’s a question of balance.

And yes, of course this is related to it becoming increasingly more difficult to experimentally test new theories. That together with self-supporting community dynamics that Lee so nicely called out as group-think. Not that loop quantum gravity is any better than string theory.

In summary, there’s no such thing as post-empirical physics. If it doesn’t describe nature, if it has nothing to say about any observation, if it doesn’t even aspire to this, it’s not physics. This leaves us with a nomenclature problem. What do you call a theory that has only non-empirical facts speaking for it, and one that the mathematical physicists apparently don’t want either? How about mathematical philosophy, or philosophical mathematics? Or maybe we should call it Post-empirical Dawidism.

[Peter Woit also had a comment on the 3:AM interview with Richard Dawid.]

Sunday, July 06, 2014

You’re not a donut. And not a mug either.

A topologist, as the joke goes, is somebody who can’t tell a mug from a donut.

Topology is a field of mathematics concerned with the properties of spaces and their invariants. One of these invariants is the number of cuts you can make through an object without it falling apart, known as the “genus”. You can cut a donut and it becomes an open ring, yet it is still one piece, and you can cut the handle of a mug and it won’t fall off. Thus, they’re topologically the same.

The genus essentially counts the number of holes, though that can be slightly misleading. A representative survey among our household members for example revealed that the majority of people count four holes in a T-shirt, while its genus is actually 3. (Make it a tank top, cut open the shoulders and down the front. If you cut any more, it will fall apart.)
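If you prefer a formula over a pair of scissors: for a closed orientable surface the genus g is tied to the Euler characteristic χ by

\chi = 2 - 2g,

so a sphere (χ = 2) has genus 0, a torus (χ = 0) has genus 1, and the T-shirt surface with its genus of 3 has χ = −4.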

Every now and then I read that humans are topologically donuts, with anus, excuse me, genus one. Yes, that is obviously wrong, and I know you’ve all been waiting for me to count the holes in your body.

To begin with, the surface of the human body, like any other non-mathematical surface, is not impenetrable, and how many holes it has is a matter of resolution. For a neutrino, for example, you’re pretty much all holes.

Leaving aside subatomic physics and marching on to the molecular level, the human body possesses an intricate network of transport routes for essential nutrients, proteins, bacteria and cells, and what went in at one location can leave pretty much anywhere else. You can for example absorb some things through your lungs and get rid of them in your sweat, and you can absorb some medications through your skin. Not to mention that the fluid you ingest passes through some cellular layers and eventually leaves through yet another hole.

But even above the molecular level, the human body has more than one hole. One of the most unfortunate pieces of evolutionary heritage we have is that our airways are conjoined with our foodways. As you might have figured out when you were 4 years old, you can drink through your nose, and since you have two nostrils that brings you up to genus three.

Next, the human eyes sit pretty loosely in their sockets and the nasal cavities are connected to the eye sockets in various ways. I can blow out air through my eyes, so I count up to genus 5 then. Alas, people tend to find this a rather strange skill, so I’ll leave it to you whether you want to count your eyes to uppen your holyness. And while we are speaking of personal oddities, should you have any body piercings, these will pimp up your genus further. I have piercings in my ears, so that brings my counting to genus 7.

Finally, for the ladies, the fallopian tubes are not sealed off by the ovaries. The egg that is released during ovulation has to first make it to the tube. It is known to happen occasionally that an egg travels to the fallopian tube on the other side, meaning the tubes are connected through the abdominal cavity, forming a loop that adds one to the genus.

This brings my counting to 5 for the guys, 6 for the ladies, plus any piercings that you may have.

And if you have trouble imagining a genus 6 surface, below is some visual aid.

[Images of surfaces with genus 0 through 6, plus one more left as homework.]

Sunday, June 29, 2014

The Inverse Problem. Sorry, we don’t have experimental evidence for quantum gravity.

The BICEP collaboration recently reported the first measurements of CMB polarization due to relic gravitational waves. It is presently unclear whether their result will hold up and eventually be confirmed by other experimental groups, or if they have screwed up their data analysis and there’s no signal after all. Ron Cowen at Nature News informs the reader carefully there’s “No evidence for or against gravitational waves.”

I’m not an astrophysicist and don’t have much to say about the measurement, but I have something to say about what this measurement means for quantum gravity. Or rather, what it doesn’t mean.

I keep coming across claims that BICEP is the first definite experimental evidence that gravity must be quantized. Peter Woit, for example, let us know that Andrew Strominger and Juan Maldacena cheerfully explain that quantum gravity is now an experimental subject, and Lawrence Krauss recently shared his “Viewpoint” in which he also writes that the BICEP results imply gravity must be quantized:

“Research by Frank Wilczek and myself [Lawrence Krauss], based on a dimensional analysis argument, suggests that, independently of the method used to calculate the gravitational wave spectrum, a quantum gravitational origin is required. If so, the BICEP2 result then implies that gravity is ultimately a quantum theory, a result of fundamental importance for physics.”
We previously discussed the argument by Krauss and Wilczek here. In a nutshell the problem is that one can’t conclude anything from and with nothing, and no conclusion is ever independent of the assumptions.

The easiest way to judge these claims is to ask yourself: What would happen if the BICEP result does not hold up and other experiments show that the relic gravitational wave background is not where it is expected to be?

Let me tell you: Nobody working on quantum gravity would seriously take this to mean gravity isn’t quantized. Instead, they’d stumble over each other trying to explain just how physics in the early universe is modified so as to not leave a relic background measurable today. And I am very sure they’d come up with something quickly, because we have very little knowledge about physics in such extreme conditions, and thus much freedom to twiddle with the theory.

The difference between the two situations, the relic background being confirmed or not confirmed, is that almost everybody expects gravity to be quantized, so right now everything goes as expected and nobody rushes to come up with a way to produce the observed spectrum without quantizing gravity. The difference between the two situations is thus one of confirmation bias.

What asking this question tells you then is that there are assumptions going into the conclusion other than perturbatively quantizing gravity, assumptions that quickly will be thrown out if the spectrum doesn’t show up as expected. But this existence of additional assumptions also tells you that the claim that we have evidence for quantum gravity is if not wrong then at least very sloppy.

What we know is this: If gravity is perturbatively quantized and nothing else happens (that’s the extra assumption) then we get a relic gravitational wave spectrum consistent with the BICEP measurement. This statement is equivalent to the statement that no relic gravitational wave spectrum in the BICEP range implies no perturbative quantization of gravity as long as nothing else happens. The conclusion that Krauss, Wilczek, Strominger, Maldacena and others would like to draw is however that the measurement of the gravitational wave spectrum implies that gravity must be quantized, leaving aside all other assumptions and possibly existing alternatives. This statement is not logically equivalent to the former. This non-equivalence is sometimes referred to as “the inverse problem”.
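Spelled out in symbols, with Q for “gravity is perturbatively quantized,” N for “nothing else happens,” and S for “there is a relic spectrum in the BICEP range,” the established statement is

(Q \wedge N) \Rightarrow S, \qquad \text{equivalently} \qquad (\neg S \wedge N) \Rightarrow \neg Q,

whereas the advertised conclusion is S \Rightarrow Q, which does not follow unless one first rules out every other way of producing S.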

The inverse problem is finding the theory from the measurements, the inverse of calculating the data from the theory. Strictly speaking it is impossible to pin down the theory from the measurements since this would imply ruling out all alternative options but one, and there might always be alternative options - unknown unknowns - that we just did not think of. In practice then solving the inverse problem means to rule out all known alternatives. I don’t know of any alternative that has been ruled out.

The so-far best attempt at ruling out classical gravity is this paper by Ashoorioon, Dev, and Mazumdar. They show essentially that it’s the zero-point fluctuations of the quantized metric that seed the relic gravitational waves. Since a classical field doesn’t have these zero-point fluctuations, this seed is missing. Using any other known matter field with standard coupling as a seed would give a too small amplitude; this part of the argument is basically the same argument as Krauss’ and Wilczek’s.

There is nothing wrong with this conclusion, except for the unspoken words that “of course” nobody expects any other source for the seeds of the fluctuations. But you have practice now, so try turning the argument around: If there were no gravitational wave background at that amplitude, nobody would argue that gravity must be classical, but rather that there must be some non-standard coupling or some other seed, i.e. some other assumption that is not fulfilled. Probably some science journalist would call it indirect evidence for physics beyond the standard model! Neither the argument by Ashoorioon et al nor that of Krauss and Wilczek, for example, has anything to say about a phase transition from a nongeometrical phase that might have left some seeds, or about some other non-perturbative effect.

There are more things in heaven and earth, Horatio, than annihilation and creation operators for spin-two fields.

The argument by Krauss and Wilczek uses only dimensional analysis. The strength of their argument is its generality, but that’s also its weakness. You could argue on the same grounds for example that the electron’s mass is evidence for quantum electrodynamics because you can’t write down a mass term for a classical field without an hbar in it. That is technically correct, but it’s also uninsightful because it doesn’t tell us anything about what we actually mean by quantization, eg the commutation relations between the field and its conjugate. It’s similar with Krauss and Wilczek’s argument. They show that, given there’s nothing new happening, you need an hbar to get the dimensions to work out. This is correct but in and by itself doesn’t tell you what the hbar does to gravity. The argument by Ashoorioon et al is thus more concrete, but on the flipside less widely applicable.
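Roughly speaking, the dimensional argument goes like this: the only dimensionless combination (up to powers) that you can form from G, hbar, c and the inflationary Hubble rate H is

\frac{G \hbar H^2}{c^5} = \left(\frac{H}{M_{\rm Pl}}\right)^2 \quad \text{(right-hand side in units } \hbar = c = 1\text{)},

and this, up to numerical factors, is the amplitude of the relic tensor spectrum. So an hbar has to appear for the dimensions to work out, provided H is the only other scale in the game.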

Don’t get me wrong there. I have no reason to doubt that perturbatively quantized gravity is the right description at weak coupling, and personally I wouldn’t want to waste my time on a theory that leaves gravity unquantized. But the data’s relevance for quantum gravity is presently being oversold. If the BICEP result vanishes you’d see many people who make their living from quantum gravity backpedal very quickly.

Monday, June 23, 2014

What I do... [Video]

What I do when I don't do what I don't do. I'm the short one. The tall one is our soon-to-be-ex director Lárus Thorlacius, also known as the better half of black hole complementarity. Don't worry, I'm not singing.

Sunday, June 22, 2014

Book review: “The Island of Knowledge” by Marcelo Gleiser

The Island of Knowledge: The Limits of Science and the Search for Meaning
By Marcelo Gleiser
Basic Books (June 3, 2014)

In May 2010, I attended a workshop at Perimeter Institute on “The Laws of Nature: Their Nature and Knowability,” one of the rare events that successfully mixed physicists with philosophers. My main recollection of this workshop is spending half of it being sick in the women’s restroom, which probably had less to do with the philosophers and more to do with me being 4 weeks pregnant. Good thing that I’m writing a blog to remind me what else happened at this workshop, for example Marcelo Gleiser’s talk “What can we know of the world?” about which I wrote back then. Audio and Video here.

The theme of Gleiser’s 2010 talk – the growth of human knowledge through scientific inquiry and the limits of that growth – is the content of his most recent book “The Island of Knowledge”. He acknowledges having had the idea for this book at the 2010 PI workshop, and I can see why. Back then my thought about Gleiser’s talk was that it was all rather obvious, so obvious I wasn’t sure why he was invited to give a talk to begin with. Also, why was I not invited to give a talk, mumble grumble grr, bathroom break.

Surprisingly though, many people in attendance had arguments to pick with Gleiser following his elaborations. That gave me something to think about: apparently even practicing scientists don’t all agree on the purpose of scientific inquiry, and certainly not philosophers, so the question of what we do science for, and what we can ultimately expect to know, makes a good topic for a popular-science book.

Gleiser’s point of view about science is pragmatic and process-oriented, and I agree with pretty much everything in his book, so I am clearly biased to like it. Science is a way to construct models about the world and to describe our observations. In that process, we collect knowledge about Nature, but this knowledge is necessarily limited. It is limited by the precision to which we can measure, and quite possibly also by the laws of nature themselves, because they may fundamentally prevent us from expanding our knowledge.

The limits that Gleiser discusses in “The Island of Knowledge” are the limits set by the speed of light, by quantum uncertainty, by Gödel’s incompleteness theorems, and by the finite capacity of the human brain that ultimately limits what we can possibly understand. Expanding on these topics, Gleiser guides the reader through the historical development of the scientific method, the mathematical description of nature, cosmology, relativity and quantum mechanics. He makes detours through alchemy and chemistry and ends with his thoughts on artificial intelligence and the possibility that we live in a computer simulation. Along the way, he discusses the multiverse and the quest for a theory of everything (the latter apparently the topic of his previous book “A Tear at the Edge of Creation”, which I haven’t read).

Since we can never know what we don’t know, we will never know whether our models are indeed complete and as good as can be, which is why the term “Theory of Everything”, when taken literally, is unscientific in itself. We can never know whether a theory is indeed a theory of everything. Gleiser is skeptical of the merits of the multiverse, which “stretch[es] the notion of testability in physics to the breaking point” and which “strictly speaking is untestable”, though he explains how certain aspects of the multiverse (bubble collisions, see my recent post on this) may result in observable consequences.

Gleiser on the one hand is very much the pragmatic and practical scientist, but he does not discard philosophy as useless either; rather, he argues that scientists have to be more careful about the philosophical implications of their arguments:
“[A]s our understanding of the cosmos has advanced during the twentieth century and into the twenty-first, scientists – at least those with cosmological and foundational interests – have been forced to confront questions of metaphysical importance that threaten to compromise the well-fortified wall between science and philosophy. Unfortunately, this crossover has been done for the most part with carelessness and conceptual impunity, leading to much confusion and misapprehension.”

Unfortunately, Gleiser isn’t always very careful himself. While he first argues that there’s no such thing as scientific truth because “The word “true” doesn’t have much meaning if we can’t ever know what is,” he later uses expressions like “the laws of Nature aim at universality, at uncovering behaviors that are true”. And his take on Platonism seems somewhat superficial. Most readers will probably agree that “mathematics is always an approximation to reality and never reality as it is” and that Platonism is “a romantic belief system that has little to do with reality”. Alas we do not actually know that this is true. There is nothing in our observations that would contradict such an interpretation, and for all I know we can never know whether it’s true or not, so I leave it to Tegmark to waste his time on this.

Gleiser writes very well. He introduces the necessary concepts along the way, and is remarkably accurate while using a minimum of technical details. Some anecdotes from his own research and personal life are nicely integrated with the narrative, and he has a knack for lyrical imagery which he uses sparingly but well timed to make his points.

The book reads much better than John Barrow’s 1999 book “Impossibility: The Limits of Science and the Science of Limits”, which took on a similar theme. Barrow’s book is more complete in that it also covers economic and biological limits, and more of what scientists presently believe is impossible and why, for example time travel, complexity, and the possible future developments of science, which Gleiser doesn’t touch upon. But Barrow’s book also suffers from being stuffed with all these topics. Gleiser’s book aims less at being complete; he clearly leaves out many aspects of “The Limits of Science and the Search for Meaning” that the book’s subtitle promises, but in his selectiveness Gleiser gets his points across much better. Along the way, the reader learns quite a bit about cosmology, relativity and quantum mechanics, prior knowledge not required.

I would recommend this book to anybody who wants to know where the current boundaries of the “Island of Knowledge” are, seen from the shore of theoretical physics.

[Disclaimer: Free review copy]

Sunday, June 15, 2014

Evolving dimensions, now vanishing

Vanishing dimensions.
Technical sketch.
Source: arXiv:1406.2696 [gr-qc]

Some years ago, we discussed the “Evolving Dimensions”, a new concept in the area of physics beyond the standard model. The idea, put forward by Anchordoqui et al in 2010, is to make the dimensionality of space-time scale-dependent, so that at high energies (small distances) there is only one spatial dimension and at low energies (large distances) the dimension is four, or possibly even higher. In between – in the energy regime that we deal with in everyday life and most of our experiments too – one finds the normal three spatial dimensions.

The hope is that these evolving dimensions address the problem of quantizing gravity, since gravity in lower dimensions is easier to handle, and possibly the cosmological constant problem, since it is a long-distance modification that becomes relevant at low energies.

One of the motivations for the evolving dimensions is the finding that the spectral dimension decreases at high energies in various approaches to quantum gravity. Note however that the evolving dimensions deal with the actual space-time dimension, not the spectral dimension. This immediately brings up a problem that I have talked about with Dejan Stojkovic, one of the authors of the original proposal, several times: the issue of Lorentz-invariance. The transition between different numbers of dimensions is conjectured to happen at certain energies: how is that statement made Lorentz-invariant?

The first time I heard about the evolving dimensions was in a talk by Greg Landsberg at our 2010 conference on Experimental Search for Quantum Gravity. I was impressed by this talk, impressed because he was discussing predictions of a model that didn’t exist. Instead of a model for the spacetime of the evolving dimensions, he had an image of yarn. The yarn, you see, is one-dimensional, but you can knit it into two-dimensional sheets, which you can then form into a three-dimensional ball, so in some sense the dimension of the yarn can evolve depending on how closely you look. It’s a nice image. It is also obviously not Lorentz-invariant. I was impressed by this talk because I’d never have the courage to give a talk based on a yarn image.

It was the early days of this model, a nice idea indeed, and I was curious to see how they would construct their space-time and how it would fare with Lorentz-invariance.

Well, they never constructed a space-time model. Greg seems not to have continued working on this, but Dejan is still on the topic. A recent paper with Niayesh Afshordi from Perimeter Institute still has the yarn in it. The evolving dimensions are now called vanishing dimensions, not sure why. Dejan also wrote a review on the topic, which appeared on the arxiv last week. More yarn in that.

In one of my conversations with Dejan I mentioned that the Causal Set approach makes use of a discrete yet Lorentz-invariant sprinkling, and I was wondering out loud if one could employ this sprinkling to obtain Lorentz-invariant yarn. I thought about this for a bit but came to the conclusion that it can’t be done.

The Causal Set sprinkling is a random distribution of points in Minkowski space. It can be explicitly constructed and shown to be Lorentz-invariant on the average. It looks like this:

Causal Set Sprinkling, Lorentz-invariant on the average. Top left: original sprinkling. Top right: zoom. Bottom left: Boost (note change in scale). Bottom right: zoom to same scale as top right. The points in the top right and bottom right images are randomly distributed in the same way. Image credits: David Rideout. [Source]
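If you want to see for yourself what “Lorentz-invariant on the average” means, a Poisson sprinkling is easy to generate numerically. Below is a minimal sketch in Python (the density, region size and boost velocity are arbitrary choices of mine): it sprinkles points into a patch of 1+1 dimensional Minkowski space and applies a boost. The boosted point set is statistically again a Poisson sprinkling with the same density, and the causal order between any two points is unchanged.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Poisson sprinkling into a square coordinate region of 1+1 dimensional
# Minkowski space: the number of points is Poisson-distributed with mean
# density * volume, their positions are uniform. Only the (Lorentz-invariant)
# spacetime volume enters the construction.
density = 200.0   # expected points per unit spacetime volume (arbitrary choice)
L = 1.0           # coordinate extent of the region in t and x (arbitrary choice)
n = rng.poisson(density * L * L)
t = rng.uniform(0.0, L, n)
x = rng.uniform(0.0, L, n)

# Boost with velocity v (units c = 1). Individual points move, but the boosted
# set is again a Poisson sprinkling with the same density (of the boosted
# region), because boosts preserve spacetime volume.
v = 0.6
gamma = 1.0 / np.sqrt(1.0 - v**2)
t_b = gamma * (t - v * x)
x_b = gamma * (x - v * t)

# The causal order is what defines the causal set: point j is to the causal
# future of point i if it lies inside the future lightcone of i. This relation
# is unchanged by the boost.
def precedes(ti, xi, tj, xj):
    return (tj > ti) and ((tj - ti)**2 - (xj - xi)**2 > 0.0)

print(n, "points sprinkled; causal relation preserved under boost:",
      precedes(t[0], x[0], t[1], x[1]) == precedes(t_b[0], x_b[0], t_b[1], x_b[1]))
```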

The reason this discreteness is compatible with Lorentz-invariance is that the sprinkling makes use only of four-volumes and of points, both of which are Lorentz-invariant, as opposed to Lorentz-covariant. The former doesn’t change under boosts, the latter changes in a well-defined way. Causal Sets, as the name says, are sets. They are collections of points. They are not, I emphasize, graphs – the points are not connected. The set has an order relation (the causal order), but a priori there are no links between the points. You can construct paths on the sets, they are called “chains”, but these paths make use of an additional initial condition (eg an initial momentum) to find a nearest neighbor.

The reason that looking for the nearest neighbor doesn’t make much physical sense is that the distance to all points on the lightcone is zero. The nearest neighbor to any point is almost certainly (in the mathematical sense) infinitely far away and on the lightcone. You can use these neighbors to make the sprinkling into a graph. But now you have infinitely many links that are infinitely long and the whole thing becomes space-filling. That is Lorentz-invariant of course. It is also in no sensible meaning still one-dimensional on small scales. [Aside: I suspect that the space you get in this way is not locally identical to R^4, though I can’t quite put my finger on it, it doesn’t seem dense enough if that makes any sense? Physically this doesn’t make any difference though.]
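The statement about the lightcone can be made precise in one line: the invariant distance between two causally related points,

\tau^2 = (t_2 - t_1)^2 - |\vec{x}_2 - \vec{x}_1|^2 \quad (c = 1),

goes to zero as the second point approaches the lightcone of the first, and since the region within any fixed \tau of a given point has infinite four-volume, an infinite sprinkling almost surely contains points with arbitrarily small \tau, arbitrarily far out along the cone.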

So it pains me somewhat that the recent paper of Dejan and Niayesh tries to use the Causal Set sprinkling to save Lorentz-invariance:

“One may also interpret these instantaneous string intersections as a causal set sprinkling of space-time [...] suggesting a potential connection between causal set and string theory approaches to quantum gravity.”

This interpretation is almost certainly wrong. In fact, in the argument that their string-based picture is Lorentz-invariant they write:
“Therefore, on scales much bigger than the inverse density of the string network, but much smaller than the size of the system, we expect the Lorentz-invariant (3+1)-dimensional action to emerge.”
Just that Lorentz-invariance which emerges at a certain system size is not Lorentz-invariant.

I must appear quite grumpy going about and picking on what is admittedly an interesting and very creative idea. I am annoyed because in my recent papers on space-time defects, I spent a considerable amount of time trying to figure out how to use the Causal Set sprinkling for something (the defects) that is not a point. The only way to make this work is to use additional information for a covariant (but not invariant) reference frame, as one does with the chains.

Needless to say, in none of the papers on the topic of evolving, vanishing, dimensions does one find an actual construction of the conjectured Lorentz-invariant random lattice. In the review, the explanation reads as follows: “One of the ways to evade strong Lorentz invariance violations is to have a random lattice (as in Fig 5), where Lorentz-invariance violations would be stochastic and would average to zero...” Here is Fig 5:

Fig 5 from arXiv:1406.2696 [gr-qc]


Unfortunately, the lattice in this proof by sketch is obviously not Lorentz-invariant – the spaces are all about the same size, which is a preferred size.

The recent paper of Dejan Stojkovic and Niayesh Afshordi attempts to construct a model for the space-time by giving the dimensions a temperature-dependent mass, so that, as temperatures drop, additional dimensions open up. This raises the question though: temperature of what? Such an approach might make sense maybe in the early universe, or when there is some plasma around, but a mean field approximation clearly does not make sense for the scattering of two asymptotically free states, which is one of the cases that the authors quote as a prediction. A highly energetic collision is supposed to take place in only two spatial dimensions, leading to a planar alignment.

Now, don’t get me wrong, I think that it is possible to make this scenario Lorentz-invariant, but not by appealing to a non-existent Lorentz-invariant random lattice. Instead, it should be possible to embed this idea into an effective field theory approach, some extension of asymptotically safe gravity, in which the relevant scale that is being tested then depends on the type of interaction. I do not know though in which sense these dimensions then still could be interpreted as space-time dimensions.

In any case, my summary of the recent papers is that, unsurprisingly, the issue with Lorentz-invariance has not been solved. I think the literature would really benefit from a proper no-go theorem proving what I have argued above, that there exist no random lattices that are Lorentz-invariant on the average. Or otherwise, show me a concrete example.

Bottom line: A set is not a graph. I claim that random graphs that are Lorentz-invariant on the average, and are not space-filling, don’t exist in (infinitely extended) Minkowski space. I challenge you to prove me wrong.

Monday, June 09, 2014

Is Science the only Way of Knowing?

Fast track to wisdom: It isn’t.

One can take scientism too far. No, science is not “true” whether or not you believe in it, and science is not the only way of knowing, under any sensible definition of the words.

Unfortunately, the phrase “Science is not the only way of knowing” has usually been thrown at me, triumphantly, by various people in defense of their belief in omnipotent things or other superstitions. And I will admit that my reflex is to say you’ll never know anything unless it’s been scientifically proved to be correct, to some limited accuracy with appropriate error bars.

So I am writing this blogpost to teach myself to be more careful in defense of science, and to acknowledge that other ways of knowing exist, though they are not, as this interesting list suggests, LSD, divination via ouija boards, Shamanic journeying, or randomly opening the Bible and reading something.

Before we can argue though, we have to clarify what we mean by science and by knowledge.

The question “What is science?” has been discussed extensively and, philosophers being philosophers, I don’t think it will ever be settled. Instead of defining science, let me therefore just describe it in a way that captures reality very well: Science is what scientists do. Scientists form a community of practice that shares common ethics, ethics that aren’t typically written down, which is why defining science proper is so difficult. These common ethics are what is usually referred to as the scientific method: the formulation of hypotheses and their test against experiment. Science, then, is the process that this community drives.

This general scientific method, it must be emphasized, is not the only shared ethics in the scientific community. Parts of the community have their own guidelines for good scientific conduct, additional procedures and requirements which have been shown to work well in advancing the process of finding and testing good hypotheses. Peer review is one such added procedure; guidelines for statistical significance or the conduct of clinical trials are others. While the scientific method does not forbid it, random hypotheses will generally not even be considered because of their low chances of success. Instead, a new hypothesis is expected to live up to the standards of the day. In physics this means for example that your hypothesis must meet high demands on mathematical consistency.

The success of science comes from the community acting as an adaptive system on the development of models of nature. There is a variation (the formulation of new hypotheses), a feedback (the test of the hypothesis) and a response (discard, keep, or amend). This process of arriving at increasingly successful scientific theories is not unlike natural selection, which results in increasingly successful life forms. It’s just that in science the variation in the pool of ideas is more strongly regulated than the variation in the pool of genes.

That brings us to the question of what we mean by knowledge. Strictly speaking you never know anything, except possibly that you don’t know anything. The problem is not in the word ‘knowing’ but in the word ‘you’ – that still mysterious emergent phenomenon built of billions of interacting neurons. It takes but a memory lapse or a hallucination and suddenly you will question whether reality is what it seems to be. But let me leave aside the shortcomings of human information processing and the fickle notion of consciousness; knowledge then becomes the storage of facts about reality, empirical facts.

You might argue that there are facts about fantasy or fiction, but the facts that we have about them are not facts about these fictions, they are facts about the real representations of that fiction. You do not know that Harry Potter flew on a broom, you know that a person called Rowling wrote about a boy called Harry who flew on a broom. In a sense, everything you can imagine is real, provided that you believe yourself to be real. It is real as a pattern in your neural activity, you just have to be careful then in stating exactly what it is that you “know”.

Let us call knowledge “scientific knowledge” if it was obtained by the scientific method, applied in the broader sense by what we refer to as scientists. Science is then obviously a way to arrive at knowledge, but it is also obviously not the only way. If you go out on the street, you know whether it is raining. You could make this into a scientific process with a documented randomized controlled trial and peer-reviewed statistical analysis, but nobody in their right mind would do this. The reason is that the methods used to gather and evaluate the data (your sensory processing) are so reliable that most people don’t normally question them, at least not when sober.

This is true for a lot of “knowledge” that you might call trivial knowledge, for example knowing how to spell “knowledge”. This isn’t scientific knowledge, it’s something you learned in school together with hundreds of millions of other people, and you can look it up in a dictionary. You don’t formulate the spelling as a hypothesis that you test against data because there is very little doubt about it in your mind or in anybody else’s mind. It isn’t an interesting hypothesis for the scientific community to bother with.

That then brings us to the actually interesting question of whether there is non-trivial knowledge that is not scientific knowledge. Yes, there is, because science isn’t the only procedure in which hypotheses are formulated and tested against data. Think again of natural selection. The human brain is pretty good for example at extrapolating linear motion or the trajectories of projectiles. This knowledge seems to be hardwired, even infants have it, and it contains a fair bit of science, a fair bit of empirical fact: Balls don’t just stop flying in midair and drop to the ground. You know that. And this knowledge most likely came about because it was of evolutionary advantage, not because you read it in a textbook.

Now you might not like to refer to it as knowledge if it is hardwired, but similar variation and selection processes take place in our societies all the time outside of science. Much of it is know-how, handcrafts, professional skills, or arts, handed down through generations. We take experts’ advice seriously (well, some of us, anyway) because we assume they have undergone many iterations of trial and error. The experts are of course not infallible, but we have good reason to expect their advice to be based on evidence that we call experience. Expert knowledge is integrated knowledge about many facts. It is real knowledge, and it is often useful knowledge, it just hasn’t been obtained in the organized and well-documented way that science would require.

You can also count toward this non-scientific knowledge, for example, the knowledge that you have about your own body and/or the people you live with closely. This is knowledge you have gathered and collected over a long time, and it is knowledge that is very valuable for your doctor should you need help. But it isn’t knowledge that you find in the annals of science. It is also presently not knowledge that is very well documented, though with all the personalized biotracking this may be changing.

Now these ways of knowing are not as reliable as scientific knowledge because they do not live up to the standards of the day – they are not carefully formulated and tested hypotheses, and they are not documented in written reports. But this doesn’t mean they are not knowledge at all. When your grandma taught you to make a decent yeast dough, the recipe hadn’t come from a scientific journal. It had come through countless undocumented variations and repetitions, hundreds of trials and errors – a process much like science and yet not science.

And so you may know how to do something without knowing why it is a good way to do it. Indeed, it is often such non-scientific knowledge that leads to the formulation of interesting hypotheses about causal relations, which can then be confirmed or falsified.

In summary: Science works by testing ideas against evidence and using the results as feedback for improvement. Science is the organized way of using this feedback loop to increase our knowledge about the real world, but it isn’t the only way. Testing ideas against reality and learning from the results is a process that is used in many other areas of our societies too. The knowledge obtained in this way is not as reliable as scientific knowledge, but it is useful and in many cases constitutes a basis for good scientific hypotheses.

Sunday, June 01, 2014

Are the laws of nature beautiful?

Physicists like to talk about the beauty and elegance of theories, books have been written about the beautiful equations, and the internet, being the internet, offers a selection of various lists that are just a Google search away.

Max Tegmark famously believes all math is equally real, but most physicists are pickier. Garrett Lisi may be the most outspoken example; he likes to say that the mathematics of reality has to be beautiful. Now Garrett’s idea of beautiful is a large root diagram, which may not be everybody’s first choice, but symmetry is a common ingredient of beauty.

Physicists also like to speak about simplicity, but simplicity isn’t useful as an absolute criterion. The laws of nature would be much simpler if there was no matter or if symmetries were never broken or if the universe was two dimensional. But this just isn’t our reality. As Einstein said, things should be made as simple as possible but not any simpler, and that limits the use of simplicity as guiding principle. When simplicity reaches its limits, physicists call upon beauty.

Personally, I value interesting over beautiful. Symmetry and order are to art what harmony and repetition are to music – bland in excess. But more importantly, there is no reason why the sense of beauty that humans have developed during evolution should have any relevance for the fundamental laws of nature. Using beauty as a guide is even worse than appealing to naturalness. Naturalness, like beauty, is a requirement based on experience, not on logic, but at least naturalness can be quantified while beauty is subjective, and malleable in addition.
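
To give an example of what I mean by naturalness being quantifiable: one commonly used fine-tuning measure (going back to Barbieri and Giudice) assigns to every parameter p of a model the logarithmic sensitivity of an observable, say the Z mass, to that parameter; the larger the number, the less natural the parameter choice:

\[
\Delta(p) \,=\, \left|\frac{\partial \ln m_Z^2}{\partial \ln p}\right| ,
\qquad \Delta \,=\, \max_p \Delta(p) .
\]

There is no comparable number one could compute for beauty.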

Frank Wilczek has an interesting transcript of a talk about “Quantum Beauty” online in which he writes
“The Standard Model is a very powerful, very compact framework. It would be difficult... to exaggerate.. its beauty.”
He then goes on to explain why this is an exaggeration. The Standard Model really isn’t all that beautiful, what with all these generations and families of particles, and let’s not even mention the Yukawa couplings. Frank thinks a grand unification would be much more beautiful, especially when supersymmetric:
“If [SUSY’s new particles] exist, and are light enough to do the job, they will be produced and detected at [the] new Large Hadron Collider – a fantastic undertaking at the CERN laboratory, near Geneva, just now coming into operation. There will be a trial by fire. Will the particles SUSY requires reveal themselves? If not, we will have the satisfaction of knowing we have done our job, according to Popper, by producing a falsifiable theory and showing that it is false.”
Particle physicists who have wasted their time working out SUSY cross-sections don’t seem to be very “satisfied” with the LHC no-show. In fact they seem to be insulted because nature didn’t obey their beauty demands. In a recent cover story for Scientific American Joseph Lykken and Maria Spiropulu wrote:
“It is not an exaggeration to say that most of the world’s particle physicists believe that supersymmetry must be true.”
That is another exaggeration of course, a cognitive bias known as the “false-consensus effect”. People tend to think that others share their opinion, but let’s not dwell on the sociological issues this raises. Yes, symmetry and unification have historically been very successful, and these are good reasons to try to use them as guides. But is that sufficient reason for a scientist to believe that it must be true? Is this something a scientist should ever believe?

Somewhere along the line theoretical physicists have mistaken their success in describing the natural world for evidence that they must be able to recognize truth by beauty, that introspection suffices to reveal the laws of nature. It’s not like it’s only particle physicists. Lee Smolin likes to speak about the “ring of truth” that the theory of quantum gravity must have. He hasn’t yet heard that ring. String theorists on the other hand have heard that bell of truth ringing for some decades and, ach, aren’t these Calabi-Yaus oh-so beautiful and these theorems so elegant etc. pp. One ring to rule them all.

But relying on beauty as a guide leads nowhere because understanding changes our perception of beauty. Many people seem to be afraid of science because they believe understanding will diminish their perception of beauty, but in the end understanding most often contributes to beauty. However, there seems to be an uncanny valley of understanding: When you start learning, it first gets messy and confused and ugly, and only after some effort do you come to see the beauty. But spend enough time with something, anything really, and in most cases it will become interesting and eventually you almost always find beauty.

If you don’t know what I mean, watch this classic music critic going off on 12 tone music. [Video embedding didn't work, sorry for the ad.]

Chances are, if you listen to that sufficiently often you’ll stop hearing cacophony and also start thinking of it as “delicate” and “emancipating”. The student who goes on about the beauty of broken supersymmetry with all its 105 parameters and scatter plots went down that very same road.

There are limits to what humans can find beautiful, understanding or not. I have for example a phobia of certain patterns which, if you believe Google, is very common. Much of it is probably due to the appearance of some diseases, parasites, poisonous plants and so on, i.e., it clearly has an evolutionary origin. So what if space-time foam looks like a skin disease and quantum gravity is ugly as gooseshit? Do we have any reason to believe that our brains should have developed so as to appreciate the beauty of something none of our ancestors could possibly ever have seen?

The laws of nature that you often find listed among the “most beautiful equations” derive much of their beauty not from structure but from meaning. The laws of black hole thermodynamics would be utterly unremarkable without the physical interpretation. In fact, equations in and of themselves are generally unremarkable – it is only the context, the definition of the quantities that are put in relation by the equation, that makes an equation valuable. X=Y isn’t just one equation. Unless I tell you what X and Y are, this is every equation.
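
To stay with the example just mentioned: the first law of black hole thermodynamics, written here in geometric units (G = c = ħ = k_B = 1), looks like a perfectly ordinary differential relation. It becomes remarkable only once you know that κ is the horizon’s surface gravity and A its area, and that they play the roles of temperature and entropy:

\[
dM \,=\, \frac{\kappa}{8\pi}\, dA \,+\, \Omega_H\, dJ \,+\, \Phi_H\, dQ ,
\qquad T \,=\, \frac{\kappa}{2\pi} , \quad S \,=\, \frac{A}{4} .
\]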

So, are the laws of nature beautiful? You can bet that whatever law of nature wins a Nobel prize will be called “beautiful” by the next generation of physicists who spend their life studying it. Should we use “beauty” as a requirement to construct a scientific theory? That, I’m afraid, would be too simple to be true.

Tuesday, May 27, 2014

Book Review: “The Cosmic Cocktail” by Katherine Freese

The Cosmic Cocktail: Three Parts Dark Matter
Katherine Freese
Princeton University Press (May 4, 2014)

Katherine Freese’s “Cosmic Cocktail” lays out the current evidence for dark matter and dark energy, and the status of the relevant experiments. The book excels in the chapter about indirect and direct detection of WIMPs, a class of particles that constitutes the presently best motivated and most popular dark matter candidates. “The Cosmic Cocktail” is Freese’s first popular science book.

Freese is a specialist in the area of astroparticle physics, and she explains the experimental status for WIMP detection clearly, not leaving out the subtleties in the data interpretation. She integrates her own contributions to the field where appropriate; the balance between her own work and that of others is well met throughout the book.

The book also covers dark energy, and while this part is informative and covers the basics, it is nowhere near as detailed as that about dark matter detection. Along the way to the very recent developments, “The Cosmic Cocktail” introduces the reader to the concepts necessary to understand the physics and relevance of the matter composition of the universe. In the first chapters, Freese explains the time-evolution of the universe, structure formation, the evolution of stars, and the essentials of particle physics necessary to understand matter in the early universe. She adds some historical facts, but the scientific history of the field is not the main theme of the book.

Freese follows the advice to first say what you want to tell them, then tell them, then tell them what you just told them. She regularly reminds the reader of what was explained in earlier chapters, and repeats explanations frequently throughout the book. While this makes it easy to follow the explanations, the alert reader might find the presumed inattention somewhat annoying. The measure of electron volts, for example, is explained at least four times. Several sentences are repeated almost verbatim in various places, for example that “eventually galaxies formed… these galaxies then merged to make clusters and superclusters…” (p. 31) “…eventually this merger lead to the formation of galaxies and clusters of galaxies...” (p. 51) or “Because neutrons are slightly heavier than protons, protons are the more stable of the objects...” (p. 70), “neutrons are a tiny bit heavier than protons… Because protons are lighter, they are the more stable of the two particles.” (p. 76), “Inflation is a period of exponential expansion just after the Big Bang”. “inflationary cosmology… an early accelerating period of the history of the Universe” (p. 202), and so on.

The topics covered in the book are timely, but they do not all contribute to the theme of the book, the “cosmic cocktail”. Freese narrates for example the relevance and discovery of the Higgs and the construction details of the four LHC detectors, but only mentions the inflaton in one sentence, while inflation itself is explained in two sentences (plus two sentences in an endnote). She covers the OPERA anomaly of faster-than-light neutrinos (yes, including the joke about the neutrino entering a bar) and in this context mentions that faster-than-light travel implies violations of causality, confusing readers not familiar with Special Relativity. On the other hand, she does not even name the Tully-Fisher relation, and dedicates only half a sentence to baryon acoustic oscillations.

The book contains some factual errors (3 kilometers are not 5 miles (p. 92), the radius of the Sun is not 10,000 kilometers (p. 95), Hawking radiation is not caused by quantum fluctuations of space-time (p. 98), the HESS experiment is not in Europe (p. 170), the possible vacua in the string theory landscape do not all have a different cosmological constant (p. 201)). Several explanations are expressed in unfortunate phrases, e.g.: “[T]he mass of all galaxies, including our own Milky Way, must be made of dark matter.” (p. 20) All its mass? “Imagine drawing a circle around the [gravitational lens]; the light could pass through any point on that circle.” (p. 22). A circle in which plane?
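
For the record, a quick back-of-the-envelope check of the first two slips (the conversion factor and solar radius below are standard reference values):

```python
# Quick sanity check of the two numerical slips mentioned above.
KM_PER_MILE = 1.609344
print(f"3 km  = {3 / KM_PER_MILE:.2f} miles")   # about 1.9 miles; 5 miles is about 8 km
print(f"solar radius is about {695_700:,} km")  # roughly 696,000 km, not 10,000 km
```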

The metaphors and analogies used by Freese are common in the popular science literature: The universe is an expanding balloon or a raisin bread, the Higgs field is “a crowded room of dancing people” or some kind of molasses (p. 116). Some explanations are vague – “The multiverse perspective is strengthened by theories of inflationary cosmology” (which?) – others are misleading, e.g., the reader may be left with the idea that Casimir energy causes cosmic acceleration (p. 196) or that “Only with a flat geometry can the universe grow old enough to create the conditions for life to exist.” (p. 44). One has to be very careful (and check the endnote) to extract that she means the spatial geometry has to be almost flat. Redshift at the black hole horizon is often illustrated with somebody sending light signals while falling through the horizon. Freese instead uses sound waves, which adds confusion because sound needs a medium to travel.

These are minor shortcomings, but they do limit the target group that will benefit from the book. The reader who brings no background knowledge in cosmology and particle physics will, I am afraid, inevitably stumble in various places.

Freese’s writing style is very individual and breaks with the smooth – some may find too smooth – style that has come to dominate the popular science literature. It takes some getting used to her occasionally quite abrupt changes of narrative direction in the first chapters, but the later chapters are more fluently written. Freese interweaves anecdotes from her personal life with the scientific explanations. Some anecdotes document academic life, others seem to serve no particular purpose other than breaking up the text. The book comes with a light dose of humor that shows mostly in the figures, which contain a skull to illustrate the ‘Death of MACHO’s’, a penguin, and a blurry photo of a potted plant.

The topic of dark energy and dark matter has of course been covered in many books; one may mention Dan Hooper’s “Dark Cosmos” (Smithsonian Books, 2006) and Evalyn Gates’ “Einstein’s Telescope” (WW Norton, 2009). These two books are meanwhile somewhat out-of-date because the field has developed so quickly, making Freese’s book a relevant update. Both Gates’ and Hooper’s books are more easily accessible and have a smoother narrative than “The Cosmic Cocktail”. Freese demands more of the reader but also gets across more scientific facts.

I counted more than a dozen instances of the word “exciting” throughout the book. I agree that these are indeed exciting times for cosmology and astroparticle physics. Freese’s book is a valuable, non-technical and yet up-to-date review, especially on the topic of dark matter detection.

[Disclaimer: Free review copy. Page numbers in the final version might slightly differ.]

Wednesday, May 21, 2014

What is direct evidence and does the BICEP2 measurement prove that gravity must be quantized?

Fast track to wisdom: Direct evidence is relative and no, BICEP doesn’t prove that gravity must be quantized.

In the media storm following the BICEP announcement that they had measured the polarization of the cosmic microwave background due to gravitational waves, Chao-Lin Kuo, a member of the BICEP team, was widely quoted as saying:
“This is the first direct image of gravitational waves across the primordial sky.”

Lately it has been debated whether BICEP has seen signals from the early universe at all, or whether their signal is mostly produced by matter in our own galaxy that hasn’t been properly accounted for. This isn’t my area of research and I don’t know the details of their data analysis. Let me just say that this kind of discussion is perfectly normal to have when data are young. Whether or not they actually have seen what they claimed, it is worthwhile to sort out exactly what it would mean if the BICEP claims are correct, and that is the purpose of this post.

The BICEP2 results have variously been reported as the first direct evidence of cosmic inflation, direct proof of the theory of inflation, indirect evidence for the existence of gravitational waves, the first indirect detection of the gravitational wave background [emphasis theirs], the most direct evidence of Albert Einstein’s last major unconfirmed prediction, and evidence for the first detection of gravitational waves in the initial moments of the universe.

Confused already?

What is a direct measurement?

A direct measurement of a quantity X is one in which your detector measures the quantity X itself.

One can now have a philosophical discussion about whether or not human senses should count as the actual detector. Then all measurements with external devices are indirect because they are inferred from secondary measurements, for example the reading off a display. However, as far as physicists are concerned the reading of the detector by a human is irrelevant, so if you want to have this discussion, you can have it without me.

An indirect measurement is one in which your detector measures Y and you use a relation between X and Y to obtain X.

A Geiger counter counts highly energetic particles as directly as it gets, but once you start thinking about it, you’ll note that we rarely measure anything directly. A common household thermometer for example does not actually measure temperature, it measures volume. A GPS device does not actually measure position, it measures the delay between signals received from different satellites and infers the position from that. Your microphone doesn’t actually measure decibels, it measures voltage. And so on.
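
As a toy illustration of such an indirect measurement (the calibration numbers below are invented for the example): a liquid thermometer really records a column length, that is a volume, and the temperature is only inferred from it through a calibration relation between Y and X.

```python
# Toy "indirect measurement": the detector records a column length (a volume),
# and the temperature is inferred from it via a calibration relation.
# The calibration constants are made up for illustration.
def temperature_from_column(length_mm, length_at_0C=20.0, mm_per_degree=0.25):
    """Infer the temperature (deg C) from the directly measured column length (mm)."""
    return (length_mm - length_at_0C) / mm_per_degree

print(temperature_from_column(25.0))  # -> 20.0 deg C, inferred rather than measured
```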

One problem in distinguishing between direct and indirect measurements is that it’s not always so clear what is or isn’t part of the detector. Is the water in the Kamiokande tank part of the detector, or is the measurement only made in the photodetectors surrounding the water? And is the Antarctic ice part of the IceCube detector?

The other problem is that in many cases scientists do not talk about quantities, they talk about concepts, ideas, hypotheses, or models. And that’s where things become murky.

What is direct evidence?

There is no clear definition for this.

You might want to extend the definition of a direct measurement to direct evidence, but this most often does not work. If you are talking about direct evidence for a particle, you can ask for the particle to hit the detector for it to be direct evidence. (Again, I am leaving aside that most detectors will amplify and process the signal before it is read out by a human because commonly the detector and data analysis are discussed separately.)

However, if you are measuring something like a symmetry violation or a decay time, then your measurement would always be indirect. What is commonly known as “direct” CP violation for example would then also be an indirect measurement since the CP violation is inferred from decay products.

In practice, whether some evidence is called direct or indirect is a relative statement about the number of assumptions that you had to use to extract the evidence. Evidence is indirect if you can think of a more direct way to make the measurement. There is some ambiguity in this which comes from the question of whether the ‘more direct measurement’ must be possible in practice or only in principle, but this is a problem that only people in quantum gravity and quantum foundations spend sleepless nights over...

BICEP2 is direct evidence for what?

BICEP2 has directly measured the polarization of CMB photons. Making certain assumptions about the evolution of the universe (and after subtracting the galactic foreground) this is indirect evidence for the presence of gravitational waves in the early universe, also called the relic gravitational wave background.

Direct measurement of gravitational waves is believed to be possible with gravitational wave detectors that basically measure how space-time periodically contracts and expands. The slow decrease of the orbital period in binary pulsar systems is also indirect evidence for gravitational waves, which according to Einstein’s theory of General Relativity should carry away energy from the system. This evidence gave rise to a Nobel Prize in 1993.

Evidence for inflation comes from the presence of the gravitational wave background in the (allegedly) observed range. How can this evidence for inflation plausibly be called “direct” if it is inferred from a measurement of gravitational waves that was already indirect? That’s because we do not presently know of any evidence for inflation that would be more direct than this. Maybe one day somebody will devise a way to measure the inflaton directly in a detector, but I’m not even sure a thought experiment can do that. Until then, I think it is fair to call this direct evidence.

One should not mistake evidence for proof. We will never prove any model correct. We only collect support for it. Evidence – theoretical or experimental – is such support.

Now what about BICEP and quantum gravity?

Let us be clear that most people working on quantum gravity mean the UV-completion of the theory when they use the word ‘quantum gravity’. The BICEP2 data has the potential to rule out some models derived from these UV-completions, for example variants of string cosmology or loop quantum cosmology, and many researchers are presently very active in deriving the constraints. However, the more immediate question raised by the BICEP2 data is about the perturbative quantization of quantum gravity, that is the question whether the CMB polarization is evidence not only for classical gravitational waves, but for gravitons, the quanta of the gravitational field.

Since the evidence for gravitational waves was indirect already, the evidence for gravitons would also be indirect, though this brings up the above mentioned caveat about whether a direct detection must not only be theoretically possible, but actually be practically feasible. Direct detection of gravitons is widely believed to be not feasible.

There have been claims by Krauss and Wilczek (which we discussed earlier here), and a 2012 paper by Ashoorioon, Dev, and Mazumdar, arguing that, yes, the gravitational wave background is evidence for the quantization of gravity. The arguments in a nutshell say that quantum fluctuations of space-time are the only way the observed fluctuations could have been large enough to produce the measured spectrum.
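
For orientation, the standard slow-roll result these arguments build on is the primordial tensor power spectrum which, in one common convention and up to factors of order one, reads (with the constants restored to make the point explicit):

\[
\mathcal{P}_T(k) \,\simeq\, \frac{2}{\pi^2}\,\frac{H^2}{M_{\rm Pl}^2}\bigg|_{k=aH}
\,=\, \frac{16}{\pi}\,\frac{\hbar\, G\, H^2}{c^5}\bigg|_{k=aH} ,
\]

where M_Pl is the reduced Planck mass and H the Hubble rate at horizon exit. The amplitude is explicitly proportional to ħ, which is essentially the observation behind these claims; the question raised below is whether some classical mechanism could produce fluctuations of the same size.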

The problem with the existing arguments is that they do not carefully track the assumptions that go into them. They assume, for example, that the coupling between gravity and matter fields is the usual coupling. That is plausible of course, but these are couplings at energy densities higher than we have ever tested. They also assume, rather trivially, that space-time exists to begin with. If one has a scenario in which space-time comes into being by some type of geometric phase transition, as is being suggested in some approaches to quantum gravity, one might have an entirely different mechanism for producing fluctuations. Many emergent and induced gravity approaches to quantum gravity tend not to have gravitons, which raises the question of whether these approaches could be ruled out with the BICEP data. Alas, I am not aware of any prediction for the gravitational wave background coming from these approaches, so clearly there is a knowledge gap here.

What we would need to make the case that gravity must have been perturbatively quantized in the early universe is a cosmic version of Bell’s theorem: an argument that demonstrates that no classical version of gravity would have been able to produce the observations. The power of Bell’s inequality is not in proving quantum mechanics right - this is not possible. The power of Bell’s inequality (or of measured violations thereof, respectively) is in showing that a local classical, i.e. “old fashioned”, theory cannot account for the observations and something has to give. The present arguments about the CMB polarization are not (yet) that stringent.
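
For concreteness, the kind of statement one would want a cosmic analogue of is the CHSH form of Bell’s inequality: any local classical theory obeys the bound, quantum mechanics can violate it up to 2√2, so a measured violation rules out the whole classical class without proving any particular quantum theory right.

\[
S \,=\, \big|\langle A_1 B_1\rangle + \langle A_1 B_2\rangle + \langle A_2 B_1\rangle - \langle A_2 B_2\rangle\big|
\,\le\, 2 \ \ \text{(local classical)} ,
\qquad S \,\le\, 2\sqrt{2} \ \ \text{(quantum)} .
\]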

This means that the BICEP2 result is strong support for the quantization of gravity, but it does not presently rule out the option that gravity is entirely classical. Though, as we discussed earlier, this option is hard to make sense of theoretically, it is infuriatingly difficult to get rid of experimentally.

Summary

The BICEP2 data, if it holds up to scrutiny, is indirect evidence for the relic gravitational wave background. It is not the first indirect evidence for gravitational waves, but the first indirect evidence for this gravitational wave background that was created in the early universe. I think it is fair to say that it is direct evidence for inflation, but the terminology is somewhat ambiguous. It is indirect evidence for the perturbative quantization of gravity, but cannot presently rule out the option that gravity was never quantized at all.