Wednesday, January 25, 2017

What is Physics? [Video]

I spent the holidays watching some tutorials for video animations and learned quite a few new things. The below video collects some exercise projects I did to answer a question Gloria asked the other day: “Was ist Phykik?” (What is phycics?). Embarrassingly, I doubt she learned more from my answer than how to correctly pronounce physics. It’s hard to explain stuff to 6-year-olds if you’re used to dealing with brainy adults.

Thanks to all the tutorials, however, I think this video is dramatically better than the previous one. There are still a number of issues I'm unhappy about, notably the timing, which I find hard to get right. Also, the lip-synching is poor. Not to mention that I still can’t draw and hence my cartoon child looks like a giant zombie hamster.

Listening to it again, the voiceover seems too fast to me and would have benefited from a few breaks. In summary, there’s room for improvement.

Complete transcript:

The other day, my daughter asked me “What is physics?”

She’s six. My husband and I, we’re both physicists. You’d think I had an answer. But the best I managed was: Physics is what explains the very, very small and very, very large things.

There must be a better explanation, I said to myself.

The more I thought about it though, the more complicated it got. Physics isn’t only about small and large things. And nobody uses a definition to decide what belongs in the department of physics. Instead, it’s mostly history and culture that mark disciplinary boundaries. The best summary that came to my mind is “Physics is what physicists do.”

But then what do physicists do? Now that’s a question I can help you with.

First, let us see what is very small and very large.

An adult human has a size of about a meter. Add some zeros to that size and we have small planets like Earth, with a diameter of some ten thousand kilometers, and larger planets, like Saturn. Add some more zeros, and we get to solar systems, which are more conveniently measured by the time it takes light to travel through them, a few light-hours.

On even larger scales, we have galaxies, with typical sizes of a hundred-thousand light years, and galaxy clusters, and finally the whole visible universe, with an estimated size of 100 billion light years. Beyond that, there might be an even larger collection of universes which are constantly newly created by bubbling out of vacuum. It’s called the ‘multiverse’ but nobody knows if it’s real.

Physics, or more specifically cosmology, is the only discipline that currently studies what happens at such large scales. This remains so for galaxy clusters and galaxies and interstellar space, which fall into the area of astrophysics. There is an emerging field, called astrobiology, where scientists look for life elsewhere in the universe, but so far they don’t have much to study.

Once we get to the size of planets, however, much of what we observe is explained by research outside of physics. There is geology and atmospheric science and climate science. Then there are the social sciences and all the life sciences, biology and medicine and zoology and all that.

When we get to scales smaller than humans, at about a micrometer we have bacteria and cells. At a few nanometers, we have large molecular structures like our DNA, and then proteins and large molecules. Somewhere here, we cross over into the field of chemistry. If we get to even smaller scales, to the size of atoms of about an Angstrom, physics starts taking over again. First there is atomic physics, then there is nuclear physics, and then there is particle physics, which deals with quarks and electrons and photons and all that. Beyond that... nobody knows. But to the extent that it’s science at all, it’s safely in the hands of physicists.

If you go down 16 more orders of magnitude, you get to what is called the Planck length, at 10^-35 meters. That’s where quantum fluctuations of space-time become important and it might turn out elementary particles are made of strings or other strange things. But that too, is presently speculation.
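For reference, the Planck length is the unique combination of Newton’s constant, Planck’s constant, and the speed of light that has the dimension of a length:

    \ell_{\rm P} = \sqrt{\frac{\hbar G}{c^3}} \approx 1.6 \times 10^{-35}\,\mathrm{m}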

One would need an enormously high energy to probe such short distances, much higher than what our particle accelerators can reach. Such energies, however, were reached at the big bang, when our universe started to expand. And so, if we look out to very, very large distances, we actually look back in time to high energies and very short distances. Particle physics and cosmology are therefore close together and not far apart.

Not everything in physics, however, is classified by distance scales. Rocks fall, water freezes, planes fly, and that’s physics too. There are two reasons for that.

First, gravity and electrodynamics are forces that span over all distance scales.

And second, the tools of physics can be used also for stuff composed of many small things that behave similarly, like solids, fluids, and gases. But really, it could be anything from a superconductor, to a gas of strings, to a fluid of galaxies. The behavior of such large numbers of similar objects is studied in fields like condensed matter physics, plasma physics, thermodynamics, and statistical mechanics.

That’s why there’s more physics in everyday life than the breakdown by distance suggests. And that’s also why the behavior of stuff at large and small distances has many things in common. Indeed, methods of physics can be, and have been, used to describe the growth of cities, bird flocking, or traffic flow. All of that is physics, too.

I still don’t have a good answer for what physics is. But next time I am asked, I have a video to show.

Thursday, January 19, 2017

Dark matter’s hideout just got smaller, thanks to supercomputers.

Lattice QCD. Artist’s impression.
Physicists know they are missing something. Evidence that something’s wrong has piled up for decades: Galaxies and galaxy clusters don’t behave like Einstein’s theory of general relativity predicts. The observed discrepancies can be explained either by modifying general relativity, or by the gravitational pull of some, so-far unknown type of “dark matter.”

Theoretical physicists have proposed many particles which could make up dark matter. The most popular candidates are a class called “Weakly Interacting Massive Particles” or WIMPs. They are popular because they appear in supersymmetric extensions of the standard model, and also because they have a mass and interaction strength in just the right ballpark for dark matter. There have been many experiments, however, trying to detect the elusive WIMPs, and one after the other reported negative results.

The second popular dark matter candidate is a particle called the “axion,” and the worse the situation looks for WIMPs the more popular axions are becoming. Like WIMPs, axions weren’t originally invented as dark matter candidates.

The strong nuclear force, described by Quantum Chromodynamics (QCD), could violate a symmetry called “CP symmetry,” but it doesn’t. An interaction term that could give rise to this symmetry-violation therefore has a pre-factor – the “theta-parameter” (θ) – that is either zero or at least very, very small. That nobody knows just why the theta-parameter should be so small is known as the “strong CP problem.” It can be solved by promoting the theta-parameter to a field which relaxes to the minimum of a potential, thereby setting the coupling of the troublesome term to zero, an idea that dates back to Peccei and Quinn in 1977.
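For the record, the interaction term in question is the so-called theta-term of QCD, which in its standard textbook form reads

    \mathcal{L}_\theta = \theta\, \frac{g_s^2}{32\pi^2}\, G^a_{\mu\nu} \tilde{G}^{a\,\mu\nu} ,

where G is the gluon field strength and G-tilde its dual. The Peccei-Quinn idea amounts to replacing the constant θ with a dynamical field, θ → a(x)/f_a, whose potential drives the effective θ to zero.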

Much like the Higgs-field, the theta-field is then accompanied by a particle – the axion – as was pointed out by Steven Weinberg and Frank Wilczek in 1978.

The original axion was ruled out within a few years after being proposed. But theoretical physicists quickly put forward more complicated models for what they called the “hidden axion.” It’s a variant of the original axion that is more weakly interacting and hence more difficult to detect. Indeed it hasn’t been detected. But it also hasn’t been ruled out as a dark matter candidate.

Normally models with axions have two free parameters: one is the mass of the axion, the other one is called the axion decay constant (usually denoted f_a). But these two parameters aren’t actually independent of each other. The axion gets its mass by the breaking of a postulated new symmetry. A potential, generated by non-perturbative QCD effects, then determines the value of the mass.

If that sounds complicated, all you need to know about it to understand the following is that it’s indeed complicated. Non-perturbative QCD is hideously difficult. Consequently, nobody can calculate what the relation is between the axion mass and the decay constant. At least so far.

The potential which determines the particle’s mass depends on the temperature of the surrounding medium. This is generally the case, not only for the axion, it’s just a complication often omitted in the discussion of mass-generation by symmetry breaking. Using the potential, it can be shown that the mass of the axion is inversely proportional to the decay constant. The whole difficulty then lies in calculating the factor of proportionality, which is a complicated, temperature-dependent function, known as the topological susceptibility of the gluon field. So, if you could calculate the topological susceptibility, you’d know the relation between the axion mass and the coupling.
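In formulas, the relation described above is, schematically,

    m_a^2(T)\, f_a^2 = \chi(T) \qquad \Longrightarrow \qquad m_a(T) = \frac{\sqrt{\chi(T)}}{f_a} ,

where χ(T) is the temperature-dependent topological susceptibility.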

This isn’t a calculation anybody presently knows how to do analytically because the strong interaction at low temperatures is, well, strong. The best chance is to do it numerically by putting the quarks on a simulated lattice and then sending the job to a supercomputer.

And even that wasn’t possible until now because the problem was too computationally intensive. But in a new paper, recently published in Nature, a group of researchers reports they have come up with a new method of simplifying the numerical calculation. This way, they succeeded in calculating the relation between the axion mass and the coupling constant.

    Calculation of the axion mass based on high-temperature lattice quantum chromodynamics
    S. Borsányi et al.
    Nature 539, 69–71 (2016)

(If you don’t have journal access: the freely available version isn’t the exact same paper, but it’s pretty close.)

This result is a great step forward in understanding the physics of the early universe. It’s a new relation which can now be included in cosmological models. As a consequence, I expect that the parameter-space in which the axion can hide will be much reduced in the coming months.

I also have to admit, however, that for a pen-on-paper physicist like me this work has a bittersweet aftertaste. It’s a remarkable achievement which wouldn’t have been possible without a clever formulation of the problem. But in the end, it’s progress fueled by technological power, by bigger and better computers. And maybe that’s where the future of our field lies, in finding better ways to feed problems to supercomputers.

Friday, January 13, 2017

What a burst! A fresh attempt to see space-time foam with gamma ray bursts.

It’s an old story: Quantum fluctuations of space-time might change the travel-time of light. Light of higher frequencies would be a little faster than that of lower frequencies. Or slower, depending on the sign of an unknown constant. Either way, the spectral colors of light would run apart, or ‘disperse’ as they say if they don’t want you to understand what they say.
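Schematically, the usual parameterization is

    v(E) \simeq c \left[ 1 \pm \left( \frac{E}{E_{\rm QG}} \right)^n \right] ,

with n = 1 (linear) or n = 2 (quadratic), and E_QG an energy scale expected to lie somewhere near the Planck energy. Over a distance D (ignoring the expansion of the universe, so this is only a rough sketch) it leads to an arrival-time difference between photons of high and low energy of

    \Delta t \simeq \mp \frac{D}{c}\, \frac{E_{\rm h}^n - E_{\rm l}^n}{E_{\rm QG}^n} .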

Such quantum gravitational effects are minuscule, but added up over long distances they can become observable. Gamma ray bursts are therefore ideal for searching for evidence of such an energy-dependent speed of light. Indeed, the energy-dependent speed of light has been searched for and not been found, and that could have been the end of the story.

Of course it wasn’t, because rather than giving up on the idea, the researchers who’d been working on it made their models for the spectral dispersion increasingly difficult and became more inventive when fitting them to unwilling data. The last thing I saw on the topic was a linear regression with multiple curves of freely chosen offset – a sure way to fit any kind of data to straight lines of any slope – and various ad-hoc assumptions to discard data that just didn’t want to fit, such as energy cuts or changes in the slope.

These attempts were so desperate I didn’t even mention them previously because my grandma taught me if you have nothing nice to say, say nothing.

But here’s a new twist to the story, so now I have something to say, and something nice in addition.

On June 25, 2016, the Fermi Telescope recorded a truly remarkable burst. The event, GRB 160625B, had a total duration of 770 seconds and consisted of three separate sub-bursts, with the second, and largest, sub-burst lasting 35 seconds (!). This has to be contrasted with the typical burst, which lasts a few seconds in total.

This gamma ray burst for the first time allowed researchers to clearly quantify the relative delay of the different energy channels. The analysis can be found in this paper:
    A New Test of Lorentz Invariance Violation: the Spectral Lag Transition of GRB 160625B
    Jun-Jie Wei, Bin-Bin Zhang, Lang Shao, Xue-Feng Wu, Peter Mészáros
    arXiv:1612.09425 [astro-ph.HE]

Unlike type Ia supernovae, which have very regular profiles, gamma ray bursts are one of a kind and can therefore be compared only to themselves. This makes it very difficult to tell whether or not the highly energetic parts of the emission are systematically delayed, because one doesn’t know when they were emitted. Until now, the analysis relied on some way of guessing the peaks in three different energy channels and (basically) assuming they were emitted simultaneously. This procedure sometimes relied on as few as one or two photons per peak. Not an analysis you should put a lot of trust in.

But the second sub-burst of GRB 160625B was so bright that the researchers could break it down into 38 energy channels – and the counts were still high enough to calculate the cross-correlation from which the (most likely) time-lag can be extracted.
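Roughly, the method works like this (a toy sketch in Python, not the authors’ actual pipeline): bin the photons into light curves for two energy channels, cross-correlate them, and read off the lag at the maximum of the cross-correlation.

    import numpy as np

    # Toy light curves for two energy channels (hypothetical pulses, just to
    # illustrate the method; real data would be binned photon counts).
    dt = 0.064                                    # time bin in seconds
    t = np.arange(0.0, 35.0, dt)
    low  = np.exp(-0.5 * ((t - 10.0) / 1.0)**2)   # pulse peaking at t = 10.0 s
    high = np.exp(-0.5 * ((t -  9.5) / 1.0)**2)   # same pulse, arriving 0.5 s earlier

    # Cross-correlate the mean-subtracted light curves; the lag at the maximum
    # of the cross-correlation is the most likely time-lag between the channels.
    cc   = np.correlate(high - high.mean(), low - low.mean(), mode="full")
    lags = (np.arange(cc.size) - (low.size - 1)) * dt
    print("most likely lag:", lags[np.argmax(cc)])  # about -0.5 s: the high-energy channel leads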

Here are the 38 energy channels for the second sub-burst

Fig 1 from arXiv:1612.09425


For the 38 energy channels they calculate 37 delay-times relative to the lowest energy channel, shown in the figure below. I find it a somewhat confusing convention, but in their nomenclature a positive time-lag corresponds to an earlier arrival time. The figure therefore shows that the photons of higher energy arrive earlier. The trend, however, isn’t monotonically increasing. Instead, it turns around at a few GeV.

Fig 2 from arXiv:1612.09425


The authors then discuss a simple model to fit the data. First, they assume that the emission has an intrinsic energy-dependence due to astrophysical effects which cause a positive lag. They model this with a power-law that has two free parameters: an exponent and an overall pre-factor.

Second, they assume that the effect during propagation – presumably from the space-time foam – causes a negative lag. For the propagation-delay they also make a power-law ansatz which is either linear or quadratic. This ansatz has one free parameter which is an energy scale (expected to be somewhere at the Planck energy).

In total they then have three free parameters, for which they calculate the best-fit values. The fitted curves are also shown in the image above, labeled n=1 (linear) and n=2 (quadratic). At some energy, the propagation-delay becomes more relevant than the intrinsic delay, which leads to the turn-around of the curve.
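A minimal sketch of such a three-parameter fit, with made-up numbers and a hypothetical parameterization (not the one used in the paper), might look like this:

    import numpy as np
    from scipy.optimize import curve_fit

    # Schematic lag model: an intrinsic astrophysical lag rising as a power law,
    # plus a propagation term that pulls the lag down linearly with energy
    # (the n = 1, "linear" case). Parameters tau, alpha, eta are hypothetical.
    def lag_model(E, tau, alpha, eta):
        return tau * E**alpha - eta * E

    # Synthetic stand-in for the 37 measured lags (arbitrary units)
    rng = np.random.default_rng(1)
    E = np.logspace(-1, 1, 37)                    # energy channels
    lag = lag_model(E, 0.5, 0.4, 0.15) + rng.normal(0.0, 0.02, E.size)

    # Fit the three free parameters and read off the best-fit values
    popt, pcov = curve_fit(lag_model, E, lag, p0=[1.0, 0.5, 0.1])
    print("best fit (tau, alpha, eta):", popt)
    print("1-sigma errors:", np.sqrt(np.diag(pcov)))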

The best-fit value of the quantum gravity energy is 10^q GeV, with q=15.66 for the linear and q=7.17 for the quadratic case. From this they extract a lower limit on the quantum gravity scale at the 1-sigma confidence level, which is 0.5 × 10^16 GeV for the linear and 1.4 × 10^7 GeV for the quadratic case. As you can see in the above figure, the data in the high-energy bins have large error bars owing to the low total count, so the evidence that there even is a drop isn’t all that great.

I still don’t buy there’s some evidence for space-time foam to find here, but I have to admit that this data finally convinces me that at least there is a systematic lag in the spectrum. That’s the nice thing I have to say.

Now to the not-so-nice. If you want to convince me that some part of the spectral distortion is due to a propagation-effect, you’ll have to show me evidence that its strength depends on the distance to the source. That is, in my opinion, the only way to make sure one doesn’t merely look at delays present already at emission. And even if you’d done that, I still wouldn’t be convinced that it has anything to do with space-time foam.

I’m skeptical of this because the theoretical backing is sketchy. Quantum fluctuations of space-time in any candidate theory for quantum gravity do not lead to this effect. One can work with phenomenological models in which such effects are parameterized and incorporated as new physics into the known theories. This is all well and fine. Unfortunately, in this case existing data already constrains the parameters so that the effect on the propagation of light is unmeasurably small. It’s already ruled out. Such models introduce a preferred frame and break Lorentz-invariance, and there is a lot of data speaking against that.

It has been claimed that the already existing constraints from Lorentz-invariance violation can be circumvented if Lorentz-invariance is not broken but instead deformed. In this case the effective field theory limit supposedly doesn’t apply. This claim is also quoted in the paper above (see the end of section 3). However, if you look at the references in question, you will not find any argument for how one manages to avoid this. Even if one can make such an argument, though (I believe it’s possible; I’m not sure why it hasn’t been done), the idea suffers from various other theoretical problems that, to make a very long story very short, make me think the quantum-gravity-induced spectral lag is highly implausible.

However, leaving aside my theory-bias, this newly proposed model with two overlaid sources for the energy-dependent time-lag is simple and should be straightforward to test. Most likely we will soon see another paper evaluating how well the model fits other bursts on record. So stay tuned, something’s happening here.

Sunday, January 08, 2017

Stephen Hawking turns 75. Congratulations! Here’s what to celebrate.

If people know anything about physics, it’s the guy in a wheelchair who speaks with a computer. Google “most famous scientist alive” and the answer is “Stephen Hawking.” But if you ask a physicist, what exactly is he famous for?

Hawking became “officially famous” with his 1988 book “A Brief History of Time.” Among physicists, however, he’s more renowned for the singularity theorems. In his 1960s work together with Roger Penrose, Hawking proved that singularities form under quite general conditions in General Relativity, and they developed a mathematical framework to determine when these conditions are met.

Before Hawking and Penrose’s work, physicists had hoped that the singularities which appeared in certain solutions to General Relativity were mathematical curiosities of little relevance for physical reality. But the two showed that this was not so, that, to the very contrary, it’s hard to avoid singularities in General Relativity.

Since this work, the singularities in General Relativity are understood to signal the breakdown of the theory in regions of high energy-densities. In 1973, together with George Ellis, Hawking published the book “The Large Scale Structure of Space-Time” in which this mathematical treatment is laid out in detail. Still today it’s one of the most relevant references in the field.

Only a year later, in 1974, Hawking published a seminal paper in which he demonstrated that black holes give off thermal radiation, now referred to as “Hawking radiation.” This evaporation of black holes results in the black hole information loss paradox, which is still unsolved today. Hawking’s work demonstrated clearly that the combination of General Relativity with the quantum field theories of the standard model spells trouble. Like the singularity theorems, it’s a result that doesn’t merely indicate, but proves, that we need a theory of quantum gravity in order to consistently describe nature.

While the 1974 paper was predated by Bekenstein’s finding that black holes resemble thermodynamical systems, Hawking’s derivation was the starting point for countless later revelations. Thanks to it, physicists understand today that black holes are a melting pot for many different fields of physics – besides general relativity and quantum field theory, there is thermodynamics and statistical mechanics, and quantum information and quantum gravity. Let’s not forget astrophysics, and also mix in a good dose of philosophy. In 2017, “black hole physics” could be a subdiscipline in its own right – and maybe it should be. We owe much of this to Stephen Hawking.

In the 1980s, Hawking worked with Jim Hartle on the no-boundary proposal according to which our universe started in a time-less state. It’s an appealing idea whose time hasn’t yet come, but I believe this might change within the next decade or so.

After this, Hawking tried several times to solve the riddle of black hole information loss that he himself had posed, most recently in early 2016. While his more recent work has been met with interest in the community, it hasn’t been hugely impactful – it attracts significantly more attention from journalists than from physicists.

As a physicist myself, I frequently get questions about Stephen Hawking: “What’s he doing these days?” – I don’t know. “Have you ever met him?” – He slept right through it. “Do you also work on the stuff that he works on?” – I try to avoid it. “Will he win a Nobel Prize?” – Ah. Good question.

Hawking’s shot at the Nobel Prize is Hawking radiation. The astrophysical black holes which we can presently observe have a temperature way too small to be measured in the foreseeable future. But since the temperature increases for smaller mass, lighter black holes are hotter, and could allow us to measure Hawking radiation.
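For reference, the Hawking temperature of a black hole of mass M is

    T_{\rm H} = \frac{\hbar c^3}{8\pi G M k_{\rm B}} \approx 6 \times 10^{-8}\,\mathrm{K} \times \frac{M_\odot}{M} ,

so a black hole of a few solar masses is far colder than even the cosmic microwave background.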

Black holes of sufficiently small masses could have formed from density fluctuations in the early universe and are therefore referred to as “primordial black holes.” However, none of them have been seen, and we have tight observational constraints on their existence from a variety of data. It isn’t yet entirely excluded that they are around, but I consider it extremely unlikely that we’ll observe one of these within my lifetime.

As far as the Nobel is concerned, this leaves Hawking radiation in gravitational analogues. In this case, one uses a fluid to mimic a curved space-time background. The mathematical formulation of this system is (in certain approximations) identical to that of an actual black hole, and consequently the gravitational analogues should also emit Hawking radiation. Indeed, Jeff Steinhauer claims that he has measured this radiation.

At the time of writing, it’s still somewhat controversial whether Steinhauer has measured what he thinks he has. But I have little doubt that sooner or later this will be settled – the math is clear: The radiation should be there. It might take some more experimental tinkering, but I’m confident sooner or later it’ll be measured.

Sometimes I hear people complain: “But it’s only an analogy.” I don’t understand this objection. Mathematically it’s the same. That in the one case the background is an actually curved space-time and in the other case it’s an effectively curved space-time created by a flowing fluid doesn’t matter for the calculation. In either situation, measuring the radiation would demonstrate the effect is real.

However, I don’t think that measuring Hawking radiation in an analogue gravity system would be sufficient to convince the Nobel committee Hawking deserves the prize. For that, the finding would have to have important implications beyond confirming a 40-year-old computation.

One way this could happen, for example, would be if the properties of such condensed matter systems could be exploited as quantum computers. This isn’t as crazy as it sounds. Thanks to work built on Hawking’s 1974 paper we know that black holes are both extremely good at storing information and extremely efficient at distributing it. If that could be exploited in quantum computing based on gravitational analogues, then I think Hawking would be in line for a Nobel. But that’s a big “if.” So don’t bet on it.

Besides his scientific work, Hawking has been and still is a master of science communication. In 1988, “A Brief History of Time” was a daring book about abstract ideas in a fringe area of theoretical physics. Hawking, to everybody’s surprise, proved that the public has an interest in esoteric problems like what happens if you fall into a black hole, what happened at the Big Bang, or whether god had any choice when he created the laws of nature.

Since 1988, the popular science landscape has changed dramatically. There are more books about theoretical physics than ever before and they are more widely read than ever before. I believe that Stephen Hawking played a big role in encouraging other scientists to write about their own research for the public. It certainly was an inspiration for me.

So, Happy Birthday, Stephen, and thank you.

Tuesday, January 03, 2017

The Bullet Cluster as Evidence against Dark Matter

Once upon a time, at the far end of the universe, two galaxy clusters collided. Their head-on encounter tore apart the galaxies and left behind two reconfigured heaps of stars and gas, separating again and moving apart from each other, destiny unknown.

Four billion years later, a curious group of water-based humanoid life-forms tries to make sense of the galaxies’ collision. They point their telescopes at the clusters’ relics and admire their odd shape. They call it the “Bullet Cluster.”

In the below image of the Bullet Cluster you see three types of data overlaid. First, there are the stars and galaxies in the optical regime. (Can you spot the two foreground objects?) Then there are the regions colored red which show the distribution of hot gas, inferred from X-ray measurements. And the blue-colored regions show the space-time curvature, inferred from gravitational lensing which deforms the shape of galaxies behind the cluster.

The Bullet Cluster.
[Img Src: APOD. Credits: NASA]


The Bullet Cluster comes to play an important role in the humanoids’ understanding of the universe. Already a generation earlier, they had noticed that their explanation for the gravitational pull of matter did not match observations. The outer stars of many galaxies, they saw, moved faster than expected, meaning that the gravitational pull was stronger than what their theories could account for. Galaxies which combined in clusters, too, were moving too fast, indicating more pull than expected. The humanoids concluded that their theory, according to which gravity was due to space-time curvature, had to be modified.

Some of them, however, argued it wasn’t gravity they had gotten wrong. They thought there was instead an additional type of unseen matter, “dark matter,” that interacts so weakly it wouldn’t have any consequences besides the additional gravitational pull. They even tried to catch the elusive particles, but without success. Experiment after experiment reported null results. Decades passed. And yet, they claimed, the dark matter particles might just be even more weakly interacting. They built larger experiments to catch them.

Dark matter was a convenient invention. It could be distributed in just the right amounts wherever necessary and that way the data of every galaxy and galaxy cluster could be custom-fit. But while dark matter worked well to fit the data, it failed to explain how regular the modification of the gravitational pull seemed to be. On the other hand, a modification of gravity was difficult to work with, especially for handling the dynamics of the early universe, which was much easier to explain with particle dark matter.

To move on, the curious scientists had to tell apart their two hypotheses: Modified gravity or particle dark matter? They needed an observation able to rule out one of these ideas, a smoking gun signal – the Bullet Cluster.

The theory of particle dark matter had become known as the “concordance model” (also: ΛCDM). It heavily relied on computer simulations which were optimized so as to match the observed structures in the universe. From these simulations, the scientists could tell the frequency by which galaxy clusters should collide and the typical relative speed at which that should happen.

From the X-ray observations, the scientists inferred that the collision of the galaxies in the Bullet Cluster must have taken place at approximately 3000 km/s. But such high collision speeds almost never occurred in the computer simulations based on particle dark matter. The scientists estimated the probability for a Bullet-Cluster-like collision to be about one in ten billion, and concluded that seeing such a collision is incompatible with the concordance model. And that’s how the Bullet Cluster became strong evidence in favor of modified gravity.

However, a few years later some inventive humanoids had optimized the dark-matter-based computer simulations and arrived at a more optimistic estimate: a probability of 4.6×10^-4 for seeing something like the Bullet Cluster. Shortly after, they revised the probability again, to 6.4×10^-6.

Either way, the Bullet Cluster remained a stunningly unlikely event to happen in the theory of particle dark matter. It was, in contrast, easy to accommodate in theories of modified gravity, in which collisions with high relative velocity occur much more frequently.

It might sound like a story from a parallel universe – but it’s true. The Bullet Cluster isn’t the incontrovertible evidence for particle dark matter that you have been told it is. It’s possible to explain the Bullet Cluster with models of modified gravity. And it’s difficult to explain it with particle dark matter.

How come we so rarely read about the difficulties the Bullet Cluster poses for particle dark matter? It’s because the pop sci media doesn’t like anything better than a simple explanation that comes with an image that has “scientific consensus” written all over it. Isn’t it obvious the visible stuff is separated from the center of the gravitational pull?

But modifying gravity works by introducing additional fields that are coupled to gravity. There’s no reason that, in a dynamical system, these fields have to be focused at the same place where the normal matter is. Indeed, one would expect that modified gravity too should have a path dependence that leads to such a delocalization as is observed in this, and other, cluster collisions. And never mind that when they pointed at the image of the Bullet Cluster nobody told you how rarely such an event occurs in models with particle dark matter.

No, the real challenge for modified gravity isn’t the Bullet Cluster. The real challenge is to get the early universe right, to explain the particle abundances and the temperature fluctuations in the cosmic microwave background. The Bullet Cluster is merely a red-blue herring that circulates on social media as a shut-up argument. It’s a simple explanation. But simple explanations are almost always wrong.

Monday, January 02, 2017

How to use an "argument from authority"

I spent the holidays playing with the video animation software. As a side-effect, I produced this little video.



If you'd rather read than listen, here's the complete voiceover:

It has become a popular defense of science deniers to yell “argument from authority” when someone quotes an expert’s opinion. Unfortunately, the argument from authority is often used incorrectly.

What is an “argument from authority”?

An “argument from authority” is a conclusion drawn not by evaluating the evidence itself, but by evaluating an opinion about that evidence. It is also sometimes called an “appeal to authority”.

Consider Bob. Bob wants to know what follows from A. To find out, he has a bag full of knowledge. The perfect argument would be if Bob starts with A and then uses his knowledge to get to B to C to D and so on until he arrives at Z. But reality is never perfect.

Let’s say Bob wants to know what’s the logarithm of 350,000. In reality he can’t find anything useful in his bag of knowledge to answer that question. So instead he calls his friend, the Pope. The Pope says “The log is 4.8.” So, Bob concludes, the log of 350,000 is 4.8 because the Pope said so.

That’s an argument from authority – and you have good reasons to question its validity.

But unlike other logical fallacies, an argument from authority isn’t necessarily wrong. It’s just that, without further information about the authority that has been consulted, you don’t know how good the argument is.

Suppose Bob hadn’t asked the Pope for the log of 350,000 but instead had asked his calculator. The calculator says it’s approximately 5.544.
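(You can check this yourself, for example in Python:)

    import math
    print(math.log10(350_000))   # prints 5.544068..., i.e. about 5.544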

We don’t usually call this an argument from authority. But in terms of knowledge evaluation it’s the same logical structure as exporting an opinion to a trusted friend. It’s just that in this case the authority is your calculator and it’s widely known to be an expert in calculation. Indeed, it’s known to be pretty much infallible.

You believe that your friend the calculator is correct not because you’ve tried to verify every result it comes up with. You believe it’s correct because you trust all the engineers and scientists who have produced it and who also use calculators themselves.

Indeed, most of us would probably trust a calculator more than our own calculations, or that of the Pope. And there is a good reason for that – we have a lot of prior knowledge about whose opinion on this matter is reliable. And that is also relevant knowledge.

Therefore, an argument from authority can be better than an argument lacking authority if you take into account evidence for the authority’s expertise in the subject area.

Logical fallacies were widely used by the Greeks in their philosophical discourse. They were discussing problems like “Can a circle be squared?” But many of today’s problems are of an entirely different kind, and the Greek rules aren’t always helpful.

The problems we face today can be extremely complex, like the question “What’s the origin of climate change?” “Is it a good idea to kill off mosquitoes to eradicate malaria?” or “Is dark matter made of particles?” Most of us simply don’t have all the necessary evidence and knowledge to arrive at a conclusion. We also often don’t have the time to collect the necessary evidence and knowledge.

And when a primary evaluation isn’t possible, the smart thing to do is a secondary evaluation. For this, you don’t try to answer the question itself, but you try to answer the question “Where do I best get an answer to this question?” That is, you ask an authority.

We do this all the time: You see a doctor to have him check out that strange rash. You ask your mother how to stuff the turkey. And when the repairman says your car needs a new crankshaft sensor, you don’t yell “argument from authority.” And you shouldn’t, because you’ve smartly exported your primary evaluation of evidence to a secondary system that, you are quite confident, will actually evaluate the evidence *better* than you yourself could.

But… the secondary evidence you need is how knowledgeable the authority is on the topic of question. The more trustworthy the authority, the more reliable the information.

This also means that if you reject an argument from authority you claim that the authority isn’t trustworthy. You can do that. But here’s where things most often go wrong.

The person who doesn’t want to accept the opinion of scientific experts implicitly claims that their own knowledge is more trustworthy. Without explicitly saying so, they claim that science doesn’t work, or that certain experts cannot be trusted – and that they themselves can do better. That is a claim which can be made. But science has an extremely good track record in producing correct conclusions. Claiming that it’s faulty therefore carries a heavy burden of proof.

So: to correctly call out an argument from authority, you have to explain why the authority’s knowledge is not trustworthy on the question under consideration.

But what should you do if someone dismisses scientific findings by claiming an argument from authority?

I think we should have a name for such a mistaken use of the term argument from authority. We could call it the fallacy of the “omitted knowledge prior.” This means it’s a mistake not to take into account evidence for the reliability of knowledge, including one’s own knowledge. You, your calculator, and the Pope aren’t equally reliable when it comes to evaluating logarithms. And that counts for something.

Sunday, January 01, 2017

The 2017 Edge Annual Question: Which Scientific Term or Concept Ought To Be More Widely Known?

My first thought when I heard the 2017 Edge Annual Question was “Wasn’t that last year's question?” It wasn’t. But it’s almost identical to the 2011 question, “What scientific concept would improve everybody’s cognitive toolkit.” That’s ok, I guess, the internet has an estimated memory of 2 days, so after 5 years it’s reasonable to assume nobody will remember their improved toolkit.

After that first thought, the reply that came to my mind was “Effective Field Theory,” immediately followed by “But Sean Carroll will cover that.” He didn’t, he went instead for “Bayes's Theorem.” But Lisa Randall went for “Effective Theory.”

I then considered, in that order, “Free Will,” “Emergence,” and “Determinism,” only to discard them again because each of these would have required me to first explain effective field theory. You find “Emergence” explained by Garrett Lisi, and determinism and free will (or its absence, respectively) are taken on by Jerry A. Coyne, whom I don’t know, but I entirely agree with his essay. My argument would have been almost identical; you can read my blogpost about free will here.

Next I reasoned that this question calls for a broader answer, so I thought of “uncertainty” and then science itself, but decided that had been said often enough. Lawrence Krauss went for uncertainty. You find Scientific Realism represented by Rebecca Newberger Goldstein, and the scientist by Stuart Firestein.

I then briefly considered social and cognitive biases, but was pretty convinced these would be well-represented by people who know more about sociology than me. Then I despaired for a bit over my unoriginality.

Back to my own terrain, I decided the one thing that everybody should know about physics is the principle of least action. The name hides its broader implications though, so I instead went for “Optimization.” A good move, because Janna Levin went for “The Principle of Least Action.”

I haven’t read all essays, but it’ll be a nice way to start the new year by browsing them. Happy New Year everybody!