This guest blog post was written by Stephen Feeney.
The cosmic microwave background (CMB) forms the cornerstone of the concordance cosmological model, ΛCDM, providing the largest and, arguably, cleanest-to-interpret picture of our Universe currently available; however, this model is buttressed on all sides by measurements of cosmologically relevant quantities derived using a huge variety of other astrophysical objects. Examples include:
the local expansion rate of the Universe (i.e., the constant in Hubble’s Law relating the distance of an object and its recession speed), measured using so-called standard candles — things we (think we!) know the inherent brightness of — such as Cepheid variable stars and Type Ia supernovae;
the age of the Universe, which must be greater than the ages of the oldest things we can see, like globular clusters and metal-poor stars; and
the shape or amplitude of the matter power spectrum, i.e., how much matter (both normal and dark) is bound up in structures of different sizes. This can be measured in a number of different ways, including galaxy surveys, weak-gravitational-lensing surveys (which exploit the fact that matter bends the path of light, and hence changes the shape of faraway objects, to “weigh” the Universe) and counts of galaxy clusters.
Using our cosmological model, we can extrapolate our CMB observations — observations of the Universe when it was only 400,000 years old — to predict how quickly we think the Universe should be expanding now (billions of years later), how old it should be, and how many galaxy clusters we should see of a given mass. If our model of cosmology (and crucially astrophysics — more on this below) is correct, and we truly understand our instruments, then all of our measurements should agree with the CMB predictions. Conversely, if our measurements don’t agree then we might have something wrong! With releases of stunning new cosmological data popping up more regularly than a London bus — ain’t being a 21st century cosmologist great? — there’s no time like the present to ask: just how concordant is our concordance model?
Now, the first thing to note is that our measurements very nearly do agree. Let’s take the expansion rate as an example. Extrapolating the Planck satellite’s CMB observations to the present day, we expect that the expansion rate should be 67.3 ± 1.2 km/s/Mpc (the slightly funky units reduce to 1/s, the same as any old rate). The Hubble Space Telescope’s dedicated mission to measure this quantity, which used observations of some 600 Cepheid variables and over 250 supernovae, concluded that the expansion rate is 73.8 ± 2.4 km/s/Mpc. The level of agreement between these two values is pretty mind-blowing, particularly when you consider we’re talking about modelling a few billion years of cosmological evolution. We’re comparing predictions based on observations of the Universe when it was a soup of photons, protons, electrons, a few alpha particles and not much more to measurements of pulsating and exploding stars! Put simply: it looks like we’re doing a good job!
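Those slightly funky units are worth a quick sanity check: dividing out the kilometres in a megaparsec turns the expansion rate into a plain inverse time, and its reciprocal (the “Hubble time”) lands right in the ballpark of the ages mentioned above. A minimal back-of-the-envelope sketch (the conversion constants are standard rounded values, and the rough equality of the Hubble time and the age of the Universe is a coincidence of our particular cosmology, not a general law):

```python
# Back-of-the-envelope: convert H0 from km/s/Mpc to a plain rate in 1/s,
# then take its reciprocal to get the Hubble time in billions of years.
MPC_IN_KM = 3.0857e19       # kilometres in one megaparsec (standard value)
SECONDS_PER_GYR = 3.156e16  # seconds in a billion years

H0 = 67.3                            # km/s/Mpc, the Planck extrapolation quoted above
H0_per_s = H0 / MPC_IN_KM            # now just a rate, in 1/s
hubble_time_gyr = 1.0 / H0_per_s / SECONDS_PER_GYR

print(f"H0 ≈ {H0_per_s:.2e} 1/s")
print(f"Hubble time ≈ {hubble_time_gyr:.1f} Gyr")  # ~14.5 Gyr, close to the Universe's age
```
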
When we compare the difference between these two values with the errors on the measurements, however, things look a little less rosy. Even though the values are close, their error-bars are now so small that this level of disagreement indicates we don’t understand something important, either about the data or the model. Or, of course, both. Looking elsewhere, we see similar issues popping up. Though our measurements of the age of the Universe look ok compared to the CMB predictions, estimates of the amount of matter from cluster counts (and weak-lensing measurements) suggest that there isn’t as much small-scale (cosmologically speaking) structure as we’d expect from extrapolating the CMB.
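To put a number on “a little less rosy”: treating the two measurements as independent Gaussians, the gap between them can be expressed in units of their combined uncertainty. A quick sketch (my own back-of-the-envelope using the values quoted above, not any paper’s formal analysis):

```python
# How far apart are the two H0 values, in units of their combined error?
# Assumes both measurements are independent with Gaussian uncertainties.
from math import sqrt, erf

h0_cmb, err_cmb = 67.3, 1.2      # Planck extrapolation, km/s/Mpc
h0_local, err_local = 73.8, 2.4  # HST Cepheid + supernova measurement

tension_sigma = abs(h0_local - h0_cmb) / sqrt(err_cmb**2 + err_local**2)
# two-tailed probability of a gap at least this large arising by chance
p_value = 1.0 - erf(tension_sigma / sqrt(2.0))

print(f"tension ≈ {tension_sigma:.1f} sigma, p ≈ {p_value:.3f}")
```

A roughly 2.4-sigma gap: not decisive on its own, but uncomfortably large for two estimates of the same number.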
So far, so mysterious. Luckily, though, there are (plenty of!) models waiting to explain the discrepancy. What we’re looking for here are processes that can make the current Universe deviate from the predictions of the early Universe, things like massive neutrinos, dark energy and the like, whose effects only show up once the Universe has reached a certain age. Massive neutrinos have been receiving a lot of attention recently, with several papers reporting tentative detections of the existence of an additional massive sterile neutrino species (“sterile” here meaning that the neutrinos don’t even interact via the weak nuclear force: these are some seriously snooty particles…). Where does the evidence for these claims come from? Well, the effects of sterile neutrinos on cosmology can be described by their temperature and mass, which govern when the neutrinos are cool enough to stop behaving like radiation (pushing up the expansion rate and damping small-scale power) and instead start behaving like warm dark matter. Warm dark matter, unlike its cold cousin, moves too quickly to cluster on small scales, and thus suppresses the formation of cosmological structure. If a population of sterile neutrinos was produced in the Big Bang, and it became non-relativistic after the CMB was formed, the matter power spectrum on these scales would be smaller than that predicted by the CMB.
Okay, so it looks like an extra sterile neutrino could explain the paucity of clusters in the Universe. Nobel Prizes all round! Except that’s not the whole story. A Universe with three normal neutrino species and an extra sterile one should have a low Hubble expansion rate as well as low cluster counts: that is not what is observed. Quite the opposite, in fact, as we’ve seen: a range of local measurements of the Hubble constant point to a value higher than expected. Thus, it appears that the sterile neutrino model does not provide the new concordance we all crave. This point has been very nicely illustrated in a recent paper by Boris, Hiranya and Licia, who demonstrate (using the data mentioned above and more) that adding a sterile neutrino to the standard cosmological model cannot reconcile the high local measurement of the Hubble rate and the low cluster counts with the predictions of higher-redshift data from CMB observations (which by themselves don’t seem to want anything to do with massive neutrinos). In scientific terms, the datasets remain in tension, and we all know what happens when we combine data that are in tension: if there is only a small overlap between the conclusions of each dataset, we end up with artificially small ranges of allowed parameter values.
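That last point about combining data in tension is easy to see in a toy example: the standard inverse-variance combination of two Gaussian measurements always tightens the error bar, even when the inputs barely overlap. (The numbers below are the two H0 values quoted earlier, used purely for illustration; real cosmological analyses combine full likelihoods, not two Gaussians.)

```python
# Toy illustration: inverse-variance (precision-weighted) combination of two
# Gaussian measurements. When the inputs disagree, the combined error bar
# still shrinks -- quietly hiding the tension.
def combine(mu1, sig1, mu2, sig2):
    w1, w2 = 1.0 / sig1**2, 1.0 / sig2**2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)   # precision-weighted mean
    sig = (w1 + w2) ** -0.5                  # combined uncertainty
    return mu, sig

mu, sig = combine(67.3, 1.2, 73.8, 2.4)
print(f"combined: {mu:.1f} ± {sig:.1f}")  # tighter than either input
```

The combined error bar is smaller than either individual one, and the central value sits in a region that neither dataset is particularly happy with.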
What else could explain the discrepancy then? Well, firstly and most excitingly, we could have the wrong model. Perhaps there is another physical process taking place on cosmological scales that perfectly predicts all of our observables. If this is the case, this is where Bayesian model selection comes into its own: once we have the predictions of this model, we can easily test which of the model and its competitors is most favoured by the data. (Of course, Bayesian model selection already has a part to play: it’s a more naturally cautious method than parameter estimation, and even using the current data in tension it shows that the sterile neutrino model isn’t favoured over ΛCDM.)
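To see why model selection is “naturally cautious”, here is a deliberately tiny toy (my illustration, not the computation in the paper): one datum with Gaussian noise, a simple model that fixes an extra parameter to zero, and an extended model that gives that parameter a Gaussian prior. Both evidences are analytic, and the extended model pays an automatic Occam penalty for its extra prior volume.

```python
# Toy Bayesian model selection with one measurement d and noise sigma.
# M0 fixes the extra parameter theta to 0; M1 gives theta a Gaussian prior
# of width tau. Marginalising theta under M1 just broadens the predictive
# distribution for d, so both evidences are single Gaussian densities.
from math import exp, pi, sqrt

def gauss(x, mu, s):
    return exp(-0.5 * ((x - mu) / s) ** 2) / (s * sqrt(2.0 * pi))

def bayes_factor(d, sigma, tau):
    """Evidence ratio M0/M1: values above 1 favour the simpler model."""
    z0 = gauss(d, 0.0, sigma)                    # M0: theta fixed at 0
    z1 = gauss(d, 0.0, sqrt(sigma**2 + tau**2))  # M1: theta marginalised out
    return z0 / z1

# A mild 1-sigma deviation: the Occam penalty keeps the simple model ahead.
print(bayes_factor(d=1.0, sigma=1.0, tau=5.0))
# A huge 10-sigma deviation: the data overwhelm the penalty, M1 wins.
print(bayes_factor(d=10.0, sigma=1.0, tau=5.0))
```

Parameter estimation within M1 would happily report a non-zero theta in the first case; model selection says the data don’t yet justify the extra machinery.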
The other possibility is that there are undiagnosed systematic errors in one or more of the datasets involved. This is not an outlandish possibility (and historically this is where confirmation bias raises its ugly head): the measurements we’re discussing here are all hard to do, and are rarely free from interference from astrophysical contaminants. It’s very easy to only focus on the cosmological quantities derived from these observations, and just chuck them into your analysis as a number with some error-bars, but it’s important to remember that each of these numbers is itself the distillate of a complicated astrophysical whodunit. Like detectives using clues and logic to piece together the most likely story of what happened, we use observations and physical theory to figure out the most probable values of the cosmological parameters. The values of the parameters therefore depend on how well we understand both our data and the objects we observe. Measurements of the local expansion rate rely on standard candles: we need to understand the physical processes that determine the brightness of these objects, and how dust or metals in their surroundings might affect that brightness. To derive measurements of the matter power spectrum we need to understand how galaxies (and not only galaxies in general, but the specific ones we see in our surveys) map onto the dark matter distribution, how all of this evolves as structures collapse under gravity, how the amount of X-rays emitted by a cluster relates to its mass, how the shapes of galaxies are warped by intervening matter (and are distributed in the first place!). For age measurements we need to understand how stars evolve and impact the formation of new stars in their surroundings, etc., etc. Are we sure, for example, that our conversion from cluster counts (or the amount of X-rays radiated by clusters) into the amplitude of the matter power spectrum is exactly correct? 
How confident are we that the standard candles used to measure the local expansion rate are truly standard (i.e. could variations in these objects mean some are actually inherently brighter than others), or that the calibrations between different candles are correct? And what if our understanding of the instruments aboard the Planck satellite is not perfect? Tweaking any one of these could cause our measurements of the cosmological parameters to shift (and in any direction: there’s no guarantee they will move into agreement!). We need to dig around our data to determine whether any instruments are misbehaving, and test both the physical principles and statistical tools used to convert our data into constraints on cosmological parameters to be sure that this isn’t the source of the discrepancy.
So, this is where we find ourselves today. Recently, it seems, everyone’s been re-examining everyone else’s data: the expansion rate calculations have been revisited by Planck people, and the Planck power spectrum analysis has been tweaked by WMAP people, each potentially resulting in small but important shifts in parameter values. And the good news is that no punches are being thrown (yet!). I find this to be very cool, and very exciting: we’re not all sitting here agreeing; neither are we all flatly claiming our data are error-free; nor are we blindly accepting that claims of new physics are true, as awesome as it would be if they were. The next year or so, in which these re-examinations are in turn examined and, more importantly, new temperature and polarisation data from the Planck satellite appear, will subject our cosmological models to extremely exacting scrutiny to truly determine whether concordance, be it new or old, can be found.
You can read more here:
Planck Collaboration
Planck 2013 Results. XVI. Cosmological Parameters
H. Bond, E. Nelan, D. VandenBerg, G. Schaefer and D. Harmer
HD 140283: A Star in the Solar Neighborhood that Formed Shortly After the Big Bang
R. Battye and A. Moss
Evidence for Massive Neutrinos from CMB and Lensing Observations
C. Dvorkin, M. Wyman, D. Rudd and W. Hu
Neutrinos Help Reconcile Planck Measurements with Both Early and Local Universe
B. Leistedt, H. Peiris and L. Verde
No New Cosmological Concordance with Massive Sterile Neutrinos
S. Feeney, H. Peiris and L. Verde
Is There Evidence for Additional Neutrino Species from Cosmology?
D. Spergel, R. Flauger and R. Hlozek
Planck Data Reconsidered