This blog post was written by Boris Leistedt.
Devoured from within by supermassive black holes, quasars are among the most energetic and brightest objects in the universe. Their light sometimes travels several billion years before reaching us, and by looking at how they cluster in space, cosmologists are able to test models of the large-scale structure of the universe. However, being compact and distant objects, quasars look like stars and can only be definitively identified using high-resolution spectroscopic instruments. Because taking spectra is time-consuming and expensive, not all star-like objects can be examined with these instruments: quasar candidates must first be identified in a photometric survey and then confirmed or dismissed by taking follow-up spectra. This approach has led to the identification and study of tens of thousands of quasars, greatly enhancing our knowledge of the physics of these extreme objects.
Current catalogues of confirmed quasars are too small to study the large-scale structure of the universe with sufficient precision. For this reason, cosmologists use photometric catalogues of quasar candidates, in which each object is characterised only by a small set of photometric colours. Star-quasar classification is difficult, so these catalogues in fact contain significant fractions of stars. In addition, unavoidable variations in instrument calibration and in observing conditions over time create fluctuations in the number and properties of star-like objects detected on the sky. These observational issues, combined with stellar contamination, result in distortions in the data that can be misinterpreted as anomalies or hints of new physics.
In recent work we investigated these issues, and demonstrated techniques to address them. We considered the photometric quasars from the Sloan Digital Sky Survey (SDSS) and selected a subsample of objects where 95% of objects were expected to be actual quasars. We then constructed sky masks to remove the areas of the sky which were the most affected by calibration errors, fluctuations in the observing conditions, and dust in our own Galaxy. We exploited a technique called “mode projection” to obtain robust measurements of the clustering of quasars, and compared them with theoretical predictions. Using this, we found a remarkable agreement between the data and the prediction from the standard model of cosmology. Previous studies of such data argued that they were not suitable for cosmological studies, but we were able to identify a sample of objects that appear clean. In the future, we will use these techniques to analyse future photometric data, for example in the context of the Dark Energy Survey in which UCL is deeply involved.
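The core idea of mode projection can be illustrated with a toy sketch: given a template map of a known systematic (dust, calibration, observing conditions), the data are made orthogonal to the template, so any contamination tracing it is removed. The snippet below is a minimal, hypothetical illustration with a single made-up template; the real analysis handles many templates and works at the level of the power spectrum.

```python
import numpy as np

rng = np.random.default_rng(42)
npix = 500

# Hypothetical systematics template, e.g. dust extinction per sky pixel
template = rng.normal(size=npix)

# Simulated quasar overdensity: a true signal plus a contaminant that
# traces the template (numbers invented for the illustration)
true_signal = rng.normal(size=npix)
data = true_signal + 0.7 * template

# Project the template mode out of the data: the residual is orthogonal
# to the template, so any component tracing it is removed
coeff = (template @ data) / (template @ template)
cleaned = data - coeff * template

# The cleaned map has no component left along the template direction
print(abs(cleaned @ template))
```

In practice one projects out many templates at once, which is equivalent to giving the contaminated modes infinite variance so they carry no weight in the clustering measurement.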
Continuing EarlyUniverse@UCL’s tradition of recognition by the Royal Astronomical Society, Stephen Feeney has been named runner-up for the Michael Penston Prize 2012, awarded for the best doctoral thesis in astronomy and astrophysics. Stephen’s PhD thesis (“Novel Algorithms for Early Universe Cosmology”) focused on constraining the physics of the very early Universe — processes such as eternal inflation and the formation of topological defects — using novel Bayesian source-detection techniques applied to cosmic microwave background data. Stephen is extremely happy and completely gobsmacked to have been recognised!
This blog post was written by Aurélien Benoit-Lévy.
In my previous post, I mentioned CMB lensing and said that it was going to be a big thing. And indeed, CMB lensing has been presented as one of the main scientific results of the recent data release from the Planck Collaboration. So what is CMB lensing? Put succinctly, CMB lensing is the deflection of CMB photons as they pass clumps of matter on the way from the last scattering surface to our telescopes. These deflections generate a characteristic signature in the CMB that can be used to map out the distribution of all of the matter in our Universe in the direction of each incoming photon. Let me now describe these last few sentences in greater detail.
I am sure you are familiar with images of distorted and multiply-imaged galaxies observed around massive galaxy clusters. All of these images are due to the bending of light paths by changes in the distribution of matter, an effect generally known as gravitational lensing. The same thing happens with the CMB: the trajectories of photons coming from the last scattering surface are modified by gradients in the distribution of matter along the way: i.e. the large-scale structure of our Universe.
The main effect is that the CMB we observe is slightly modified: the temperature we measure in a certain direction is actually the temperature we would have measured in a slightly different direction if there were no matter in the Universe. These deflections are small — about two arcminutes, or the size of a pixel in the full-resolution Planck map — and can hardly be distinguished by eye. Indeed, if you look at the nice animation by Stephen Feeney, it is not possible to say which is the lensed map and which is the unlensed map. But there's one thing we can see, and that's that the deflections are not random. If you concentrate on one big spot (either blue or red) you'll see that it moves coherently in one single direction. The coherence of these arcminute deflections over a few degrees is extremely important as it enables us to estimate a quantity known as the lensing potential: the sum of all the individual deflections experienced by a photon as it travels from the last scattering surface. Although we can only measure the net deflection, rather than the full list of every deflection felt by the photon, the lensing potential still represents the deepest measurement we can have of the matter distribution as it probes the whole history of structure formation!
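To get a feel for what lensing does to a map, here is a toy one-dimensional sketch (a made-up field and deflection, not the real spherical machinery): a smooth random "temperature" field is remapped by a small but coherent deflection, and to first order the lensed field is just the unlensed one plus the deflection times the temperature gradient.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4096
x = np.linspace(0.0, 2 * np.pi, n, endpoint=False)

# Toy 1D "CMB": a smooth random field built from long-wavelength modes
T = sum(rng.normal() * np.cos(k * x + rng.uniform(0, 2 * np.pi))
        for k in range(1, 20))

# Toy deflection field: tiny in amplitude but coherent over much
# larger scales, mimicking the arcminute deflections described above
d = 1e-3 * np.cos(2 * x)

# Lensing remaps the temperature: T_len(x) = T(x + d(x))
T_len = np.interp(x + d, x, T, period=2 * np.pi)

# To first order, T_len ≈ T + d * dT/dx, which is why lensing couples
# the temperature to its own gradient
taylor = T + d * np.gradient(T, x)
err = np.max(np.abs(T_len - taylor)) / np.std(T)
print(err)  # small: the first-order expansion holds for tiny deflections
```

The real reconstruction has to invert this relation statistically on the sphere, which is where the complications come in.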
Now, how can we extract this lensing potential from a CMB temperature map? As I mentioned earlier, CMB lensing generates small deflections (a few arcminutes) but correlated on larger scales (a few degrees). This mixing of scales (small and large) results from small non-Gaussianities induced by CMB lensing. More precisely, the CMB temperature and its gradient become correlated, and this correlation is given precisely by the lensing potential. We can therefore measure the correlation between the temperature and the gradient of a CMB map to provide an estimate of the lensing potential. Of course, this operation is not straightforward and there are quite a lot of complications due to the fact that the data are not perfect. But we can model all of these effects, and, as they are largely independent of CMB lensing, they can be easily estimated using simulations and then simply removed from the final results.
It’s as simple as that! However, there’s much more still to come as I haven’t yet spoken of the various uses of this lensing potential! But that’s another story for another time…
This blog post was written by Hiranya Peiris.
There was great excitement at EarlyUniverse@UCL this week due to the first cosmology data release from the Planck satellite! Andrew Jaffe has a nice technical guide to the results here, and Phil Plait has a great, very accessible summary here.
Planck’s results bring us much closer to understanding the origin of structure in the universe, and its subsequent evolution. In the past few months, Jason McEwen, Aurélien Benoit-Lévy and I have been working extremely hard on the Planck analyses studying the implications of the data for a range of cosmological physics. Now we can finally talk publicly about this work, and in coming weeks we will be blogging about these topics; but in the meantime the technical papers are linked below!
The Planck results received wide media coverage, including the BBC, The Guardian, the Financial Times, the Economist, etc. But as a former part-time New Yorker, the most thrilling moment of the media circus for me was seeing Planck’s CMB map taking up most of the space above the fold, on the front page of the New York Times!
You can read more here:
Planck Collaboration (2013):
This guest blog post was written by Aurélien Benoit-Lévy.
The cosmic microwave background (CMB) is the furthest light we can observe. Since it is the furthest, everything else lies in front of it, and bright galaxies and other compact objects can badly pollute the superb observations of the CMB that forthcoming experiments will soon deliver. To get rid of these point sources, we need to mask them. This is of course not harmless, as the resulting map is punched with holes just like Emmental!
Holes in a map are unfortunate, as they considerably complicate the spectral properties of the signal under consideration. But if there is a problem somewhere, it means that there is a solution, and I'm sure you will have guessed the solution: inpainting! Inpainting is just like cod: it's good when it's well made. The basic idea is to fill the holes with fake data that mimics some properties of the original signal. There are plenty of methods to inpaint a hole. For instance, the simplest one would be to fill the hole with the mean of all the surrounding pixels, but we can do much better than that.
The CMB has a nice property: it's correlated. This means that the probability that a given pixel has a given value is not independent of the values of the other pixels. The CMB is also (very, very close to) Gaussian, so we can use these two properties to generate random constrained Gaussian realizations that mimic the lost signal while preserving the statistical structure of the map. Basically, we can use the information contained in the non-contaminated region to guess the most probable values in the holes. This technique has been known for quite a while, but it can require heavy computation to accurately take into account all the information available in the map.
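A hedged toy version of a constrained Gaussian realization looks like this: given a known pixel covariance, the pixels in a hole are drawn from their Gaussian distribution conditioned on the observed pixels. The one-dimensional map, covariance kernel, and hole size are all invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "map" of 60 pixels with a known covariance: neighbouring
# pixels are correlated (exponential kernel, invented for the example)
n = 60
idx = np.arange(n)
cov = np.exp(-np.abs(idx[:, None] - idx[None, :]) / 5.0)
signal = np.linalg.cholesky(cov) @ rng.normal(size=n)

# Mask a hole of a few pixels, as if a point source had been removed
hole = np.arange(28, 33)
good = np.setdiff1d(idx, hole)

# Gaussian statistics of the hole conditioned on the observed pixels:
# mean = C_hg C_gg^-1 x_g,  covariance = C_hh - C_hg C_gg^-1 C_gh
C_gg = cov[np.ix_(good, good)]
C_hg = cov[np.ix_(hole, good)]
C_hh = cov[np.ix_(hole, hole)]
mean = C_hg @ np.linalg.solve(C_gg, signal[good])
cond_cov = C_hh - C_hg @ np.linalg.solve(C_gg, C_hg.T)

# A constrained realization: the conditional mean plus a fluctuation
# drawn with the conditional covariance
L = np.linalg.cholesky(cond_cov + 1e-10 * np.eye(len(hole)))
inpainted = mean + L @ rng.normal(size=len(hole))

# Conditioning on the surrounding pixels can only shrink the
# uncertainty inside the hole
print(np.diag(cond_cov) <= np.diag(C_hh))
```

The expensive part on a real map is the conditioning on millions of pixels, which is exactly what the local approach described below avoids.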
In recent work, my collaborators and I have demonstrated that it is not necessary to account for the whole map to inpaint a small hole: using only the neighbouring pixels is enough. With this method, we are able to inpaint a realistic CMB map, with a realistic point-source mask, in a realistic time! And the statistical properties of the inpainted maps are virtually unchanged! We applied these locally constrained Gaussian realizations to the special case of CMB lensing reconstruction and showed that this tool is very efficient, as it preserves the two-point statistics and – above all – does not generate a spurious four-point signal. Of course, I now need to tell you about CMB lensing and four-point statistics, which is a very big thing that we'll hear a lot about in the coming months. But that will be for another post!
You can read more here:
Aurélien Benoit-Lévy et al.
This guest blog post was written by Stephen Feeney.
Neutrinos are puzzling little particles. The Standard Model of particle physics tells us that there are three different types, or “flavours”, of neutrinos, each of which has no mass. Particle physics experiments, however, tell a different story. Firstly, the fact that neutrinos produced in the Sun (and nuclear reactors) are observed to oscillate between flavours requires them to have mass. Furthermore, unexpected results corroborated by a number of experiments measuring the rate of oscillations hint at the existence of more than three neutrino flavours.
This is all very interesting, and it seems that the Standard Model is ripe for revision, but what has it got to do with cosmology? Quite a lot, as it happens! Neutrinos, along with all fundamental particles, are produced in the early Universe. As they are so light, the neutrinos are extremely relativistic, and hence very energetic: so much so that, along with the photons, they dominate the dynamics of the early Universe. If there are four neutrinos rather than the standard three, the Universe initially expands more quickly than the current cosmological model (LambdaCDM) predicts, and cold dark matter becomes important at later times. Furthermore, neutrinos are so energetic that they are able to escape from the density perturbations set up in the early Universe — they “free-stream” a certain distance which is dependent on their mass — and hence affect the amount of structure that forms.
All of this has observable consequences for cosmology, affecting the relative amounts of simple elements produced in the Universe’s first few minutes, the amplitude and scale of the oscillations in the cosmic microwave background (CMB), and the numbers of, for example, galaxy clusters. We can therefore try to determine the number of neutrino species and their masses from cosmology! In fact, the strength of the effects of neutrinos in the early Universe, coupled with the quality of cosmological data, means that cosmological tests of neutrino physics are more powerful than laboratory experiments.
That’s what we (Hiranya, Licia Verde and I) have been doing. People have, of course, already started looking for hints of non-standard neutrinos in cosmology, and have found weak indications that the most likely number of neutrino species is greater than three. These analyses have, however, relied on parameter estimates: finding the most likely number of neutrino species given the data, and claiming evidence of new physics if this is not the standard value.
What we really want to do is to determine the support for a model with, for example, extra neutrinos, and compare this to the support for the standard cosmological model. This is what we’ve done: we have used the Bayesian evidence to compare the relative probabilities of LambdaCDM and models with extra neutrino flavours and massive neutrinos, given cosmological data. The datasets we consider include CMB power spectra from WMAP and the South Pole Telescope, measurements of the Universe’s recent expansion, and measurements of the weak gravitational lensing of the CMB by intervening large-scale structure. Our model-selection results indicate that the standard cosmological model is favoured over models with additional neutrinos and/or neutrino mass. There is therefore no requirement from cosmology to change our view of the neutrino. We’re going to need better cosmological data — such as the Planck data coming in a couple of months’ time — to make progress on this problem.
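The flavour of an evidence calculation can be conveyed with a deliberately simplified toy: compare a model with the number of neutrino species fixed at the standard value against one where it is free with a flat prior, given a hypothetical Gaussian "measurement". All the numbers below are invented for illustration and bear no relation to our actual results or methods.

```python
import numpy as np

# Hypothetical Gaussian "measurement" of the number of relativistic
# species: N_eff = 3.5 +/- 0.5 (invented numbers)
n_obs, sigma = 3.5, 0.5

def likelihood(n):
    return np.exp(-0.5 * ((n - n_obs) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Model 0: standard cosmology, N_eff fixed at 3 (no free parameter),
# so its evidence is simply the likelihood at that value
evidence_0 = likelihood(3.0)

# Model 1: N_eff free with a flat prior on [3, 7]; its evidence is the
# likelihood averaged over the prior (trapezoidal integration)
n_grid = np.linspace(3.0, 7.0, 2001)
lk = likelihood(n_grid)
dn = n_grid[1] - n_grid[0]
evidence_1 = (1.0 / 4.0) * np.sum(0.5 * (lk[1:] + lk[:-1])) * dn

# Although the best fit is N_eff = 3.5, the Bayes factor favours the
# simpler model: the extra parameter pays an Occam penalty for all the
# prior volume where it fits the data poorly
bayes_factor = evidence_0 / evidence_1
print(bayes_factor)  # ~2.3 in favour of the fixed-N_eff model
```

This is the Occam's-razor behaviour that distinguishes model selection from simply quoting the best-fit parameter value.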
You can read more here:
Stephen M. Feeney, Hiranya V. Peiris and Licia Verde
Voids are all the large, empty spaces in the Universe. Until recently they have been mostly ignored as astronomers focused on all the bright, hot, and glowy stuff out there: stars, galaxies, and clusters. And we can’t really blame the astronomers: stars, galaxies, and clusters are easy to spot, easy to collect information from, have rich dynamics…in other words, they’re interesting.
Voids, on the other hand, are hard to identify, hard to collect information from, and have relatively uninteresting dynamics. In fact, they are so boring that their structure and evolution pretty much echoes the early universe with very little modification. This yields a surprising result: if we can reliably identify voids, then we have an almost perfectly clear window into the early universe and the nature of dark energy!
Galaxy surveys such as the Sloan Digital Sky Survey are mapping large volumes of the Universe, revealing many structures large and small, including cosmic voids. Working with Ben Wandelt and Guilhem Lavaux, I’ve adapted a void finder to work with real observational data.
The figure shows the distribution of medium-sized voids in our nearby universe; the points are individual galaxies. With the additional help of David Weinberg, I've developed a public catalog of these voids. This catalog is accelerating many kinds of innovative void-based science.
Written by visiting collaborator and guest blogger Matt Johnson.
Computer simulations of the early universe allow us to explore scenarios that are just not possible to examine using pencil-and-paper calculations. I was recently in London working on how to use computer simulations to predict the observable signatures of eternal inflation. In the standard story of eternal inflation, our universe is purported to be enclosed inside one bubble among many, each expanding into an inflating background space-time. Our bubble undergoes collisions with others, providing a possible observational window on this bizarre scenario (read about the latest tests in this paper).
The field configuration making up our bubble determines the properties of our cosmology. For example, the symmetry of a single bubble fixes the universe to be an open Friedmann-Robertson-Walker (FRW) space-time. However, the collision between bubbles upsets this symmetry, meaning that if we live inside a bubble which underwent collisions, there will be deviations from a purely open FRW space-time.
The equations governing the evolution of the fields making up the bubbles and gravity are highly non-linear, making computer simulations a necessary tool for finding the exact space-time resulting from the collision between bubbles. Setting our sights on understanding the collision between pairs of bubbles, there is enough symmetry to evolve in 1 space and 1 time dimension. Last year, we wrote a code to simulate the collision between two bubble universes (you can read about our first results in this paper). One shortcoming of this original work was that the coordinates we used prevented us from evolving for longer than a few e-folds of inflation. We’ve fixed that by introducing a more appropriate set of coordinates, and now the challenge is to extract the cosmology contained inside one of the bubbles.
For colliding bubbles, one no longer has a purely homogeneous and isotropic cosmology. However, if collisions are to be consistent with what we observe, there had better be regions where the space-time is approximately FRW. So, how does one identify which regions are approximately FRW from the simulation data? How does one then quantify the deviation from FRW, so that quantitative predictions can be made? Our current method tracks the behaviour of geodesics evolved through the simulation. Because we know how geodesics would evolve through a purely FRW universe, we can determine the qualitative and quantitative differences. This is depicted schematically in the figure below, where I’ve shown the results of a simulation, and some of the geodesics evolving in the background. The geodesics entering the affected region pile up, indicating a deviation from pure FRW. We hope to polish the final code off soon, and extract the first observational signatures from our simulations!
It has been a very busy autumn for Early Universe @UCL, but we took time out to celebrate Halloween with a pumpkin carving competition! Jason, the esteemed judge, says:
So, the results from the pumpkin carving competition are now in…
We have two categories: best geek homage and best teeth. Entries were received from Hiranya and Boris.
Hiranya’s entry: “Ghostly moon landing” in honour of Neil Armstrong.
Boris’ entry: “Miam miam”, which according to Boris translates as “yum yum”.
In the best geek homage category, Boris’ entry comes a respectable 3rd and the grand prize goes to… Hiranya!
In the best teeth category, Hiranya’s entry comes a respectable 3rd and the grand prize goes to… Boris!
Surprisingly, it’s almost as if the participants tailored their entries to each category! In any case, it turns out everyone’s a winner! Yay!
Last May, Boris Leistedt and Jason McEwen finished developing the first exact harmonic and wavelet transforms in 3D spherical coordinates. The paper has now been peer-reviewed and will be published in IEEE Trans. Sig. Proc. in December. The code is available on www.flaglets.org.
Why exact? Exact transforms are essential in information sciences because they enable one to capture all the information contained in a complex signal and process it without any loss. Thanks to this theoretical property, the accuracy achieved by the transform is only limited by the representation of floating-point numbers on the computer!
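What "exact" means in practice can be demonstrated with any invertible transform; in the sketch below the familiar FFT stands in for an exact harmonic transform, and the roundtrip through coefficient space and back recovers the signal to machine precision.

```python
import numpy as np

rng = np.random.default_rng(3)
signal = rng.normal(size=1024)

# Forward then inverse transform; the FFT stands in here for an exact
# harmonic transform on the ball
coeffs = np.fft.fft(signal)
recovered = np.fft.ifft(coeffs).real

# The roundtrip error sits at the level of floating-point precision,
# not at the level of any approximation in the transform itself
print(np.max(np.abs(recovered - signal)))
```

By contrast, a transform based on an approximate quadrature rule would leave a residual far above floating-point level, and that residual would grow each time a pipeline goes back and forth between pixel and coefficient space.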
Why wavelets? Wavelet transforms are increasingly successful in various disciplines (including astrophysics and geophysics) because they separate scale-dependent, localised features in the signals of interest, which is very suited to the extraction of patterns in complex data. As an example they were used by Stephen Feeney and Hiranya Peiris to search for potential signatures of topological defects in the cosmic microwave background.
Why in three dimensions? In astrophysics, spherical wavelets are naturally suited to the analysis of signals on the sky, and cosmologists have already started to apply them to galaxy surveys, for instance to help deal with systematics or to isolate structures at different physical scales of interest. However, galaxy surveys are 3D datasets in which the angular position is accurately measured on the sky but the radial information (such as photometric redshift or distance) is often estimated from colour information and is therefore subject to uncertainty. Not only should one use 3D wavelets to treat such data properly, but these wavelets should also separate radial and angular features, to avoid mixing radial and tangential systematic uncertainties. Accuracy is also critical because most modern pipelines apply these transforms repeatedly (to go back and forth between pixel and wavelet space), leading to a loss of information if the transform is not exact.
What is the outcome? To address these problems, Boris and Jason designed a 3D harmonic transform combined with a sampling theorem on the ball, namely a 3D pixelization scheme that captures all the harmonic information in a finite set of samples on the ball. Then, they defined wavelet kernels, called “flaglets”, to extract scale-dependent, localised features in signals defined on the ball. The flaglets isolate patterns in the radial and angular dimensions separately and are very promising for analysing galaxy surveys and treating their systematics.
The code is fast, accurate, and is publicly available on www.flaglets.org.
Cosmic bubble collisions provide an important observational window on the dynamics of eternal inflation. In eternal inflation, our observable universe is contained in one of many bubbles formed from an inflating metastable vacuum. The collision between bubbles can leave a detectable imprint on the cosmic microwave background (CMB) radiation. Although phenomenological models of the observational signature have been proposed, to make the theory fully predictive one must determine the bubble collision spacetime, and thus the cosmological observables, from a scalar field theory giving rise to eternal inflation. Numerical simulations of bubble collisions in full General Relativity (GR) are needed to make a direct connection between a scalar field model giving rise to eternal inflation and the signature of bubble collisions expected to be present in the CMB. Numerics are important because the collision between bubbles is a highly non-linear process.
We recently took the first steps towards this goal, in collaboration with Matt Johnson and Luis Lehner at the Perimeter Institute in Canada. We simulated collisions between two bubbles in full GR. These simulations allowed us to accurately determine the outcome of bubble collisions, and examine their effect on the cosmology inside a bubble universe. Studying both vacuum bubbles and bubbles containing a realistic inflationary cosmology, we confirmed the validity of a number of approximations used in previous analytic work, and identified qualitatively new features of bubble collision spacetimes, such as oscillons. We identified the constraints on the scalar field potential that must be satisfied in order to obtain collisions that are consistent with our observed cosmology, yet leave detectable signatures.
The figures below show a couple of examples of bubble collisions in our simulations.
Wavelets are a powerful signal analysis tool due to their ability to localise signal content in scale and position simultaneously. The generic nature of wavelets means that they are suitable for many problems, but by the same token, for certain problems they are not necessarily optimal. In some instances, where we have a good prior knowledge of the signal we are interested in, we can create an optimal filter to look for it.
When looking for the signatures of bubble collisions in cosmic microwave background (CMB) observations, we have a good theoretical prediction of the expected form of the signal. To detect likely bubble collision candidates in the CMB, we developed a source detection algorithm using optimal filters.
Since the bubble collision signatures that we expect to see in the CMB vary with size and position on the sky, wavelets are a powerful method for the candidate source detection, and indeed have been shown to work very well. However, since we have a good prediction of the bubble collision signature, we can exploit this knowledge by constructing optimal filters that are matched to the expected signal, and also to the background stochastic process in which the signal is embedded (in this case CMB fluctuations). The optimal filters can thus look quite different to the wavelets that would be used for the same purpose (see plots).
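The optimality of matched filtering is easy to verify in a toy one-dimensional setting: for any linear filter, the signal-to-noise ratio is the filtered template amplitude divided by the filtered background fluctuation, and the template down-weighted by the background power always does at least as well as the raw template. The profile and background spectrum below are invented stand-ins, not the real collision signature or CMB spectrum.

```python
import numpy as np

n = 1024
k = np.fft.fftfreq(n, d=1.0) * 2 * np.pi

# Hypothetical collision-like template: a smooth bump (a Gaussian
# profile standing in for the true predicted signature)
x = np.arange(n)
tau = np.exp(-0.5 * ((x - n // 2) / 10.0) ** 2)
tau_k = np.fft.fft(tau)

# Invented "CMB-like" background power spectrum: more power on
# large scales (small k)
P = 1.0 / (1e-3 + k ** 2)

# SNR of a linear filter psi acting on template tau in noise with
# power spectrum P: |sum psi* tau| / sqrt(sum |psi|^2 P)
def snr(psi_k):
    num = np.abs(np.sum(np.conj(psi_k) * tau_k))
    den = np.sqrt(np.sum(np.abs(psi_k) ** 2 * P))
    return num / den

# Naive filter: the template itself. Matched filter: the template
# divided by the background power, psi(k) = tau(k) / P(k)
snr_naive = snr(tau_k)
snr_matched = snr(tau_k / P)

print(snr_matched > snr_naive)  # True: the matched filter wins
```

The inequality follows from the Cauchy-Schwarz inequality, with equality only when the background power is flat, which is why the gain is largest when the signal must be dug out of a strongly coloured background like the CMB.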
The image below shows the signal-to-noise ratio (SNR) achieved by wavelets and optimal filters for detecting bubble collisions in the CMB. We see that optimal filters outperform wavelets since they are closely adapted to the profile of both the bubble collision signature and also the background stochastic process (the CMB).
For the detection of cosmic bubble collisions, optimal filters allow us to exploit our knowledge of the expected signal. However, wavelets are suitable for a wider class of problems. As always, the best analysis technique to adopt depends heavily on the problem at hand.
Our results on a search for textures in the cosmic microwave background have been featured in Science's Editor's Choice. The BBC website also published an article about our work!
FQXi talked to Stephen Feeney and Hiranya Peiris about cosmic textures, and more generally about how we know what we know about the origin and evolution of the Universe, in their podcast. Our co-author Matt Johnson blogged about our search for textures, also at FQXi.
The 2012 Gruber cosmology prize has been awarded to Chuck Bennett and the Wilkinson Microwave Anisotropy Probe (WMAP) team. Hiranya is thrilled to share this honour with such an amazing group of people!
The citation reads:
The Gruber Foundation proudly presents the 2012 Cosmology Prize to Charles Bennett and the Wilkinson Microwave Anisotropy Probe team for their exquisite measurements of anisotropies in the relic radiation from the Big Bang—the Cosmic Microwave Background. These measurements have helped to secure rigorous constraints on the origin, content, age, and geometry of the Universe, transforming our current paradigm of structure formation from appealing scenario into precise science.
Other Members of the WMAP team are:
Chris Barnes, Rachel Bean, Olivier Doré, Joanna Dunkley, Benjamin M. Gold, Michael Greason, Mark Halpern, Robert Hill, Gary F. Hinshaw, Norman Jarosik, Alan Kogut, Eiichiro Komatsu, David Larson, Michele Limon, Stephan S. Meyer, Michael R. Nolta, Nils Odegard, Lyman Page, Hiranya V. Peiris, Kendrick Smith, David N. Spergel, Greg S. Tucker, Licia Verde, Janet L. Weiland, Edward Wollack, and Edward L. (Ned) Wright.
See the Gruber Foundation website for further details.
Theories of the primordial Universe predict the existence of knots in the fabric of space – known as cosmic textures – which could be identified by looking at light from the cosmic microwave background (CMB), the relic radiation left over from the Big Bang.
Using data from NASA’s Wilkinson Microwave Anisotropy Probe (WMAP) satellite, Stephen Feeney and Hiranya Peiris, with collaborators Matt Johnson (Perimeter Institute, Canada) and Daniel Mortlock (Imperial College London) have just performed the first search for textures on the full CMB sky, finding no evidence for such knots in space.
As the Universe cooled it underwent a series of phase transitions, analogous to water freezing into ice. Many transitions cannot occur consistently throughout space, giving rise in some theories to imperfections in the structure of the cooling material known as cosmic textures.
If produced in the early Universe, textures would interact with light from the CMB to leave a set of characteristic hot and cold spots. If detected, such signatures would yield invaluable insight into the types of phase transitions that occurred when the Universe was a fraction of a second old, with drastic implications for particle physics.
The new study, published in Physical Review Letters, places the best limits available on theories that produce textures, ruling out at 95% confidence theories that produce more than six detectable textures on our sky.
You can read the paper below!
Stephen M. Feeney, Matthew C. Johnson, Daniel J. Mortlock, Hiranya V. Peiris