Date: 31 August 2017
A new neural network analyses images of gravitational waves in seconds, which means telescopes like Hubble could see deeper into the universe than ever before.
By Avery Thompson
Researchers at SLAC National Accelerator Laboratory and Stanford University have developed a neural network that can analyse images of gravitational lensing 10 million times faster than conventional techniques. The approach could dramatically extend the range and resolution of telescopes like Hubble and provide crucial information on galaxy clusters and dark matter.
Most of the universe is very far away, which makes it hard to study. Some of the most interesting galaxies are billions of light years from us. Even with our strongest telescopes it can be hard to make them out. Fortunately, physics provides a way for us to see further, using a phenomenon called gravitational lensing.
The gravity of a foreground galaxy distorts and bends the light from a more distant galaxy as that light travels past it. This is known as gravitational lensing.
Einstein predicted gravitational lensing with his general theory of relativity, published in 1915. The theory holds that large masses like stars and galaxies curve the path of light passing near them. Like a regular lens, a gravitational lens can focus and magnify distant objects, which means astronomers can use them to see further than they normally could.
But unlike a regular lens, a gravitational lens doesn't focus the object into a single sharp point. Instead, astronomers have to work backward from a smeared, spread-out image, using complex computer simulations to try to recreate the original. That process can take months to complete, so most astronomers don't take advantage of the extra range that gravitational lensing can provide.
Researchers at the Kavli Institute for Particle Astrophysics and Cosmology (KIPAC), a joint institute of SLAC and Stanford, trained neural network algorithms to analyse gravitational lensing images. These algorithms were able to completely analyse images from Hubble in only a few seconds, work that would previously have taken weeks or months.
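The article doesn't describe the networks' architecture, but a common approach to this kind of problem is a convolutional network that regresses lens parameters directly from an image. The following is only a minimal NumPy sketch of that idea, not the researchers' code: convolutional filters turn a lens image into a handful of features, and a linear read-out maps the features to parameter estimates. The toy ring image, the filter sizes, and the choice of three output parameters are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d(image, kernel):
    """Valid 2-D convolution (cross-correlation, as in most deep-learning libraries)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def forward(image, kernels, weights):
    """One conv layer -> ReLU -> global average pool -> linear read-out."""
    features = np.array([conv2d(image, k).clip(min=0).mean() for k in kernels])
    return weights @ features  # estimated lens parameters

# Toy 32x32 "lens image": a bright ring, the typical signature of strong lensing.
y, x = np.mgrid[-16:16, -16:16]
r = np.hypot(x, y)
image = np.exp(-((r - 8.0) ** 2) / 4.0)

kernels = rng.standard_normal((4, 5, 5))  # 4 random (untrained) 5x5 filters
weights = rng.standard_normal((3, 4))     # read-out to 3 hypothetical parameters,
                                          # e.g. Einstein radius, ellipticity, angle
params = forward(image, kernels, weights)
print(params.shape)  # (3,)
```

In a real system the filters and read-out weights would be learned from large sets of simulated lens images; once trained, a single forward pass like this takes a fraction of a second, which is where the speed advantage over iterative simulation-fitting comes from.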
This means we’ll be able to learn from gravitational lensing images much faster than before. This is crucial, because over the next few years several next-generation telescopes will come online. These new telescopes could find many more gravitational lenses. The number of gravitational lensing images is expected to increase from only a few hundred right now to tens of thousands in a few years. So we’ll need a much faster way to analyse them.
“We won’t have enough people to analyse all these data in a timely manner with the traditional methods,” said study author Perreault Levasseur. “Neural networks will help us identify interesting objects and analyse them quickly. This will give us more time to ask the right questions about the universe.”
These gravitational lenses let us peer deep into the distant universe and make out galaxies and other objects we wouldn't otherwise be able to see. But gravitational lenses can also tell us a great deal about the stars and galaxies doing the lensing. Astronomers can tell the size and mass of a galaxy based on how strongly it curves light around it, which makes gravitational lenses perfect tools for studying dark matter.
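To see how the strength of the bending encodes the lens's mass: for an idealised point-mass lens, the angular radius of the resulting ring of light (the Einstein radius) grows with the square root of the lens mass. A rough back-of-envelope calculation, using round-number constants and made-up but plausible distances (not figures from the study):

```python
import math

G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg
MPC = 3.086e22    # megaparsec, m

def einstein_radius(mass_kg, d_lens, d_source):
    """Angular Einstein radius (radians) for a point-mass lens.

    d_lens, d_source: distances to the lens and the source; the
    lens-to-source distance is approximated as d_source - d_lens,
    which is fine for a sketch but ignores cosmological effects.
    """
    d_ls = d_source - d_lens
    return math.sqrt(4 * G * mass_kg / C**2 * d_ls / (d_lens * d_source))

# A trillion-solar-mass galaxy halfway to a source 2000 Mpc away.
theta = einstein_radius(1e12 * M_SUN, 1000 * MPC, 2000 * MPC)
arcsec = math.degrees(theta) * 3600
print(f"{arcsec:.1f} arcseconds")  # about 2.0 arcseconds
```

Run in reverse, the same relation is how astronomers weigh the lens: measure the ring's angular size and the distances, and the total mass, dark matter included, falls out.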
The researchers hope that their neural networks will become the go-to method for analysing gravitational lensing, and they believe there are many more applications for this type of algorithm in astronomy and other sciences. There are all kinds of problems that neural networks are well-suited to solving. With any luck, future neural networks might soon make those problems disappear.
Video credit: PBS Space Time
From: PM USA