Detecting very small earthquakes is notoriously difficult. The churning of the ocean, a passing car or even the wind can feel a lot like a minor quake to the sensors that blanket seismically active parts of the U.S.
That’s a problem for scientists who rely on data about all the earthquakes in a region to study what triggers the biggest, most destructive ones.
Now, a team of scientists says it has found a way to accurately detect tiny earthquakes, and it has published a new, more comprehensive list of quakes that occurred over a recent 10-year period in Southern California. The work was published Thursday in the journal Science.
The team relied on data from a network of about 400 seismic sensors in California, spread from the U.S.-Mexico border up through the southern part of the state. Those sensors continuously measure movement in the Earth’s crust, looking for evidence of quakes. During the decade from 2008 to 2017, scientists had already identified 180,000 earthquakes in the region.
“They have a robust seismic network in Southern California,” explains Daniel Trugman, a seismologist at Los Alamos National Laboratory and an author of the study. But while 180,000 might seem like a large number of quakes, there were many, many more hiding undetected in the data.
When Trugman and his collaborators re-analyzed the data using a powerful array of computer processors, they found evidence of 10 times as many earthquakes — 1.81 million temblors in a decade, or roughly one tiny earthquake every three minutes or so.
“You don’t feel them happening all the time,” Trugman says. But “they’re happening all the time.”
Most quakes detected in the study are so small that their magnitude falls below zero. It’s not impossible for humans to feel such subtle trembling in the rock beneath our feet, but it is unlikely. Trugman compares it to a table being kicked over in your kitchen. If you are standing in the kitchen when it happens, you’ll notice the table hitting the ground. But if you’re up the street, or even just outside, you’re likely to miss it.
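Negative magnitudes are possible because magnitude scales are logarithmic: each whole step up or down the scale corresponds to roughly a tenfold change in measured ground motion, so an event ten times weaker than a magnitude-0 reference comes out at minus 1. A minimal sketch of that relationship (the reference amplitude here is an arbitrary illustration, not a calibrated seismological value):

```python
import math

def relative_magnitude(amplitude, reference_amplitude=1.0):
    """Magnitude relative to a reference event, using the standard
    base-10 logarithmic relationship between magnitude and amplitude."""
    return math.log10(amplitude / reference_amplitude)

relative_magnitude(0.1)  # ten times weaker than the reference: magnitude -1
```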
To detect the tiny quakes without mistaking them for nonquake vibrations (like a passing truck), Trugman and scientists at the California Institute of Technology and the University of California, San Diego, used computers to search a decade’s worth of data for patterns that resembled known earthquakes.
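Searching continuous data for the signature of known quakes is often called template matching. A simplified CPU sketch of the idea, using normalized cross-correlation on synthetic data (the waveform shape, noise level and detection threshold below are illustrations, not the study’s actual pipeline, which ran on GPUs at far larger scale):

```python
import numpy as np

def match_template(data, template):
    """Slide a known-quake template along a continuous record,
    returning the normalized correlation coefficient at each offset."""
    n = len(template)
    t = (template - template.mean()) / template.std()
    scores = np.empty(len(data) - n + 1)
    for i in range(len(scores)):
        window = data[i:i + n]
        sd = window.std()
        scores[i] = 0.0 if sd == 0 else np.dot((window - window.mean()) / sd, t) / n
    return scores

# Synthetic demo: bury a faint copy of a quake-like wiggle in noise.
rng = np.random.default_rng(0)
template = np.sin(np.linspace(0, 8 * np.pi, 100)) * np.exp(-np.linspace(0, 5, 100))
data = rng.normal(0.0, 0.1, 2000)
data[700:800] += 0.5 * template            # hidden event starting at sample 700

scores = match_template(data, template)
detections = np.flatnonzero(scores > 0.5)  # offsets where a match is likely
```

Because the correlation score measures shape rather than size, a match can stand out even when the event’s amplitude sits near the noise floor, which is what makes the approach suited to sub-zero-magnitude quakes.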
The analysis was made possible by advances in computer processors over the past decade or so. Even so, it took tens of thousands of hours for a group of 200 graphics processors housed at Caltech — basically souped-up versions of the graphics cards in laptop computers — to search through all the data and pinpoint potential quakes, and hundreds of thousands of hours more for other computers to finish the analysis.
The ability to measure more, smaller earthquakes should help scientists answer some of the most intriguing questions about how, where and why earthquakes happen.
In California, many communities rely on fault maps showing where earthquakes are most likely to happen to help make decisions about infrastructure, building codes and emergency plans. Having more complete information about quakes in the region could improve those maps and help identify what are known as blind faults, which aren’t visible on the surface but have the potential to shift underground.
A blind thrust fault was responsible for the 1994 Northridge earthquake in Southern California that killed more than 50 people, injured thousands of others and caused billions of dollars in damage. In fact, the network of seismic sensors that made the new study possible was put in place in the aftermath of that disaster.
It also might be possible to use data from similar sensor networks in other parts of the U.S. — for example, in the Pacific Northwest — to create more comprehensive catalogs of quakes in those regions.
The study’s authors hope to use the data to look at how large earthquakes are triggered and what role small earthquakes play in that process. Understanding that complex relationship could eventually help seismologists predict earthquakes.
“We’re going to be looking at a lot of these questions in a lot more detail,” says lead author Zachary Ross, a geophysicist at Caltech.
“The holy grail of earthquake seismology has always been prediction,” explains Trugman. In recent years, for example, the U.S. government has rolled out an earthquake warning system along the West Coast, which uses the same networks of sensors that scientists are studying. Having a deeper understanding of the seismic information that is fed into that early alert system could help make it more accurate.
“I’m cautiously optimistic that we’ll make progress on earthquake prediction,” Trugman says.