The authors and outside researchers say the work shows the app can accurately collect sleep data.
The approach has potential, says Daniel Forger, professor of mathematics and computational medicine and bioinformatics at the University of Michigan and one of the authors. "You could call a million people up, most of those people won't answer your survey, and spend millions of dollars doing this," he says. "But what was so remarkable was that — for almost no amount of money and almost instantaneously — we could collect this kind of data."
Typically, sleep researchers get data from controlled lab environments or large studies with subjects reporting back on what they did, says Jamie Zeitzer of the Stanford Center for Sleep Sciences and Medicine. An app bridges the two, he says.
"This is real life, this is what's actually happening and ... especially in something like sleep, there's a dichotomy between what people remember they do ... and what they actually do," Zeitzer adds. "This fits quite nicely into that gap and helps us understand actual behavioral patterns."
Other researchers wonder about generalizations drawn from the data. Among them: Diane Lauderdale, chair of the department of public health sciences at the University of Chicago and a sleep researcher, who says that while the patterns in this paper are plausible, she's not sure the data support some of the findings.
Her concern is that people who use the app — they have a smartphone, agree to send data back, presumably travel quite a bit — may not be representative of the people in their countries.
"There's a general challenge as we move into big data sources about how to weigh the attractiveness of using these ... to answer questions ... that we have not been able to answer and the real limitation that we don't really have control over, or knowledge about, exactly who we're getting the data from," she says. "It's not just unique to this study."
Michigan's Walch agrees, saying she has spent nights stressing over this potential selection bias. Still, she points out that the patterns the team describes match what sleep researchers have previously established in more controlled studies.
"The very first wave of data analysis we did, I was almost distraught," she says. "A lot of our things are confirmatory, and I came to realize, 'No, this is great that they're confirmatory of these smaller studies with fewer people, because it tethers us to reality.' But then, stepping back, it's still a problem."
She says she hopes the spread of the technology and improvements in its ease of use will help.
Ida Sim, co-director of biomedical informatics at the University of California, San Francisco Clinical and Translational Sciences Institute, shares Lauderdale's concern about selection bias and adds that big data researchers have to go out of their way to get a random representative sample.
Researchers need to hold people's hands through tech problems and motivate them to report good data, she says. As an example of something that could meet that standard, she points to the Precision Medicine Initiative from the National Institutes of Health, which aims to recruit 1 million or more people in the U.S. to study treatments that take into account different genes, environments and lifestyles. The president called for $215 million in 2016 for this program.
Sim says another issue is making sure that researchers are measuring the same thing. For instance, an app that reports a blood glucose value isn't very useful to other researchers unless they know whether it's fasting blood glucose level, a random level, an average over the past week or a single reading.
"It's like if people are speaking different languages, and they all use a slightly different word ... and it turns out everybody's talking about the same thing, but the words are slightly different and so communication is impeded."
She co-founded a nonprofit called Open mHealth that aims to develop open common standards, taking inspiration from the Internet's open architecture, as explained in a 2010 article in Science.
As for the study on sleep patterns, she says "the findings weren't that earth-shattering, but the methods and approach are illustrative," and that we can expect more big data research like this in the future.