The fall of a prominent food and marketing researcher may be a cautionary tale for scientists who are tempted to manipulate data and chase headlines.
Brian Wansink, the head of the Food and Brand Lab at Cornell University, announced last week that he would retire from the university at the end of the academic year. Less than 48 hours earlier, JAMA, a journal published by the American Medical Association, had retracted six of Wansink's studies after Cornell told the journal's editors that Wansink had not kept the original data and that the university could not vouch for the validity of the work.
In an internal review spurred by a wide range of allegations of research misconduct, a Cornell faculty committee reported a litany of faults with Wansink's work, including "misreporting of research data, problematic statistical techniques, failure to properly document and preserve research results, and inappropriate authorship." Cornell apologized for Wansink's "academic misconduct," removed him from his teaching and research posts, and obligated him to spend the remainder of his time there "cooperating with the university in its ongoing review of his prior research."
It was a stunning fall from grace for Wansink, who had become famous for producing pithy, palatable studies that connected people's eating habits with cues from their environment. Among his many well-known findings: People eat more when they're served in large bowls, when they're watching an action movie, and when they sit close to the buffet at an all-you-can-eat restaurant. His work was cited in national news outlets, including NPR, and he had a hand in developing the 2010 U.S. dietary guidelines.
Wansink's perch at the top of his field began to wobble in early 2017. That's when Tim van der Zee, a doctoral student in educational psychology at Leiden University in the Netherlands, went public with the results of an investigation that began when he stumbled across a blog post Wansink had written on his personal website the year prior.
The post, since removed from Wansink's site but accessible today as a cached version, was aimed at aspiring academics. The most promising postdoctoral students, Wansink wrote, "unhesitatingly say 'Yes'" to research projects, "even if they are not exactly sure how they'll do it."
But van der Zee was more interested in Wansink's description of the work he was assigning to his postdocs. Those descriptions, van der Zee says, appeared to contain a "strange admission" of "highly questionable research practices."
The gold standard of scientific studies is to state a single hypothesis in advance, gather data to test it, and analyze the results to see if it holds up. By Wansink's own admission in the blog post, that's not what happened in his lab.
Instead, when his first hypothesis didn't bear out, Wansink wrote that he used the same data to test other hypotheses. "He just kept analyzing those datasets over and over and over again, and he instructed others to do so as well, until he found something," van der Zee says.
That's not necessarily bad, says Andrew Althouse, a statistician at the University of Pittsburgh who has followed the controversy around Wansink's research methods. "There's nothing wrong with having a lot of data and looking at it carefully," Althouse says. "The problem is p-hacking."
To understand p-hacking, you need to understand p-values. A p-value gauges how likely it is that a result at least as striking as the one observed would turn up through random chance alone, if there were no real effect. It answers, for example: if your new diet actually did nothing, what are the odds you'd still have lost this much weight, just from natural fluctuations in myriad bodily functions?
P-hacking is when researchers play with the data, often testing hypothesis after hypothesis or swapping in different statistical models, until a result emerges that looks like it's not random.
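To make that concrete, here is a minimal sketch in Python (using invented data, not anything from Wansink's lab): 20 variables of pure noise are each tested against an arbitrary grouping, and on average about one of them will clear the conventional p < 0.05 bar through luck alone.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# An invented dataset: 200 "diners," 20 variables of pure random noise.
# By construction, no variable is related to anything.
n_diners, n_variables = 200, 20
data = rng.normal(size=(n_diners, n_variables))
big_eaters = rng.choice([True, False], size=n_diners)  # an arbitrary split

# Test every variable against the split and report whatever "works."
for i in range(n_variables):
    _, p = stats.ttest_ind(data[big_eaters, i], data[~big_eaters, i])
    if p < 0.05:
        print(f"variable {i}: p = {p:.3f}  <- looks significant, but is noise")
```

Run enough tests like these and something will nearly always turn up; the standard safeguards are to declare the hypothesis before looking at the data and to correct for the number of comparisons made.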
Large datasets can be prone to p-hacking, Althouse says. "Let's say you flip a coin a million times. At some point you're going to get 10 heads in a row." That does not mean the coin is weighted, even though looking at that sliver of data makes a random result look like it's not due to chance.
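Althouse's coin-flip point is easy to check with a few lines of simulation (again a sketch, not code from any study at issue): in a million fair flips, runs of 10 straight heads show up hundreds of times, and any one of them, viewed in isolation, looks like a loaded coin.

```python
import numpy as np

rng = np.random.default_rng(0)
flips = rng.integers(0, 2, size=1_000_000)  # 1 = heads, from a fair coin

# Count maximal runs of at least 10 consecutive heads,
# tallying each run once, at the moment it reaches length 10.
run_length, long_runs = 0, 0
for flip in flips:
    run_length = run_length + 1 if flip == 1 else 0
    if run_length == 10:
        long_runs += 1

print(f"Runs of 10+ heads in a million fair flips: {long_runs}")
# Expect roughly 500: a fresh run of 10+ heads begins about
# once in every 2**11 flips, purely by chance.
```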
Indeed, Wansink's lab collected reams of information in its research, often from pencil-and-paper surveys, logging everything from participants' age and gender to where they sat in a restaurant, the size of their group and whether they ordered alcohol.
Then they analyzed that data to find connections to what, and how much, people ate. As BuzzFeed News reporter Stephanie Lee found in a trove of emails released through various records requests, Wansink encouraged his students to dig through the numbers to find results that would "go virally big time."
Wansink seemed to admit to this practice in his 2016 blog post. "He, in a very honest manner, describes how he was actually doing the studies," van der Zee says. Wansink's blog post pulled back the curtain on dozens of failed analyses that never showed up in his published articles.
Van der Zee and two other early-career researchers, Jordan Anaya and Nick Brown, their interest piqued by what they saw as Wansink's acknowledgement of p-hacking, dug deeper into his work starting in late 2016.
The team found 150 problems with data collection and statistical analysis in the first four of Wansink's papers they scrutinized. The team's findings were validated earlier this month when Cornell reported the conclusions of its yearlong internal probe to JAMA, resulting in the journal's retractions of Wansink's work.
While Wansink is perhaps the most prominent researcher in recent history to be brought down by allegations of p-hacking, this type of academic malpractice is not specific to one lab at one university, say van der Zee and Althouse. One driver may be the pressure to publish quickly. "Science has become faster than is healthy," van der Zee says.
Cornell agrees. "Van der Zee is right to note that as the pace and reach of news has become instant and global, there may be a temptation" for universities and researchers to get caught up in a race for the next attention-grabbing conclusion, says Joel Malina, vice president for university relations at Cornell.
And in this media climate, food and nutrition science in particular has come under scrutiny — some have called it a "credibility crisis."
Nonetheless, Malina says, "We believe that the overwhelming majority of scientists are committed to rigorous and transparent work of the highest caliber."
Wansink says he stands by his studies and is confident that his lab's results will be validated by other groups. "I thought we had all of this nailed," Wansink wrote to his colleagues after getting news of the retractions, in an email he shared with NPR, suggesting that he felt the information he shared would clear him of wrongdoing.
He acknowledged some of the errors in a 2017 statement and says he provided as much information as he could to help the Cornell faculty committee corroborate his work. "We never kept the surveys once their data was entered into spreadsheets. None of us have ever heard that a person was expected to keep all of those old surveys," Wansink told NPR in an email last week.
Despite the questions surrounding Wansink's work and the unraveling of his academic career, some of his findings — such as the suggestion to use smaller bowls — can be useful to people with healthy relationships with food, says Jean Fain, a psychotherapist affiliated with Harvard Medical School who has contributed to NPR on dieting topics in the past.
But, she adds, "they can be dangerous to people with diagnosable eating disorders, who, in following Wansink's advice to a T, are more apt to ignore their internal experience of hunger and fullness, satisfaction and nourishment, and focus exclusively on externals, like plate and portion size."
"We can't simply reduce our portion sizes and stop overeating," she says. "In fact, restricting food in the short-term is one of the best ways to predict out-of-control eating in the future."
For all of Wansink's influence in the field of food and marketing, though, Althouse says he worries that the lessons of Wansink's mistakes will not be a wake-up call to the broader scientific community.
"I would love to send out a survey right now, right this minute, to all the faculty at my institution, and ask how many people have heard of this, because I bet you it's not that many," Althouse says.
But he's hoping that changes. "This should be the cautionary tale that gets brought up in Research Methods 101 across a number of disciplines," he says.