Researchers famously tried to replicate 100 prominent psychology studies and reported last year in Science that only about 40 percent could be. That scandalous result has provoked a defensive backlash from other practitioners, who assert that the Science replicability study was itself flawed and that the reported results of psychological research are, in fact, stronger than critics have claimed. Even as that controversy plays out, a new study finds that one of the more robust results in psychological research may be bunkum. Specifically, the new study, to be published in Perspectives on Psychological Science, challenges the finding that an individual's willpower declines as he or she resists various immediate temptations.
During the past two decades, the ego depletion effect has been confirmed by scores of psychological studies using all manner of experimental techniques. However, according to Slate, a new study that tried to replicate ego depletion with 2,000 subjects in experiments around the world has found absolutely no such effect. This is a huge bombshell. As Slate explains:
No sign that the human will works as it's been described, or that these hundreds of studies amount to very much at all. …
For scientists and science journalists, this back and forth [over replicability] is worrying. We'd like to think that a published study has more than even odds of being true. The new study of ego depletion has much higher stakes: Instead of warning us that any single piece of research might be unreliable, the new paper casts a shadow on a fully-formed research literature. Or, to put it another way: It takes aim not at the single paper but at the Big Idea.
[Roy] Baumeister's theory of willpower, and his clever means of testing it, have been borne out again and again in empirical studies. The effect has been recreated in hundreds of different ways, and the underlying concept has been verified via meta-analysis. It's not some crazy new idea, wobbling on a pile of flimsy data; it's a sturdy edifice of knowledge, built over many years from solid bricks.
And yet, it now appears that ego depletion could be completely bogus, that its foundation might be made of rotted-out materials. That means an entire field of study—and significant portions of certain scientists' careers—could be resting on a false premise. If something this well-established could fall apart, then what's next? That's not just worrying. It's terrifying.
In his seminal 2005 article, "Why Most Published Research Findings Are False," Stanford University statistician John Ioannidis concluded that "for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias."
Might I suggest that psychology and, in fact, the rest of science would benefit from some really focused research on the problem of confirmation bias?
For more background, see my Reason feature "Broken Science."
From the Slate article:
To figure out what went wrong, Carter reviewed the 2010 meta-analysis (the study using data from 83 studies and 198 experiments). The closer he looked at the paper, though, the less he believed in its conclusions. First, the meta-analysis included only published studies, which meant the data would be subject to a standard bias in favor of positive results. Second, it included studies with contradictory or counterintuitive measures of self-control. One study, for example, suggested that depleted subjects would give more money to charity while another said depleted subjects would spend less time helping a stranger.
Wow. Damn. Those are some pretty glaring flaws. I always felt skeptical about the idea of ego depletion, having read about it briefly in Dave Grossman's On Killing, and later in several other places, but I figured my anecdotes were just outliers. Interesting now that the conclusions may be bogus.
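That first flaw, publication bias, is easy to demonstrate with a toy simulation: if journals print only "significant" results, a meta-analysis of the published record can report a solid effect even when the true effect is exactly zero. All numbers below are illustrative, not estimates from the ego-depletion literature.

```python
import random
import statistics

random.seed(1)

TRUE_EFFECT = 0.0    # suppose ego depletion does not exist at all
N_STUDIES = 2000     # labs that each run one small experiment
N_SUBJECTS = 20

published = []
for _ in range(N_STUDIES):
    # each lab estimates the effect from a small, noisy sample
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_SUBJECTS)]
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / N_SUBJECTS ** 0.5
    # journals accept only positive, "significant" results (z > 1.96)
    if mean / se > 1.96:
        published.append(mean)

# the file drawer hides the nulls, so the literature shows a real-looking effect
print(f"{len(published)} of {N_STUDIES} studies got published")
print(f"meta-analytic mean effect size: {statistics.mean(published):.2f}")
```

Averaging only the studies that cleared the significance bar yields a clearly positive "meta-analytic" effect, even though every lab was sampling pure noise.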
During the past two decades, the luminiferous aether has been confirmed by scores of physics studies using all manner of experimental techniques. However, according to the American Journal of Science, a new study by Albert A. Michelson and Edward W. Morley that tried to detect the luminiferous aether has found absolutely no such effect. This is a huge bombshell.
Might I suggest that you missed a comma?
It's like you're not even trying anymore.
We found these in the back of your car -
,,,,,,,,,,,,,,, ,,,,,,,,,,,,,,,,, ,,,,,,,,,,, ,,,,,,,,,,,,, ............... '''''''''''''''' ;;;;;;;;;
Have you been punctuating without a licence?
(PS, reason, what's with the character limit per 'word'?)
Great you stole all my punctuation Now I have to go without for the rest of the day
All your ampersands are belong to us.
Looks like what you're saying is that No Means... Yes?
A little still she strove, and much repented
And whispering 'I will ne'er consent'—consented.
BUT my p-value < .05... it must be true
I knew them fancypants collegeboys didn't know nothing nohow. Them "scientists" say one thing one day and another thing the next day. They're obviously covering up the truth of phlogiston and haruspication to line their own pockets and because they hate America!
They proved that 60 percent of those studies never existed?
I'll give that sentence a pass. Putting "replicated" at the end strikes me as an unnecessary redundancy.
Chippenhaus 1, Dean 0.
All 100 studies were indeed replicated. The results could or could not be replicated.
"In the olden days there was a craft to running an experiment. You worked with people, and got them into the right psychological state and then measured the consequences. There's a wish now to have everything be automated so it can be done quickly and easily online."
This from the guy whose studies are now in question.
Uh, sounds a bit like instilling bias to me. Care to double-blind that 'getting them into the right psychological state'?
Yes, the full Slate article mentions this as really an indication of how fragile the Big Idea is. A true Big Idea is robust enough to withstand all sorts of variations and perturbations.
I wonder how psychology can even be considered a science at all, along with most of the other social sciences. I'm willing to cut economics some slack for the basics, like supply and demand, but I wonder, if college educations were entirely private and didn't have politicians pushing the idea that everybody must have a degree, would any of the social sciences even survive? It seems as if social science Ph.D.s are only useful to get a job teaching.
It seems as if social science Ph.D.s are only useful to get a job teaching.
Hey, that's a good hypothesis for a dissertation!
Dayum. Would that be a meta-Ph.D.?
It already kind of exists in other academic ranking systems.
But you only get the Ph.D if your dissertation says that social science is science
Winner!
They would become much more rigorous if government money were taken out of the picture. There is a very large and specific place for these types of experiments. Advertising alone relies on understanding how a human being will react to a given situation. This is not what most of these people want to actually spend their lives doing. Besides, it's so much more fun not to be questioned on your conclusions.
In the hard sciences controlling for researcher bias is a big part of our training. Confirmation bias is a very powerful effect - and counterintuitively these self-deceptions can be more powerful the smarter you are. Humans are very good at fooling themselves.
In most of the physical sciences and biological sciences it is fairly easy to control for this, because the effects being tested are usually very discrete and well-defined. As biological sciences move toward the more human scale in areas like epidemiology, you start to see much more impact from undetected biases.
Probably the biggest problem is the degrees of freedom a researcher allows himself. In behavioral sciences it is very common to have many data points being collected simultaneously. With multiple variables it is easy to pick through the data and find something interesting that stands up to preliminary statistical significance.
This is where pretty much every press-release science fad comes from. Acai berries, dark chocolate.... all the super-food stuff looks like this. There's one small study that shows X food prevents Y disease - but when you look closer they were looking at dozens or even hundreds of variables to pull out the interesting ones, claiming those as hits. Of course the follow-up studies, if they are ever done, show there is no effect. Because the entire effect was due to random noise.
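The "many variables" trap described above can be shown with a toy simulation: test enough pure-noise variables at p < .05 and some will clear the bar by chance alone. Everything here is made up; the point is only the mechanism.

```python
import random
import statistics

random.seed(42)

N_PER_GROUP = 25
N_VARIABLES = 100   # acai, dark chocolate, ... all pure noise in this sketch

significant = 0
for _ in range(N_VARIABLES):
    # both groups drawn from the SAME distribution: no real effect anywhere
    treated = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    control = [random.gauss(0, 1) for _ in range(N_PER_GROUP)]
    diff = statistics.mean(treated) - statistics.mean(control)
    se = (1 / N_PER_GROUP + 1 / N_PER_GROUP) ** 0.5   # known sigma = 1
    if abs(diff / se) > 1.96:   # the usual p < .05 cutoff
        significant += 1

print(f"{significant} of {N_VARIABLES} noise variables look 'significant'")
```

On average about five of the hundred noise variables will pass the cutoff, and those are exactly the "hits" that make the press release.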
In most of the physical sciences and biological sciences it is fairly easy to control for this, because the effects being tested are usually very discrete and well-defined. As biological sciences move toward the more human scale in areas like epidemiology, you start to see much more impact from undetected biases.
It's easier to control for this objectively. As Feynman and countless others (legions of statisticians) point out, you still get cargo cults and people nudging certain values to and from truth and biases. As biological science moves away from take penicillin - get cured, the biases, known and unknown, become much more difficult to detect and isolate.
The Big Splash by Luis A Frank is a great example of this. That dude endured some SHIT!
When I was at the Yerkes Primate Center at Emory University, I sat in on some of the behavioral scientists' lab meetings. There were some great people doing really interesting work. They worked really hard to design experiments with good controls and blinding.
But as an outside pair of eyes, I usually saw major problems with their methodology. They were just too close to the problem. One researcher was studying "barter" among chimpanzees. She had a major paper published in a top journal on the topic at the time. As it was going to press, she presented her findings to her colleagues at the primate center. Everyone loved the study. Except the two idiots from outside the field, myself being one of the idiots.
Her study looked at research assistants trading treats for food-tube toys that the chimps had already emptied. They had nice video of the chimps handing over the toys in exchange for treats. Clear evidence of barter, right? Our criticism was that they had basically trained the chimp to bring them a toy in exchange for a reward. Like you would do with a dog, or a bird, or a dolphin... you get the idea.
But the researchers in the field didn't. They all said this was a standard methodology and the results were very strong. Our protestations that we could get our dog to do even better at this test fell on deaf ears.
It is really, really, really hard to design a psychology study that isn't bullshit. It is even harder to detect when you were the designer.
Ok, but how would that differ from a neo-Watsonian behavioralist's premise for explaining why we humans engage in trade?
I thought that was settled. For the satisfaction derived from fucking others over.
If I had been drinking a glass of milk at the moment I read this, the snarf would have been epic.
It is really, really, really hard to design a psychology study that isn't bullshit. It is even harder to detect when you were the designer.
I think a lot of this stems from Milgram's results. People were so shocked at the results, they simply labelled the studies "unethical". (As if authorities always behave ethically.) Too many researchers are now more interested in "ethical" replicability than in actually learning anything - so you get billions of dollars wasted on studies of the kind you mentioned. Your protestations MUST fall on deaf ears if those ears are to continue receiving grant money.
You mean even when we think the science is settled on something it may actually not be so?!
Half-life of knowledge
R: Do you know who else wrote about the half-life of knowledge?
Sweet, Ron. When the revolution comes, I'll kill you last.
Samuel Arbesman?
Then there's light: is it a particle or a wave? The question bounced back and forth before the quantum guys gave up and said it's both.
You can actually do experiments that prove that it is indeed both - at the same time.
The double slit experiment - where you pass light through two thin slits and the light creates an interference pattern due to the wave nature of the particles - can be set up with a light source so attenuated that only a single photon at a time can be passing through the apparatus. This can be confirmed by using digital detectors to measure each photon as they hit the target.
Yet the interference pattern of the wave will still show up. Even though there cannot be a second particle/wave passing through the other slit at the time. Very spooky.
Here is an image of the experiment being done with electrons.
This is physically impossible.... yet it is how reality works. Very cool.
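The "amplitudes add before you square" rule behind that interference pattern can be sketched in a few lines. The wavelength, slit separation, and screen distance below are just illustrative numbers, not any particular experiment.

```python
import cmath

WAVELENGTH = 500e-9   # metres; green-ish light (illustrative)
SLIT_SEP = 50e-6      # distance between the two slits
SCREEN_DIST = 1.0     # slits to detection screen

def intensity(x):
    """Relative probability that a single photon lands at screen position x."""
    # path-length difference between the two slit routes (small-angle approximation)
    delta = SLIT_SEP * x / SCREEN_DIST
    a1 = 1 + 0j                                         # amplitude via slit 1
    a2 = cmath.exp(2j * cmath.pi * delta / WAVELENGTH)  # via slit 2, phase-shifted
    # quantum rule: add the amplitudes first, THEN square
    return abs(a1 + a2) ** 2

print(intensity(0.0))     # centre: bright fringe, the amplitudes reinforce
print(intensity(0.005))   # half a wavelength of path difference: photons never land here
```

With these numbers the fringes repeat every centimetre on the screen; a classical "particle goes through one slit or the other" model would instead predict a smooth blob with no dark bands at all.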
Classically impossible. I still favor the many worlds interpretation which tends to make it a little less spooky and more classical--aside from all the spontaneously created universes.
I always liked Feynman's discussion of Newton's Rings, because it is so simple: take two surfaces, vary the distance between them, and measure what is reflected. Intuitively, if physically, light is composed of discrete particles, then causality should dictate reflections from the first surface cannot be influenced by the placement (or even the existence of) the second, but in reality, even Newton observed that they are, centuries ago.
And for what it's worth, I might shy away from saying that light is both a particle and a wave; the probabilities involved in its behavior end up being wavelike, but to say that light itself is a wave, I think tends to confuse things, since a) it is experimentally shown to consist of particles, and b) to say it is a wave, can once again imply something like the aether, at least for those who wouldn't be predisposed to assume the term is being used to describe its probabilistic behavior.
Kind of off topic, but I can't wait for the next round of studies on Phrenology. The secrets of life that we could understand if there was only moar greater funding.
It's a cover-up. How many phrenologists have there been at neuroscience conferences in the last 20 years? What is it that they don't want us to know?
Young ladies, indelibly fix this shape in your head.
😉
Do not feed the (potential) researchers.
"Might I suggest that psychology and, in fact, the rest of science would benefit from some really focused research on the problem of confirmation bias?"
Bailey why do you deny the Science? We have 100's of papers confirming this. There's a consensus. Clearly this last paper is just the work of a Denier!
I'm just kidding of course. But it does indicate that Science can follow a path of Falsity just as it can follow a path of Truth.
The obvious question is: why should I believe that the new ego depletion study is right?
"The obvious question is: why should I believe that the new ego depletion study is right?"
You should reduce your confidence in the original studies. I.e., it's not a binary state, it's a probabilistic state.
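That probabilistic update can be made concrete with Bayes' rule. The numbers below are pure illustration, not estimates from the actual literature:

```python
def update(prior, p_data_if_real, p_data_if_bogus):
    """Bayes' rule: posterior probability that the effect is real."""
    numerator = prior * p_data_if_real
    return numerator / (numerator + (1 - prior) * p_data_if_bogus)

prior = 0.9            # confidence after hundreds of confirmations
p_null_if_real = 0.1   # chance a 2,000-subject replication misses a real effect
p_null_if_bogus = 0.9  # chance of a null result if the effect never existed

posterior = update(prior, p_null_if_real, p_null_if_bogus)
print(f"confidence in ego depletion: {prior:.0%} -> {posterior:.0%}")
```

With these made-up numbers, one big failed replication drags a 90 percent prior down to 50 percent: not disproof, just a large, rational loss of confidence.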
I thought the obvious question is why anyone would trust Slate in any area of science?
Maybe the first step would be to stop publishing papers done by soft-science grad students. We know they don't have the funding to get enough subjects for a decent study, so stop treating their papers as if they were decent studies. Yes, in theory they should be doing work worthy of the field, but reality doesn't match that assumption.
'science would benefit from some really focused research on the problem of confirmation bias'
The problem with this idea is that focused research on confirmation bias is simply going to show confirmation bias in everything, even when it really isn't there.
You know, because of confirmation bias.