Study Finds: Studies Are Wrong
A major project to reproduce study results from psychology journals found that more than half could not be replicated.

One of the bedrock assumptions of science is that for a study's results to be valid, other researchers should be able to reproduce the study and reach the same conclusions. The ability to successfully reproduce a study and find the same results is, as much as anything, how we know that its findings are true, rather than a one-off result.
This seems obvious, but in practice, far more work goes into original studies designed to produce interesting conclusions than into the rather less glamorous work of reproducing existing studies to see whether their results hold up.
That's why efforts like the Reproducibility Project, which attempted to retest findings from 100 studies in three top-tier psychology journals, are so important. As it turns out, findings from the majority of the studies the project attempted to redo could not be reproduced. The New York Times reports on the new study's findings:
Now, a painstaking yearslong effort to reproduce 100 studies published in three leading psychology journals has found that more than half of the findings did not hold up when retested. The analysis was done by research psychologists, many of whom volunteered their time to double-check what they considered important work. Their conclusions, reported Thursday in the journal Science, have confirmed the worst fears of scientists who have long worried that the field needed a strong correction.
This is a serious problem for psychology, and for social science more broadly. And it's one that, as the Times points out, observers in and around the field have been increasingly worried about for some time.
Why is psychology research (and, it seems likely, social science research generally) so stuffed with dubious results? Let me suggest three likely reasons:
A bias towards research that is not only new but interesting: An interesting, counterintuitive finding that appears to come from good, solid scientific investigation gets a researcher more media coverage, more attention, more fame both inside and outside the field. A boring and obvious result, or no result, on the other hand, even if investigated honestly and rigorously, usually does little for a researcher's reputation. The career path for academic researchers, especially in social science, is paved with interesting but hard-to-replicate findings. (In a clever way, the Reproducibility Project gets around this issue by coming up with the really interesting result that lots of psychology studies have problems.)
An institutional bias against checking the work of others: This is the flipside of the first factor: Senior social science researchers often actively warn their younger colleagues—who are in many cases the best positioned to check older work—against investigating the work of established members of the field. As one psychology professor from the University of Southern California grouses to the Times, "There's no doubt replication is important, but it's often just an attack, a vigilante exercise."
This is almost exactly what happened in an incident earlier this year, when a couple of grad students first started to find discrepancies in a major study about attitudes toward gay marriage. The study, which claimed to find that attitudes on gay marriage could be quickly made more positive by a 20-minute chat with someone who is gay, turned out to be built on fake data. The grad student who uncovered the fakes has said that, over the course of his investigation, he was frequently warned off from his work by advisers, who told him that it wasn't in his career interest to dig too deeply.
Small, unrepresentative sample sizes: In general, social science experiments tend to work with fairly small sample sizes—often just a few dozen people who are meant to stand in for everyone else. Researchers often have a hard time putting together truly representative samples, so they work with subjects they can access, which in a lot of cases means college students.
For example, this study on how physical distance influences emotional perception, which is one of the studies that the Reproducibility Project tried and failed to replicate, relied on three experiments, one using 73 undergraduates, another using 42 undergraduates, and another using 59 adults. These are obviously pretty far from the large, randomized, controlled samples that researchers would ideally use. That's not to say that studies using small groups of individuals aren't valuable at all, or that the researchers behind them aren't doing rigorous work. It's just harder to know whether results generated from studies like this are widely generalizable.
By that same logic, however, we should be careful about over-interpreting the results from the Reproducibility Project's efforts. These were one-time replication attempts using, in many cases, the same small subject groups as the original studies. As Marcia McNutt, the editor of the journal Science, is quoted saying by the Times, "I caution that this study should not be regarded as the last word on reproducibility but rather a beginning."
Things were better here when Postrel had a take on the reproductivity of social "science" experiment results.
As boring and riddled with actual fraud as possible.
I wonder if this carries any implications with regard to the DSM?
This would seem to call all kinds of 'progressive' work into question; autism, aspergers, homosexuality...
Not saying all of it's bunk, but there have been some rather carte blanche overhauls based, rather loudly, on 'consensus' with little/no data.
The proglodyte agenda can never be rolled back, it can only be slowed.
They would certainly like to think so, but I can point to examples.
Who cares whether the science is right when we can all be wrong together?
They just make it up as they go along anyway. I think there is some utility to psychiatry, but it's not really a scientific discipline.
I have seen into the dark underbelly of the DSM. Some of it is straightforward and useful. Other parts are fraud on an epic scale, resulting in the vast overmedication of our children and seniors, just for a start.
You did your brother-in-law a favor.
I'm not aware that it was ever supposed to be anything other than a consensus and a judgment call by mental health workers.
From a libertarian point of view, what difference does it make what the DSM says? The DSM only matters if you believe that the state has some sort of special authority over people who are non-violent but "not normal".
A major project to reproduce study results from psychology journals found that more than half could not be replicated.
And that's just the half they attempted to reproduce.
True. This is the cream of the crop! They didn't bother to reproduce the 'average' or 'lower-grade' stuff.
And if I am remembering this correctly, the ones they were able to reproduce showed half the effect that was claimed in the accepted study, or less, in practically every case, meaning that some serious bias and/or exaggeration of results was going on.
Whenever I hear of a "study", I always look to see who conducted it.
Almost invariably it is done by: "The group, whose intent it is to prove what we have just told you is the result of our study".
Even assuming honest intent, one inherent problem with social science is controlling variables vs. sample size. Design an experiment with real live, honest-to-goodness human beings in which you really can control for most variables, and your sample size is too small. Do a large-scale study, and I don't care what kind of statistical black magic one does, too many variables have not been properly accounted for.
This is why the AGW crowd isn't doing science. Computer models don't equal science. Sure, trying to collect evidence is, but when they cherry-pick tree-ring data and ice core samples it is simply confirmation bias. There are no truly reproducible results (unless they share all their data), and there is no truly falsifiable hypothesis.
Science can be seen as the process of making models. The laws of physics are a model of how the universe works.
But when doing proper science, you see if your model makes good predictions, and if it doesn't you don't keep claiming that your models are making useful predictions. Especially when you are dealing with a highly chaotic dynamic system like climate, which is something we aren't very good at yet.
I agree with you. Perhaps I should have said "computer models which bear little resemblance to reality are not science." These models take little account of solar cycles, either in terms of net energy output, or magnetic field strength which has a large effect on charged particles which in turn has an effect on cloud generation.
It is like anything else: GIGO.
Is there a way to donate to this group, so they can afford to do more of these reproducibility studies? Or is this all over?
I'd donate to them if they'd take on the IPCC "analyses"...
All three of Peter's suggested causes are correct.
First, one sadly can't build an academic reputation with replications. Despite all their hand-wringing about how more replications are needed, top journals don't as far as I know ever publish them. So time spent doing potentially high-impact work is spent on lower-impact (by conventional metrics) work. Most scientists will feel pressure from their administrators not to do this work.
(contd)
Second, all these fields are small, incestuous and examples of "old boy networks". For example if I were to pick apart a powerful senior person in my field who does terrible work (of which there is one), there would be retaliation. I would expect it to be the last paper I publish in that field. So I try to find ways to contribute that do not cross that person.
But on the statistics, Suderman has only touched the tip of the iceberg. P values do not mean what people think they mean. In basically all cases, they should be replaced with power calculations. This is assuming that the assumptions about the distribution of variance needed to run a t test are even remotely true.
(contd)
For example, we make medical diagnoses about the brain all the time based on the assumption that such and such brain feature has a bell curve distribution in the population. But almost nothing about the brain shows a bell-curved distribution in the population. So parametric tests can't really be used. But we use them all the time anyway.
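As a toy illustration of the point above (my own sketch, not the commenter's data), a common defensive workflow is to check the normality assumption first and fall back to a rank-based test when it fails; the group data here are made up, drawn from a deliberately skewed distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical skewed "brain feature" measurements for two groups --
# log-normally distributed rather than bell-curved, as the comment describes.
group_a = rng.lognormal(mean=0.0, sigma=0.5, size=60)
group_b = rng.lognormal(mean=0.2, sigma=0.5, size=60)

# Shapiro-Wilk: a small p-value means the normality assumption is suspect.
_, p_norm = stats.shapiro(group_a)

if p_norm < 0.05:
    # Non-parametric alternative: Mann-Whitney U assumes nothing about normality.
    stat, p_val = stats.mannwhitneyu(group_a, group_b)
    test_used = "mann-whitney"
else:
    stat, p_val = stats.ttest_ind(group_a, group_b)
    test_used = "t-test"

print(test_used, round(p_val, 4))
```

Running the appropriate test doesn't rescue a small sample, but it at least avoids reporting a parametric statistic whose assumptions the data visibly violate.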
Finally there is p-hacking. My research doesn't hit the right p value so I find reasons to exclude subjects whose data don't look the way I want them to look, until I get the p value. This is a vast semi-fraudulent gray area.
(contd)
never mind it's enough.
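To make that "gray area" concrete, here is a small simulation (a sketch of mine, not anything from the thread) of hacking a null result by repeatedly discarding the most inconvenient subject until p < 0.05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def hacked_p_value(n=30, max_drops=8):
    """Draw two groups from the SAME distribution (true effect = 0), then
    drop the subject that most weakens the apparent difference until p < 0.05
    or we run out of excuses."""
    a = list(rng.normal(size=n))
    b = list(rng.normal(size=n))
    for _ in range(max_drops):
        p = stats.ttest_ind(a, b).pvalue
        if p < 0.05:
            return p
        # Discard the extreme point that most strengthens the group difference.
        if np.mean(a) < np.mean(b):
            a.remove(max(a))
        else:
            b.remove(max(b))
    return stats.ttest_ind(a, b).pvalue

# Fraction of null experiments "rescued" to significance by selective exclusion:
hacked = np.mean([hacked_p_value() < 0.05 for _ in range(500)])
print(f"false-positive rate after p-hacking: {hacked:.0%}")  # well above the nominal 5%
```

Even this crude exclusion rule inflates the false-positive rate far past the 5% that a single honest test would give, which is exactly why selective exclusion is so corrosive.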
I was finding your perspective interesting. I'm on the outside looking in, so I can really only speculate on a lot of this stuff.
I was just going to say that there are some interesting ideas for how to reform it.
One is the classification of an article as "published" but not yet "replicated". Then if someone else replicates it, this is also published in Nature and the status changes to "replicated". This idea solves several problems together as the replicator gets a high-impact article which is a good incentive.
The other is the open acceptance of ongoing, open source peer review through commenting on pre-prints and also post-publication peer review. But this doesn't solve the incentive problem really, it just gives another opportunity for checks and balances.
I like the first idea. It puts the incentives in the right place. The question is what happens when the study is done again but the results aren't replicated? I could see some really nasty back room politics going down (course, we'd still probably be better off than we are currently).
Another idea I've heard is to have master's theses be replications.
..you talk like a fag, and your shit's all retarded....
now off for some refreshing Brawndo... it's got electrolytes....
Clarification question, please? Replicated Experiments implies to me (engineer) that the Experiment (Test) was repeated, but does not inherently imply that the same results were seen!
How about modifiers to the listings... like Replicated-Results-Confirmed, Replicated-Results-Not-Confirmed, or something like that?
Not just dropping 'outliers', but also trying a whole bunch of things, discarding the results that don't reach significance and publishing the ones that do. The obvious problem is at a 95% confidence level, one out of 20 trials will reach significance even though there's no actual effect. This is why one of the proposed remedies is that all studies must be pre-registered. The journal accepts the paper based on the research proposal, before any data is collected, and the experimenter commits to publishing the results regardless of outcome.
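The one-in-20 arithmetic above is easy to verify by simulation (my own sketch, not from the thread): run many studies in which nothing is going on, test 20 outcomes in each, and "publish" any study that clears p < 0.05 on anything:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_experiments = 2000
n_outcomes = 20  # 20 independent measures per study, all with zero true effect

false_positives = 0
for _ in range(n_experiments):
    # Each "study": 20 unrelated outcomes compared between two null groups.
    p_values = [stats.ttest_ind(rng.normal(size=30), rng.normal(size=30)).pvalue
                for _ in range(n_outcomes)]
    if min(p_values) < 0.05:  # report the one significant result, drop the rest
        false_positives += 1

rate = false_positives / n_experiments
print(f"studies with at least one 'significant' null result: {rate:.0%}")
# Analytically: 1 - 0.95**20, or about 64% of null studies yield something "publishable".
```

Pre-registration attacks exactly this: if the single outcome is committed to in advance, the experimenter can't shop among 20 of them after the fact.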
Yes I forgot about that idea, which has a lot of momentum right now, and would also be a big improvement.
That seems like an implementation nightmare if the journal only accepts studies that were pre-registered.
Also, how does this work with the idea of peer review? If your proposal was good, but the study itself was shoddy or your report on the study claims things unproven, what do you do in the pre-registration scenario?
Good question. I wonder if it could all be addressed in post-publication peer review.
The p-value is not a good way to determine if something is good or not, and it was never intended to be. Part of the problem is ignorance of statistics.
I like this guy's idea. Continue to use p-values, but use Bayesian statistics to guide interpretations so that outlier studies don't skew practice too much.
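A minimal sketch of that idea (my own construction, not necessarily the linked author's method): treat the field's accumulated experience as a normal prior on the effect size and shrink a surprising new estimate toward it with a conjugate normal-normal update, weighting by precision:

```python
def posterior_normal(prior_mean, prior_sd, est, se):
    """Conjugate normal-normal update: combine a prior belief about an
    effect size with a new study's estimate, weighting each by precision
    (inverse variance)."""
    w_prior = 1.0 / prior_sd ** 2
    w_data = 1.0 / se ** 2
    post_mean = (w_prior * prior_mean + w_data * est) / (w_prior + w_data)
    post_sd = (w_prior + w_data) ** -0.5
    return post_mean, post_sd

# Hypothetical numbers: effects in this literature are usually small
# (prior mean 0, sd 0.2), and a flashy new study reports d = 0.8 from a
# small, noisy sample (standard error 0.4).
mean, sd = posterior_normal(0.0, 0.2, 0.8, 0.4)
print(round(mean, 3), round(sd, 3))  # the outlier is pulled most of the way back toward 0
```

The noisy outlier gets heavily discounted (posterior mean 0.16 rather than 0.8), which is the sense in which a Bayesian reading keeps one dramatic study from skewing practice.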
I've read that article before, it is quite good. He is right, we end up with field wide multiple comparison problems from using p values.
The p-value is not a hoax and it has value, but 0.05 is too high a cutoff. It should be lowered right after I've published my results that depend on the 0.05 cutoff.
The problem with fiddling with the statistical measure is that it can lead people to overlook more basic issues that can lead to bad science even though you get replication of statistically significant results with any number of measures.
Psychotropic drugs can make you feel better, but they are often compared to an inert substance. In many cases, symptoms can be relieved with drugs whose mechanism of action is not the same as hypothesized. (E.g., psychosis can be treated with opiates, even though the behavior of antipsychotics can be used to validate dopamine as a cause of schizophrenia.) Sometimes the researchers know who the controls are, or fail to account for properties of tolerance and withdrawal from dependence (e.g., taking one group off a drug to prove that it works better than no treatment).
Also, the definition of "long term" often means a year. Measuring actual long term outcomes, such as with stimulants or anti-depressants, often results in negligible benefits and definite downsides.
I have every confidence psychiatry can use a variety of statistical methods. What I am not confident in is the soundness of their underlying theory of what constitutes "mental illness", the consequent difficulty of controlling for relevant variables and their institutional integrity.
1) If you try to run a parametric test on a non-normally distributed population, the program will tell you that it's no good. Reporting a parametric test value where inappropriate is fraud in my view.
2) There's nothing 'gray' about p-hacking. That's fraud.
3) Another problem: the 0.05 cutoff is too high. Too soft. We should implement the 0.01 cutoff as the highest value right after I get my 0.05-dependent results published.
Re 2): you're testing a new drug. One of your guys gets hit by lightning. Is discarding his death as a data point fraud?
That's not p-hacking! P-hacking is working backwards from the p-value you want to justifying excluding data points.
Does the drug change your total body charge?
Drug companies spend twice as much on marketing as they do on research. Maybe they can afford to include the lightning strikes if it helps them figure out that roughing people up, holding them down and injecting them with chemicals isn't the way to heal psychical wounds.
Just to be clear, I mean, if it takes away their excuses to overlook relevant variables, it's better to include the data.
Read a bit yesterday about nutrition, and using scientific papers in marketing. It included an interview with a researcher, who was repeatedly told to not worry about the content or quality of the studies, as long as the quote or title can support the product.
Scientific studies lend credence, and increase sales. So of course they're going to be fraught with major issues, both unintentional, and distinctly intentional.
A first rate university should not let its industrial sponsors pull that nonsense. I used to work an industrial contract. We had to go by the book, and did research plain and simple.
Sailer has another explanation.
So, let's assume for a moment that Bargh's success in the early 1990s at getting college students to walk slow wasn't just fraud or data mining for a random effect among many effects. He really was priming early 1990s college students into walking slow for a few seconds.
Is that so amazing?
Other artists and marketers in the early 1990s were priming sizable numbers of college students into wearing flannel lumberjack shirts or dancing the Macarena or voting for Ross Perot, all of which seem, from the perspective of 2013, a lot more amazing.
Overall, it's really not that hard to prime young people to do things. They are always looking around for clues about what's cool to do.
But it's hard to keep them doing the same thing over and over. The Macarena isn't cool anymore, so it would be harder to replicate today an event in which young people are successfully primed to do the Macarena.
So, in the best case scenario, priming isn't science, it's art or marketing.
This has been driving me nuts for years. I see links constantly that say ridiculous things like "study shows using one backpack strap only can cause cancer." It's complete garbage and should never see the light of day until it has been adequately vetted and peer reviewed.
It does a great disservice to science because it allows people to argue against things that legitimate studies (note the plural) have found to be true. Let's just say, oh I don't know, human activity's effect on climate change for example...
Peer reviewed =/= correct or valid.
No, but it is another important step in the process.
It's an important part of quality control, but it's just as important to understand what it ISN'T.
And then there is peer review fraud, plus general peer retaliation concerns as MedPhysGuy noted above.
http://www.washingtonpost.com/.....ic-papers/
You don't even know what psychology is, yet you claim some kind of credibility on climate change. Yeah, you're a fucking idiot.
I wasn't talking about psychology specifically, so I don't even know what you're flipping out about.
The effect is clear: we're warming the Earth, but not to a degree that merits worry.
The second part of that statement is irrelevant if the first part is not supported by unbiased data... (and climate models.)
The models get modified to not be out of whack with other models' results. Data get sifted out if they push the model's output into the realm of consensus-disagreement.
Maybe the concept of 'climate forecasting is a "science" ' needs to be shed.
So, we're not all going to die?
"In the long-run we are all dead."
-Keynes
There's a problem with a lot of utter shit getting published, but I see that as less of a problem than the state of science journalism. Even shitty scientists are careful to say "A is associated with an x% increase of B, subject to conditions C and assumptions D, with a p-value < 0.05", or whatever. But then that immediately becomes "A CAUSES B HOLY SHIT BAN B" in every newspaper and magazine.
The only thing worse than science reporting is reporting on anything related to guns.
I heard a story yesterday about Walmart stopping selling AR-15 type guns. The reporter stated that their reason was to have more space for "lower caliber" hunting rifles that their customers are more interested in. Yeah, lots of people take a deer with a .22.
The Gell-Mann Amnesia effect is something like: whenever you see a news story about something you know something about, you're amazed that the reporters don't know anything, and then you read the next story and assume the reporter knows what he's talking about.
Have there been good clearance sales at Walmarts? I've been meaning to stop in and see if I can maybe pick up a couple of ARs and sell some, and maybe profit enough to pay for a good chunk of one for me. But I'm probably too late if they were discounting them a decent amount.
I saw that article as well. My favorite part was in the comments, someone asked what the "AR" in "AR-15" meant (doubtlessly trying to get an answer of "OMG ASSAULT RIFLE BAN THEM ALL!!!!). They got told where to shove it.
Yeah, I've seen porgies think AR = Assault Rifle.
And then the inverse pyramid of competence ends with a bunch of ignoramuses who see seemingly contradictory headlines about science and go GODDAMN FAGGOT SCIENTISTS DON'T KNOW SHIT FUCK SCIENCE FUCK BOOKS
It does, but this study looked at the journals and source material itself. Most could not be reproduced, and the minority that could had grossly overstated their results. The journalists are lazy and sloppy, but the stuff should never have been published by the academic journals in the first place. There are perverse incentives at play.
Oh yes, absolutely. But I think, and this is based on nothing but my intuition, that the shit studies mostly get ignored. At least they do by the scientists who are worth anything. And the nice thing about science is that the process eventually takes care of shitty science by itself. A is A, after all.
I think its field specific. Certain fields are more prone to these types of problems than others.
Truth is truth. It didn't matter how much support Lysenko got in the end.
But yes, the social sciences are much more prone to this than, say, mathematics.
It's true of any field that studies humans, including biology. Ethically, you can't properly control the work, so the results are noisy. The controls people use are at best half-assed.
Oh my god IRB is a huge pain in the ass. It's why I gave up on the brain-computer interface shit I was working on years ago.
Also the field of paleontologists trying to figure out how pterodactyls flew. Seriously, I had a grad-level mechanical engineering course where we dug through one of those journals and reviewed those studies (they used physical models to test theories). All but one had big, glaringly obvious flaws. We basically determined that paleontologists have no fucking clue how to run a study, and that included the peers reviewing their work.
I would imagine math is the least susceptible field to this sort of thing. A proof is either valid or it isn't and for the most part peer review will get it right.
As soon as someone is making money off the bad science, the feedback loop gets broken. The bad science can perpetuate itself until the whole field is toxic.
"And the nice thing about science is that the process eventually takes care of shitty science by itself."
That's how it should work. In reality, this takes far too long. E.g., the 'lipid hypothesis.'
Yes, but it works eventually. Better that the lipid hypothesis lasted 40 years too long than that it never died at all.
Just because they hadn't already been replicated? That's not how it's supposed to work.
There are reasons that the vast majority of studies did not have replicable results or could only get half or less of the stated results. You are talking about like 1/6th of all published studies that have results that can be replicated.
There's small sample sizes, pressure to publish, sloppiness, and bias all at play here. Sorry, but what exactly is the peer review process for if it can't weed out bad science?
What peer review is intended to do is to weed out stuff that is incompletely documented or has obvious errors. That's it. In reality, it sometimes does that, but sometimes only makes sure that the format is correct for the journal or grant committee (e.g., margins, typeface, cite style, page limit), especially if the author and the journal editor are buddies.
I dealt with one egregiously incorrect paper (I'd go so far as to say "dishonest"), where we couldn't understand at all why it got published, and how it evaded basic requirements like disclosure of support. It turned out that the lead author, who runs a product defense firm and was paid by a client (undisclosed!) to write this piece of shit, was a golfing partner of the journal editor. He got the contract to write the paper by assuring the company involved that his connections to the journal were tight and he could put in whatever they wanted him to.
^^^This.
A while back I was in Venice Beach and there was some kind of street fair going on. There was a guy at a Greenpeace table telling everyone that a new study showed fracking caused earthquakes, and that as a Californian he was not in favor of MORE earthquakes, so we should ban fracking.
I asked him what the mechanism of action for that was because it seemed pretty obviously wrong, but he didn't know. I went home and read the study and it was something like "in areas where fracking has occurred there's somewhat increased seismicity" in other words, you're slightly more likely to feel existing seismic events where they've pumped a shit ton of water into the ground.
That's funny, because I'm for fracking specifically because I want California to sink into the ocean and take its sodomites and Mexicans with it.
Noooooooooo, but the tacos al pastor...
Won't somebody please think of the tacos?
That's funny, because I'm for fracking specifically because I want California to sink into the ocean and take its sodomites and Mexicans with it.
The actual sodomites, the symbolic ones, or the politically euphemistic ones?
In any event, seems like lots of Californians would be eager to have more water pumped into the ground about now.
The pot too?
You're a monster. I love avocados and garlic, so California should stay where it is. Just under new management.
Yeah, I'd expect that from SoCal folks... and especially my stereotypical Greenpeacer who has little real education in science.
When an earthquake occurs, it relieves stress at points on a fault line where the stress has built up high enough to overcome friction between the two 'halves of the fault.'
Fracking injects lubricant, accelerating that 'stress relief' far beyond what might happen if only 'natural' causes were at work.
The good news is: stress is relieved and Right There the chances of a bigger future quake are lessened. The 'bad?' news is that the stress relief is literally moved to somewhere else along the fault, and if it doesn't relieve itself immediately with a quake, it sets up that OTHER location for a future quake... although maybe no bigger than what would happen there 'on its own, due to natural stress buildup'!
A Freakonomics Approach might be to proactively inject fluids into ALL the Major Seismic Faults, deliberately provoking quakes and the consequential relief of stress.
Hell, entire communities could better prepare for a 'more likely now' quake than they would for a 'maybe some time in our lifetimes' event. Go figure.
Would it work? Maybe. Would it happen? NFWay! Nobody would sign up for the Possible Immediate Risk; Humans are SO funny that way.
I call "But then that immediately becomes "A CAUSES B HOLY SHIT BAN B" in every newspaper and magazine." the Catastrophization Tendency in current (and recent) media 'reporting.'
My theory is that humans are adrenaline addicts and since there aren't as many mammoths and saber-toothed tigers to get us excited, suppliers have sprung up to meet the need/demand...
Witness: evening news, roller-coasters and other similar park rides, loud mufflers and overpowered cars...
... and I LOVE high-powered cars!
And all the Austrian Economists laugh. Good Economics is based on logic and reason, not studies.
You can make a study say whatever you want it to.
I looked on Google Translate, but they didn't have Gibberish - English. Can someone help me out, please?
Austrian Economists don't really rely on studies, but on logic and reason. Any study would rely on too many assumptions in order to work reliably in the realm of economics.
Also, you can make a study say whatever you want, kind of like the Connecticut "gun laws make us safer" study. Cherry-picked data and all that.
You can't make a good study say what you want though.
But you can totally make an understudy say what you want.
"Who's your daddy?"
Medieval philosophy also relied exclusively on logic and reason as opposed to studies. The shortcomings of said system resulted directly in the development of the scientific method.
When you start with faulty premises, logic doesn't give you good results. Unfortunately, this all led to the point of view that "logic is useless," and now we have Keynesians who tell us all kinds of things that are not only stupid, but simply cannot be true.
Studies have their uses, but without logic and reason, they are useless. In the case of economics, logic is much more useful than "measuring" output (impossible to do any way you look at it).
I wonder what the results would be if this kind of project were done on sociology studies?
I have studied your study and am recommending that your study be studied further as well as my study of your study.
Here's my replicability anecdote:
I was a post doc for a soon-to-be Nobel laureate. We received a paper to referee that he assigned to me. The results of it looked... odd to me. My boss insisted that it must be right because he knew the senior author, and that guy was good and reliable. I pointed out that what he was claiming to have done was well known to fuck up the process involved. We finally agreed that I should try to replicate it, a VERY rare thing in the peer review process. When I ran the experiment, I got the result I expected- the process got fucked up (for chem geeks, this involved adding CO as a comonomer in a Ziegler Natta polymerization).
My boss was convinced I was doing things wrong, and had me try several different experimental conditions (over my objections). Nope, still didn't work.
On a hunch, I called the senior author and found out that he was out of the country, and had been for at least 6 months. The paper had actually been entirely conceived and written by a grad student, who had submitted it under his boss's name. So the only reason this got caught was because I did something not normally done AND I breached the normal anonymity of refereeing. 99.9% of the time, this paper would have gotten into the literature.
Is this about the time that you ruined cold fusion because you're in the pocket of Big Jewish Oil?
It was that Fleischmann guy. BLAME THAT JEW.
You'd have to expeller press a lot of goy babies to satisfy modern energy needs, Warty.
What are you supposed to do with the husks once you've made the pastries? Waste not, want not.
I refuse to believe Hamantaschen are a naturally fat free food.
"GEFILTE FISH IS PEOPLE!"
On a hunch, I called the senior author and found out that he was out of the country, and had been for at least 6 months. The paper had actually been entirely conceived and written by a grad student, who had submitted it under his boss's name.
Published or not, questioned or not, did the grad student keep his head? Not to unduly legitimize any part of your story, but many of the PIs I worked around, reputable or not, would've, at a minimum, had this kid working on his doctorate for *at least* the next decade if he'd tried this.
Maybe for being too honest.
The senior author could have done this with other papers?
Study Finds: Studies Are Wrong
Now I feel like one of the robot chicks in I, Mudd!
If the study finds studies are wrong, then it must be wrong, and studies are right, but if studies are right, then the study must be right and studies are wrong ... *bzzt* *crackle*
Re: Cloudbuster,
Your statement would make sense if a) you were, in fact, a chick and b) the rest of us do not remember or are unaware that the robot who fizzed out because of a logical paradox was male.
Maybe you secretly want to be a chick, robot or otherwise...
+500 Stella
Before everyone starts making generalizations, remember this is social science, not hard science. This doesn't mean that gravity or evolution is wrong.
Also, social science is dominated by academia, which is a bastion of liberal nonsense. A lot of hard science is done by industry, where nobody cares about your feelings, only the truth.
Unfortunately for all of us, a lot of policy with serious real-life consequences - a.k.a. fuck-you-in-the-ass laws - seems to come from these "social science" studies and the political class that supports the feelings nonsense these studies push.
The hard sciences aren't immune to fraud though, as the SCIgen scandal and the Lancet vaccine-autism article demonstrate.
But looking over retractions, it does appear that medicine is the most common field for fraud.
"Your study could be wrong, too"
"That proves the point!"
What an extraordinary thing to say, because it sounds like she's implying that, before this study, scientists simply assumed all previous studies were reproducible.
This is one reason I dropped a psych major decades ago. Once I got into upper division the science stopped and an old-boy-network of citation swapping began. The last straw was when I got a poor grade because my paper showed a negative result. That was the explicit reason given, the paper got an F because it did not have a positive result.
There's nothing that can't be settled by surveying five dentists, including the philosophy of science. We need to track down those Trident dentists and form a star chamber around them. And I like my Top Men to have nice teeth.
There are, incidentally, some major conflicts of interest involved in psychiatric research: http://involuntarytransformati.....eBZgZeHl_k
There was a big scandal a few years ago in CS-related fields when it was discovered over 120 papers written by an academic paper generator called SCIgen had been published by the IEEE and Springer Media.
These papers had survived the peer review process, exposing a defect not mentioned in the article: that peer reviewers will approve papers they don't understand because they don't want to be regarded as too dumb to follow "cutting edge" research, along with the other peer review defects mentioned above, such as a desire not to rock the boat by pointing out flaws in a well-regarded researcher's work.
Null hypothesis significance testing does not tell us the probability of the null being true. It assumes the null hypothesis is true and computes Pr(y|H0), that is, the probability of the data given the null.
What people are really interested in is the probability of the null being true given the data, Pr(H0|y). That comes from the posterior distribution, and it's the reason so many people are switching to Bayesian statistics and econometrics.
http://www.indiana.edu/~kruschke/AnOpenLetter.htm
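A quick simulation makes that gap concrete. This is a minimal sketch, not anything from the letter above: the 50% prior on the null, the effect size under the alternative, and the sample size are all illustrative assumptions. It shows that among results that clear p < 0.05, the fraction where the null is actually true can sit well above 5%, because a p-value is Pr(data at least this extreme | H0), not Pr(H0 | data).

```python
import math
import random

random.seed(0)

def p_value(xbar, n):
    # Two-sided z-test p-value for H0: mu = 0 with known sigma = 1.
    z = abs(xbar) * math.sqrt(n)
    # Standard normal CDF via the error function.
    phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))
    return 2 * (1 - phi)

n = 10                 # assumed sample size per experiment
sig_total = 0          # count of "significant" results
sig_true_null = 0      # ... where the null was actually true

for _ in range(20000):
    null_true = random.random() < 0.5   # assumed prior: Pr(H0) = 0.5
    mu = 0.0 if null_true else 0.5      # assumed effect size under H1
    xbar = sum(random.gauss(mu, 1) for _ in range(n)) / n
    if p_value(xbar, n) < 0.05:         # the usual significance cutoff
        sig_total += 1
        sig_true_null += null_true

# Fraction of significant findings that are false positives:
print(sig_true_null / sig_total)
```

Under these assumptions the printed fraction lands around 10-15%, not 5% - the low power of the small-sample experiments inflates the share of "discoveries" that are really noise, which is exactly the Pr(y|H0) vs. Pr(H0|y) confusion the comment describes.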
If they thought those "studies" were foul, wait until they start to seriously look into those of Michael Mann, Phil Jones, John Holdren, and Paul Ehrlich - et al.
It's not just psychology. It is one of the biggest shames in our society that nearly half of all published scientific research is bogus. Either the study is too small, or poorly constructed, or the outcome is a statistical zero, but considered relevant anyway. And yes, some of the data is out-and-out faked. I have read, and paid to read, published studies that, had I submitted them as a grad student, I would have been laughed out of the department. (My area of study was the construction and analysis of proposed studies before any assets were allotted them. I studied studies.)
A few minutes of searching out reports on the non-viability of research today is enough to make one cringe, and very, very sad.
The Solomon Asch experiment on "Opinions and Social Pressure" (1955), on the other hand, shows the importance of dissent in neutralizing groupthink.