The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
"A Big Study on Honesty Turns Out to Be Based on Fake Data"
So reports a very interesting article at BuzzFeed News (Stephanie M. Lee), based on research at Data Colada. One of the original researchers has stated that he is "completely convinced by the analyses provided by Simonsohn, Simmons, and Nelson and their conclusion that the field experiment (Study 3) in Shu, Mazar, Gino, Ariely, and Bazerman (2012) contains fraudulent data; as a result, Shu, Gino, and I contacted PNAS [Proceedings of the National Academy of Sciences] to request retraction of the paper on July 22, 2021." (Or, wait, what if those are all forgeries? How can anyone really know?)
Here's an excerpt from the Data Colada post:
In 2012, Shu, Mazar, Gino, Ariely, and Bazerman published a three-study paper in PNAS (.htm) reporting that dishonesty can be reduced by asking people to sign a statement of honest intent before providing information (i.e., at the top of a document) rather than after providing information (i.e., at the bottom of a document). In 2020, Kristal, Whillans, and the five original authors published a follow-up in PNAS entitled, "Signing at the beginning versus at the end does not decrease dishonesty" (.htm). They reported six studies that failed to replicate the two original lab studies, including one attempt at a direct replication and five attempts at conceptual replications.
Our focus here is on Study 3 in the 2012 paper, a field experiment (N = 13,488) conducted by an auto insurance company in the southeastern United States under the supervision of the fourth author. Customers were asked to report the current odometer reading of up to four cars covered by their policy. They were randomly assigned to sign a statement indicating, "I promise that the information I am providing is true" either at the top or bottom of the form. Customers assigned to the 'sign-at-the-top' condition reported driving 2,400 more miles (10.3%) than those assigned to the 'sign-at-the-bottom' condition.
The authors of the 2020 paper did not attempt to replicate that field experiment, but they did discover an anomaly in the data: a large difference in baseline odometer readings across conditions, even though those readings were collected long before – many months if not years before – participants were assigned to condition. The condition difference before random assignment (~15,000 miles) was much larger than the analyzed difference after random assignment (~2,400 miles) ….
In trying to understand this, the authors of the 2020 paper speculated that perhaps "the randomization failed (or may have even failed to occur as instructed) in that study" (p. 7104).
On its own, that is an interesting and important observation. But our story really starts from here, thanks to the authors of the 2020 paper, who posted the data of their replication attempts and the data from the original 2012 paper (.htm). A team of anonymous researchers downloaded it, and discovered that this field experiment suffers from a much bigger problem than a randomization failure: There is very strong evidence that the data were fabricated.
We'll walk you through the evidence that we and these anonymous researchers uncovered, which comes in the form of four anomalies contained within the posted data file. The original data, as well as all of our data and code, are available on ResearchBox (.htm).
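To make the baseline anomaly concrete, here is a minimal R sketch of the kind of check the 2020 authors describe, run on simulated data; the column names and numbers below are invented for illustration and are not taken from the posted file. Under successful random assignment, odometer readings collected before assignment should not differ meaningfully between conditions.

# Simulated stand-in for the field-experiment data (invented columns and values).
set.seed(1)
n <- 13488
d <- data.frame(
  condition      = sample(c("sign_top", "sign_bottom"), n, replace = TRUE),
  baseline_miles = pmax(rnorm(n, mean = 60000, sd = 20000), 0)  # collected before assignment
)
# With proper randomization these group means should be nearly identical;
# a ~15,000-mile gap, as reported for the real data, would be a glaring red flag.
tapply(d$baseline_miles, d$condition, mean)
t.test(baseline_miles ~ condition, data = d)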
Very interesting, and sobering. Science, whether social, medical, or physical, is tremendously important to sound decisionmaking, both societal and personal. We can't expect it to be perfect, but when done right, it's much better than the alternative (which is generally intuition and limited and poorly remembered and analyzed observation). But scientists are humans, with all the faults that humans have; and we've seen lots of examples of them committing a wide range of human errors that have cast serious doubts on a wide range of scientific findings.
It's not clear that any of the lead authors were actually complicit in the fraudulent data, even if the claims of fraud are correct. But one way or another, it appears that the 2012 study can't be trusted.
So we can't trust "scientific" studies, peer reviewed or not.
Nothing to see here, move along.
I don't get it; why aren't researchers held to the same legal standards as accountants? See: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2669118
I criticize the lawyer profession for its supernatural doctrines, the absence of validation, and for its failure to meet any of its goals.
We have fake news, fake polls, now fake studies. Is the effect big, enough to feel at the gut level? That usually means a difference of 30% or more. Smaller differences are not worth bothering with. Did someone else using the same methods get the same result? All medical studies are flawed and have to be redone. Social science research is mostly leftist propaganda, and dismissed.
It's a problem.
Sure we can.
If, and only if, every single bit of data, tools, and methods from the paper is provided as an on-line addendum that anyone can download and verify
Which is why we can't trust most "climate science", and why only a religious zealot believes in "climate change": because they don't do that
It's why, for example, you can't trust NOAA's "historical record of temperatures", because they don't state why it is that they keep this station and drop that station, etc.
Bottom line: scientists are human
And only an idiot takes the unvalidated word of a human on matters of money or prestige for that human.
When anyone asks you "don't you trust me?" The only reasonable answer is "no"
So we can’t trust “scientific” studies, peer reviewed or not.
That is absolutely correct - single studies should not be taken as proof of anything, unless they're like a math paper.
That does not mean the enterprise of science itself cannot be trusted. As can be seen here, even in the face of fraud, science marches on.
Particle physics isn't so bad, they tend to use a five sigma standard for statistical significance.
Social science is still using two sigma, which is a joke.
Even HEP studies are not to be trusted by themselves. That muon anomalous magnetic moment experiment was exciting, but everyone was saying wait till it's replicated.
The things to be worried about are not rare probability outliers.
At five sigma you don't have to worry about rare probability outliers.
At two sigma, you're going to get 'rare' probability outliers one time in twenty.
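For concreteness, those two thresholds translate into very different false-positive rates; a quick R sketch (note that the conventional p < .05 cutoff sits at about 1.96 sigma, so "two sigma" and "one in twenty" are rough synonyms):

# Two-sided tail probability of a standard normal beyond k sigma
2 * pnorm(-2)   # ~0.0455, roughly 1 in 22
2 * pnorm(-5)   # ~5.7e-7, roughly 1 in 1.7 million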
Except the thing to worry about is not that you hit a jackpot, it's hidden variables or unknown phenomena.
Your worries are too narrow 🙂
Let's assume that:
1) misconduct (e.g. p-hacking, inter alia) never happens
2) no hidden variables
3) no unknown phenomena
You still have one in twenty studies reporting either 'coffee is good for you' or 'coffee is bad for you'.
The man in the street who hears bad...good...bad...good...bad over and over is going to stop trusting anything that starts with 'A study says...'. That's bad - studies with p=.95 and 50 datapoints are not the same as studies with p=.999 and 200k datapoints. But that distinction isn't being made, and the man in the street is starting to think scientists don't know what they are talking about.
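That one-in-twenty churn is easy to reproduce by simulation; a minimal R sketch, assuming many small studies of an effect that does not exist, each tested at the conventional p < .05 threshold:

# 10,000 simulated 'coffee studies' of a nonexistent effect, two groups of 50 each.
set.seed(1)
false_positives <- replicate(10000, t.test(rnorm(50), rnorm(50))$p.value < 0.05)
mean(false_positives)   # ~0.05: about one study in twenty 'finds' an effect anyway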
The scientific method, when followed, can be trusted
Which is to say:
Every single thing you used for the paper must be made publicly available, so your worst enemy can recreate your experiment and find any problems with it
"The enterprise of science itself" is no more trustworthy than any other enterprise. All of them are run by humans. All are subject to the principle-agent problem, all are subject to greed, ego, envy, fear, and every other problem humanity is subject to.
Stop confusing methods and results, they're not the same thing
Yeah, science is done by humans. But it's also not where you go for money or power or glory, so while it has all the general human foibles, the amoral and ambitious are more likely to end up elsewhere.
As for the release of data, that would require an overhaul of IP law, especially with the rise of machine learning. But in the meantime, check out the DATA Act for some good news.
I'm talking only about results in this comment thread, not methods.
"Yeah, science is done by humans. But it’s also not where you go for money or power or glory"
???
Really? You think the people in charge of the IPCC aren't making a lot of money off of that? How cute.
You think the bureaucrats who fund climate "research" aren't directing the funds to the people whose "research" says "those bureaucrats need more power"? That's really cute.
You think that "scientists" who get their "groundbreaking research" published and lauded don't get grants, speaking fees, and fame from doing so?
Where the hell have you been?
"As for the release of data, that would require an overhaul of IP law,"
Nope.
When I was in grad school, I was making a program to do X. A different lab published a paper on their tool for doing X. So I wrote them, and essentially said "Hi, I'm making a competitor to your program. Please send me the test data and results you used for your paper in X, because my PI wants to compare my results to yours."
So they sent me all their data.
Because that's what you do when you're doing science.
Other than a very narrow range of questions in the medical field, where patient privacy is at stake, there's no excuse for not sharing your data, other than scientific misconduct
Yes, ltbf, you shouldn't trust any science.
Please continue to live your carefree, science-free life.
How could the authors not be complicit? Did someone collect data for no reason and then the authors stumbled across it and wrote a paper? That would seem even more odd to me than the authors coming up with an idea, collecting data, and writing a paper. And if the former were the case, who writes a paper without doing a little digging into whether the data in front of you is actually legitimate?
Perhaps they hired somebody to do the scut work of collecting the data, and were cheated by them making it up, instead?
Ariely is fourth author (so not "lead") and the data fabrication seems to have been his doing. It's unknown if the other authors knew.
Most social "science" studies are bunk.
Thank you, Jimmy, for highlighting "science". So-called "social science" is not a science at all, since it does not involve the physical or natural world.
Humans are very much part of the natural world.
When you can make an objective measure of a human's emotions, then the study of that has the chance to become a science.
"Subjective" == "not subject to the scientific method" == "not science"
That is why social science tends to study behavior, not emotions. It's almost as though you don't know what you're talking about.
Emotions show up mostly in neuroscience, where researchers are groping towards some objective characterization based on cycles of neuron activation.
Oh, you mean like whether signing an "honesty pledge" at the top or the bottom makes a difference? Yeah, we saw how well that worked out
Quick question: which "groundbreaking social science research" in the last ten years has actually been replicated by people unassociated with the original authors?
Back when I was an assistant professor at UCLA, at an informal faculty lunch, after someone mentioned some fraudulent conduct by bankers that made the front page of the LA Times, the Chair said something like, "I guess they don't weed out the dishonest ones the way we do in science." I laughed, and asked when do we weed out the dishonest ones in science.
That issue became real when I started publishing papers with students and postdocs who worked in my UCLA lab, based on data they collected. (Previously, I had only published papers with my PhD adviser, for which I collected all the data.)
I had no way of knowing if their data were authentic, or were falsified or fabricated.
My point is that for anyone who co-authors a paper for which (s)he didn't collect the data, publishing requires an act of faith that whoever collected the data didn't fabricate or falsify it.
Um, make them show you how they got their data. Did they do surveys? Spot check them. Did they hire surveyors? Ask to see their budget, and list of surveyors. Contact a few of them. Ask them what they did.
Learn how to use R. Plot their data; is there anything weird about it? If there is, you've now got some good things to discuss with the person ("weird" doesn't always equal "fraudulent". Sometimes it means "this is special, we should follow up on this!").
You don't have to put in much effort to make it so that a fraudster has to spend more effort faking the data than he would have spent just getting valid data in the first place.
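A minimal R sketch of that kind of spot check, assuming the collaborator hands over a plain CSV; the file name and column name here are hypothetical:

# Hypothetical file and column; the point is the eyeballing, not the specifics.
d <- read.csv("student_data.csv")
summary(d$response)      # impossible values? suspicious caps or floors?
hist(d$response)         # does the shape look like what the measurement should produce?
table(duplicated(d))     # many exact duplicate rows are another warning sign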
Greg J, you missed the very first, threshold question about reliability. Do you know the name of the source? Can you get in touch to ask those questions you mention, or is the information coming from someone hiding behind a pseudonym?
Stephen,
The subject here was "That issue became real when I started publishing papers with students and postdocs who worked in my UCLA lab, based on data they collected."
If I tell you to "trust me, this is true", you obviously won't believe me. Which is why for controversial claims I try to include links (Reason won't always let me)
The scientific method is about facts, reason, and logic, not personalities. Unlike "science"
Greg J, setting aside the question of, "personalities," opens to view various questions which relate to persons—for instance, whether they hide behind pseudonyms. For instance, why would anyone pay much attention to a person who hides his identity as he demands openness from others?
Gee, maybe because I'm not writing a scientific paper here?
Maybe because I make arguments that stand on their own merit, rather than saying "trust me, I'm an expert"?
Maybe because I'm not an utter moron, butthurt because I was called out for saying something really stupid, on the order of "I can't understand this, so neither can anyone else"?
I'm just spitballing here, but maybe you might want to try pulling your head out of your ass, and figure out the differences between publishing a scientific paper where you're encouraging people to change their actions because of your results, and publishing an argument based on reason, logic, and referenced publicly accessible data.
Or, you can continue whining like a little baby. Hey, whatever floats your boat
If this was Facebook, Twitter, or YouTube there’s a reasonable likelihood that we wouldn’t see this information. Anything that questions a very specific understanding of the world may be censored by those platforms.
Not about data fraud, but here’s a good article on food science from last year: https://www.scmp.com/magazines/post-magazine/long-reads/article/3076863/food-science-should-we-believe-anything-we-read
The gist is that anything you see reported about food science is essentially misleading. Most effects are barely significant at best and the study methodologies aren’t reliable enough to support any firm conclusions.
Maybe, but this is the only food science item that folks should see at least once:
https://www.youtube.com/watch?v=tGg4njImm0Y
Julia Child with the assist. Go food! 🙂
Don't say foolish things. Guess what I found on Twitter in 30 seconds:
https://twitter.com/DataColada/status/1427645912474587151
From what I can see presented here, I can't make heads or tails of whether the data criticized is fraudulent, whether the researchers who relied on the data were fraudulent, whether the critics are fraudulent, or whether nothing is fraudulent. But I am willing to bet that commenters here will confidently assert that something is fraudulent.
Did you look at the histograms and the "twin" analysis? That spoke pretty strongly of falsified data to me. The implausibly uniform distribution of miles driven is an example. The discrepancy between initial odometer readings between the two treatment groups would probably be sufficient grounds to withdraw the paper by itself, although wouldn't point towards fraud without other red flags.
Would you trust a commenter here who reports reproducing the suspicious metrics from the original spreadsheet? Do you think that the (purported) responses from the original authors are forged?
I think more evidence is needed before accusing any specific person of fraud in this case, but the Data Colada post makes a very strong case for the data being largely fabricated.
The relevant author, Ariely, stated that he agrees with the analysis. Specifically, he has stated, "I agree with the conclusions and I also fully agree that posting data sooner would help to improve data quality and scientific accuracy."
See http://datacolada.org/98 and http://datacolada.org/storage_strong/DanBlogComment_Aug_16_2021_final.pdf
"From what I can see presented here, I can’t make heads or tails"
Don't assume that your incompetence is shared by the rest of us
Don't worry, Greg J, your incompetence is never in question.
Sure thing, Mr "I can't tell if 2^2 equals four, so neither can anyone else!"
Note: everything you said up to your last sentence was fine. No one is expected to know or understand everything.
It's the part where you say "But I am willing to bet that commenters here will confidently assert", with the strong implication that, since you can't do it, neither can anyone else, that earns you the kicks to the teeth
Greg J, why do I deserve kicks in the teeth for my accurate prediction of what these commenters would do?
Because your "prediction" illegitimately carried the strongly implied claim that the commenters' statements would be based upon dishonest motives.
When the claims are quite reasonable given the supplied data and analysis, your whine appears totally asinine.
Because it is.
Here, let me help you:
In the first sign of something amiss, the 13,488 drivers in the study reported equally distributed levels of driving over the period of time covered in the study. In other words, just as many people racked up 500 miles as those who drove 10,000 miles as 40,000-milers. Also, not a single one went over 50,000. This pattern held for all the drivers’ cars (each could report mileage for up to four vehicles).
Now, are you really so ignorant that you can't see why that's a screaming red flag?
If you "can't make heads or tails" about the validity of the data, you shouldn't be wasting your time (and ours) commenting about it. You should instead be studying some basic math so you can make informed decisions about such studies.
Funny Rossami, I don't recall commenting at all about the validity of the data. Is it your suggestion that publication of invalid data is proof of fraud? Maybe you should be studying the most basic test of critical thinking—learning to ask the question, "How do I know that," before publishing stuff.
Here, Stevie, let's consider this:
In the first sign of something amiss, the 13,488 drivers in the study reported equally distributed levels of driving over the period of time covered in the study. In other words, just as many people racked up 500 miles as those who drove 10,000 miles as 40,000-milers. Also, not a single one went over 50,000. This pattern held for all the drivers’ cars (each could report mileage for up to four vehicles).
Now, this is data that ought to be in a bell curve distribution. The person collecting that data would know that it should have a bell curve shaped distribution.
So, whoever provided that data is at least as much of a scientific ignoramus as you are, or else would have realized that there was something wrong with the data.
Since they went ahead and published obviously flawed data, the first assumption is "they engaged in fraud", and the onus is on them to prove otherwise.
I realize you're not a lawyer, but for those who are lawyers but don't like numbers, this is roughly equivalent to submitting a brief to a court where you've claimed to quote a case, but you completely reverse the actual ruling in the case.
You can come up with a story about how that's a perfectly innocent screwup. But the judge is going to start with the assumption that you willingly perpetrated a fraud on the court, and you're going to have to work really hard to convince him otherwise.
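To make that red flag concrete, here is a minimal R sketch contrasting what self-reported mileage plausibly looks like with the flat, hard-capped pattern described in the quoted passage; all numbers are simulated, not taken from the study:

set.seed(1)
plausible  <- rlnorm(13488, meanlog = log(12000), sdlog = 0.6)  # skewed, bell-ish on the log scale
fabricated <- runif(13488, min = 0, max = 50000)                # flat, with a hard cap at 50,000
hist(plausible)     # bunches around typical mileage, with a long right tail
hist(fabricated)    # roughly equal counts in every bin, and nothing above 50,000
ks.test(fabricated, "punif", 0, 50000)   # consistent with a uniform; realistic mileage data would reject uniformity decisively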
The article gave some pretty good reasons to suspect that the data had been out-and-out made up, quite plausibly with the help of a random number generator.
The open question is by whom.
The article strongly suggests the data is fraudulent. While its evidence regarding who fabricated the data is less than conclusive, it does point some potential fingers. For example, the article said that the author responsible for the data had been involved in a previous paper where he had been responsible for the data and the data had later turned out to be made up.
"Science, whether social, medical, or physical"
Medical and physical, yes, are sciences. Social, absolutely not.
This is just snobby nonsense. I know, because I was once that kind of snob.
But now that I work with social scientists, among other disciplines, I see that they're doing science. Often descriptive not predictive, but certainly science.
If it isn't predictive, it isn't science.
If all you are doing is descriptive, you are doing nothing more than making measurements - gathering data. That's a great first step towards generating a predictive model, but it isn't "science".
Unless you pretend that "science" is just generating any facts, in which case asking my wife for a grocery list is also "science".
I think you are proposing a non-standard definition - in general usage the definition of science includes taxonomy and whatever you call the geologists who are just mapping what's there, as opposed to offering explanations about why those geological formations exist, or why the taxonomy is what it is.
Lots of things are "in general usage", but not true, valid, or even reasonable.
The scientific method:
1: Make a meaningful, testable, and falsifiable prediction, i.e., one that could be proved false ("the average world temperature in 2500 will be X") and that can be tested in a reasonable amount of time (which that example fails at)
2: Do a study where your proposition can actually be tested
3: Honestly report your results, with enough detail so your worst enemy can try to replicate your experiment and prove you wrong.
If what you're doing does not fit into that, then it may be the spadework that makes later science possible, but it is not in and of itself science
FWIW, I'm not disagreeing with the definition of 'scientific method' - you are using the standard definition for that.
I'm disagreeing with the notion that the dictionary definition/common usage of 'science' excludes descriptive science. A couple of examples:
-from the wiki article on van Leeuwenhoek: "...was a Dutch businessman and scientist...". He made microscopes and reported what he saw through them; no predictive theory.
-from the wiki article on Linnaeus: "...was a Swedish botanist, zoologist,... ...He was one of the most acclaimed scientists in Europe at the time of his death." His work was pure description.
It's fine to argue the mainstream definition/usage is wrong and should be changed, but merely asserting 'A Dachshund isn't a dog because my personal definition of dog requires long legs' isn't persuasive in a world where everyone else uses the more conventional definition.
Both of your chosen examples fail, because they both worked on developing models to predict relationships between classes of creatures.
As Absaroka says - lots and lots of science of many disciplines is based on making observations.
Even materials science is more about characterization than prediction.
"Even materials science is more about characterization than prediction."
True enough for the classic metallurgy text full of phase diagrams, but that is changing pretty fast with atomic level modeling. I've been reading articles for several years at least along the lines of 'we decided to design a material with properties of ...'.
Not fast enough for my liking, but yes there is a burgeoning theoretical/modeling study, especially with ML. But it is really, really early.
Most of what I'm seeing is still largely intuition/perturbation-based formulation, and then characterization.
If we can find a way to synthesize or mimic the properties of rare earths...but we are nowhere near.
Not all that much need to; "rare" earths aren't all that rare, they're just dirty enough to extract that we let China get a monopoly on extracting them.
Phase diagrams are predictive, though: they say that if you take a sample of material X at temperature Y and pressure Z, it will behave like matter with a phase of W.
Social science isn't nearly so rigorous, given the density of confounding variables. Even economics ends up saying "things are like this except when they're not", or having a false rigor based on using formulas that involve variables that nobody agrees on. (What is GDP? What is GNP? How do you quantify those in a rigorous, repeatable, transferable way?)
Modern materials science is more describing the periodicity of microphases in crystals and how they form and the interaction on the interface and whatnot. All descriptive.
Social science is quite rigorous - there is no such thing as confounding variables because descriptive doesn't require causality. I would characterize it as in a much earlier stage of scientific development than physics and chemistry, but it's not for lack of rigor.
Elsewhere on this blog, I've locked horns with people insisting this action or that will be inflationary. I don't think economics is able to make such hard-and-fast predictions. But that doesn't mean it's not rigorous, just that it's not there *yet*.
...
No. As a former statistician who had to do the statistical analysis for dozens of social science papers, I can tell you the fields there are nowhere close to "rigorous".
Psychology, for example, has more than 50% of papers fail reproducibility. A major study in Science tested a large sample of papers, and found that barely 1 in 3 could be reproduced.
And yes, there are many confounding variables - more so than in physical sciences, in fact, which is one of the problems. It is very difficult to directly measure people's opinions, views, and attitudes. The problem is so large that almost all of the social sciences is devoted to measuring things indirectly.
Are you trying to claim that the social sciences are "rigorous" because they count correctly? Not 'can be reproduced', not 'can predict similar cases', but 'correctly counted the things observed'?
Toranth, what do you make of neuropsychology? It looks to me like an observational science which delivers its results as highly reproducible numerical summaries. Those, in turn, enable insightful predictions about human development and behavior which are more often reliable than not.
You seem to be confused about the difference between "descriptive" and "observational". These are not the same thing. It is quite easy, and proper, to use observational data for predictive purposes. If your model supports it, of course.
Neuropsychology is a field that attempts to create a model for behavior based on neurology - while it is primarily observational, it is not merely descriptive. A quick check online did not reveal the extent of replication problems in that specific subfield, but take a look at Gelman & Geurts (2017) for a brief discussion of the topic which points out many of the ways that the same problems from social psychology are present in neuropsychology - with examples.
"Phase diagrams are predictive, though: they say that if you take a sample of material X at temperature Y and pressure Z, it will behave like matter with a phase of W."
ISTM that's like saying a map of the US is predictive, because it says if you are between Colorado and Montana you must be in Wyoming. That's true enough, but it's because you went out and mapped things.
The phase diagram for steel, for example, wasn't drawn from theory - it was mapped empirically. The only prediction is 'we've looked at this temp and carbon content before, and here is what we found then'.
New Mexico and Texas also lie between Colorado and Montana, but by a longer route.
Generally, saying 'B is between A and C' is short for 'B is on a reasonably direct path between A and C'. Changing the definition of 'between' to 'one can construct an arbitrary path from A to B to C' would put Beijing 'between' CO and MT. The moon and the Crab Nebula would also be between them, as would, in fact, every point in the universe. That doesn't seem like the most useful definition of 'between'.
(I actually wondered if someone would point out that a line from the northeast corner of Colorado to the northeast corner of Montana passes through Nebraska and the Dakotas. That would be a valid, if pedantic, objection.)
As I said, making observations is a good first step in the scientific process, but it is not, in and of itself, science.
Materials science is heavy on making predictions - how will this material behave under circumstances x, y, and z? When will this substance break when a and b happen to it first? And so on.
"Often descriptive not predictive, but certainly science."
Nope.
The scientific method involves making a falsifiable prediction, and then testing the result.
Description can be important and useful. It can be the basis for science.
Until you can make meaningful predictions with it that can be tested, and are routinely found correct, what you're doing isn't science
This is actually a pretty common misconception - I unthinkingly held it even through grad school.
Science - the systematic study of the structure and behavior of the physical and natural world through observation and experiment - requires more than the scientific method as you learned it in middle school; that method is only one among many that science has used and does use.
Otherwise, you've thrown out most of the science of the 1800s, like zoology, and DNA phylogeny, and, as I said above, materials characterization.
A lot of times Victorian-style 'poke it and see what happens' is the right investigatory tool for the job.
"Otherwise, you’ve thrown our most of science in the 1800s, and like zoology, and DNA phylogeny, and as I said above materials characterization."
To the extent that they simply collected data, correct.
But my recollection is that most of the collectors did not just collect the data, they also tried to organize it, and systematize it, and make predictions about the world from it.
As pointed out by a commenter above, materials science endeavors to say "if you put this pressure on a bar made with this material, it will break".
I had long arguments with my history of science professor about this one, so I've been exposed to your side. I just don't agree with it.
If you're not making testable, falsifiable, predictions, you're not engaging in science. You may be collecting data that other people will use to do science, it may be that no one can know enough to make predictions until you've finished collecting your data, so you're the necessary precursor to field X becoming a scientific field.
But unless you're making meaningful falsifiable predictions ("the climate will change" is not a meaningful prediction. The climate pretty much is constantly changing. "The average worldwide temperature will increase by X degrees for every 100 PPM increase in CO2" is a meaningful falsifiable prediction. Unhappily for the AGW pushers, but happily for the rest of us, no such prediction has ever proved correct), you're not engaged in science, and your pronouncements are not entitled to the respect that actual scientific claims often deserve
Tell you what, Sarcastr0.
You pick your 1 or 2 favorite "Greg's rules say they aren't science, I say they are", either fields or people, and make your case. I'll either agree with you, or make my case.
As an example, consider the folks who discovered the structure of DNA (and some of them, cough, got a Nobel for it). That seems purely descriptive, and thus not science per your definition.
I think your definition is kind of neat, but it isn't the definition in general use.
Hey - I just read "The Code Breaker: Jennifer Doudna, Gene Editing, and the Future of the Human Race."
I've got some issues with it, but it was reasonably engagingly written and not a bad read of the research environment in modern labs.
You mean the people who predicted that DNA was a double helix, and that A would always be lined up with T, and C with G? And that DNA replication worked by splitting that helix, and using that complementarity in order to make correct copies?
I see a couple of predictions there, all of which mainly turned out to be true.
Studies show that if you give your money to me I'll invest it and make you lots more money which I'll then give to you!
Do you sign your affirmation of honesty before or after I give you the money?
He has an affirmation that he, personally, has never lost money taking others' money to attempt to do things.
One out of two people are enriched from my investment scheme!
The #fakenews Afghani interpreters story actually has a real news angle—it’s our generation’s Japanese Internment Camps!! So Congress that drafted the laws in 2008 had extreme vetting because of Islamophobia!?! So they rolled out a red carpet for Cubans and Mexicans but Muslims are apparently more likely to be terrorists so they had a hefty helping of CYA!! So not only does America have the indignity of two asinine wars based on lies and stupidity…but we have our very own Korematsu!! Unfuckingbelievable!!
The question I have is how many Afghan interpreters could we have employed over the course of 20 years. I've seen visa estimates upwards of 250,000 (I understand that also includes immediate family). Was everyone and their cousin a casual interpreter for the military though? Even assuming that includes 5 people per party that is still a lot.
They didn’t even think about all of this before we invaded and then it becomes another huge unintended consequence of George W Bush’s Global War on Terror—invading countries and then we are apparently required to make hundreds of thousands of people in those countries American citizens!?! Don’t get me wrong—I’ll take all of the German scientists we can get over here, but I doubt Afghanistan has a bunch of scientists that can split the atom and get us to the moon. Btw, since 2015 we haven’t even really had boots on the ground in Afghanistan so that shows how dumb all of this is—our ally the Afghanistan Army doesn’t need interpreters.
Good point. From now on we should only invade countries that use English so we don't need translators. OK, maybe Spanish as well.
If some Iraqi worked with Marines and Rangers for years then there shouldn’t be a vetting process…it’s a dumb program based on CYA and Islamophobia.
Bzzt, thank you for playing, we have a lovely parting gift for you
There's a level of security classification "NOFORN", which means "no foreign nationals may see this, no matter how much we otherwise trust them"
"You've worked with us for 5 years, and we haven't caught you screwing us over, yet" does not mean you're not a sleeper agent.
And it doesn't mean you don't have close family who will be left behind, and who could be used for leverage against you.
Do let us know when people who've escaped from Cuba / Mexico start engaging in terrorist (as opposed to merely criminal) activities against Americans.
Until then? You're being a total a$$
Are those visas only for interpreters, or do they include people who did other jobs for or at US facilities in the country? 50,000 interpreters is a lot for a country of 39 million or so, but it seems plausible if it includes cooks, laundry people, cleaners, and other support jobs.
Exactly, as long as we are in a country we keep producing new people we need to protect…insane!
Google (not the best of sources but it's what I could find) suggests that the average family size in Afghanistan is about 8. Add dependent elders and somewhat more distant relatives who become "immediate family" when that's the rule needed to escape and I'd be surprised if the multiplier is anywhere near as low as 5.
Assuming 10 (in part, because it makes the math easier), that means 25,000 interpreters.
The turnover among wartime interpreters is high. Few do it for more than a year or so. A total of 25k over 20 years works out to 1250 at a time. Since each unit needs at least one and often two (male and female), that number strikes me as quite plausible.
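The back-of-envelope arithmetic in this subthread, as a short R sketch; the 250,000-visa figure and the multiplier of 10 are the commenters' assumptions, not official numbers:

visas_estimated   <- 250000   # upper-end estimate, including immediate family
family_multiplier <- 10       # assumed people per interpreter household
years             <- 20
interpreters <- visas_estimated / family_multiplier   # 25,000 interpreters total
interpreters / years                                  # ~1,250 serving at any one time, assuming ~1-year stints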
Come on Eugene. You can do better. The better headline should read: "A Big Study on Honesty Turns Out to Be Dishonest".
Well, it seems clear that it's unreliable, and there was some dishonesty by someone within it. But "turns out to be dishonest" suggests that the study's authors (or at least one of them, on whom the others relied) deliberately lied, and that's not clear.
I imagine you posted this because of the rich irony; you could have crafted a better headline.
You could try. Show us your work.
I didn’t take it upon myself to post. Eugene did.
You took it upon yourself to post a comment. Constructive criticism is always better than bare assertions.
Shorter is generally better when it comes to headlines. Might I suggest: "FRAUD!"
It's funny how the "there's NOOOOO evidence of fraud in research!!!1!one" crowd has steered clear of this thread. Actually, not surprising in the least.
This is one of the ones they found, folks -- likely because the authors were sloppier than many. But it's just the tip of the iceberg. This sad state of affairs is the natural result of a system that both pressures and incentivizes quantity over quality.
It's getting so bad, a former editor of the BMJ recently rhetorically asked, "Time to assume that health research is fraudulent until proven otherwise?" [link in next post -- grrr Reason]
BMJ link here.
Who says there is no evidence of fraud in research?
And you appear to be a bit fraudulent in your linked story. What it actually says: We have now reached a point where those doing systematic reviews must start by assuming that a study is fraudulent until they can have some evidence to the contrary. Some supporting evidence comes from the trial having been registered and having ethics committee approval. Andrew Grey, an associate professor of medicine at the University of Auckland, and others have developed a checklist with around 40 items that can be used as a screening tool for fraud
That is about burden of proof in peer review, not about the public in general.
I'd like to see the study showing that their criteria actually successfully predict fraud. Some of the criteria look sensible; if a study has mathematically impossible results, you can be pretty sure it's wrong.
Some criteria seem to be missing: Was the study in Egypt, Iran, or India? It's more likely fraudulent than not. Does the study involve somebody who was already implicated in other fraud?
But a lot of the criteria seem more oriented towards research ethics than reliability of findings. I suppose they could be correlated.
Not my paper, don't endorse it. But I think you have the whole thing wrong - these criteria don't seem to look for evidence of fraud, but rather say to look for a certain amount of evidence against fraud before it can be assumed to be legit.
But nationalistic generalizations seem a pretty crap heuristic. Bigoted, even.
For Egypt, 100% of the studies they'd checked on were fraudulent. That's a pretty good heuristic, right up there with assuming financial offers out of Nigeria are fraud. I'm not saying it's impossible for a study out of Egypt to be real, but if half or more of the studies out of a country that have been checked turned out to be fraudulent, shouldn't that be your starting assumption, which requires a lot of contrary evidence to abandon?
But, yes, some of these criteria actually are evidence of fraud. Use of manipulated images, for instance. Numbers that don't add up. Unlikely statistics.
Others of the criteria are consistent with a study being honest, but just not done in a manner American academic institutions might approve of.
You appear to be saying lets assume if it comes from Egypt it's made up.
That's not a good idea, for a number of reasons.
My friend, if you don't understand that systematic review and peer review are completely different things conducted at completely different points in time by completely different groups of people, you really ought to refrain from commenting on the subject -- much less so haughtily.
And if your attempted point is that the order of the cloistered priesthood should assume that a study is fraudulent until receiving proof to the contrary, but the unwashed masses should just cheerfully and uncritically accept everything out there until instructed otherwise... well.
Fine, I misread.
But you still way overstated your case. The paper you link is about a better methodology, not about actually assuming most papers are fraudulent.
You don't get to attack authority, and then cite authority to that point, and then exaggerate what that authority said.
Well you can, but then you're just a crank.
As to your final sentence, expertise is a thing that exists, it is not a conspiracy against the public.
A methodology for attempting to disprove the null hypothesis that a given paper is fraudulent. And?
Look, I'm sure you think you have a Really Clever Point to make, but all the stutter-stepping and armwaving is just making you look desperate. Most studies can't be replicated -- that's just a fact, however inconvenient for you. Things like this just help us sort out the degree of malice vs. incompetence in that vast body of rot.
It's a method to use specifically in systematic reviews. And it's not about any kind of widespread level of fraud, it's about burden of proof.
The piece is saying nothing about how the public should treat published papers, nor about how endemic fraud is generally.
As my linked quote demonstrates.
OK, so as I said a couple of posts ago, your position apparently is that it's fine for other researchers to distrust published papers by default, but the public should just open wide and accept all published papers until told otherwise (apparently by someone with sufficient alleged authority to do so, however that's supposed to work).
That's both elitist and, as the view behind the curtain becomes clearer and clearer, dangerous.
No, my position is only that the thing you linked does not say what you say it does.
I post downstream about public and science and trust. I don't agree with you there either. But my main issue here is that you mischaracterized your source.
I'm not sure I see the importance of your distinction.
My sense is that we have a looming credibility problem, partly as a result of the incentives enabling fraud, and partly a journalism problem. When the average person starts to disbelieve (and perhaps rationally disbelieve) the article they just read, that's really bad for society.
And that day isn't far off, e.g. the misconduct behind the 'you eat less from small plates' research at Cornell.
Journalism doesn't help ... I read regular reports loudly proclaiming 'Drinking coffee prevents toenail cancer' that when you drill down the study actually says 'a study of 42 people found that those who drank more coffee had a 3% smaller risk of toenail cancer than those who didn't', which is well into 'who cares' territory even if the results are bona fide.
This is followed next week by 'New Study Shows Drinking Coffee Heightens Earlobe Cancer Risk!!!' based on similar statistical noise (or possibly misconduct).
This matters. When you want people to trust the science, the science (and reporting of it) needs to be trustworthy. It's not enough for Honest Al to put up ads saying 'You can trust us at Honest Al's Used Cars' if I've been cheated by Al in the past.
Your comment reminds me of this classic.
That's a great one, although it makes the scientist out to be a victim of bad journalism. That's a real problem - I bet there are good scientists who cringe reading how their latest study is reported. But sometimes the villain is the scientist who needs published studies for career reasons and keeps p-hacking until he finds a jellybean color with significant results.
It's a two faceted problem, where scientists and journalists both face bad incentives.
LoB cited a paper for a thing the paper doesn't say. This says nothing about the credibility of science, or issues with journalism or reproducibility. It says a lot about LoB and what he's willing to do to push his agenda.
I blame a lot of it on the elevating of single studies. Scientific methodologies have not gotten shoddier over the years, but rather better. Many are trying to tell the time by looking at the seconds hand on the clock.
"LoB cited a paper for a thing the paper doesn’t say."
Sorry, not seeing it.
I don't care how you slice it, that opinion piece does not stand for the proposition 'Time to assume that health research is fraudulent until proven otherwise.'
It advocates a procedure where the burden of proof for a particular function is shifted.
That's nowhere near the same as the much more broad attack on the medical research enterprise he cites the piece to make.
"It doesn't propose assuming fraud until proven otherwise -- it just proposes shifting the burden of proof for showing fraud!" Good grief, dude.
This one is so crystal clear, not even you and your epic word games can obfuscate it. You might consider stopping embarrassing yourself, if that's an emotion you still experience.
Shifting burden of proof is not the same as an accusation. You made it into an accusation.
And that's separate from the massive broadening of scope you did.
I'm not playing epic word games, you failed to understand the thesis of your source.
And now it's time for the obligatory vacuous characterization of what I supposedly said, predictably devoid of any actual words I used. How sad it must be if that's the only way you can pretend you're making a cogent point.
It’s getting so bad, a former editor of the BMJ recently rhetorically asked, “Time to assume that health research is fraudulent until proven otherwise?”
This is absolutely an accusation. It is also your own words.
Thanks for finally using quotation marks at least. But it really doesn't help your bare statement, "that's an accusation!"
I, and I suspect most people not trying to twist words to generate straw men, would refer to it more as an observation -- an observation of a serious and worsening problem indisputable by anyone other than those whose salary depends on them disputing it.
You're taking it out of context as support for a thesis the author does not address.
That's bad. It's not twisting words, nor is it a strawman.
Now I see why you rarely provide sources - you're so ideological it keeps you from reading comprehension.
"It advocates a procedure where the burden of proof for a particular function is shifted."
Amusing how you elide that it was shifted from a rebuttable presumption of honesty, to a rebuttable presumption of fraud.
Never hid that, I think I was pretty clear when I quoted that above.
LoB took the linked opinion piece out of context to support something it does not say.
That's dumb and bad, whether he's right or not as to his thesis.
Looks shady. But Dan Ariely seemed like such a great person (solely based off his writings). Wasn't he writing as "The Ethicist" in WSJ or something? I just don't want to believe he'd lie.