Maryland Supreme Court Limits Testimony on Bullet-Matching Evidence
The ruling is likely the first by a state supreme court to undercut the popular forensic technique.

The Maryland Supreme Court ruled Tuesday that firearms experts will no longer be able to testify that a bullet was fired from a particular gun. The decision is likely the first by a state supreme court to undercut the widespread forensic discipline of firearms identification, which is used in criminal cases across the country.
In a 4–3 decision first reported by The Baltimore Sun, the Maryland Supreme Court overturned the murder conviction of Kobina Ebo Abruquah after finding that a firearm expert's trial testimony linking Abruquah's gun to bullets found at a crime scene wasn't backed up by reliable science. In the majority opinion, Maryland Supreme Court Chief Justice Matthew J. Fader wrote that "firearms identification has not been shown to reach reliable results linking a particular unknown bullet to a particular known firearm."
The ruling is a major victory for defense groups like the Innocence Project, which works to overturn wrongful convictions and limit what it calls faulty forensic science in courtrooms. It's also not the only one: Radley Balko recently reported at The Watch on a similar ruling from a Cook County circuit judge in Illinois.
But Tuesday's ruling is, as far as Tania Brief is aware, the first by a state supreme court to limit such testimony. Brief is a senior staff attorney at the Innocence Project, which filed an amicus brief in the case.
"One of the tensions in our work is that the law is always playing catch-up with the current scientific understanding," Brief says. "And this is a real step forward in the law catching up with what the current scientific understanding is."
Forensic firearms identification includes well-established uses such as determining caliber and other general characteristics, but examiners are also frequently called on to testify whether a particular bullet was fired from a particular gun. A gun's firing pin and the grooves on the inside of a gun barrel leave marks on cartridge casings when a bullet is fired, so a firearm examiner compares crime scene bullets to samples fired from the suspect gun and looks for matching patterns under a microscope.
According to the Association of Firearm and Tool Mark Examiners (AFTE), which sets standards for the field, a positive identification can occur when there is "sufficient agreement" between two or more sets of marks or patterns. The AFTE argues—as one of its members did as a witness for the state of Maryland in Abruquah's appeal—that its methods are scientifically sound, widely accepted, and have low error rates in testing.
However, over the past decade many forensic methods, especially "pattern-matching" disciplines like bite mark and tool mark analysis, have been challenged by critics who argue that they rely on subjective interpretations that are nonetheless presented as scientific conclusions in courtrooms.
In 2016, the President's Council of Advisors on Science and Technology (PCAST) released a report finding "a dismaying frequency of instances of use of forensic evidence"—such as analyses of hair, bite marks, and shoe prints—"that do not pass an objective test of scientific validity."
For example, in the case of bite mark evidence, the PCAST report found that "available scientific evidence strongly suggests that examiners not only cannot identify the source of a bitemark with reasonable accuracy, they cannot even consistently agree on whether an injury is a human bitemark." Bite mark analysis in particular led to a wave of horrific wrongful convictions based on what amounted to junk science.
The PCAST report was more favorable toward firearm analysis but concluded that it was a "circular" method without enough appropriately designed studies to determine whether it was scientifically valid—that is, repeatable, reproducible, and accurate. (The Justice Department under former Presidents Barack Obama and Donald Trump disputed the PCAST report's findings and rejected its recommendations for strengthening the reliability of forensic methods.)
The Maryland Supreme Court did not entirely throw out firearms identification testimony. It found that the methodology is strong enough to support testimony that a bullet was consistent or inconsistent with those fired from a specific gun. However, it found an "analytical gap" between what evidence the technique can support and the trial expert's unqualified opinion that the crime scene bullets were fired from Abruquah's revolver.
While the ruling is a win for the Innocence Project, Brief says tweaking the language does not solve the fundamental problem with firearm identification and testimony. Instead of more carefully couching the language expert witnesses use, she wants it replaced with unambiguous statistics and meaningful error rates.
"This just shows us that there's more work that needs to be done," Brief says. "I think a juror may be scratching their head. How is a juror supposed to know what the words consistent or inconsistent really mean in a scientific way?"
This seems reasonable and consistent. If we can't even tell what a woman is, you're then going to turn around and tell me you can tell if a chunk of lead came out of a specific gun?
The PCAST report was more favorable toward firearm analysis but concluded that it was a "circular" method without enough appropriately designed studies to determine whether it was scientifically valid—that is, repeatable, reproducible, and accurate.
*looks around nervously*
You had me at "it's all just a social construct" but you're losing me at the reason-based "repeatable, reproducible, and accurate" stuff. How 'bout we turn the conversation back to things like "promethean transformations" and stuff.
"I know this bullet is a 9mm and the defendant carries a 10mm, but the bullet identifies as a 10mm."
Yeah. Since the science of bullet matching, which is mostly right but occasionally fallible, goes, the climate science, which in several decades has struggled to make one accurate prediction, goes too, right?
Or is this one of those "When employed on even a case-by-case basis, the dogma I disapprove of is verboten, and when employed broadly under penalty of law, the dogma I approve of is mandatory" situations that TeenReason loves to soak their panties* over?
*I didn’t specify a gender so that’s acceptable again, right?
Next up; drug dog testimony and DNA analysis.
I get the drug dog thing, but what is the problem with DNA analysis? (Honestly asking out of ignorance of the topic)
They lie about the odds. They twist the statistics.
DNA is not like fingerprints which you can lay out side by side or superimposed and see directly. There are billions of elements which have to be homogenized or selected or filtered for whatever the comparisons are. That provides all the opportunity you need to "improve" the comparisons.
I know next to nothing about how it's done, but when you start with a billion elements, comparisons quickly depend on filters.
There are some human characteristics which can be determined from DNA -- hair color, eye color, skin pigmentation, general ancestry. Some day they will add a whole lot more, maybe intelligence, face shape, basic height and weight, who knows. I don't know how much of these go into comparisons now, directly or indirectly.
I don’t know how much of these go into comparisons now, directly or indirectly.
None.
I think the statistics they give for DNA are reliable assuming the samples weren't switched or contaminated or planted. The problem with DNA is that it seems so certain, but ignores mistakes. I recall seeing one true crime show where the suspect in a murder was coincidentally killed in a motorcycle accident the night of the crime. Turns out his was the corpse on the slab before the murder victim.
I have read of quite a few cases where the DNA match was clouded by crappy statistics. They relied on the wrong subset or whatever, but it was definitely the statistics at issue. The trial expert would say one chance in 100 quadrillion, and it turned out later there were hundreds of "close" matches.
The trial expert would say one chance in 100 quadrillion, and it turned out later there were hundreds of “close” matches.
You realize that you reduced the likelihood from one in 14M times the current human population to one in 140,000 times the current population, right?
More importantly, you've glossed over the crucial details (and committed essentially the same categorization error) in at least some of the statistical cases you've cited, where, for example, the statistical error does render the result within the realm of existential probability based on gross numbers, but would still require something like 500M black people living in the US, the specific state, or even the specific city for it to be a relevant concern.
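To put rough numbers on the exchange above: a minimal back-of-the-envelope sketch, where the population figure and the partial-match probability are illustrative assumptions, not figures from the thread or any study.

```python
# Back-of-the-envelope: how a tiny per-person match probability
# interacts with a large population. Both probabilities and the
# population figure are illustrative assumptions.

population = 8e9        # rough world population (assumption)
p_full = 1 / 1e17       # the "one in 100 quadrillion" full-profile claim
p_partial = 1 / 1e7     # assumed random-match rate for a partial profile

print(f"Expected coincidental full matches:    {population * p_full:.1e}")
print(f"Expected coincidental partial matches: {population * p_partial:.0f}")
# ~8e-08 full matches (effectively zero) versus ~800 partial matches:
# "hundreds of close matches" can coexist with an astronomical
# full-profile number -- the two claims describe different tests.
```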
LOL: Unless you can prove that it's Thanos' DNA on the Infinity Gauntlet in every universe in the multiverse you must acquit!
I think sample handling is where the risks lie, not the DNA analysis itself. You can get a false negative or positive from degrading or contaminating the sample - the coroner's slab cock-up being a prime example of how not to do DNA - but false positives from the methods themselves are rare, from what I know. And you can repeat the testing as often as needed to address the risk of error, provided redundant samples exist.
Naw. DNA, at least autosomal DNA, is definitive. You test a hundred thousand or more highly variable regions, you compare it with your sample, if it matches, it matches. No "filtering" needed.
Matches are not yes/no. What is the threshold? 99,000 out of 100,000? 90,000?
"If it matches" is the question, not the answer. Try again.
The real question is, "What's the reliability of your self-assessment for chromosomal disorder?" Anything less than 1 and the question pretty much answers itself.
And yet Ancestry DNA accurately IDed all my relatives (all the ones that have also taken tests) correctly. If it can tell that my aunt is my aunt based on both of our spit, I'm going to say it's pretty impressive.
Most times it's little more than a fancy brain/nose-swab-style Covid test (which is perfectly accurate, or so we're told); more often it's either a cheek swab, spit in a cup, or a blood test, but they all suffer the same problem. You have only a few good samples of a really long strand of fairly fragile DNA, most of which will be broken and garbled, so standard procedure is to copy it to ensure a large enough sample. Of course the copying process introduces mutations, just as it does in your body, but nobody mentions that or what the drift rate is per subsequent copy, and that should taint any sense of absolute certainty.
Then there's the whole other can of worms called chimerism where an individual can have completely messed up DNA. Sometimes it's from a twin that became commingled in the early cellular stages of life, resulting in one individual with two distinct DNA profiles. At that point it comes down to which DNA set is being tested against what. Further, it's been found to happen with bone marrow transplants, which actually makes sense, since the recipient is using someone else's bone marrow to make blood cells; so yeah, the new blood cells probably aren't going to look like the broken ones the bad marrow was making, which instigated the transplant in the first place.
Needless to say, you'll never find an expert mentioning any of that to a jury, because that would require them to recognize the Dunning-Kruger effect as it applies to them.
Then there’s the whole other can of worms called chimerism where an individual can have completely messed up DNA.
Fucking retarded. From your own source, there are only 4 documented cases in human history. It would be like saying we can't do fingerprint analysis because there are people who have polydactyly and/or wear their fingerprints down to undetectable levels.
Your "drift rate per subsequent copy" argument is even more retarded. If it were so fallible, the technique wouldn't work in even the simplest of applications, to say nothing about the larger biome's ability to coherently replicate itself every. single. day.
Again, I make no claims that DNA is 100.00% infallible. The issue isn't the DNA. The issue is that the system entirely without DNA doesn't even come close to approaching the infallibility of DNA analysis.
Teaching moment on dunning kruger: if someone invokes it in this manner, they ironically employ circular reasoning and create evidence supporting the misunderstood version, further continuing the cycle of misapplying the study.
How dunning kruger is misunderstood: experts know their limits and as such are less arrogant. Laypeople do not know their limits and think they are experts due to their ignorance.
What dunning kruger data actually shows: experts are just as prone to overconfidence as anyone else, but because they’re actually experts, the degree of difference between their self assessment and reality is less. Laypeople know so little that they cannot accurately gauge what they know, so the degree of difference is greater.
Dunning kruger is rarely understood correctly, but because self-labeled experts are looking for validity, they cite it to make themselves seem stronger, thus creating evidence of ignorant people being overconfident. Perceptive people pick up on the irony. The misunderstood analysis makes it sound like people at the 10th percentile believe they are experts at the 90th percentile. In reality, they think they’re at the 30th or 40th, but because the guy at the 95th percentile guesses he’s at the 93rd or 92nd, dunning kruger citations have created this pernicious idea that experts are humble and idiots are cocky assholes.
Statistical data is often used as white noise to obfuscate basic human truths. There is no objective reason to believe that ignorant people believe they are geniuses. That would require an assumption that the majority of humans lie to themselves and face intense pressure to avoid saying “I don’t know.” In reality, cockiness is a universal trait that doesn’t go away with expertise. If anything, we experience the most extreme stubbornness and inflated ego from the highest performers who think they’re hot shit, reinforced by societal validation and the truth of their hotness relative to society at large.
Sample degradation leads to false negatives, but not false positives, and you can repeat the testing with as many fresh samples as are available. I’ve never seen a technical argument for DNA testing per se to be anything but orders of magnitude better than any other modality of evidence. Sample handling and chain of custody are where the issues lie. Ask OJ.
There’s nothing wrong with the DNA analysis. The problem is with humans executing truth(iness) in (and out of) court. Like BLM overtaking police reform, it’s a false flag. Even with lab errors, intentional switches, statistical games and all, DNA analysis is more objective and *more* infallible than eyewitness accounts. Some people just really love adopting a ‘perfect as enemy of good’ position as a general rule or when it suits their needs. Making every decision consciously and on a case-by-case basis is too hard.
There are absolutely cases where evidence gets doctored or contaminated and statistics get muddled, but unlike eyewitnesses, drug dogs, and bite marks, the PCR machine only lies if you intentionally turn it up to 35 cycles and even then, the PCR machine never turns itself up to 35 or whatever on its own.
Sounds probable.
What irks me is all these articles that allude to problems but leave the stats and procurement issues out entirely.
The article talked about percentage of error, but NEVER named a number. It never went into methodology.
So the public remains absolutely ignorant and must rely on CSI and other entertainment for "conclusions".
We always get the dispute, but never the basic facts nor the basic science and its problems.
We get idiocracy, and endless controversy, because real knowledge that settles the matter or even allows a decent evaluation is never shared, and likely never investigated.
If the court records were read enough to see the forensic arguments, we'd have a much better idea about what has been going on in the "justice" system. Lies do seem like a very likely possibility. So keep everyone dumb as a rock.
from the linked amicus brief
One prominent study found only 21-38% of bullet marks may line up from the same gun, while 15-20% of bullet marks line up from different guns. Moreover, no method for determining which marks count as a match is available.
LOL
I can think of quite a few double blinded studies that could be done to test how well this really works. It'd be interesting to know. I doubt the people making money being experts are going to jeopardize their good thing.
What kind of money are we talking about? I mean, I shoot guns and own a microscope. What are the occupational licenses and
hand-waving training I'd need to become an "expert"?
You are likely at least as well qualified as most of the "subject matter experts" testifying at trials are. Most jurors and far too many judges get overly impressed by high-falutin'-sounding words and complicated sentences. Buffaloed by buffalo dung, in other words.
Now if I were really wanting to prove a particular handgun fired a particular bullet, I think I'd take the EMS prints of the found round, then test fire maybe twenty new rounds, as close to identical as possible to the found projectile. Then I'd run the photomicrographs of the twenty rounds I fired (into gel?) and start seeing how well the one matches how many of the twenty. Next I'd take maybe forty more rounds, go out and buy/borrow/rent five or six different guns of the same make, model, calibre, and barrel length as the subject's gun, and fire those rounds through them.
Now if NONE of the "different gun" rounds come anywhere near a match for the accused's gun, and if 19 out of 20 of my test rounds are obviously very close matches, then I can safely say, beyond a reasonable doubt and on a far more probable than not basis, that the one round fired and recovered at the scene WAS fired from the handgun now presented as evidence.
But I rather suspect these "studies" are not done so thoroughly and exhaustively.
You should read the Balko article.
"Magnum Force" wasn't a documentary?
But I thought the Science®™ was settled?
Who are we to question the Experts®™?
Often enough, the court "experts" are not scientists – at least, not in the specific field. In few of these forensic fields has the science properly supported the claims – often enough, someone with "years of experience" in the field (e.g., a fire marshal* when the alleged crime is arson) but with limited scientific training and no grounding in experimental design is regarded as an expert because they've testified in a previous criminal trial. Or the method itself is accepted around the US once it's been accepted in a single court.
But you should read the Balko article.
*Cameron Todd Willingham was murdered by Texas because of such testimony.
Why is it hard to measure a given expert’s error rate? Just give them bullets you test-fired from known guns, and see how often he correctly matches bullets to guns.
It seems a pretty simple experiment. Buy 100 of the same gun. Fire each once to get the comparison model. Engrave the butt of each bullet with the numbers 100-199.
Then fire each gun hundreds or thousands of times to put some wear on them. Engrave every single one with some random but logged ID.
Now you bring in your forensic experts, pick one bullet, and have them find which of those 100 guns fired it.
I suspect they'd pick the right gun more than 1% of the time, maybe even 5% of the time.
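A minimal sketch of how such a blind lineup might be scored against a chance baseline; the examiner here is a random stand-in at an assumed skill level, not a model of any real study.

```python
import random

# Scoring a blind "gun lineup": each trial, one bullet, pick which of
# 100 reference guns fired it. The examiner below is a toy stand-in
# at an assumed skill level -- real data would replace this function.

N_GUNS = 100
N_TRIALS = 1_000
ASSUMED_HIT_RATE = 0.05  # hypothetical examiner skill (pure assumption)

def examiner_pick(true_gun: int) -> int:
    """Toy examiner: correct with probability ASSUMED_HIT_RATE, else guesses."""
    if random.random() < ASSUMED_HIT_RATE:
        return true_gun
    return random.randrange(N_GUNS)

random.seed(42)
hits = sum(examiner_pick(g) == g
           for g in (random.randrange(N_GUNS) for _ in range(N_TRIALS)))

print(f"Observed accuracy: {hits / N_TRIALS:.3f}")
print(f"Chance baseline:   {1 / N_GUNS:.3f}")
# An examiner whose observed accuracy doesn't clearly beat the 1-in-100
# chance baseline has no business testifying to a "match" in court.
```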
It may be a simple experiment but it's the wrong experiment. The "comparator" has no gold standard that you can test the examiners on. The best you can do is see how often examiners agree with each other when they report on the same sets of bullets. And studies show that their agreement rate is abysmal, therefore calling into question whether the evidence itself is reliable.
That’s fine; anything less than perfection is not good enough by itself. 5% would be laughable.
Besides, you read my experiment wrong. It's not comparing forensic examiners against each other, it's finding how many times they match the actual correct gun.
"The examiners correctly matched the spent bullet to the barrel that fired it 98.8 percent of the time. The study also found that examiners with less than 10 years of experience did not reach different conclusions than examiners with more than 10 years of experience; that is, there was no significant difference between these two groups in their ability to correctly identify which bullets were fired from which consecutively manufactured Glock barrels."
https://nij.ojp.gov/topics/articles/science-behind-firearm-and-tool-mark-examination
It's still the wrong question.
Where are you going w/ this? This study's outcomes likely were highly if not entirely dependent on glock's EBIS, a feature that other firearms do not have. I don't have a solution, but as a general rule don't trust testimony from 'firearms experts' or any information presented by police or prosecution to be anything but skewed.
I was simply pointing out that being able to match a bullet to a barrel that you know came from that barrel doesn't answer the question that juries have to answer - what is the likelihood that THIS bullet came from THIS gun. Under test conditions the examiners can match two test bullets from the same gun over 98% of the time. That is not the same thing as saying that they can match a bullet from a crime scene to the suspect's gun reliably. That would require a totally different study design with no "gold standard" available to use for the study.
Not analogous. It's highly doubtful that a gun would be fired "hundreds or thousands of times" between when it was used in a crime and when it was retrieved and a bullet test-fired from it.
On the other hand, I'd like to see a study comparing a bullet from a badly-fouled weapon to a bullet fired after the weapon had been cleaned.
Fine, fire single consecutive shots after the master shots.
The point escaping you quibblers is to make it random and show how incompetent the lying forensic examiners are.
The point escaping you quibblers is to make it random and show how incompetent the lying forensic examiners are.
YOU'RE ALL FUCKING QUIBBLERS! CRIMES DEFINITIVELY AREN'T RANDOM. POLICE DON'T PICK UP SUSPECTS BY CLOSING THEIR EYES AND STICKING THEIR FINGER IN A PHONE BOOK. JURIES DON'T TOSS COINS TO DECIDE IF THE COPS WHO STUCK THEIR FINGERS IN A PHONE BOOK GOT THE RIGHT GUY OR NOT. WITHOUT THE OTHER FACTS OF ANY GIVEN CASE YOU'RE ALL QUIBBLING ABOUT DETAILS THAT MAY, AND EVEN LIKELY DO, AMOUNT TO LITERALLY NOTHING ONE WAY OR THE OTHER.
AND MOST IMPORTANTLY, EVEN IF I'M 100.000% COMPLETELY WRONG ABOUT ALL OF THE ABOVE, YOUR QUIBBLING GETS EVEN MORE TRIVIAL DEFINITIVELY!
HOLY SHIT! Where's the tylenol?
Calm down. Yes, police DO pick random suspects and then once they have a plausible suspect they start looking for evidence to convict them, and tunnel vision causes them to ignore exculpatory evidence, even to the point of hiding it from defense counsel. It's not hard to imagine that an examiner, having been presented with a gun, a test bullet, and a bullet from the crime scene, might have just a little bit of bias to declare a "match" - if you don't see that then you have a bad case of "back the blue derangement syndrome."
Yes, police DO pick random suspects and then once they have a plausible suspect
The suspect *cannot* be both random and plausible.
“back the blue derangement syndrome.”
BtBDS or words and numbers actually have meaning, take your pick.
LOL, do you get a lot of cases where you have two suspects each with 50 of the same make/model of handgun, or 100 suspects each with an identical make/model of handgun?
One day there was a fire in a wastebasket in the office of the Dean of Sciences. In rushed a physicist, a chemist, and a statistician. The physicist immediately starts to work on how much energy would have to be removed from the fire to stop the combustion. The chemist works on which reagent would have to be added to the fire to prevent oxidation. While they are doing this, the statistician is setting fires to all the other wastebaskets in the office. "What are you doing?" the others demand. The statistician replies, "Well, to solve the problem, you obviously need a larger sample size."
"Bayesian statistics is difficult in the sense that thinking is difficult."
G. E. P. Box & G. C. Tiao
Ha ha! 🙂 To be fair, the question here is not "what to do with a wastebasket fire." The question is should we allow comparator experts to testify that bullets match guns in criminal courts?
The joke should actually be amended to include two statisticians. One frequentist. One bayesian. The frequentist begins lighting fires everywhere and the Bayesian walks in with a fire extinguisher and tells the Physicist and Chemist "It's OK. I've got this. This isn't our first fire."
The question is should we allow comparator experts to testify that bullets match guns in criminal courts?
Is that in both directions so that suspects can't call on ballistics experts to testify to their innocence or are you saying that only suspects can call ballistics experts in to defend themselves?
The average American personally knows 600 people. Something like 80-90% of murders and sexual assaults are perpetrated by someone who knew the victim personally. About 30% of the US population own a gun. Knowing nothing other than "American", "murder or sexual assault" and gun, cuts the number down to the 20 gun owners. Caliber alone almost certainly cuts that down to less than a handful. Now, yes, cross-country SWATings do happen, but they aren't the norm. Circling back to the point, the 30% represents registered firearms (purchases) but, generally, people outside that number are both de jure and, more importantly, de rigueur criminal. Keep in mind, this is an Innocence Project report from Reason. Meaning the cases are likely to be heavily populated by people like Gaige Grosskreutz and Jordan Neely to begin with.
The system absolutely is not perfect, but it's also not like we don't have a problem with The Science supplanting science either.
“Is that in both directions so that suspects can’t call on ballistics experts to testify to their innocence …”
Fair point. The positive predictive value of a "match" is likely, based upon the scientific evidence, to be much lower than the negative predictive value of a failure to match from the same research. It is far more likely that an honest failure to match is reliable as evidence to dismiss the charge than an honest match is as evidence to convict. Once again – there are two questions pertaining to admissibility of evidence: the honesty of the expert witness (including unintentional bias); and the scientific evidence behind the forensics. A study that proves that under research conditions experts can match bullets to barrels 98% of the time does not constitute evidence that THIS bullet would NOT match any number of other barrels out there somewhere in that city – or county – or state. There is no study design imaginable that could calculate the number of matches a particular bullet MIGHT have to unknown gun barrels.
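The PPV/NPV asymmetry described above is just Bayes' rule. A minimal worked sketch: the 98% sensitivity echoes the study quoted earlier in the thread, while the false-positive rate and prevalence are pure assumptions for illustration.

```python
# Positive vs. negative predictive value of a declared "match", via
# Bayes' rule. Only the 98% figure echoes the closed-set study quoted
# earlier in the thread; the other two rates are assumptions.

sensitivity = 0.98  # P(declared match | same gun)
fpr = 0.05          # P(declared match | different gun) -- assumption
prevalence = 0.10   # P(suspect's gun really fired the bullet) -- assumption

ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + fpr * (1 - prevalence))
npv = ((1 - fpr) * (1 - prevalence)) / (
    (1 - fpr) * (1 - prevalence) + (1 - sensitivity) * prevalence)

print(f"PPV (declared match really is the same gun):  {ppv:.2f}")  # ~0.69
print(f"NPV (declared non-match really is different): {npv:.2f}")  # ~1.00
# Even with a highly accurate test, a "match" is far weaker evidence
# than a non-match whenever true matches are rare among comparisons.
```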
It's not the error rate. It's the inter-examiner consistency rate. There is no "gold standard" for bullet marks. Each examiner compares the marks and decides if it's a match. If two examiners disagree, which one is right and which one is wrong?
That's why you test them against known bullets. Not whether two examiners agree. Check whether they got the only possible correct answer.
I mean, testing both seems like a good idea as well.
The point is that you are reversing cause and effect. In the test you proposed (which has been done before) the examiner matches test bullets that are known to have been fired from one of several guns to see which bullet was fired by which barrel. In the criminal evidence world, the examiner is testifying that the bullet found at the scene matches a test bullet fired from a suspect's gun - a totally different question. The correct scientific study would be to test bullets from hundreds of crime scenes against hundreds of test bullets fired from hundreds of test guns - some of which were suspect guns - to see what the positive predictive value of the match is. Not the study you proposed.
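For the inter-examiner consistency question raised above, agreement between two raters is conventionally summarized with Cohen's kappa, which discounts the agreement expected by chance. A minimal sketch with invented verdicts:

```python
from collections import Counter

def cohens_kappa(a: list, b: list) -> float:
    """Chance-corrected agreement between two raters' verdicts."""
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    # Agreement expected if both rated independently at their base rates.
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (observed - expected) / (1 - expected)

# Invented verdicts on ten bullet comparisons ("M" match, "N" no match).
examiner_1 = ["M", "M", "N", "M", "N", "N", "M", "N", "M", "N"]
examiner_2 = ["M", "N", "N", "M", "N", "M", "M", "N", "N", "N"]

raw = sum(x == y for x, y in zip(examiner_1, examiner_2)) / 10
print(f"Raw agreement: {raw:.2f}")                               # 0.70
print(f"Cohen's kappa: {cohens_kappa(examiner_1, examiner_2):.2f}")  # 0.40
# Kappa near 0 means the examiners agree little better than coin flips;
# only values well above 0 suggest a shared, reliable standard.
```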
This is an important step in the right direction. Medical science has been at the forefront of the movement to insist on evidence-based diagnostic and treatment science. For example, radiologists interpret x-ray images to help treating physicians make accurate diagnoses. But recent studies have shown that individual radiologists frequently give significantly different interpretations of the same radiographic images, indicating more unreliability in the subjective "readings" than expected, very much like bite marks and bullet markings. A different kind of problem exists with DNA matching: if you match on a long segment of one chromosome it means "one in a million" NOT "absolutely certain."
"A gun's firing pin and the grooves on the inside of a gun barrel leave marks on cartridge casings when a bullet is fired, so a firearm examiner compares crime scene bullets to samples fired from the suspect gun and looks for matching patterns under a microscope."
No. CJ, don't talk about things you do not understand.
Rifling does not leave marks on cartridge casings. Cartridge casings expand to create a gasket in the chamber, they do not travel down the barrel where the rifling is.
Firing pins impact primers, not casings.
Casings may be ejected or retained by the firearm. Primers are removable/replaceable, and bullets are the part that gets the grooves, but they have to be recovered and tend to deform upon impact.
I was thinking the exact same thing, with one caveat: with rimfire ammunition the pin does strike the casing. Then, of course, if it's fired from a revolver and the shooter isn't an idiot who drops the fired cases on the ground, that bit of evidence goes away regardless. Oh, and it's one reason Cali is going to wind up with a roster with only revolvers and single shots on it, should it survive.
I've often wondered how confused cops would be if you used a revolver that fired rimless ammo, like there are revolvers that shoot .40 or 10mm using moon clips to hold in the cartridges.
I'm not able to rightly comprehend the thinking that there are a lot of cases where the perpetrator gets off 'because revolver' or someone is wrongly convicted 'because automatic'. Do you imagine that at every shooting the cops show up to, every last casing corresponding to every last bullet is always recovered, and they can't solve crimes otherwise?
Ruger makes 9mm revolvers that don't use clips. They also had a revolver that used .30 carbine rounds.
Like blood splatter "science", it's largely unreliable.
First bullets and now blood splatter? Talk about bursting my CSI and Dexter bubbles.
These programmes have led juries to give more weight to forensic evidence than is justified.
Wouldn't it be possible to have computers do this? Scan the bullets in question alongside the test samples and let an objective program compare them.
That sounds good, but consider that this assumes that the underlying science is legit, that when you fire a bullet from a specific gun, it will have markings unique to that gun, and that those markings can reliably be detected and distinguished from similar markings from other guns.
It may be that there is enough variation in individual firings that if you're given a bag of 20 bullets from one gun and another bag of 20 from another gun, you can confidently say that bag 1 came from gun A and bag 2 came from gun B but if just one bullet is taken from each bag, you can't always say which gun they came from.
We simply don't know enough to get to the point of having a program in the first place, CSI notwithstanding.
Well said! I've been struggling in earlier posts to make just this point. The question is not whether this bullet came from this gun, but how many other test bullets from how many OTHER guns would also be declared to be matches. The most you can say from current standards of evidence is that this bullet doesn't rule out this gun in this case - NOT that this bullet DID come from this gun!
Wonder if there could be a percentage chance of how much it matches?
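Automated comparison systems do exist in research settings, and they typically reduce two digitized striation profiles to a similarity score, which is one way to get the percentage-like number asked about above. A minimal sketch using normalized correlation on synthetic profiles; nothing here reflects any deployed system's actual algorithm.

```python
import math
import random

# One building block behind automated bullet-comparison research:
# reduce two striation depth profiles to a single similarity score.
# The profiles below are synthetic; no real system works this simply.

def normalized_correlation(a: list, b: list) -> float:
    """Pearson correlation of two equal-length profiles.
    1.0 = identical shape, ~0.0 = unrelated."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    norm_a = math.sqrt(sum((x - ma) ** 2 for x in a))
    norm_b = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (norm_a * norm_b)

random.seed(1)
signature = [random.gauss(0, 1) for _ in range(200)]      # barrel "signature"
bullet_1 = [s + random.gauss(0, 0.3) for s in signature]  # same gun + noise
bullet_2 = [s + random.gauss(0, 0.3) for s in signature]  # same gun + noise
bullet_3 = [random.gauss(0, 1) for _ in range(200)]       # different gun

print(f"Same gun:      {normalized_correlation(bullet_1, bullet_2):.2f}")
print(f"Different gun: {normalized_correlation(bullet_1, bullet_3):.2f}")
# The score is only as meaningful as the validation behind whatever
# threshold converts it into the word "match" -- which is exactly
# what's in dispute in the ruling above.
```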
Any program is only as objective as the programmer's parameters make it.
So whoever makes the code is the responsible party, not "the computer" - the computer never decides.
LOL, objective program, you're funny.
Sounds like bullet matching is just...a shot in the dark.
brrrrmtsh!
How hard is it for Reason's reporters and editors to do even a modicum of research into the articles that appear in the magazine? Because there is no such thing as the "Maryland Supreme Court." Maryland's highest court is the Court of Appeals, the intermediate appellate court is the Court of Special Appeals. To me this means that the author did not actually read the opinion, because nowhere in the opinion would the words "Maryland Supreme Court" or "Supreme Court of Maryland" appear.
That's hilarious, a great busted Reason post. Glad someone else complained, because my complaint is no error-rate percentage and no outline of the science and the issues in its validity, which would require new knowledge and a bit of study or research so as to edify the public's understanding. Instead, the dumbed-down stupid take is given. Everyone is just as dumb as before and has no real idea who is correct. These two argued over this and this was the "verdict" of the most powerful pertinent authority. As for any of the science, any of the error-rate numbers, any real explanation, forget it. JUST OBEY, plebe.
lol don't ever listen to Rupert!
"Effective December 14, 2022, the former Court of Appeals of Maryland is renamed the Supreme Court of Maryland and the Court of Special Appeals is renamed the Appellate Court of Maryland. This is a change in name only and does not affect the precedential value of the opinions of the two courts issued before the effective date of the name change."
https://mdcourts.gov/opinions/opinions
Nearly 100% of popular forensic methods are bullshit.
They were never designed to be good science. They were designed to get convictions.
I don't agree entirely, but I do find that the government's labs are likely extremely corrupt. The pressure for making it match must be tremendous. When they interface with the raging jackboot from the government, the heat is likely impressive.
If you don't keep the match rate high, you lose a lot of business. A way to keep the business flowing and make more money more easily is to skip parts of the methodology, or never do the test at all but still present a printout of results.
If it comes up no match, that could cost you. The LEO community will be severely displeased: are you calling them liars?
Is that your expert opinion?
A lot of forensic "science" is not reliable for a match, but is useful for elimination.
You should have clearly stated that this ruling has to do with marks on shell casings instead of repeatedly using the term "bullet".
Ballistic science, using unique striations left on the round as it travels down the barrel to match a particular round to a particular firearm, was not under question.
'Ballistic science, using unique striations left on the round as it travels down the barrel to match a particular round to a particular firearm, was not under question.' Read the linked amicus brief before making assertions.
Are you referring to the Abruquah ruling? While employing the unfortunately ambiguous “bullet” terminology, it clearly involves analysis of the fired slugs, not the cartridge cases.
Knowing little about the quality of discrimination that fired projectile analysis affords - I didn't read the technical background stuff in the ruling - it seems like the court’s ruling is excessively restrictive.
I can understand the point in denying a witness the ability to state a categorical conclusion about a projectile’s origin, but relegating the conclusory value to simply “not inconsistent with” seems to rob the court of a defensible estimation of likelihood. Juries won’t be able to assess how “consistent with” the evidence is, on their own.
Why not let cross-examination on the issue of certitude bring out the qualified nature of the expert’s conclusion, with support from competing experts if needed?