The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Do Electronic Signs Displaying Number of Traffic Fatalities Actually Cause More Crashes?
"The effect of displaying fatality messages is comparable to raising the speed limit by 3 to 5 miles per hour or reducing the number of highway troopers by 6 to 14%."
From Jonathan Hall & Joshua Madsen, Can behavioral interventions be too salient? Evidence from traffic safety messages, in the journal Science:
Policy-makers are increasingly turning to behavioral interventions such as nudges and informational campaigns …. Guidebooks say that these interventions should "seize people's attention" at a time when they can take the desired action, but little consideration has been given to the costs of seizing one's attention and to the possibility that these interventions may crowd out other, more important, considerations.
We estimated these costs in the context of a … behavioral campaign with the stated objective of reducing traffic crashes. This campaign displays the year-to-date number of statewide roadside fatalities (fatality messages) on previously installed highway dynamic message signs (DMSs) and has been implemented in 28 US states….
We estimated the impact of displaying fatality messages using data from Texas. Texas provides an ideal setting because the Texas Department of Transportation (TxDOT) decided to show fatality messages starting in August 2012 for 1 week each month: the week before TxDOT's monthly board meeting (campaign weeks). This allows us to measure the impact of the intervention, holding fixed the road segment, year, month, day of week, and time of day. We used data on 880 DMSs and all crashes occurring in Texas between 1 January 2010 and 31 December 2017 to investigate the effects of this safety campaign. We estimated how the intervention affects crashes near DMSs as well as statewide. As placebo tests, we estimated whether the chosen weeks inherently differ using data from before TxDOT started displaying fatality messages and data from upstream of DMSs….
Contrary to policy-makers' expectations, we found that displaying fatality messages increases the number of traffic crashes.
Campaign weeks realize a 1.52% increase in crashes within 5 km of DMSs, slightly diminishing to a 1.35% increase over the 10 km after DMSs. {Because of imperfect compliance and competing demands, traffic engineers do not display fatality messages on all DMS hours during campaign weeks, implying that the effect of displaying a fatality message on a DMS is even larger.} We used instrumental variables to recover the effect of displaying a fatality message and document a significant 4.5% increase in the number of crashes over 10 km. The effect of displaying fatality messages is comparable to raising the speed limit by 3 to 5 miles per hour or reducing the number of highway troopers by 6 to 14%…. Back-of-the-envelope calculations suggest that this campaign causes an additional 2600 crashes and 16 fatalities per year in Texas alone, with a social cost of $377 million per year.
Our proposed explanation for this surprising finding is that these "in-your-face," "sobering," negatively framed messages seize too much attention (i.e., are too salient), interfering with drivers' ability to respond to changes in traffic conditions….
Thanks to Prof. Glenn Reynolds (InstaPundit) for the pointer.
Nothing the lawyer does works. The opposite of the intended effect is the rule. The sole job the lawyer does well? Seek and collect the rent. The profession is all garbage.
Torts are a threat to safety, inducing a coverup of the analysis of mishaps and the destruction of manufacturing in our country. Contract law causes massive violations of promises worth under a million dollars, since it costs $50,000 to enforce any agreement. Criminal procedure allows 15 million common law crimes, and 100 million internet and identity crimes, a year. Family law has destroyed the family. Admiralty, for Pete's sake, has destroyed shipbuilding. IP has stifled innovation. Name a law subject: the effect has been the opposite of the purpose, and hideously damaging to the nation.
An Amber Alert costs $50,000. It was meant for stranger abductions. Most are feminist-driven abductions by an alienated male parent, a total waste of money.
Sex offender registries are another rent-seeking waste of tax money. Each has a warning: no one may harass the sex offender, on pain of getting arrested. Neither of the girls for whom these laws were named would have been helped by them. One was abducted from her bed by her neighbor across the street. The other opened her door to the neighbor. Jessica was alive in the closet when the police visited the well-known child rapist across the street. He was too cowardly to kill her. Instead, he buried her alive in a bag. He was given the death penalty. The taxpayer likely spent $100,000 for the treatment of his anal cancer on death row. All immunities for the vile lawyer and judge professions should be lifted. Her horrible end was 100% the fault of the vile lawyer profession.
The lawyer profession is an expensive, hideous, toxic prank on the public.
Boy the people in government sure are smart.
Don't expect them to ever change their behavior.
It vaguely reminds me of those dynamic text freeway signs in California. When they were first announced, officials claimed they would only show timely, targeted, and temporary messages: nothing distracting, no advertisements, etc. (The TTT aspect reminds me of every Keynesian ever.) Naturally, they now show Amber alerts, "click it or ticket," drunk driving warnings, etc. No ads yet. But just like the Prop 65 cancer warnings, I no longer pay any attention to them, other than to be reminded that they still say nothing useful.
But you can be sure there are people monitoring the roads and maintaining the signage.
An aggregate claim should be filed against these dipshits. No taxpayer fund should pay for the damages. All damages should come from their personal assets. The taxpayer did not pay for them to cause more crashes.
Today, the most hideous lawyer mistake is the promotion of the electric vehicle, an unmitigated ecological and humanitarian catastrophe, from the mining of lithium by child slaves, to the empowerment of China, to the hideous costs of disposing of these monstrosities. The lawyer cannot be that stupid. There has to be corruption and payoffs.
Behar is starting to remind me of the old joke: two old Russians meet up to complain about conditions in the country.
One says, "The Jews are responsible for our suffering."
The other one says, "Yes, the Jews and the bicyclists."
The first looks confused and says, "Why the bicyclists?"
The other one shrugs and says, "Why the Jews?"
Interesting result, and a plausible explanation.
I rarely think about showing up to the local governing body, but I did a few years ago when they dropped a construction-style sign that flashed "slow down" on a major street close to where I lived at the time. The problem wasn't the sign, but the fact that it blocked about half a lane of traffic on a busy road, so people felt they had to swerve into the opposing lane to get enough clearance on the right.
The engineer actually defended the placement at first, saying it was meant to do that in order to slow people down, and that cars could take turns giving way. The elected officials didn't really buy it, though, and looked at the guy like he was an idiot.
Sign got moved the next day.
I read a newspaper article about a story like that in the Midwest, except there was a permanent obstruction like a chicane meant to force drivers to swerve and slow down. When drivers hit it instead the responsible engineer said it was working as intended, punishing drivers for going too fast.
I wonder what happened in 2016 that caused the deaths-difference percentage to go negative.
I'm also interested in the suggested consequence of their results: that "vital announcement" messages, like Amber alerts or critical traffic announcements, would also cause increases in accidents.
Although perhaps that really shouldn't be a surprise, given that the basic idea should be "don't distract the drivers".
I think it's great to try new and different things to accomplish noble goals like reducing traffic accidents. Assuming that these data are valid (i.e., repeatable), one hopes that cities and counties will either eliminate these signs or change them. Having good intentions should not stop us from saying, "Well, we tried X, but it was a bad idea. Or, at least, a bad implementation of a good idea."
Most new things will be bad ideas and/or bad implementations. But the governments involved mostly don't want to find that out, so they don't look. For example, this study was funded by Canada and the EU -- not Texas or even the US. This is a big problem for honesty and transparency in government policy and services; the citizens involved often have a better sense of where things have gotten worse than the bureaucrats do, so the bureaucrats' credibility is damaged by good intentions going awry.
re: bureaucrats' good intentions going awry
Even where they're well-meaning -- as opposed to, e.g., seeking personal profit (example) or seeking to harm one's political opponents (example 1, example 2) -- many (most?) bureaucrats are incompetent.
As Ronald Reagan said: "The most terrifying words in the English language are: I'm from the government and I'm here to help."
What's the moral of the story? Limited government. Or, as Grover Norquist put it: "I don't want to abolish government. I simply want to reduce it to the size where I can drag it into the bathroom and drown it in the bathtub."
Trying new things isn't bad, no.
But it seems like it took more than 10 years before someone, someone completely unrelated to the people who made this policy, decided to check whether it even worked.
They seem to have just assumed it was a good thing because they had good intentions, or it sounded good at the time, or some other hoary reason.
"Trust, but verify". Not matter how common-sense it seems, or how pure your intentions: always make sure it actually works.
Tucker Carlson: Something Dark is happening in traffic deaths.
https://youtu.be/DGb748VOcYU
I know the right loves to point out the derangement syndromes of the left, but the right is also afflicted with thinking that stretches the facts to the point of absurdity. I give you this segment on traffic deaths as my evidence. Here, Tucker lays the 55% increase in traffic deaths among Black drivers in June, 2020 at the feet of ... police who were told not to enforce driving regulations in the aftermath of the Floyd shooting incident the month before.
Hoo, boy. I'm not saying it couldn't be true, but it would take some powerful evidence to make it stick. Extraordinary claims require extraordinary evidence. Just saying it's so doesn't make it so. Or even very likely so.
People really stink at two things: risk assessment and explaining other people's motivations.
Minor point, but of course there was no "Floyd shooting incident."
Police stopped enforcing traffic laws for fear of being sued for racism by the lawyer profession. Black crashes surged. Murders of blacks surged as well. Police stopped active enforcement. They were deterred from doing anything other than showing up after a 911 call, taking a report, and filing it. Period. Good report.
This is the kind of garbage science common in social sciences.
Correlation is not causation. There could be many simpler explanations, such as that signs are most likely to be placed on roads or at intersections during times when, or in places where, traffic deaths are higher.*
It's very hard to control for this sort of bias. It's also very hard to detrend data. The period for this study is 2012-2017, a period when cellphones and distracted driving grew rapidly. That is probably hard to control for.
Figure 4** of their paper is suspect to me. Their theory is that signs worsen fatalities when traffic is higher. From September through January, signs increase fatalities. So do May, June, and August. July? Down.
* The study never says how TxDOT picked their campaign weeks. They did not pick random weeks, that's for sure.
** You need a crap ton of data to estimate the equation at the bottom of Figure 4 and Table 2. Think about it: they used a pretreatment period (January 2010 to July 2012) and a treatment period (August 2012 to December 2017).
That means that to estimate the effect in January they have five treatment Januaries (2013-2017) and three pretreatment Januaries (2010-2012). They are essentially estimating the effect in January by comparing five observation points to the previous three.
As a side note, I am hypercritical of these dumb social science studies because this is confirmation bias wrapped in fancy difference-in-differences statistical gobbledygook. It's nearly the same type of garbage we see coming out of gun prohibitionist "studies," except in those studies they are less ethical about cherry-picking data.
Yet there is a tort of increased risk. A 4% increase in a condition is not clinically meaningful. Yet the maker will pull the product and disrupt the care of millions of well-treated patients, out of fear of litigation. We are sick of this vile, toxic, deadly lawyer profession.
July being down doesn't mean it is significant, especially with a small number of data points.
As for campaign weeks, from the part quoted in this blog post:
Your claim of bias based on sign location or selection does not make sense: they used a difference-in-differences method to look at relative fatality counts for each sign's locality across years, with the messages running for an entire week. That controls for variation as a function of location and time (of day or of year).
They also compare against the non-campaign weeks of the same month of each campaign week, to detect whether there might have been a year-over-year hidden bias.
Your attention to detail leaves much to be desired.
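[For readers unfamiliar with the method being debated above, here is a minimal sketch of the difference-in-differences logic in Python. The numbers are invented purely for illustration; they are not the study's actual crash counts.]

```python
# Hypothetical average crash counts per road segment per week.
# "campaign" = the weeks later used for fatality messages;
# "pre"/"post" = before/after TxDOT started displaying them (Aug 2012).
crashes = {
    ("pre",  "campaign"):     10.0,
    ("pre",  "non_campaign"): 10.1,
    ("post", "campaign"):     10.5,  # fatality messages actually shown
    ("post", "non_campaign"): 10.2,
}

# Difference in differences: the change in campaign weeks minus the change
# in non-campaign weeks, which nets out any inherent difference between
# the chosen weeks (the "placebo" concern raised in the thread).
did = (crashes[("post", "campaign")] - crashes[("pre", "campaign")]) \
    - (crashes[("post", "non_campaign")] - crashes[("pre", "non_campaign")])
print(round(did, 2))  # estimated effect attributable to the messages
```

With these made-up numbers the estimate is 0.4 extra crashes per segment-week: the campaign weeks rose by 0.5 while the comparison weeks rose by only 0.1, so the week choice alone cannot explain the increase.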
I read the paper, including the "campaign week."
LOL, I bet three paychecks I can go back using the same methodology and prove that TxDOT's monthly board meeting also causes increased fatalities.
I have seen a lot of these types of studies come and go over the years. Fancy statistics are never a substitute for a good old-fashioned randomized controlled trial.
Bayesian inference, how does it work?
You claim the study never says how they picked the campaign weeks, but that's right there in this article (emphasis mine):
So it's not "random" but it's also not dependent on traffic. For the weeks that they don't show the message, they can try to control for any year-over-year changes that had nothing to do with the signs.
"Correlation is not causation."
True enough and yet we know that where there is smoke there is likely to be fire. The correlation is doubtless real. The explanation is plausible. But is not proved.
Your conjectures for the causes are all falsifiable. They are worthy of testing. The claim that one needs a "crap ton" of data is nonsense. One needs enough data to see a five-sigma effect, and that amount is calculable.
Your last paragraph serves only to show that you don't believe the study BECAUSE you don't want to believe it. Your post proved too much.
Do a randomized controlled trial of road signs, then call me.
I drove 250 miles today. The highway is littered with road signs and billboards, of all shapes and sizes.
Think about the absurd implications of this study. Signs cause fatalities. Given how many signs and billboards line every highway, enquiring minds want to know: Is it all signs? Certain sizes, shapes, or colors? Maybe it's not the signs themselves, but messages with the word "accident" or "fatality"? Maybe people slow down for the word "fatality." Maybe it's not signs themselves, but a threshold number of signs.
Ever been on a highway through a major tourist destination like Las Vegas or Orlando? The sign density per mile is astronomical. I am deeply skeptical that the sign density is causing fatalities.
Unfortunately, people see statistics and lose their ability to think critically.
Without the data, it's hard to know exactly where they went wrong.
These signs are unlike billboards in that they are over the road, and unlike most road signs in that they have novel information in an ad hoc format.
Do a randomized controlled trial of the universe's creation and then let us know how well your approach to science works on hard problems.
Please, try to actually read the paper. It explains almost everything you've mentioned, aside from that series of "what ifs," which are meaningless. What if the increase in accidents was a man with a laser blinding people? Or a secret plot of the mole-servants of the Great Subterranean Lizard Empire? You're welcome to gather any data you want to try to disprove those "alternate theories" you've invented... but good luck with that.
As a former professional statistician, I did read the paper. It is well written and covers all the necessary bases for a solid analysis. The conclusions drawn are specific to the data covered, and while it suggests things (especially in conjunction with the hundreds of studies about distracted driving) it does not claim to be absolute proof.
The biggest weakness of the study is the relatively small amount of data - only 7 or 8 years. But despite that, the results show both significance and effect, which means that it is now up to you to show contradictory evidence (or fundamental flaws in the study) if you want to argue they are wrong.
You hit on one of my long-time gripes about traffic control. Traffic signs are treated as marketing tools. I'm driving through heavy rain in poor visibility and a flashing sign distracts me from the road because I couldn't figure out for myself that the road was wet. A three-page message (the legal limit is two) warns me of a 15-minute road closure at an unspecified time next week or reminds me of the current federally funded ticket blitz. None of these do even the tiniest bit of good.
Don't take the speed limit or trooper comparisons seriously. A study from last year observed that over a very wide range of ticketing levels on major highways there was no correlation with safety. On speed limit changes, here is the typical garbage that dominates the literature. Utah increased speed limit to 80. Deaths went down. A report by the IIHS (perennial liars) said the decrease must be due to economic conditions or some such, anything but speed limits. A few years later on the same road with no additional changes deaths went up. IIHS said the original speed limit increase must be responsible.
We're suffering from sign fatigue because every local squeaky wheel resident and ignorant councilor thinks we can't live without a bright yellow sign every 100 feet. They were even more closely spaced than that where I used to live. Bicycles! Crosswalk ahead! Crosswalk here! Bicycles! The signs were two seconds apart at urban traffic speed. If you spent one second looking at each sign that would be half your attention gone. And what are you going to do after looking away from traffic to read "(bicycle symbol) / share the road"? There are no bicycles around. In that specific case, the Federal Highway Administration proposes to delete the word message "share the road" because nobody knows what it means. But the bicycle symbols will no doubt remain because cities need some way to signal they are bicycle-friendly without spending money on bicycle infrastructure.
Signs come from torts of failure to warn, to disclose. They are a lawyer scam. A German town ended crashes by removing all traffic lights.
Road signs in particular are not there because people are suing cities complaining that they weren't warned about pedestrians or bicycles, OK? Not everything is about your hate of lawyers.
Because I doubt the findings, I did not read every word. Had the effect been seen only in the immediate vicinity of a sign, you could plausibly attribute it to distraction. You would then somehow have to sort out how every other unexpected visual input would not deliver a similar distraction, and a similar accident boost, and then figure out a way to untangle those effects.
I suggest another challenge to think about. In the Boston area, for instance, mysterious-looking traffic fluctuations occur. Commuters get familiar with the routes they drive. They know what to expect of traffic at specific times on specific days of the week. They learn the seasonal variations which make differences. Then, sometimes, those expectations are simply confounded. It seems as if some cryptic signal has gone out, to call a notable surplus of drivers into action all at once.
Surplus drivers happen predictably before holiday weekends. They happen predictably during the 10 days before schools open. They happen unpredictably and unexplainably at apparently random other times.
Perhaps a cause can be guessed at. A Patriots game in Foxboro may plausibly explain more traffic on the Southeast Expressway, many miles away, during an interval before the game. Other times drivers are stuck with heavier-than-normal traffic, and it remains a mystery. At those times, some confluence of unknown factors apparently synchronized surplus traffic activity in an atypical way, and that is all the explanation anyone can hope for. Also unknown is what other activities may be synchronized with the traffic, or by the traffic.
Maybe local doctors, as a group, prefer Thursdays for scheduled patient visits, and Tuesdays for golf. Could affect traffic in the medical district, with second-order effects who knows where. Maybe arrival pulses at the airport vary from day to day—or according to weather variations at remote departure locations—and dump unexpected traffic at unpredictable times. Who knows why these things happen? Who knows what factors interact?
So for this study, is some unknown factor, or combination of factors, synchronizing days, traffic sign appearances, and traffic count variations? Do the researchers have a day-by-day accurate count of the traffic at all their sites? As I scanned the report, I looked for that, but did not find it. I might have missed it.
I don't think you can conclude much about causality unless you count how much traffic there was, and also know what accident frequencies are characteristic of what traffic counts at what various measured locations. That seems a lot to ask.
And by the way, has anyone asked whether the signs with the fatality data are in random locations, or instead in locations chosen by workers following unknown criteria? Criteria related to work schedules, to other time factors, to traffic levels, to highway design factors, or to accident frequency at particular locations would all suggest the possibility of confounding variables, which might in some indirect way link to traffic conditions and accidents which prevail on the days and times when the signs get monitored.
Even systematic placement of the signs at regular distance intervals might produce some measurable effect to be accounted for, especially if other features with potential to affect traffic were also systematically placed. Maybe drivers would encounter rhythmic combinations of distractions, syncopated distractions, or traffic fatality report signs intermingled with Burma Shave ads.
Traffic engineers seem to understand they face analytical problems which are complex, and almost infinitely various. They know from experience that some of their designs work better than others, and they cannot always predict which will work best. Maybe something similar could be said of this report.
So there are a lot of words to explain why I withhold judgment about the conclusions suggested.
As usual, the more Stephen Lathrop writes, the less worth reading it is. Because he did not read enough, and especially because he admits that was because of his mental biases, he is a talking ass.
The study addresses the specific concerns raised. The fatality rates after these signs during a "campaign week" were compared to fatality rates for the same distances before the signs, in previous years, and in the non-campaign weeks of the same months as the campaign weeks. The "signs with the fatality data" are all the signs being studied:
(emphasis added)
The non-specific concerns raised, like unknown, unspecified and implausible "synchronized surplus traffic", are typical Lathrop garbage words.
The study addresses the specific concerns raised.
Good. So the study published site-specific traffic counts for each instance where measurements were taken. Glad you cleared that up, Michael P.
That's not a concern, that's a demand for a pointless degree of detail. Can you produce a certificate that specifically says you know more about statistics than a bowl of jelly?
"In the Boston area, for instance, mysterious-looking traffic fluctuations occur."
Indeed, SL, in Boston even a single car can be a traffic jam. More seriously, from six years of relatively recent experience, the kind of fluctuations you describe in Boston are real, and they can occur with no noticeable change in traffic density or road condition.
I speculate that Boston traffic operates, for a given road condition, at a density very close to the phase transition described in Ising-type models of traffic flow. (Phase transitions: just think of how, when water is at 99.8°C, it takes very little added heat to boil it and turn it to steam.)
"Here's some incredibly, incredibly, incredibly messy data. But we're super smart and wonderful and extremely confident in picking just the right things to control for (because we're so smart). So here's this tiny effect and we're really confident we're just so so right. Did I mention how smart we are?"
And people wonder why trust in "experts" declines.
That kind of vague whining and inaccurate paraphrasing is a very effective way to convince all of us that you have not even bothered to think about what they looked at.
Without looking deeply into the data, what I have seen so far looks much better than the average retrospective study. I have seen a few where the only reasonable conclusion was "no provable effect". One out of 200 cars reduced its speed by 1-3 miles per hour. Speed limits work! (And another one increased its speed by about the same amount, but you can bin the data to hide that fact.)
The effect reported in the Texas study does seem larger than I would expect. What makes me more willing to believe it is that the study is not trying to contradict a large body of research. We know the average driver pays little heed to speed limit signs in the absence of a credible threat of enforcement. When a study blames or credits speed limits, I have reason to be skeptical. We don't know what fraction of drivers is affected by large electronic message signs. (Or I don't.)
Exactly this!
I went into the study expecting it to be another sensationalist screed with questionable math, like almost every other statistical 'study' that shows up in the news.
But this one was really well done, and the authors seem to have accounted for or acknowledged all the potential major flaws I could see in their analysis.
While it isn't indisputable proof, it does make a sound argument that there is something about those, and similar, signs that is harmful.
4.5% is not small...
4.5% may or may not be small depending on the size of the range of probable values, i.e., the standard deviation of the measurement. But if the number is precise and accurate, then 4.5% is not insignificant.
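[To make the point above concrete: whether a 4.5% effect is "small" depends on how precisely it was measured. A quick sketch with an invented standard error, for illustration only:]

```python
# The study's headline estimate: a 4.5% increase in crashes over 10 km.
effect = 0.045

# Hypothetical standard error of that estimate (NOT from the paper;
# chosen only to illustrate the calculation).
std_error = 0.012

# Rough z-statistic: how many standard errors the effect is from zero.
z = effect / std_error
print(round(z, 2))  # compare against ~1.96 for significance at the 5% level
```

With this made-up standard error, z is about 3.75, comfortably past the usual 1.96 cutoff; with a standard error of, say, 0.03, the same 4.5% estimate would not be statistically distinguishable from zero. The point estimate alone tells you neither.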