In a public docket accessible online, there is a list of the 71 companies that the NTSB classified as “curbside” and the 51 companies classified as “conventional.”
These lists are jaw-dropping. Greyhound and Peter Pan—the two most iconic conventional bus lines—were categorized as “curbside” carriers.[**] It’s as if a major study of the restaurant industry had classified McDonald’s and Burger King as leading outdoor food trucks. The curbside list also includes lesser-known carriers like Martz Trailways and Fullington Trailways. I spoke with officials at both of those companies, who confirmed they are conventional bus carriers without any curbside lines.
On the “conventional” carrier list, Hampton Jitney is listed. Apparently the study authors never took part in the New York City summer ritual of getting picked up on the streets of the Upper East Side for a weekend on Long Island’s beaches. (A call to Hampton Jitney confirmed that the company has no lines that pick up or drop off at a conventional station.)
The largest company on the conventional list is New Jersey Transit, a statewide public transit system, with its 2,172 buses. If public transit systems meet the study criteria, why stop with New Jersey Transit?[***]
The NTSB report states that during the study period these 71 curbside companies had 37 accidents with at least one fatality, and with a total of 52 fatalities.[****] I correlated the list of companies with federal accident data and came up with almost the same results: 37 fatal accidents and 51 fatalities. (My list is available here.)
Then I called every “curbside” company on the list that had experienced a fatal accident, including Greyhound, which was responsible for 24 of those accidents. Nearly every company I reached told me it was not a curbside operator. In all, 30 of the 37 accidents that the NTSB classified as involving curbside buses did not involve a curbside bus. This alone invalidates almost all the study’s findings. But that’s just the beginning.
The study reported that curbside carriers had a fatal accident rate of 1.4 per 100 buses, while conventional carriers had a rate of 0.2 per 100 buses. Since 1.4 is seven times 0.2, that’s how the “seven times” figure got reported. The numerator behind that 1.4 comes from the tally of 37 fatal accidents, so we already know the calculation is wrong. But what about those “100 buses” the study put in the denominator?
Some press write-ups (see USA Today, Reuters, and the Los Angeles Times) naturally assumed that the study meant that curbside buses were seven times more prone to fatal accidents. To arrive at that figure, the study authors would have had to add together all the buses operated by curbside companies. In fact, had the NTSB calculated the results in this way, the data would have shown essentially no difference between the fatal crash rates of curbside buses and conventional buses, even setting aside the 30 accidents the study mistakenly attributed to curbside buses.
But as Aaron Brown first surmised and as the NTSB confirmed in an email, the study authors took a different, and misleading, approach: they calculated the fatal accident rate for each bus company and then averaged the company rates together without taking each company’s size into account.
This would not have been such a problem had the number of buses operated by each company been about the same. But that wasn’t at all the case.
Consider Greyhound—let’s say for a moment that the study was right in calling it a curbside bus company—which had 1,515 buses and 24 accidents. Another bus company on the list, Sky Horse Bus Tour, had one bus and one fatal accident. The two companies were given equal weight. As a counterfactual, suppose Sky Horse’s single accident hadn’t occurred, but Greyhound had had 1,515 fatal accidents instead of 24. The NTSB would have come up with the same “seven times” finding. It’s as if a rookie baseball player with three at bats and one hit received the same ranking as a starter with 600 at bats and 200 hits.
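A few lines of arithmetic make the distortion concrete. This is a minimal sketch using only the two companies named above (the figures come from the text; the other 69 companies on the curbside list are omitted for illustration), comparing a pooled rate against the NTSB-style unweighted average of company rates:

```python
# Two companies from the curbside list, with figures cited in the article.
companies = [
    {"name": "Greyhound",          "buses": 1515, "fatal_accidents": 24},
    {"name": "Sky Horse Bus Tour", "buses": 1,    "fatal_accidents": 1},
]

# Pooled rate: total accidents divided by total buses, per 100 buses.
total_acc = sum(c["fatal_accidents"] for c in companies)
total_bus = sum(c["buses"] for c in companies)
pooled = 100 * total_acc / total_bus

# NTSB-style average: compute each company's own rate, then average
# the rates, giving a one-bus fleet the same weight as 1,515 buses.
rates = [100 * c["fatal_accidents"] / c["buses"] for c in companies]
unweighted = sum(rates) / len(rates)

print(f"pooled rate:              {pooled:.2f} per 100 buses")
print(f"unweighted company average: {unweighted:.2f} per 100 buses")
```

Sky Horse’s single accident on a single bus yields a company rate of 100 per 100 buses, which swamps Greyhound’s rate of about 1.6 once the two are averaged without weighting; the pooled calculation, by contrast, barely notices the one-bus company.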
Even if the NTSB’s method of ignoring company size told us something—and even if the accident data weren’t wrong—the NTSB’s “seven times” finding would still have little meaning because it doesn’t achieve what researchers call statistical significance.
The chart used to arrive at the “seven times” finding is pictured to the right. In the middle of each measure are two vertical lines called error bars, drawn according to a “95% confidence interval,” a standard gauge of statistical significance. The bars completely overlap. This means the results could easily have occurred purely by chance. A research journal with any standards would have flagged this finding as inconclusive and not fit for publication. Instead, the NTSB promoted this number to reporters without mentioning how little it actually means.
“The key to statistical analysis is that it is innocent until proven guilty,” says University of Pennsylvania Wharton School statistician Ed George, who examined the NTSB study for the purposes of this article. “You would start with the assumption that there’s no difference in the safety rating of curbside and conventional bus companies. Then you look for persuasive evidence otherwise. The error bars overlap in this chart, so there is not persuasive evidence.”
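The overlap check George describes is mechanical: if the two intervals share any common ground, the apparent difference is not persuasive evidence. Here is a small sketch of that test; the interval endpoints below are hypothetical placeholders, not the study’s actual values, which the chart does not report numerically:

```python
def intervals_overlap(lo1, hi1, lo2, hi2):
    """Return True if two confidence intervals share any values."""
    return max(lo1, lo2) <= min(hi1, hi2)

# Hypothetical 95% confidence intervals, per 100 buses; the point
# estimates (1.4 and 0.2) are from the study, the bounds are made up.
curbside = (0.1, 2.7)
conventional = (0.0, 0.5)

if intervals_overlap(*curbside, *conventional):
    print("Error bars overlap: the difference could be due to chance.")
else:
    print("Error bars do not overlap: evidence of a real difference.")
```

Because the study’s plotted error bars overlap completely, the check above would come back positive for its data, which is exactly why the “seven times” headline number fails the innocent-until-proven-guilty standard George describes.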
Other problems with the NTSB study abound. The agency had no data on miles traveled, generally a key measure in any analysis of transportation safety. And the study is derived from a federal data set known for its errors and omissions because it relies on local law enforcement agencies to voluntarily report data only every two years. The study acknowledges these limitations, but the press release didn’t mention them.