Driverless Trucks Will Be Great for Consumers, Safety, and Truck Drivers
Don't buy the doom and gloom over autonomous big rigs.

No longer content with merely killing off manufacturing and agriculture jobs, the looming specter of automation is now coming for the livelihood of truck drivers. At least that's what media and politicians are saying.
In January, Rolling Stone warned that "automated driving could offer a dark replay of the decline of factory work," while the Guardian fretted last year that "self-driving trucks are set to lay waste to one of the country's most beloved jobs."
Goldman Sachs predicts job losses of 25,000 per month for all professional drivers once automated vehicles hit their peak, and Republican Gov. Charlie Baker declared in February 2017 that self-driving trucks had the potential to cause "a tremendous amount of economic hardship."
Lurking behind all the doom and gloom predictions of yet another profession biting the dust thanks to technology is a surprising fact: The trucking industry is struggling to attract workers to this "beloved" blue collar profession.
The American Trucking Associations (ATA) estimates that at the end of 2017, the trucking industry was short by some 50,700 drivers. That number is expected to rise to 63,000 by the end of 2018. Should current trends continue, the ATA predicts the trucking industry will be down 174,000 drivers by the end of 2026.
With the nation's current crop of self-driving freight trucks only capable of the occasional beer run, automation certainly can't be to blame for this dearth of drivers. Instead, much of what is dissuading potential new entrants is inherent to the job itself. The ATA's report identifies the lifestyle trucking requires, with long hours on the road away from home, as one of the major factors turning off potential new entrants to the field.
Individual drivers say much the same thing.
"I'm away from home for months at a time," says Ellie O'Daire, a driver for Jim Palmer Trucking. "When I do take time off, it's often nowhere near where I live."
For O'Daire, who describes herself as a "loner," the solitude of the job is not too much of a burden. But for a lot of drivers, she says, "it can be very stressful on any kind of relationship, any kind of family, any pets that you can't take with you."
Compounding this issue is a growing economy that is giving would-be big rig pilots better options in similarly skilled fields that do not require this kind of sacrifice, says Don Lefeve of the Commercial Vehicle Training Association.
"The good economy oddly enough is a bit of a headwind for recruiting drivers," Lefeve tells Reason. "People are choosing to go into professions other than trucking. It's either more home time, or a better wage."
A recent Wall Street Journal article notes that the trucking sector added only 5,600 jobs in February, while construction jobs grew by 61,000 and manufacturing by another 31,000.
Lefeve—whose organization represents truck driving schools—also notes the regulatory hurdles facing the industry, including a prohibition on drivers under 21 servicing interstate routes and state backlogs in certifying newly trained drivers.
The result of these hurdles is an aging pool of drivers who are not able to keep up with the demands of the industry. According to a 2017 International Transportation Forum (ITF) report, the average age for a truck driver in 2015 was 47.5 years old. That is four years older than the average U.S. worker in 2015, and three years older than the average truck driver in 2005.
So rather than killing a fledgling profession, self-driving trucks could instead be seen as offering an aging industry a way to automate positions humans are increasingly unwilling to fill.
Indeed, that same 2017 ITF report offered a number of different scenarios for how driverless trucks will impact the job market. Under more conservative estimates, the ITF predicts the effects of an aging workforce and new workers being dissuaded from entering trucking will result in little job displacement. More "disruptive" scenarios predict a greater degree of machine substitution for human drivers on longer routes, which would lead to roughly two million lost jobs across Europe and America combined by 2030.
But the report also suggests that driverless trucks—by decreasing the costs of hauling freight—would result in an increase in demand for freight. This in turn could lead to an increase in the demand for local freight drivers, who transportation companies depend on to not just pilot vehicles, but also interact with customers, load and unload cargo, and choose the most expeditious local routes. In other words, automating long-haul trucking could result in more short-haul trucking jobs for human drivers.
An economic analysis done by Uber, which is currently experimenting with driverless freight trucks on Arizona highways, predicts an increase of 400,000 truck driving jobs as a result of the deployment of self-driving trucks.
Then there is the safety factor. Trucking is by far the most dangerous mode of freight transportation. According to the Bureau of Transportation Statistics, some 722 people lost their lives as a result of accidents involving large trucks (defined as trucks over 10,000 pounds). That's compared to 514 people killed by freight rail, 12 killed in water freight accidents, and zero killed by U.S. air carriers.
Self-driving trucks offer the opportunity to lower these fatalities considerably. For O'Daire, that makes them well worth rooting for.
"I am out here every day and I see people do unbelievably stupid things trying to save a half second or a second off their commute," she tells Reason. "I would love to see a world in which people don't die from car accidents anymore."
As for what the emerging technology will mean for her own job, O'Daire is similarly cheery that more autonomous vehicles will be a blessing to truck drivers, at least for a while. "I think there is going to be a really great decade or so where it makes my job extremely easy before I lose the job entirely."
Well, if it keeps one pet from being lonely - - - - - -
If it saves just one life... That seems to be the logic reason has embraced on this. It is also funny how about every week reason publishes an article about how amazingly safe modern life is. Yet, whenever the subject of self-driving cars comes up, suddenly modern life is some version of Death Race 2000.
Driverless trucks are a pipe dream. This is not so much a software problem as a manufacturing problem. Manufacturing problems are much harder to solve. Coming up with a single example of a truck that can safely drive in a narrow set of circumstances is something entirely different from building thousands of trucks that will be expected to drive in every circumstance imaginable and some you haven't thought of.
These claims of safety come from comparing the accident rate of a one-off vehicle operating under a narrow range of conditions. That is like taking the accident rate of the best driver in the best vehicle operating in the best conditions. What is the human accident rate under those conditions? Pretty close to zero. That rate is not translatable to widespread use. Widespread use is a much more difficult problem and not one that is going to be solved nearly as quickly or as easily as is claimed. And the stakes with an 80,000-pound truck are so high that these problems are going to have to be solved to a very high degree.
Driverless trucks are a pipe dream.
Said John, the guy who is consistently opposed to automated driving and will be proven dead wrong in short order.
What reason do you have to believe that other than wishful thinking? If these things are so close to being safe, why is Uber so keen on ensuring there is no public examination of the crash that killed that woman?
Just because you want something to be true, doesn't mean it is.
What reason do you have to believe that other than wishful thinking?
Historical achievements in technology trumping hand-waving.
If these things are so close to being safe, why is Uber so keen on ensuring there is no public examination of the crash that killed that woman?
The politics of the thing demand it. Clearly the car is at fault; the car fucking failed to see the woman. But pretty much anyone would have failed to see that woman. Humans are responsible for an enormous number of vehicle casualties, but try banning humans from driving and see how far that gets. Autonomous driving offers a safer, better method of getting around, but the deep-learning methods of getting there need time on the road to improve.
Historical achievements in technology trumping hand-waving.
So where is my jet pack and space station? Just because some technology works doesn't mean all of it will. Claiming "but technology" is just begging the question and a non-answer.
The politics of the thing demand it. Clearly the car is at fault; the car fucking failed to see the woman. But pretty much anyone would have failed to see that woman.
That is not true. The car failed to see the woman because she stopped moving, on the assumption that the car would swerve to miss her once it saw her. Had a human been driving, that is likely what would have happened. It didn't happen here because the car can't tell one stationary object from another. It just sees objects that either move or don't, and it ignores all objects that don't move. That is a huge problem. You can't have driverless cars that can't tell the difference between a stop sign and a deer or a person. Worse, since they just ignore motionless objects, they won't avoid things like fallen trees or debris in the road.
There is no quick or easy solution to that problem. The lidar only tells you something is there and, at best, its size. No one has a good idea of how to get it to pick out fine details about an object, much less get a computer to recognize them. Your eyes are a hell of a lot more complex, and the process by which your brain sees is far more complex and cognitive than you think it is. And this is like number 20 of the 100,000 problems they will need to solve.
The resolution of LIDAR is better than your eye. Depending on the system the refresh may be a little slower. Does being able to see 10 lp/mm on a tree actually aid you in avoiding it? No.
Sure it can be better than the eye but that is only half the problem. You have to have the computer recognize what it is seeing. And no one has any kind of grasp on how to do that with the kind of precision humans do.
You don't even understand the words you're using. What is "precision" to you? What is "accuracy?" Automated facial recognition is generally superior to human recognition. Our big advantages are understanding context, something an expert system can never do, and adaptability. But we have tons of weaknesses as well.
Convolutional neural networks are already better at this than humans, John.
The resolution of LIDAR is better than your eye.
This is exactly the sort of magical thinking I referred to in my other post.
Do you even have the slightest clue what goes into a complete system to turn coherent light into a decision to avoid a woman pushing her bike across the road?
You're talking about the correct functioning of dozens (if not hundreds) of inter-dependent embedded subsystems, which is itself one of the most difficult and error-prone (though admittedly fastest-growing) areas of engineering today.
The restless hype for autonomous driving NOW is entirely dependent on the fact that most people today interact with an infinite array of technology they don't remotely understand, so telling them "LIDAR has better resolution than your eye" is no more magical, to them, than reading Reason over the internet on their tablet.
Autonomous driving will unquestionably have its day. But only the ignorant believe that day has arrived, and only the insane are demanding this technology see widespread application in the immediate future.
A proper development cycle for something of this degree of complexity would be at least another decade, probably more like 2 or 3, assuming today's degree of accomplishment.
Do you have any understanding of the meaning of "resolution?" Do you think it means anything more than "resolution?" Did you understand this part:
No, no you didn't.
Yes, let us push this frankly miraculous technology back another three decades when it could be ready in one. Stop being such a wimp.
I don't think people will tolerate our public roads being a laboratory for some monied interest.
So where is my jet pack
https://www.youtube.com/watch?v=gCYSWyHDpfU
Technology is capable of things that are seemingly miraculous. As for space stations; we've had commercial private launching for just a decade. Cool your heels and see what happens.
Back to my point, having a few examples of something is a completely different problem than putting something into mass use. You think these things are coming into mass use because a few of them exist and operate in limited conditions. Sorry, but it doesn't work that way. If it did, we would have a lot more gadgets and advanced technology than we do.
John sees a working jetpack and hauls the fucking goalposts across the field.
If it did, we would have a lot more gadgets and advanced technology than we do.
John pretends that we don't live in a world that is saturated with technological gadgets of sophistication thought nearly impossible mere decades ago.
John is, I think, an old fart. I forgive him his pessimism and lack of understanding.
John is just terrified that they're going to take his driving privs away. It's not a completely irrational fear, but it is leading to irrational arguments.
You have irrational faith in technology and grossly underestimate the problems involved. And if these things ever are perfected, we will lose most of what is left of our privacy and autonomy. That is a dead certainty. How anyone could be so scared of the small dangers of traffic as to welcome that is beyond me.
You have irrational faith in technology and grossly underestimate the problems involved.
You have an irrational skepticism of technology and grossly overestimate the problems involved.
And if these things ever are perfected we will lose most of what is left of our privacy and autonomy.
That is pure speculation.
And again you demonstrate your small world view. This goes well beyond safety. It increases the carrying capacity of roads. It has the potential to radically transform and downsize all of the waste of mass transit. It allows disabled and elderly people to remain independent. There are a host of opportunities for this beyond your obsession with fear.
Technology is capable of things that are seemingly miraculous.
No, it isn't, and it's irresponsible to claim otherwise.
Not a single major technology in use today is dependent upon the "miraculous". Every principle is understood down to the smallest nuance by the engineers who seek to use it in practical, functional, successful applications.
I realize I may seem to be ignoring your use of the word "miraculous," but that's precisely because you seem to be under the impression that most autonomous driving skeptics are laymen. Quite the contrary: although engineers who earn a living in the field are enthusiastic (as one would expect), the balance of enthusiasm comes almost entirely from laypersons, while from what I've seen, engineers who don't work in the field are the source of most of the skepticism.
That's because engineers understand that complexity can't be handwaved away simply due to how far we've come.
Driverless car technology is of a complexity that makes moon missions look like a physics homework problem.
We'll get there, but we aren't even close today.
I'm glad that you think quantum effects are understood "down to the smallest nuance." I eagerly await your Nobel acceptance speech.
Technology is capable of things that are seemingly miraculous.
"Aeronautics was neither an industry nor a science. It was a miracle." - Igor Sikorsky
It's called artistic license. Sikorsky was an aviation pioneer and developed helicopters and fixed-wing aircraft. He was also a deeply religious man. But he was not irrational.
Driving a car may seem like an incredibly difficult problem, and in terms of what computers can do today, right now, it is. But it's not an impossible problem: driving is a game played by a set of rules, on a board of varying quality, with other players of varying skill. It is so simple, people do it without thinking, largely because they trust everyone else to play by the same rules. The fundamental problem is teaching a computer program to play that game. Someone said AlphaGo was the final word on computers playing board games. Driving is an extension of the same idea. The engineering hurdles are enormous, but so were the hurdles to achieving flight.
Self-driving trucks could be implemented in parts. First you might have them only take the truck from Chicago to Dallas during the part on the freeways, and have a human do the hard part of traversing within the cities. An automated fueling system could be set up, or one could have humans meet the truck at designated fueling spots.
How will the Teamsters react? I expect some foul play.
Because Uber fucked up.
Next question.
Indeed.
How did Uber fuck up? The only way I can see that they fucked up was using the car. So, yes, they don't want people to know how badly they fucked up by using such an unsafe technology.
Uber fucked up because the multiple sensors should have had no issue detecting her. The car appears to never attempt to brake all the way up to the point of impact. This was not a detection issue, this was a failed decision. And that decision was entirely of Uber's making.
It didn't brake because the woman stopped moving and the car was programmed to ignore stationary objects. It has to be that it is unable to tell the difference between objects; it only knows to miss moving ones. That is a big problem, and not Uber's fault but a failure of the technology.
You'll need to provide a citation for the claim that the car ignores stationary objects. You'll also need to provide evidence that she even stopped.
A car can't ignore stationary objects or it's going to hit a stopped car in front.
The evidence that she stopped is the video. Go watch it yourself if you don't believe me. As for stationary objects:
http://ideas.4brad.com/it-certainly-looks-bad-uber
While a basic radar which filters out objects which are not moving towards the car would not necessarily see her, a more advanced radar also should have detected her and her bicycle (though triggered no braking) as soon as she entered the lane to the left, probably 4 seconds before impact at least. Braking could trigger 2 seconds before, in theory enough time.
To be clear, while the car had the right-of-way and the victim was clearly unwise to cross there, especially without checking regularly in the direction of traffic, this is a situation where any properly operating robocar following "good practices," let alone "best practices," should have avoided the accident regardless of pedestrian error. That would not be true if the pedestrian were crossing the other way, moving immediately into the right lane from the right sidewalk. In that case no technique could have avoided the event.
Do you mean literal RADAR or just sensor in general?
What the hell are you talking about? That isn't how radar works. "Basic" radar just fixes direction and distance. It tells you nothing about relative motion. To get that you have to keep emitting and comparing. Doppler radar provides relative motion through the frequency shift of the return. Radar sure as hell can see stationary objects.
Of course it would be true. Do you think the sensors stop at the curb? The only reason it would not be true is if there was some obstruction hiding her, e.g. trees or signage.
You seem to have no reasoning behind your position other than "God damn it, it is technology, and only a Luddite could be skeptical of it." Sorry, but not all technology works as advertised and not all problems are easy to solve. Sometimes life is just like that.
Hard to argue with that hand waving.
It is not hand waving. Hand waving is "the technology will work because technology always does." No, it doesn't always. And I have given a long list of reasons why the problems associated with driverless cars will not be solved any time soon. And neither you nor coins has offered any response to them other than "but technology always conquers," which is hand waving. If you have anything intelligent to say about this or anything to add other than blind faith, add it. If not, go join a church where blind faith is valued.
Hand waving is also saying that the technology can never work, because reasons! The only things that can never happen are things that violate the laws of physics (and even those might be possible because our understanding of those laws is always changing).
You can present your "long list" again if you wish. I haven't seen you make much of a case at all.
AI competent enough and reliable enough to operate under the huge variety of conditions required for millions of robotic cars to be in everyday use is a problem so complex that it is highly debatable whether it is even possible. To say that it is requires an understanding of human consciousness that we do not currently have and are unlikely to have anytime soon. That is not hand waving; that is reality. You are just pretending something is possible because you want it to be possible. It might be possible, but we are not even close to knowing whether it is, let alone actually achieving it.
This is what we call hand waving.
Navigating a vehicle down a defined path while avoiding collisions does not require replicating human consciousness.
Navigating a vehicle down a defined path while avoiding collisions does not require replicating human consciousness.
No, it doesn't. But there is a lot more to driving than that. I get it. You want to believe. And I honestly hope they figure it out. These things are not a good thing, but the technology could be applied in other areas for good. I just don't have the emotional and religious fervor to believe that they will. Things are just not this simple, and not every problem is just a question of throwing more computer power at it. You are going to be very disappointed by this technology. It isn't going to replace most driving, let alone create some science fiction world of no traffic and packs of robotic cars doing 150 like is claimed. It just won't.
No, you have the emotional and religious fervor to believe that they won't.
Bailey thinks these things are going to be here next week. You think they are impossible. Neither of you has any experience developing complex systems.
My position is pretty funny, really. I think that they're about 10 years away, which is disappointing because I could use one starting about 10 years ago.
You, of course, are right Skip. Technology advances in small quantum steps, largely through trial and error. This was an error that will be overcome.
John is like those people who said man would never fly - or at least it is too dangerous to attempt - that we would never have wrist-phones like Dick Tracy, and that watching a baseball game in real time from a distance was impossible. He simply has no imagination or experience developing technology. Perhaps read a bit about the development of the automobile, plane, phone, computer, or petroleum refining. They all start out with lots of inefficiencies and issues and slowly evolve into super reliable and safe systems.
Driverless trucks will roll out [sic] before driverless cars. Sure you'll see a driverless car here and there before them, but it will be the trucking industry that first adopts the technology industry wide.
The real obstacle is the government. There will be more accidents with trucks simply because trucks take longer to come to a stop. Some idiot swerves in front of one and it won't be able to stop in time, with or without a driver. But government will wring its hands and step in. But they can't stop it forever.
Look at where the research money is going today, and more of it is heading toward trucking than it is to your junker in the driveway.
If you can get them, and the associated bureaucracy to work together.
I think this is key. Get the FAA to come up with liberal rules on drones, and suddenly it makes sense for an autonomous truck to have a drone flying a mile out front analyzing road and traffic conditions. Maybe have a second one out ten miles that flags conditions back to the home office where a human can manage exceptional issues.
There are all sorts of other bureaucratic obstacles that limit possibilities.
Let's hope they can do better than Tesla in determining what a gore point is.
The important thing is we need the government to do something. Anything.
Roll on highway, roll on along,
Roll on Uber till you get back home,
Roll on program, roll on self,
Roll on Google like I asked you to do,
And roll on eighteen-wheeler, roll-on
Sorry Britches, but I have it on good authority from very confident-sounding internet randos that autonomous vehicles are 1) bad, 2) unbelievably dangerous, and 3) impossible.
And we should believe the head Rando named Akston that they are a miracle and will make us all safe and happy.
Self awareness just isn't something you do is it Hugh?
Hey, if a robot can identify and flag hate speech, it can identify an object pushing a bike across the road.
It was smart enough to convince the world it was an accident and managed to shift the blame to stupid humans.
http://www.roadandtrack.com/ne.....iving-car/
While Herzberg's death might have far-reaching ramifications for the industry as it attempts to regain public trust, the legal aspect of the case "has been resolved," according to Cristina Perez Hesano, an attorney with the firm of Bellah Perez in Glendale, Arizona that Herzberg's family retained after her death.
The terms of the settlement were not made public. Hesano also said that Herzberg's daughter and husband, whose names were not disclosed, will be making no statements on the matter.
Whatever the agreement, Uber will now be spared a trial which could have made key components of its self-driving technology public record. Likewise, the question of who is at fault when a self-driving car hits a human will not be argued in a court of law. At least, not yet.
All of the claims about self driving cars are true. That is why Uber was so eager to have a trial that examined their safety. And why they felt the need to pay these people not just to go away but never say anything about it publicly again. That speaks volumes about their confidence in their product.
Hmm.
That reasoning sounds familiar.
Sure they should have. But they didn't. And they didn't on a one-off prototype. How many times won't they when you produce hundreds of thousands of these cars and have people use and misuse them in every way imaginable? You grossly underestimate the problem here. If they can't get one car to work perfectly, the imperfections of hundreds of thousands will be enormous.
OK, let's play John logic. One drunk driver proves that humans cannot drive. Gee, that was easy.
The difference with what happened here is that we can figure out what failed here in the Uber software and correct that corner case. Every car then potentially learns from that. Can we do that with human drivers? 1MM+ annual DUIs pretty conclusively proves that to be a no.
We are talking about manufacturing here. Making one of something is nothing compared to making thousands of it and seeing people use them in ways you could never anticipate. You either don't understand the difference between writing a program and making something, or you refuse to remember because you don't like what it tells you.
Look at those hands wave. Why, they're going so fast only a computer could see them.
That's a shame.
Daytime street view of where she crossed
2 lanes from one direction - huge safety island - streetlights on each side. Bus stop on other side of median - 'desire' bike paths to the right which link to 'bike path system'. Only natural risk there is the freeway overpass which the car would be under when ped decides to cross.
Intersection where that 'legal crosswalk' is
7 lanes - ped has to look in 4 directions - no median island - no streetlight in middle so 5 lanes of 'dark' - lights timed for cars not peds so sprint - no 'crosswalk' marking so drivers won't nec yield to ped in street.
There's a reason Phoenix has highest ped death rate in US. Make it legal to cross only where it is unsafe and illegal to cross in the naturally safest (and prob busiest for peds/bikes) stretch where a ped-triggered crosswalk SHOULD be. With this settlement, neither that prob nor robocar can be discussed.
In January, Rolling Stone warned that "automated driving could offer a dark replay of the decline of factory work,"
Jesus Christ. Does anyone actually look forward to pushing buttons and watching shit roll by in a factory? Those dipshits at RS should be mandated to work in one for a month, see how much they like it.
I don't think trucking jobs going away is a bad thing. It is just very unlikely to happen anytime soon.
Even if one day driverless cars are reasonably safe, I still say if we can't get driverless technology when the vehicle is trapped on two steel rails, I don't see it enjoying wide-spread acceptance anytime soon, because unions.
There is that. But more than that, it is just that the capabilities of AI are massively oversold and the complexities of creating a true robotic car that can drive anywhere under any condition, safely without any need of human oversight are massively overstated. Remember, driverless trucks have to be totally autonomous or they don't make any sense. If I have to pay someone to be there to take over for the computer if it fails, I might as well skip the computer and pay the person to drive it.
The driveway problem is a hard problem. I would say we're still ten years off from getting a car from my carport to my mother's garage three hundred miles away. Not town to town, but garage to garage. And another twenty years before it can take me to the cabin at the lake.
But you don't have that problem with many trucks. Some, but definitely not all of them. Most trucks go from one well-defined point to another, with a couple of well-defined stops along the way. They also have the advantage that they don't need to rely only on GPS. They can also use sensors embedded in loading docks and weigh stations. Hell, they won't need weigh stations.
I don't know if you've seen modern distribution centers, but they're already set up for this. The driverless trucks will start out going from distribution center to distribution center along the interstates. Then driver trucks will operate locally along the local roads.
Seems like trains are an obvious place. I can't help but wonder if those are largely union regulations that prevent trains from becoming almost entirely automated.
We don't have them on trains because the consequences of even one mistake on a train are huge. It is more than just unions. Not every railroad in the world is unionized.
I actually looked it up, and there are a lot of them around the world. Including in the US.
Most systems elect to maintain a driver (train operator) to mitigate risks associated with failures or emergencies.
Replied to wrong comment.
Boom.
And I wonder how necessary it is. Because they do have a list of fully automated, with no watchers. Though it looks like most of them are more people movers. Like apparently that tram at Sea-Tac has no human oversight on board.
The tram at Sea-Tac doesn't, but the light rail from Sea-Tac to Seattle sure does.
Yes, if I wasn't clear, I meant the actual tram inside Sea-Tac that takes you to the outer terminals.
"And I wonder how necessary it is. Because they do have a list of fully automated, with no watchers. Though it looks like most of them are more people movers. Like apparently that tram at Sea-Tac has no human oversight on board."
They are. My ex-wife's family were all railroad people. 15 years ago the Railroads were introducing self-driving locomotives and the unions fought it off.
"Most systems elect to maintain a driver (train operator) to mitigate risks associated with failures or emergencies."
Boom.
Like the safety driver in the Uber car?
Boom.
Trains are already mostly driverless. Set the speed and sit back for the next twenty miles. You still need the engineer, fireman, lackey, and political officer, because of union rules, but it would not surprise me if there's a lot of automation already in place.
Plenty of trams are automated. Does it need to be two rails or will one suffice?
A tram on a short track, often indoors, sure.
Amusement park rides also don't have drivers. Amazingly, the ferry that goes back and forth from Disneyworld does have a driver.
It's very obviously not a bad thing. Truck driving is a monotonous bullshit job, and humans are terrible at backing up semis (I worked at an OfficeMax with a convoluted back parking lot and unloading bay that took EVERY SINGLE TRUCK DRIVER 20 minutes to align into).
"Soon" is a subjective assessment. In 10 years' time, probably not. In 20 years? Almost certainly.
Just like I was going to get a jet pack in 20 years and there were going to be permanent moon bases and cities in space by the end of the 20th Century. Sorry but some problems are not easily solved or solved in a way that makes economic sense.
And yet, we have cars that manage to drive themselves pretty reliably. You're myopically focused on the cases where they fail and declaring it a total loss. What we're seeing is a nascent technology stumbling through its baby steps.
No. We have cars that can drive themselves under a very narrow set of circumstances around very well mapped areas. We do not have cars that can drive themselves in anything like the way you are claiming.
John, what is your position on autonomous vehicles? What do you think they will or won't be capable of, and when?
I wonder if anyone at RS knows anyone who is a long haul trucker.
The one thing it will be good for is Big Brother.
I really don't get a supposedly libertarian publication's love of taking away the freedom of movement of people and now goods.
Nothing says freedom like full central control and monitoring of everyone's movement. Basically, the Reason staff would agree to spend their lives in a supermax prison if doing so could be wrapped in a shiny package called "technology".
You forgot food trucks and pot, but assex and Messcans will be in ample supply.
All delivered to you by autonomous vehicles.
Have we found the fourth pillar of the libertarian moment?
Sorry, can you unpack that one for me? I don't have the deepest grasp of economics, but my understanding is that more options and more automation tends to increase freedom and reduce costs, so how are autonomous vehicles going to take away the free movement of people and goods?
By making it very easy for governments to take that freedom. It centralizes and tracks everyone's movement. It is inevitable that governments will use and abuse that knowledge. It is what governments do. And it is why autonomy and privacy are so fundamental to freedom.
How about not buying into the hype of professional evangelists?
Self-driving technology isn't even close to being ready for the real world, simply at the practical level of engineering.
I don't give a damn about jobs lost to automation. As an engineer, I am aware that the sensor packages and detection/response algorithms are grossly inferior to a human's capability, even if in the ideal scenarios existing technologies are tested in allow them to operate more safely than humans.
Hybrid systems could be a good option today, if (and this is a big if) there was some means of ensuring sustained driver attention/interaction.
But it's irresponsible, IMO, to allow optimism for markets and technology to short-cut basic development cycles, which is what the companies pushing this tech want to do. They want to collect real-world data with immature tech, justified on the highly dubious premise that the current lower rate of autonomous driving accidents will convey across the entire country and every conceivable driving condition.
There is unlimited potential in automation, but this shit is not magic. At the present stage of development, these sorts of discussions are ludicrously magical in thinking.
All of this and more. A self-driven car is an interesting challenge and exercise in expanding the limits of robotics and AI. But it is not something that fulfills much of a need (driving is safe and not a burden for most people), and it is not something that is going to be fully achieved and ready for widespread use any time soon.
Pretty piss poor engineer if you think that. Broader spectrum than we can achieve for one. And superior in low-light conditions.
Sensing is not the same as perceiving. They can't make these things work without killing people even when they are only used in ideal conditions, and you think they are going to get them to work in all conditions and build hundreds of thousands of examples that do? You are kidding yourself. The only question is how many people have to be killed before you and people like Bailey admit that they are not going to work as advertised. For Bailey that number is likely infinity. He really is a fanatic. For you, it is less, but more than it should be.
The 4th Law of Thermodynamics: It can't work if John says so. Maxwell's Demon got nothing on you.
He claimed the sensors were less capable. That's flat out wrong. I've been pointing out all along that Uber's programming/training was flawed. That does not mean that the concept is flawed any more than a student driver getting into an accident proves that humans can't drive.
It's amusing that you freak out over the safety rationale for going to self-driving cars, but now you're hiding behind safety as a reason that we can't develop them.
John is exactly correct. Even the term "sensor" is misleading to laypersons.
A "sensor" is nothing more than a transducer, turning a physical input into an electrical (or optical, etc.) signal.
That signal must be interpreted (correctly), factoring in all kinds of potential errors, and a decision must be made based on that interpretation.
Saying that "LIDAR is better than the human eye" is a meaningless comparison of apples and oranges. LIDAR is a system for determining the range (and over time, speed) of a remote object. The human eye includes an opto-electrical transducer but also an incredibly advanced machine for performing extrapolative pattern-matching and decision-making based on that input.
To wit, LIDAR can determine the range and velocity vector of another vehicle with significantly more accuracy than my eyes can. But what LIDAR cannot do, at all, is differentiate between one reflective object and another, different reflective object of the same size, distance, and with the same velocity vector.
Again, Skippy, magical thinking is not how engineering works. And you are a magical thinker, as evidenced by your apparent belief that the comparison you insist upon is relevant or meaningful in the first place. In reality, it is an absolute non sequitur.
Note that I am using the phrase "human eye" as shorthand for the complete suite of a human's optical-cognitive abilities, and "LIDAR" as shorthand for, well, LIDAR.
So, system-level comparison.
So you are misusing the term as a layperson. Got it.
Which is entirely irrelevant to avoiding a collision. I do hope that you don't work in a mission critical industry.
The "system" as you want to describe it can compute the trajectories of itself and the potential obstacles to much higher accuracy AND precision than a human can. One need only look at the accuracy of radar guided fire introduced 75 years ago to show how much better man-made sensors and control loops are than our onboard systems.
"One need only look at the accuracy of radar guided fire introduced 75 years ago to show how much better man-made sensors and control loops are than our onboard systems."
A radar guided fire system is orders of magnitude simpler than a self-driving car. Orders of magnitude.
It is, by definition, a collision detection system. It is, by empirical measurement, better than a human controlled system. And assuming that one subsystem must solve all of the issues with a self-driving car is akin to thinking that the human eyeball is responsible for solving every human motion problem in isolation.
But you freely admit that LIDAR is superior to human eyesight in the narrow envelope of what LIDAR does. And it is but one method of detection that autonomous cars use to "see" the world around them. The main fault here is not the capabilities of LIDAR but the ability of the program controlling the car to make decisions based on that information.
What I don't get is why skeptics can see computers performing ever-more-complex tasks, often with novel solutions, but are quite certain that THIS problem is beyond the ability of engineering and software development. Or that getting a learning machine on the road, where it will be presented with novel problems and forced to deal with the real world, is unacceptable. It's the precautionary principle in action: it's not perfect, so it must be kept in a lab until it is made perfect.
There is such a thing as acceptable risk. One human casualty due to computer error against the human history of millions of people dead in auto accidents caused by human error seems to me an acceptable risk. One hundred casualties is acceptable. One thousand. Ten thousand. With the goal of reaching none.
The car did not trigger on the LIDAR. It did not trigger on the RADAR. It did not trigger on the far-field or near-field cameras. What would Occam say the likely cause is?
Electromagnetism?
I blame Uber actually.
"What would Occam say the likely cause is?"
A shitty switch/case statement in the AI. Ie, the interpretation of the data.
Congratulations, you finally got my point.
"Pretty piss poor engineer if you think that. Broader spectrum than we can achieve for one. And superior in low-light conditions."
Better potential capability-- it's how you interpret the data that becomes tricky. Again, the 'stationary object' problem is the elephant in the living room. The sensor arrays should be superior to human sensory capabilities on every level, microwave radar, multiple light spectrum/infrared/ultraviolet etc. Some group of clever engineers needs to solve the stationary object issue, and then I'll feel more comfortable with the technology.
Not better potential capability, better capability. They see a broader spectrum. Many more sensors can be used (side- and rear-facing). Accuracy and precision are quantifiably better. If you want to argue about the pattern rec and the decision process, you won't get an argument from me, as that is the point I've been making all along. But when people just throw in the sensors without a thought or a specific criticism (e.g., the Velodyne sensor has a refresh below 20 Hz, which WOULD be inferior to a human eye), that demonstrates how little they really know.
What does stationary have to do with anything? This isn't Jurassic Park with T. rexes running around. The sensors can detect stationary objects better than moving ones (not that there's a significant degradation in the latter case).
"Not better potential capability, better capability. They see a broader spectrum."
You're still missing the second half of the equation and thus, the point.
You need an interpretation logic on the receiving end of that sensor.
"What does stationary have to do with anything?"
Stationary objects are EVERYTHING with the self-driving vehicle.
The array of sensors on the vehicles, be they LIDAR, infrared, or microwave radar, is literally surrounded by stationary objects: parked cars, street signs, concrete barriers, other cars. Take a look at the Wired article for an in-depth discussion of the central problem. The sensors pick up stationary objects everywhere, and are programmed to ignore them.
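To make that gating concrete, here's a toy sketch (my own invented numbers, nothing from a real perception stack) of why "ignore stationary returns" is such a dangerous heuristic:

```python
# Toy radar returns as (range_m, bearing_deg, ground_speed_mps).
# All values are hypothetical, for illustration only.

def naive_doppler_gate(returns, min_speed_mps=0.5):
    """The 'ignore stationary clutter' heuristic: keep only returns that
    are moving relative to the ground. Signs, barriers, and parked cars
    drop out, but so does a stalled car sitting in your lane."""
    return [r for r in returns if abs(r[2]) >= min_speed_mps]

scene = [
    (35.0, -20.0, 0.0),   # street sign off to the side: harmless clutter
    (40.0, 0.0, 0.0),     # stalled car dead ahead: NOT harmless
    (30.0, 15.0, -12.0),  # oncoming traffic in the next lane
]
print(naive_doppler_gate(scene))  # only the moving return survives
```

The gate throws away the stalled car right along with the street sign, which is exactly the failure mode being described.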
Actually, you missed the point:
They cannot ignore them because that will lead to a collision. What do you do if there's a stationary object in your path? And to say that they have to ignore them in order to work is just nonsense. What they have to do is detect the collision and avoid it.
From the Wired article you linked:
JFC, that is NOT what radar knows. All it knows is distance and bearing based on time-of-flight and it MAY know velocity IF it uses doppler. Otherwise it has to compute velocity by constantly sampling and comparing.
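For the curious, the arithmetic behind that really is simple. A rough sketch (77 GHz is a common automotive radar band; the example numbers are made up):

```python
C = 299_792_458.0  # speed of light, m/s

def radar_range(round_trip_s):
    """Range from time-of-flight: the pulse travels out and back,
    so halve the round-trip distance."""
    return C * round_trip_s / 2.0

def doppler_velocity(doppler_shift_hz, carrier_hz):
    """Closing speed from the Doppler shift of the return.
    For a monostatic radar, f_d = 2 * v * f0 / c."""
    return doppler_shift_hz * C / (2.0 * carrier_hz)

# A return arriving 1 microsecond after transmit: target ~150 m out.
print(radar_range(1e-6))
# A +5.13 kHz shift on a 77 GHz carrier: closing at ~10 m/s.
print(doppler_velocity(5130.0, 77e9))
```

Everything else (bearing, track association, whether the return is a car or a soda can) is the interpretation layer John is talking about.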
"What do you do if there's a stationary object in your path? And to say that they have to ignore them in order to work is just nonsense. What they have to do is detect the collision and avoid it."
I'm going to guess you haven't done a lot of software engineering in your life, if you think it's simple to extrapolate from what you see on the road to how a computer might process the literally millions of stationary objects you're surrounded by.
There are stationary objects in front of you, all the time when you drive, but you're able to determine that a particular stationary object, based on its context to other stationary objects, isn't a problem. Again, if you really READ the article, you should begin to understand the breadth of the problem.
The AI (improperly named) of the car has to take the data of all the stationary objects it sees and make a kind of determination that a given set of them are not a threat, but another set might be. I fear you're trivializing the capacity of the human brain to understand context, and are overly optimistic about a bot's ability to do the same.
There are a lot of statistics that don't get discussed in the realm of self-driving technology, like the number of interventions per thousands or tens of thousands of miles. Many researchers contend that the ratio is still too high.
Actually my opinion of software "engineering" is quite low. I'm dealing with a bunch of them working on the tool my team is developing right now and they can barely understand a state diagram.
The sensors gave position and velocity. The solution to the collision is NOT difficult. Understanding that you are on a collision trajectory HAS to happen. Then it becomes a set of other decisions you need to make:
When will the collision happen? This is the key point that the writer just does not get.
When do I need to take action?
When do I start my blend, i.e. what is my allowed motion profile?
(cont.)
From the article:
So no, it's not really stationary objects that are the problem, but a limited sensor suite that can't detect them properly.
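The first two questions in that list reduce to simple kinematics once you trust the position and velocity estimates. A minimal sketch (the latency and braking figures are assumed for illustration, not any vendor's values):

```python
def time_to_collision(range_m, closing_speed_mps):
    """Seconds until impact if nothing changes; None if we're not closing."""
    if closing_speed_mps <= 0:
        return None
    return range_m / closing_speed_mps

def must_brake_now(range_m, closing_speed_mps, max_decel_mps2, latency_s=0.2):
    """Crude go/no-go: can a full-braking profile still stop in time,
    allowing for system latency before the brakes bite?"""
    stopping_dist = (closing_speed_mps * latency_s
                     + closing_speed_mps ** 2 / (2.0 * max_decel_mps2))
    return stopping_dist >= range_m

# 40 m out, closing at 17 m/s (~38 mph), 6 m/s^2 of braking available:
# stopping distance is about 27.5 m, so no need to panic-brake yet.
print(time_to_collision(40.0, 17.0))
print(must_brake_now(40.0, 17.0, 6.0))
```

The hard part, as this thread keeps saying, is not this arithmetic; it's deciding which returns deserve to be fed into it.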
Stationary objects that are near the reaction time of the vehicle have to be dealt with. Moving objects that are on a collision course have to be dealt with. The latter are a much, much bigger problem. Example: you are coming up on an intersection and have the green. Crossing traffic is oncoming but has a red. If the crossing traffic does not stop, there will be a collision. Now there's a context where you have to ignore the potential problem. Avoiding a fallen tree in the middle of the road doesn't come close to being a challenge.
There is no such thing as software engineering. There is software writing and there is engineering. Sometimes engineers write software. But writing software does not make you an engineer.
Shorter, I can put a motion sensor off a $20 security light from Home Depot on the grill of my car. It's really, really good at detecting motion, and unlike me, it doesn't text while driving or fall asleep at the wheel, but... what am I going to plug it into?
Again:
Jean-Claude Van Damme.
Someone had to say it.
The gulf between driver assist and driverless is so vast that it's a completely different problem.
Driver assist is making real, practical, usable advances, today.
Driverless is a huge leap, technologically, socially, and legally. It's a long way off.
John, I don't think you're quite getting the picture here with your argument that autonomous cars will never be able to deal with human ingenuity in fucking things up. If these car companies have a lick of sense, and I'm sure they do, they're not designing autonomous vehicles for the real world. If you can program a car to reliably follow the rules of the road that's good enough - as long as every other car is following the rules of the road. And that's where autonomous cars become mandatory. There's not going to be anybody speeding, swerving, running stop signs, suddenly slamming on the brakes, autonomous cars don't do those things. Driving, or "riding" as it will become more properly called, is simply going to be all about getting from Point A to Point B in as safe and efficient and boring-as-hell manner as possible. City people already are familiar with "riding" and us hicks in the sticks have no opinions about this new form of mass transit worth considering.
Good point Jerry. The problem with that is that while you can ban human drivers, you can't ban pedestrians. And they will do all the things you describe and create all of the problems human drivers do.
"you can't ban pedestrians"
This sounds like a challenge.
"And that's where autonomous cars become mandatory."
Don't count on that.
People won't stand for being told they aren't allowed to own or drive non self driving cars.
"But the report also suggests that driverless trucks, by decreasing the costs of hauling freight, would result in an increase in demand for freight."
The way to decrease the cost of hauling freight is to ship it by rail instead of by truck at all. No truck, self-driving or not, will use less energy to haul anything than a train hauling the same load.
The only method more energy efficient than rail is water transport which has a lot more delivery limitations.
Ger, this morning I discovered that the driver-assist radars on my new vehicle have their function compromised by a moderate snowfall.
This does not raise my confidence in purely driverless vehicles taking over anytime soon.
Did it ever occur to Reason that truck fleets are having a hard time recruiting because the industry is hell-bent on killing the job with automation? I thought libertarians understood the concept of incentives.