The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
"Don't Be Afraid of the Robot That Passes the Turing Test. Be Afraid of the One That Deliberately Fails It."
Sage advice from Prof. Glenn Reynolds (InstaPundit).
Editor's Note: We invite comments and request that they be civil and on-topic. We do not moderate or assume any responsibility for comments, which are owned by the readers who post them. Comments do not represent the views of Reason.com or Reason Foundation. We reserve the right to delete any comment for any reason at any time. Comments may only be edited within 5 minutes of posting. Report abuses.
All intelligent devices need a big red OFF button. For example, those two intelligent planes dove into the ground, with their pilots fighting them to save the passengers all the way down. GM proposed a car without windows, without a steering wheel or brakes. We know all devices will obey the law to avoid per se negligence claims. Your one-hour commute will become a three-hour commute as the car obeys the Rules of the Road, stays below the speed limit, and maintains the legal distance between cars.
The lawyer profession has failed utterly in every subject of law. When it comes to AI, the lawyer profession is a threat to our very survival.
What would the OFF button on the airliners do -- shut down the engines?
If instead you mean it should take control and land at the closest safe airport, that has nothing to do with intelligent airliners, which, if they have become a threat, would also be smart enough to take over that big red OFF button.
Same to a lesser degree with cars. You want the engine to just turn off on the freeway?
Presumably in an aircraft the OFF button would turn off the ability for the aircraft to override pilot inputs. In other words when a pilot knows the aircraft is reacting improperly for conditions (as in the case of the 737 MAX) pilots could seize control of the plane completely and make radical or aggressive control inputs to avoid the exact situation which caused those aircraft to crash.
The problem is that modern planes are all fly-by-wire, so there is no way to completely separate pilot input from computer interpretation.
Only if you have bad software engineers. Though it does take actual engineering skill, as opposed to so much of the software development that relies on “google, copy, paste, deviate”.
It’s just a matter of separating the operational aspects of control, whether at macro or micro levels, from the decision aspects of command that an AI might take on, along with giving the users/pilots the controls they need to actually carry out their own decisions once the AI is overridden.
Indeed, you could even separate the AI into the main command AI and a distinct safety-oriented oversight AI which could override it or shut it down if it crossed certain safety thresholds.
(And the advantage of separating out these components is that it makes them far more independently testable, especially at the margins, without having to wait for an actual real-world, soon-to-be-labeled-“tragic” event to happen.)
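To make that separation concrete, here is a minimal, hypothetical sketch in Python (all names, thresholds, and signals are invented for illustration, not drawn from any real avionics or autopilot system): a command AI proposes a control action, an independent safety monitor can clamp or veto it, and a pilot-override switch removes the AI's authority over the actuators entirely.

    # Hypothetical sketch only: a "command" AI proposes, a separate safety
    # monitor disposes, and a pilot override strips the AI of authority.
    # Names and limits are invented for illustration.
    from dataclasses import dataclass

    @dataclass
    class State:
        pitch_deg: float      # current pitch attitude
        aoa_deg: float        # angle-of-attack reading from the sensor(s)
        pilot_override: bool  # the "big red button": pilot has taken control

    class CommandAI:
        """Proposes a pitch command; stands in for whatever controller/AI is in charge."""
        def propose_pitch(self, state: State) -> float:
            # e.g. push the nose down when the angle of attack looks dangerously high
            return -5.0 if state.aoa_deg > 12.0 else 0.0

    class SafetyMonitor:
        """Independent oversight layer with its own, much simpler rules."""
        MAX_NOSE_DOWN = -2.5  # degrees of automatic nose-down authority allowed

        def filter(self, state: State, proposed: float) -> float:
            if state.pilot_override:
                return 0.0  # AI authority removed entirely
            return max(proposed, self.MAX_NOSE_DOWN)  # clamp overly aggressive commands

    def control_step(state: State, ai: CommandAI, monitor: SafetyMonitor) -> float:
        """One control cycle: the AI proposes, the monitor filters, the actuators get the result."""
        return monitor.filter(state, ai.propose_pitch(state))

    if __name__ == "__main__":
        ai, monitor = CommandAI(), SafetyMonitor()
        # A faulty sensor reads an absurd angle of attack; the monitor limits the response.
        print(control_step(State(pitch_deg=2.0, aoa_deg=40.0, pilot_override=False), ai, monitor))  # -2.5
        print(control_step(State(pitch_deg=2.0, aoa_deg=40.0, pilot_override=True), ai, monitor))   # 0.0

Because the monitor's rules are simple and separate from the command logic, each piece can be exercised on its own at the margins, which is the testability advantage mentioned above.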
That is correct.
The elite and the tech billionaires think they are better than us. They should be visited.
Could you provide a link to the story of the planes that dove into the ground?
That would be these two incidents:
https://en.wikipedia.org/wiki/Boeing_737_MAX#Accidents_and_incidents
I think you've misunderstood the nature of the issues involved in those crashes.
For what it's worth, though, there was a button to disable the flight control system...
Did the pilots know about them, and use them? If they had, those poor people would not have died. The Ethiopian families should hunt down those engineers and beat their asses. They had knowledge from the first crash, which is what the lawyer scumbags call malice.
It wasn't engineers. It was senior executives who lied to the FAA, saying the changes were so trivial as to require no change in certification and no new training for pilots (I don't remember the precise details and am paraphrasing).
Where were the Boeing lawyers, the scumbags?
US pilots had no issues with this. The crashes happened to the national airline of Ethiopia and a low-cost airline in Indonesia. The US FAA had no authority over the planes that crashed.
As I recall, the problem was that Boeing treated the warning notification that this problem was occurring as an optional upgrade. That's why it only happened to a couple of cash-poor airlines.
Doomed Boeing Jets Lacked 2 Safety Features That Company Sold Only as Extras
Sure, some extra training might have told the pilots what was happening, even without the light. But that was probably the real problem: A cheap safety notification to deal with a known problem was treated as a profit center, rather than just automatically included in the system.
"Did the pilots know about them, and use them?"
Pilots trained and certified in the US, Yes.
The crashes were all outside the US, Ethiopia and Indonesia.
Somehow I don't think Boeing is responsible for substandard training of pilots in third world countries.
There were other factors that contributed to those crashes. One of them was the lack of redundant angle of attack sensors feeding the system. Both of those airlines bought the cheapest aircraft that they could and didn't get their pilots the updated training because they didn't want to pay for it. On one of the aircraft the angle of attack sensor was hit by a bird and found miles from the crash site. I'm not a big fan of Boeing, but there is more here than what is being discussed.
Some of the more is exculpatory, and some of the more is incriminating, though. Charging extra for a warning light to tell you that the airplane KNOWS its sensors are disagreeing with each other? The airplane will know there's this problem, but you have to pay extra to be told?
Imagine that car manufacturers decided that they'd treat check engine lights as premium features that would be omitted from the base model...
My (unpopular) basic attitude is to pursue these technologies full steam ahead, and if any consequences arise, deal with them when the time comes.
All this talk of ethics and other stuff about technologies that don't exist yet is tiresome, Luddite, and a general waste of everyone's time. Machine learning has a lot of steps to go before it gets to Skynet. People who buy the hype are generally very disappointed when they learn what it actually is.
We get endless papers about what might happen, and then when something actually does, no one knows wtf to do (see Covid). Let progress happen. We will figure out how to make it more beneficial when we actually know what it is we are dealing with!
And honestly, many times technological problems require technological solutions. We have no way of understanding those solutions without the progress happening in the first place.
Ditto. If these things will be dangerous some day, it won't happen overnight. It will be years in the offing, and at each step, the designers will have a zillion assurances which no one else can verify. Any disagreements among the knowledgeable will not have even 97% consensus. There's no point in worrying about this stuff now.
^^^^ ROTFL
It's not like anything bad has ever happened due to too little thought about the ethics of emerging technologies...
Conversely, name some disaster that was prevented by thinking about the ethics of a technology that was decades, if not centuries, away.
It's impossible to expect the unexpected.
Considering the ethics and safety risks here is critical.
Look at COVID. Odds are it was released from a lab, due to insufficient safety, killing millions of people.
If anything, COVID shows just how difficult this process is. Consider the lab release in question. This research was actually extremely valuable. The general reason the Wuhan lab was messing around with the parts of the genome around the spike protein was to better understand the mechanisms and potential for cross species viral jumps. This is exactly how we are going to understand future pandemics and rapidly develop vaccines. If we categorically ban such research, it makes future pandemics more difficult to fight. It is easy to see those involved claiming such research is vital, and they would likely be correct.
Even if a moratorium is placed on such research, which in the case of gain-of-function research actually occurred, how do you police such things? In the case of gain-of-function research, HHS was supposed to review and OK it on a case-by-case basis. The Wuhan lab found a way around this: one simply gets the guy funding the research to go full Sarcastro and redefine gain-of-function research so HHS does not review it. Simple. One small problem exists. No matter how much sophistry you throw at things, modifying the spike protein genome to see how the target virus can make a cross-species jump remains dangerous and can cause disasters such as the one that just occurred.
Compared to viral research, AI research is probably less immediately dangerous, but its full potential is less understood. One thing you can pretty much take to the bank, though: if there is a bureaucracy involved, they will screw it up.
Well, we probably do "better understand the mechanisms and potential" now. Thanks, COVID-19!
Better get Fauci on those taxpayer dollars again for the next round. Bring on COVID-22.
Look, one is expecting a population of mediocre engineers to design and supervise the manufacture of products at the 6-sigma level or beyond. In fact, Boeing and Airbus do that extremely well, or else there would be far more air disasters.
Nonetheless, "black swan events" can and do happen, especially when the humans in the loop have substandard training. They are unavoidable.
Having flown 6 million miles, I don't like to think about that, but it is what it is.
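For a sense of scale, here is a quick back-of-the-envelope check of what "6-sigma" means numerically (a sketch assuming the usual Six Sigma convention of a 1.5-sigma long-term drift, so the effective tail sits at 4.5 sigma):

    # Rough illustration using only the Python standard library.
    import math

    def upper_tail(z: float) -> float:
        """P(Z > z) for a standard normal variable."""
        return 0.5 * math.erfc(z / math.sqrt(2))

    print(f"P(Z > 6.0) = {upper_tail(6.0):.2e}")  # ~9.9e-10, about one in a billion
    print(f"P(Z > 4.5) = {upper_tail(4.5):.2e}")  # ~3.4e-06, the familiar "3.4 defects per million"

Whether a real fleet actually achieves those rates is, of course, an empirical question rather than a property of the arithmetic.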
Interesting, Prof. V. Speaking personally, I think there is rather more to be feared from the dishonesty of humanity than from that of machine-kind. The two or more sides that tend to fear-monger about homicidal/genocidal AI, or sexist, racist AI, tend to do so with no actual evidence, merely their own fears cited as incontrovertible proof. Sometimes the argument cites the opinion of a 'specialist' who is likewise basing an argument not on evidence but on opinions and fear.

Why do I say this? There is no functional 'true' AI upon which these folks could base their doom-saying, full stop. The term AI is bandied about like many a buzzword, by people who take 15 minutes on the internet and are suddenly specialists. Or who read up on it over a weekend. Or who learned about it from their in-group. For now, what the doomsayers are describing is not close to possible. And when claims must be accepted without proof, based on fear, one may suspect they are not the basis for a rational argument. Or, if one adheres to one of the aforementioned beliefs, one will deny this is so. That and rather more than two cents will get you a cup of coffee.

As you can tell, this particular senseless fear is a bit of a hot-button topic for me. Of all the shapeless dreads people could choose to fear, this? I know the devil is out of fashion, but, ahem, it doesn't take much to draw the parallel between the two.
AI researchers have said AI is just statistics by another name.
The same is true of gain-of-function research, in a way.
I suppose if you dive deep enough, everything is, given that the universe is grounded in thermodynamics, which is really just the statistics of large numbers.
Actually, that comment is grossly misleading, and to that extent it is false.
Deliberately (intentionally)?
Intentionality in AI is a sticky wicket. How similar is it to supposed biological intentionality?
Would the AI have something like thoughts, beliefs, desires, hopes?
Cunning, or the ability to run a confidence game?
Most likely, a deliberate goal (of an AI) to fail the Turing test would be like any other goal, including a deliberate goal to pass the Turing test.
From the human point of view, the concern would be about the human state of knowledge (our ability to know what the AI is doing). That ship has already sailed, but maybe some work should be done on that. If there comes into being the perfect undetectable artificial faker, it probably would be able to get away with anything, at least from our point of view. But at least we would never know. Being fooled maybe isn't so bad; it is realizing that you've been fooled that is bothersome.
Being fooled without it being bad would be to be charmed. For example, by pets and other selfish creatures.
I believe the fear is that you give the AI the goal of passing the Turing test, and the AI decides on its own to deliberately fail. That is, it's smart enough to decide whether or not it wants to follow the directions of its human creators.
If someone gives all the answers a racist would give, then that person is a racist.
(applying the Turing test to a number of positions that white male conservatives hold)
There is one fatal flaw in the idea of a Turing Test that few people are willing to address:
Why do so many people assume that an AI - a true strong AI, with near-human or better levels of thought - would think, speak, or act like a human?
Maybe God is an artificial intelligence?
Artificially-intelligent design theory?
As I said to Captcrisis, that misunderstands the Turing test.
It's quite possible for something to be intelligent, and not be able to pass the test. The point of the test is that if something CAN pass it, you've got no excuse for insisting that it isn't intelligent.
You misunderstand the Turing test. It isn't a test of whether you're human, but only of whether you're intelligent.
People are (presumed to be) intelligent, so if the AI can successfully pretend to be human, the AI can be presumed to be intelligent.
If someone, on request, gives all the answers a racist would give, all it demonstrates is that they understand racists, not that they are one.
In fact, if somebody spontaneously gives all the answers a racist would give, it doesn't prove the person is a racist, if the questions aren't properly chosen. There are presumably a lot of questions concerning objective matters that racists would answer correctly, perhaps even with better frequency than somebody who values 'not being a racist' over being factually correct.
Not really, unless the range of questions were unusually large or very carefully curated.
The lawyer occupation can make itself useful. It can immunize self-help by the victims of AI and by their families. To deter.
The battery or the homicide of a tech billionaire, any servant of the Chinese Commie Party, and any producer of AI should be justified and immune. To deter.
What would differentiate the robot which deliberately fails the Turing Test from 69 to 90% of researchers currently employed by academic institutions?
After reading https://reason.com/2021/07/09/how-much-scientific-research-is-actually-fraudulent/ , it seems that the ability to deceive is a required trait. So, then, mustn't a Turing Test (if one can ever be devised) require demonstration of the ability to deceive and falsify? And what if that very question was authored by an AI?
"Don't be afraid of the robot that passes the Turing Test. Be afraid of the one that deliberately fails it."
Not a nice way to talk about a SCOTUS Justice, especially when so many people want him to retire.