Will AI replace all lawyers -- except for its own?
Episode 209 of the Cyberlaw Podcast
It was a cyberlaw-packed week in Washington. Congress jammed the CLOUD Act into the omnibus appropriations bill, and boom, just like that, it was law, and you could wave good-bye to the Microsoft Ireland case just argued in the Supreme Court. Maury Shenk offers a view of the Act from the United Kingdom, the most likely and maybe the only beneficiary of the Act. Biggest losers? For sure the ACLU and EFF and their ilk, who were more or less rendered irrelevant without the funding and implicit backing of Silicon Valley business interests.
But wait, there's more Congressional action, and this time it's bad news even for Silicon Valley business interests. For the first time, the immunity conferred on social media platforms by Section 230 of the Communications Decency Act has been breached. Jamil Jaffer and I discuss FOSTA/SESTA, adopted this week. In theory the act only criminalizes media platforms that intentionally promote or facilitate prostitution, but any platforms that actually read their own content are likely at risk. Which is what Craigslist concluded, killing its personals section in response to the act. Worse for Silicon Valley, this may just be the beginning, as its unpopularity with left and right alike starts coming home to roost.
Not to be upstaged by Congress, President Trump announces a plan to impose $60 billion in tariffs on Chinese goods and new investment limits on Chinese money. Sue Esserman explains the plan and just how serious an issue it's addressing.
Jim Lewis tells us about the FCC's rumored plan to pile on Chinese telecom manufacturers, adopting a rule to bar the use of Universal Service funds to purchase Chinese telecom infrastructure gear. If we want to keep China out of our telecom infrastructure, he says, we should be prepared to pay a hefty price.
Speaking of hating Silicon Valley, there's a wave of criticism – and a lawsuit – building against Uber over a self-driving car accident that better tech might have prevented. Jamil urges caution in reaching conclusions.
In any other week, Jim and Jamil would get to spend quality time chewing over the indictment and sanctioning of Iranian hackers charged with massive thefts of IP. Not this week. They give us their bottom line up front: indictments and sanctions are a good first step but can't be our only response.
We barely have time to nod at the massive flap over Facebook and Cambridge Analytica. Still I can't help noting that in 2012, when the Obama campaign bragged about stripping the social graph of its Facebook followers, there was no privacy scandal. Today, after Cambridge Analytica made dubious claims to have done something similar, the EU's Vera Jourova sees a "threat to democracy." If you're a conservative who supports new privacy attacks on Facebook, don't blame me when it turns out that the new privacy law is weaponized against the right, just as the old one has been.
And, as a token bit of international news, China's social credit system is being implemented in a totalitarian fashion that reminds me of Lyft's embrace of the McCarthyite Southern Poverty Law Center, in that both systems deny transportation to those suffering from wrongthink. Maury Shenk says it also tells us something about the efficiency and clarity of authoritarian uses of new technology.
Speaking of wrongthink, Google's YouTube is banning firearms demo videos. Some of the banned videos may soon be hosted on Pornhub, which at least will allow all those guys who used to read Playboy "for the articles" to visit Pornhub "for the gun instructional videos."
Finally, in our interview, Cyberlaw Podcast joins forces with the hosts of National Security Law Today, a podcast of the ABA Standing Committee on Law and National Security.
We interview Michael Page of OpenAI, a nonprofit devoted to developing safe and beneficial artificial intelligence. It's a deep conversation, but lawyers will want to spend time with the latest study suggesting that AI reads contracts faster and better than most lawyers. Luckily, I have the solution: We'll prevent AI from running amok by requiring it to obey the entire United States Code. And when it can't figure out what the code prohibits, well, it'll have to go to court to get the answer. So at least one lawyer will have full employment!
As always, The Cyberlaw Podcast is open to feedback. Send your questions and suggestions for interview candidates or topics to CyberlawPodcast@steptoe.com or leave a message at +1 202 862 5785.
The Cyberlaw Podcast is hiring a part-time intern for our Washington, DC offices. If you are interested, visit our website at Steptoe.com/careers.
Download the 209th Episode (mp3).
Subscribe to The Cyberlaw Podcast here. We are also on iTunes, Pocket Casts, and Google Play (available for Android and Google Chrome)!
Do that thing Captain Kirk does to make computers explode - ask the computer a question which cannot be answered in terms of machine-logic.
That should put those machines in their place.
"Computer, explain how due process can be substantive."
"Computer, explain the scope of the 14th Amendment by examining this sentence: 'At the heart of liberty is the right to define one's own concept of existence, of meaning, of the universe, and of the mystery of human life.'"
Good point, Eddie. The contradictions within statutory law would cause any AI to crash.
Or as my torts professor suggested, ask, "Computer, was the plaintiff the child of the occasion?" (He was discussing possible jury instructions in light of Wagner v. International Railway, for those who can't remember the origin of that rhetorical flight.)
Keeping China out of our telecom infrastructure would seem to be a worthy investment.
By the way, ever see regulatory capture framed positively in a media headline?
From Techdirt:
REGULATION COULD PROTECT FACEBOOK, NOT PUNISH IT
Zuckerberg has the cash to jump through the government's hoops
To the extent that YouTube would have liability for showing gun demos, what about PornHub would make it immune from that liability?
Or is it that YouTube would not be liable for these demos, but simply does not want the bad publicity? I could see how a porn site would not be worried about this sort of right-wing and/or left-wing criticism.
(Sample of imagined complaint from an old-timey dowager: "Well, I'm okay with seeing videos of our president + women urinating, but videos of people firing guns is beyond the pale!")
Maybe the difference is the pretense that Pornhub requires users to be 18+ and YouTube doesn't.
The key difference is that PornHub just wants your money, while YouTube wants your heart and mind. So PornHub doesn't have any interest in censoring wrongthink, so long as people engaging in wrongthink have money.
Wow.
So you're saying basically any entity that doesn't offer/publish NRA talking points is automatically some dastardly, Orwellian society bent on total--GLOBAL!--control of all humankind.
Here's something that I think might help you: https://www.amazon.com/Electro-Deflecto-Unisex-Foil-Size/product-reviews/B01I497JAM (Electro Deflecto Unisex "Tin" Foil Hat One Size).
"So you're saying basically any entity that doesn't offer/publish NRA talking points is automatically some dastardly, Orwellian society bent on total--GLOBAL!--control of all humankind."
Yeah, pretty much. If you set up a system to make money publishing other people's content, and then once it becomes a near monopoly suddenly decide only political content you agree with can be published on it, you ARE engaged in a dastardly, Orwellian plot to control people.
PornHub doesn't censor, because they just want money. YouTube censors, because they want political control over the population more than they want money. So they're willing to leave money on the table, lots of it, in order to censor public discourse.
Brett, you ought to get out of the habit of saying "censor" when what you mean is "publish," including as that term does the centuries-old prerogative of a publisher to decide what content it prefers to publish. That is a very old norm, which nobody much questioned until the internet disrupted it. Now, society must decide if that disruption is an example of creative destruction, or just plain old destruction. The news on Section 230 is a likely preview that society is leaning toward deciding it's plain destruction.
Don't like that? Maybe you ought to start thinking more creatively. The internet could still be used to broaden publishing possibilities enormously, because of the radical reduction of publishing expense it enables. If Section 230 were repealed in toto tomorrow, that broadening would probably follow shortly.
To be sure, publishers would have to go back to the previous custom of reading everything, and deciding what to publish. That would pressure and shrink monopolists like Facebook and YouTube, opening up the publishing market. Actual commercial, money-making possibilities would open to a wide range of new publishers, with a variety of views, who were willing to do the newly re-required reading and fact-checking. They would re-commence competition on the basis of publishing quality.
If that happened, it would be a huge improvement over what is going on now, in terms of both economic opportunity and quality.
The reason I don't go with your terminology is that YouTube is, in my opinion, less like a publisher such as Simon and Schuster, and more like a phone company. They don't contract with authors to create specified content; they merely offer a conduit between third parties, which is paid for, nominally, by subjecting those people to advertisements, or by allowing you to skip the advertisements for a fee.
That they would exercise editorial control over that conduit, and even do so in a politically biased fashion, was never part of the deal. Rather, they conducted a sort of bait and switch, offering a neutral platform until they reached near monopoly status, and then, covertly at first, later more openly, censoring the content on a politically partisan basis.
It's a bait and switch, cheating a customer base they'd never have had in the first place if they'd been honest about their intentions.
Um, does the phone company business model include attracting an audience? Disseminating the content of telephone calls worldwide? Selling advertising on the basis of telephone call content and the size of the audience?
The phone company is nothing like YouTube, and YouTube is everything like a traditional publisher. Except that by being awarded liberty to ignore content -- to not even read content -- YouTube has been enabled by government to grow freakishly large. As have a few others, which now, collectively, threaten to monopolize publishing. Of course you don't like that. Who would? But you ought to at least understand what's happening.
Well, one key difference is that YouTube is headquartered in California and subject to US law. Pornhub is headquartered in Montreal and is subject to Canadian law.* And while Canadians are not generally thought to be as sympathetic to gun rights as the US generally is, they do currently seem to be having a better track record on free-speech protections.
But probably more important is management philosophy. YouTube management (or more precisely, Google management, their parent owners) has been very clear that they want to see themselves as "agents of change" who can "make the world a better place". They also have a very paternalistic approach to deciding exactly what counts as "a better place". Pornhub management has given no indications that they have any such illusions.
* Probably Canadian law. According to Wikipedia, Pornhub is wholly-owned by Mindgeek, a Luxembourg-based company. If someone did try to sue them, choice of law could get interesting. But that very complexity adds to the flexibility that Pornhub/Mindgeek have in dealing with controversial topics.
Here's a readable explanation of why what the Obama campaign did was different from what CA did. As indicated, with the Obama campaign:
1. people knew they were signing up for a campaign
2. the campaign suggested to the people who signed up how they could contact friends
With Cambridge Analytica:
1. info collected under false pretenses (for a personality study, but actually for politics)
2. info used in violation of terms of service (commercially)
3. friends were contacted directly for political reasons
4. they falsely said they deleted the info when confronted by FB
http://www.politifact.com/trut.....ge-analyt/
You're making the same mistake a lot of people are making. No, Cambridge Analytica didn't do 1-3. Somebody else did, and Cambridge Analytica just bought the data off them. From your own link to Politifact:
"Aleksandr Kogan, one of the Cambridge researchers involved in the project, sold the data to the upstart political consulting firm Cambridge Analytica. "
That is quite misleading.
The original exploit (getting data from users and their friends, allowed through the Facebook API) was discovered by David Stillwell, a psychology PhD student at Cambridge, after he made a personality app. He teamed up with Michal Kosinski, a fellow student at Cambridge's Psychometric Centre, to perform more serious research on the topic of utilizing online social networks for large-scale social research.
Here is where Cambridge Analytica (CA) comes in.
Christopher Wylie worked for a company called Strategic Communication Laboratories (SCL). He approached Stillwell and Kosinski for their data, and they said no. He didn't give up. He turned to another Cambridge researcher, Aleksandr Kogan. In 2014 Kogan made a new personality test app to gather the same data that Stillwell and Kosinski did. Only 270,000 users signed up, but he was able to access the data for approximately 30 to 50 million users, due to the nature of Facebook friend networks.
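(To make the mechanics concrete, here's a minimal sketch of the kind of request a pre-2015 Graph API v1.0 app could make once a single user granted it the long-deprecated friends permissions. The endpoint shape, field list, and token are illustrative assumptions on my part; Facebook shut this capability down years ago, so this is a historical sketch, not working collection code.)

    # Illustrative sketch only: one consenting survey-taker's token was enough
    # to enumerate that user's friends and pull profile data (e.g. likes) that
    # the friends themselves never consented to share with the app.
    # Endpoint, fields, and token here are assumptions for illustration.
    import requests

    ACCESS_TOKEN = "TOKEN_FROM_ONE_CONSENTING_USER"  # hypothetical

    resp = requests.get(
        "https://graph.facebook.com/v1.0/me/friends",  # v1.0 allowed friend enumeration
        params={
            "access_token": ACCESS_TOKEN,
            "fields": "id,name,likes",  # friends' likes, never individually consented
        },
    )
    for friend in resp.json().get("data", []):
        print(friend.get("id"), friend.get("name"))

That asymmetry -- 270,000 consenting users yielding tens of millions of profiles -- is the whole story of the exploit.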
Wylie then purchased the data he'd recruited Kogan to acquire via a US-based company (CA) created by SCL. The CEO of CA was Alexander Nix, a longtime director of SCL. CA was bought into by Breitbart investor Robert Mercer and Steve Bannon (who was also on its board of directors). Bannon picked the company name.
This data was used for the Ben Carson, Ted Cruz, and Brexit Leave.EU campaigns, in addition to the Trump campaign and many others.
In Short: SCL hired a researcher to gather data for use in political campaigns via a personality test, and then created a company specifically to use this data for political campaigns. The guy who originally hired Kogan (Wylie) is the person who leaked all this to begin with.
Of course, none of this was new to Facebook, which was aware of the potential for abuse by any third-party app, having originally offered Stillwell and Kosinski jobs after their papers were published in 2013. Facebook didn't change its APIs until 2015 or 2016.
As one of Obama's campaign directors recently admitted, FB wasn't actually concerned about that sort of abuse so long as it was only the right people committing it. C.A.'s offense was doing it on behalf of Republicans.
But I appreciate learning that the connection between C.A. and Kogan was more direct than I'd understood.
Facebook wasn't concerned about this abuse by CA either. They didn't even change their policies until it started to get news coverage.
Nothing that CA, SCL, or Kogan did was particularly illegal; it just wasn't ethical.
There are two particularly egregious things about the CA data usage, for me:
1) The gathering of friends' data when they didn't take the survey (I didn't actually know it was possible to gather the exact same information on friends as on the survey taker, even though the friends never took the survey)
2) Paying people to take surveys without explicitly telling them the purpose. Essentially paying for access to friends' data.
Amazon has a marketplace called "Mechanical Turk" where people are paid to perform specific tasks.
Kogan (via Global Science Research) paid US users (the marketplace was international; he specifically limited the survey to US users) to take an online survey and download a Facebook app, while misleading survey takers about what their data would be used for. Kogan went this route for user data because the centre he worked at (mentioned above) wouldn't give him access to its data. (via theintercept.com)
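(For the curious, that kind of geographic targeting is a standard, documented Mechanical Turk feature, not anything exotic. A minimal sketch of how a requester restricts a task to US workers, using the boto3 MTurk client; the title, reward, and survey file below are made-up placeholders, not Kogan's actual setup:)

    # Minimal sketch: restricting an MTurk HIT (task) to US-based workers.
    # The locale qualification is a built-in MTurk feature; the task details
    # below are hypothetical placeholders.
    import boto3

    mturk = boto3.client("mturk", region_name="us-east-1")

    us_only = [{
        "QualificationTypeId": "00000000000000000071",  # built-in Worker_Locale
        "Comparator": "EqualTo",
        "LocaleValues": [{"Country": "US"}],
    }]

    hit = mturk.create_hit(
        Title="Personality survey (illustrative placeholder)",
        Description="Take a short online survey",
        Reward="1.00",                       # dollars, as a string per the API
        MaxAssignments=100,
        LifetimeInSeconds=86400,
        AssignmentDurationInSeconds=1800,
        Question=open("survey.xml").read(),  # ExternalQuestion XML, assumed to exist
        QualificationRequirements=us_only,
    )
    print(hit["HIT"]["HITId"])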
It's unethical for any company or political party to gather / use information in this manner, and I hope that the current scandal leads to regulations preventing this sort of activity (non-explicit gathering of individual data).
I haz a confuse, Mr. Baker seems distressed at some authoritarianism (as exercised by Lyft) and quite at ease with other (FOSTA and CLOUD). The thought of going to Pornhub for some gun content appears to be giving him the vapors.
Mr Baker is a statist. He's fine with any authoritarianism as long as it's being done by the state/government (FOSTA, CLOUD), but Lyft is a private company.
Um, Lyft is a private company and FOSTA/CLOUD are government actions?
Lawyers won't be replaced with computer programs until they can get the programs to replicate all the irrational biases and accumulated sophistries of the legal system. It's not like they're going to build a system to apply pure logic to the law, and just accept that it overturns wrongful precedent on multiple subjects.
The non-law input to the program, needed to replicate what living lawyers have done, would dwarf the actual statutory-law input, and would take precedence over it.
As you say, the existing irrational biases and prejudices that make up so much legal history would break a computer.
Computers are based on mathematics, a system of absolute truths. Until you can create a deterministic system for "disparate impact" and other forms of "1+1=RACIST", you will never be able to replace a lawyer, judge, or jury with a computer.
You don't even have to get into "disparate impact" or other racial discrimination issues.
Basic legal concepts like proof "beyond a reasonable doubt" are utterly beyond anything that Artificial Stupidity built with existing technology would be capable of dealing with.
The law is full of fuzzy logic (and illogic) down to the oldest and most basic levels.
Artificial Intelligence is centuries (as in several) away from living up to its hype.
I'm skeptical about computers reading contracts better than lawyers. My long experience permits me to affirm that, although computers do redlining much faster and cheaper than paralegals, they do not do it as well. I'd like to see someone write a redlining program with judgment before they move on to a contract-reading program with judgment.
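(For non-lawyers: mechanical redlining is essentially a textual diff between two drafts. A minimal sketch in Python, with made-up contract lines, shows why it's fast and cheap but judgment-free: every surface change gets flagged equally, with no sense of which edits are legally material.)

    # Minimal sketch of mechanical "redlining": a plain textual diff between
    # two contract drafts. The standard-library difflib does this instantly,
    # but it cannot tell a cosmetic edit from a legally material one.
    import difflib

    old_draft = [
        "Seller shall deliver the goods within 30 days.",
        "Buyer may terminate for convenience.",
    ]
    new_draft = [
        "Seller shall deliver the goods within 45 days.",
        "Buyer may terminate only for material breach.",
    ]

    # Both changes are flagged identically, though one merely adjusts a schedule
    # and the other guts a termination right -- that's the missing judgment.
    for line in difflib.unified_diff(old_draft, new_draft,
                                     fromfile="draft_v1", tofile="draft_v2",
                                     lineterm=""):
        print(line)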
An interesting take on computers and law is in the novel Monument by Lloyd Biggle, Jr., published back in 1974. There are still lawyers, but they argue from precedents, and a computer judge weighs those precedents to decide the case. A desperate lawyer submits old precedents in the hope that the fact they were later overturned will go unnoticed.
Sounds like a page-turner.
I predict that FCC vs China will have next to no effect. Look at the way car companies including VW and Nissan got around our trade barriers in the '80s by opening assembly plants in the US which do little more than put together imported prefab parts. There is nothing stopping China from doing the same with telephone gear.
As far as CDA Section 230, it's high time that precarious law were replaced by expansive interpretation of the First Amendment to cover electronic communications. The only reason the FCC needs to exist is to enforce property rights in the radio spectrum after the government sells it -- which Trump ought to be busy doing.
Concerning CDA Sect 230, I'm assuming you mean this item:
(2) Civil liability: No provider or user of an interactive computer service shall be held liable on account of --
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected. . . .
So suppose this section is deleted; what then?
The 1A restricts the govt, not private entities, so Facebook, etc. would still be able to restrict whatever content they wished to.
Nice try.
By your assertion, Google could block all content showing blacks?
Block all content showing women?
Block all content showing left handed redheads?
Block all content showing free speech protests?
Block all content showing cakes being baked/decorated?
Social media got Section 230 by saying it was an innocent provider of a platform, regardless of content. They are now clearly engaging in discrimination based on speech content.
So if 230 is deleted, they become liable for the actions they take that are based on discrimination for/against a protected class.
Longtobefree, get rid of 230 and social media revert to the status of traditional media. Like traditional media, they become responsible for reading everything they publish for factual truth, no matter how it was sourced, and for paying the price if they publish untruths which are defamatory. They are, however, 100% protected with regard to opinion.
Like every other traditional publisher, they could discriminate on the basis of speech content. Which shouldn't worry you much. If they had to read everything, it's unlikely they would continue to largely monopolize the social media universe. There would be more and better competitors, and more diversity of opinion overall.
If you were still dissatisfied, the extreme financial efficiency of internet publishing would assist you to become a competitor yourself.
Whether or not you think getting rid of 230 would be a good idea (I do), it seems like a backlash is building against the abuses 230 encourages and protects. It would be wise for free speech experts like Volokh to recognize that as a new thing in the speech universe, and to start thinking through its implications -- a point I have made before, to no avail so far.