The Volokh Conspiracy

Mostly law professors | Sometimes contrarian | Often libertarian | Always independent


Debating FISA 215 after Pensacola

Episode 292 of the Cyberlaw Podcast


The apparent terror attack at Naval Air Station Pensacola spurs a debate among our panelists about whether the FISA Section 215 metadata program deserves to be killed, as Congress has increasingly signaled it intends to do. If the Pensacola attack involved multiple parties acting across US borders, which looked possible as we taped, then it would be just about the first such attack since 9/11 – and exactly the kind of attack the metadata program was designed to identify in advance. Now may not be the best time to dump it, after all.

Nick Weaver tells us that China has resurrected the Great Cannon to attack a popular Hong Kong forum for protesters. The Cannon depends on users from outside China connecting without TLS to Chinese sites. I ask why Google hasn't started issuing warnings to Web users before letting them cross the Great Firewall without enabling HTTPS. That could spike the Great Cannon, but Google employees are too busy complaining about the United States government, I suggest. Meanwhile, Microsoft is working hard to make GitHub, an early Great Cannon victim, an essential part of China's IT infrastructure. Remarkably, we verify in real time that, despite the lure of the Chinese market, Microsoft has apparently not told GitHub to dump the content that offended the Chinese government.

In more China news, the trial lawyers are circling TikTok as though it were a wounded wildebeest on the veldt. A California class action alleges that TikTok harvested and sent data to China, and an Illinois class action charges the company with violating COPPA by marketing to children without sufficient privacy safeguards.

Paul Rosenzweig and I dig deep into the 20-year history behind DHS's now-abandoned proposal to conduct airport facial scans on US citizens leaving the country. We reach broad agreement that this is one of the rare privacy versus national security debates in which there's precious little privacy or national security at stake.

Matthew Heiman lays out the remarkable international food fight over taxes on digital business. USTR is threatening big tariffs on French wine to counter France's digital tax. Spain is apparently eager to join France in the fight. And the effort to work everything out at the OECD, where the EU has a 20-1 voting advantage over the US, has predictably not worked out well from the US point of view.

Cue the white cat: The United States has actually imposed sanctions on an entity called "Evil Corp." SPECTRE was apparently unavailable. Nick explains. This is part of criminal charges against two highly effective Russian bank hackers – and arguably a confession of weakness on the US government's part.

Meanwhile, Amazon's efforts to avoid tort liability for third-party sales on its site look to be suffering a long strategic defeat in the courts. The latest example is a Sixth Circuit ruling allowing plaintiffs to pursue product tort claims against the Internet giant.

I offer a quick update and some rare kind words for Nancy Pelosi, who is calling for modification of the North American free trade deal to drop the provision turning Section 230 of the Communications Decency Act into international law. This provision has garnered genuinely bipartisan opposition, so perhaps she'll prevail.

Paul gets stuck explaining two dog-bites-man stories. The FBI says any Russian app could be a counterintelligence threat. Well, what else would they say? And the European Commission, when asked what US regulation of encryption would mean for Europe, says more or less that the EU may have to escalate from eyebrow-lifting to throat-clearing.

Nick closes the program with advice about the new Android exploit that works (in the right circumstances) to compromise apps running on a fully patched and up-to-date Android phone.

Download the 292nd Episode (mp3).

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed!

As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter, and send in your questions, comments, and suggestions for topics or interviewees. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!

The views expressed in this podcast are those of the speakers and do not reflect the opinions of the firm.



  1. re: the attack at Naval Air Station Pensacola

    First, that was NOT the kind of attack the FISA Sec 215 program was designed to identify. From the information available so far, the attacker was a Saudi national already authorized to be on the base as part of the training he was receiving from the US military. Training of foreign military is something we've done for many, many years. Tapping the metadata about his phone records would have told you precisely nothing that we didn't and don't already know.

    Second, even if that attack was the kind of attack FISA Sec 215 was designed to identify, the occurrence of this attack does nothing whatsoever to defend the program from cancellation. At the risk of stating the obvious, the program still exists (though admittedly not in the completely unconstrained way it did pre-Snowden) yet the program failed to identify the attack in advance. So it's either ineffective or not applicable to this kind of attack.

    1. In the case of the Pensacola attack, it is plausible that the perpetrator acted in concert with others, and given his foreign status, equally plausible that some of the arranging and coordinating might have crossed national boundaries. Investigators would be derelict if they did not investigate that possibility, and the 215 program was established specifically to facilitate such investigations.

      That said, congressional restrictions added in conjunction with reauthorization have made what is left of the program hard to use and unreliable, so much so that NSA dropped it as an ongoing activity. Even so, metadata subpoenas can and should be used in this case to try to get evidence of such a conspiracy, or to determine that if there was one, it did not use the public telephone network in the US or other places where NSA has, or can obtain, access to the data.

  2. Can you remind us what your real objection to S230 is again?

    Holding a platform, like a public bulletin board, liable for the content that others put there seems wrong in every way. Or is it that you don't like the safe harbor clause for content curation, under which a platform can perform imperfect moderation without taking on full liability, as opposed to the prior standard, where a platform that chose to moderate assumed complete liability while one that chose not to moderate had none?

    That part, at least, I can understand, though I think the better way would be through a model based on contract law and expectations: a platform could keep being liable only for itself so long as it does what it says it will do, but it cannot have an "and we can do anything we like" clause. So, for instance, YouTube could say it will only allow videos it can sell ads on to at least 75% of the market, and 4chan could say it allows anything that's not illegal in and of itself (which really limits it to banning child porn and nothing else). But YouTube couldn't say it would ban content that doesn't meet "community standards" (because that doesn't actually mean anything), nor could it say it would only ban videos depicting illegal acts and then ban transsexuals talking about their transition, though it could still ban WW2 history channels showing concentration camps.

    1. "Or is it that you don’t like the safe harbor clause for content curation such that a platform can perform imperfect moderation without gaining full liability"

      The problem isn't "imperfect" moderation. The problem is bad faith moderation. It's not like they're stumbling into political censorship, they're deliberately engaged in it.

      And the reason they're doing it is that the "in good faith" language of Section 230 isn't being enforced.

      1. That presupposes what good faith means in this context.

        Suppose the terms of service say that they’ll remove any content they deem hateful in their sole discretion. Then they write an algorithm to remove any posts with the words FSM or “flying spaghetti monster,” because they hate his noodly appendages. That seems like pretty clear good faith, right?

        But what if they mistyped, and instead of FSM they targeted FGM? Since they're all in favor of cutting little girls, they don't find that topic hateful. That's still done in good faith - they targeted what they said they'd target, but just weren't any good at it.

        If the platform is honest about what opinions they’ll suppress, how isn’t that still “good faith?”

  3. This is the second shooting at a US Navy base this week. Earlier, a 22-year-old sailor opened fire on three employees of the US Department of Defense, killing two people before committing suicide.
