The Volokh Conspiracy

Mostly law professors | Sometimes contrarian | Often libertarian | Always independent

Zoombombing and the Law

A lot depends on what you mean by "Zoombombing."

Friday, the top federal prosecutor in Detroit wrote, "You think Zoom bombing is funny? Let's see how funny it is after you get arrested. If you interfere with a teleconference or public meeting in Michigan, you could have federal, state, or local law enforcement knocking at your door." Is that so? Or, perhaps more usefully, when is that so?

Well, as with so many buzzwords, such as "trolling," "doxxing," "bullying," and the like, "Zoombombing" can cover many different things (as the vague word "interfere" suggests). Let's start by looking at what behavior might be covered by the term, and consider (as the law often does) the physical-world analogs of such behavior.

[1.] Hacking into closed, password-protected Zoom meetings. This is indeed likely a crime under the federal Computer Fraud and Abuse Act, which bars accessing computers "without authorization." Think of this as cyber-breaking-and-entering.

[2.] Accessing non-password-protected Zoom meetings to which the public wasn't invited. This could be a conversation among friends or coworkers, or a meeting between a business and its clients (or students and their professor). It may feel like a sort of cyber-trespass, but it's not clear when it would be a crime: Much depends on the particular state trespass laws involved, but I doubt that a court would be willing to read a typical state criminal trespass statute, written with the physical world in mind, as applying to unauthorized access to virtual spaces.

Computer crime laws are, of course, written with computer access in mind; but there too things aren't clear. Accessing computer-based meetings that are publicly accessible on the Internet, but that the operator doesn't want the public to access, is in some ways similar to accessing publicly accessible web pages. Federal courts have split on whether such access to web pages violates the CFAA (either in general or when the site operator announces that the scraping is forbidden). For more, see this post by our own Orin Kerr, likely the nation's leading CFAA expert.

It's also possible that some state computer crimes laws might apply here, at least if the Zoombomber has reason to know which state the meeting organizer is located in. (For a sense of how courts deal with claims that state laws can't apply to the Internet, see this Maryland case upholding a state anti-spam law, which also discusses other cases that have struck down state anti-pornography laws; and see also this more recent case.) But a lot depends on the text of the particular state computer crime law, and any precedents interpreting it; so while Zoombombing of this sort could be outlawed, it's not clear whether it is.

[3.] Accessing Zoom meetings to which the public was invited, and displaying offensive materials, for instance in the video feed that's coming from your computer. This is like showing up at a publicly accessible talk wearing offensive messages on your T-shirt, or carrying offensive signs. I doubt that this would generally be a crime, though you could get kicked out of the meeting for that. Likewise, if the meeting opens up its chat function, I think that posting offensive content in the chat isn't likely a crime. (Same if it opens up its share screen function, though wise meeting organizers generally won't do that.)

[4.] Accessing Zoom meetings to which the public was invited, and saying offensive things using your audio, thus making it harder for the speakers' audio to be heard. That's the equivalent of cyber-heckling, and might in principle be punishable by state laws banning disrupting a lawful assembly (see, e.g., here and here). I think such laws might be applicable to online speech, though the boundaries of these laws are themselves complicated and not entirely clear. But note that these laws, to the extent they are constitutional, are constitutional because they are content-neutral—because they target conduct based on its disruptive noise, and don't target messages based on their offensive content.

[5.] Accessing Zoom meetings of any sort, and posting true threats of violence or other crime. Such threats are punishable whatever the medium—phone, fax, text message, Zoom, or whatever else. The same is true for speech that is criminalized via some of the narrow First Amendment exceptions, such as obscenity (hard-core pornography) or child pornography.

It's also conceivable that the threshold for finding pornography to be obscene and thus constitutionally unprotected would be lower if it's communicated to unwilling viewers. But to fit within the obscenity exception, the material would still have to be pornographic and not merely violent, racist, or otherwise offensive to some people.

What about the First Amendment? Generally speaking, you don't have a First Amendment right to speak on others' property, if they want to exclude you—and you certainly don't have a First Amendment right to speak in ways that physically interrupt others' speech, or change the tenor of a conversation that others are choreographing. Content-neutral laws that bar people from accessing others' property are thus generally constitutional, even when someone wants to violate them in order to speak.

But:

[1.] The laws involved must indeed be content-neutral, and applied in a content-neutral way (again, except to the extent they involve unprotected speech, such as true threats). Thus, if people hack into a meeting to say racist things, they can be punished for the hacking, but not for the content of their message.

Hate crimes laws can ban violence or vandalism that targets victims because of the victims' race, religion, and the like: They are constitutional precisely because they ban such nonspeech conduct, rather than mere speech. Hate crimes laws thus can't punish, say, showing up to a publicly accessible Zoom meeting wearing a T-shirt with a racist message. And while hate crimes laws could in principle ban trespass or hacking motivated by the target's race, religion, and such, federal hate crimes laws don't extend to that (and, to my knowledge, most state hate crimes laws don't, either).

[2.] If the Zoom meeting is organized by the government, the government may not exclude listeners based on the viewpoint expressed on their video feeds, or even the viewpoint of their heckling. Say, for instance, a public university organizes an event, and invites the public (or even just students). It can't then exclude attendees because of the offensive political messages on their T-shirts; likewise, it can't exclude them because of the offensive political messages on their video feeds.

And while the university could have a policy of imposing discipline on all students who heckle speakers (or even having such students prosecuted), it can't discipline hecklers who shout racist messages but not hecklers who shout, say, anti-Trump messages. The same, I think, would be true of Zoom-heckling.

Note, though, that this involves decisions by the public university itself, or a public university department, or other government bodies. A public university professor who organizes his own Zoom meeting can pick and choose what attendees to invite and which ones to exclude, since he's not using his government powers (cf. Naffe v. Frey (9th Cir. 2015), for more on the state action requirement generally).

This, by the way, highlights the value of public universities and other government entities having clear policies forbidding heckling, whether on Zoom or in person: If a university allows people to heckle expressing some viewpoints, it can't then punish them or have them prosecuted for heckling expressing other viewpoints.

But let's get real: what I've offered is a quick, high-level glance at the law in theory. In practice, you're usually going to have a hard time getting a prosecutor to do anything about most of these problems, except perhaps some kinds of hacking. Federal prosecutors tend to focus on big-ticket items; even many criminal frauds are seen as too small potatoes for them.

State prosecutors may have lower prosecution thresholds, but they may be reluctant to get involved in cases where the offender may be in another state or otherwise hard to get at: Even if they technically have jurisdiction over the offense (because the defendant knew or should have known that he was, say, interrupting a meeting that was organized by someone in that state), the investigation could be a huge hassle, especially if it requires extradition and extensive cooperation with prosecutors in other jurisdictions. And imagine if the defendant is in a foreign country; it would take a lot to get prosecutors willing to deal with that.

The solution, it seems to me, is generally technologically enabled self-protection—features such as muting, passwords, and waiting rooms provided by the videoconferencing systems. These features make it possible to block outsiders altogether, or at least to eject someone who is misbehaving and then keep that person from rejoining.

As with much on the Internet, Zoombombing may cause more problems than, say, in-person heckling, because it's easier, cheaper, and in some ways less risky for the bad guys. (Compare, for instance, the relationship between spamming and traditional junk mail.) But it's also easier, cheaper, and less risky for the good guys to fight it: Ejecting a Zoom-heckler is annoying, but much less annoying than trying to deal with a real person (or a mob) who might physically resist such ejection.

These technological self-protection features aren't a perfect solution; like all security features, they create something of a hassle, and at times a bit of an arms race (for instance, if you eject a Zoom-heckler from a deliberately publicly accessible meeting, and he then tries to get back on, perhaps using a different user id). But relying on prosecutors is an even less perfect solution. And the technological self-protection features are likely to quickly get better over time, especially given competition between software providers. Prosecutor responsiveness isn't likely to quickly get better.