Social Media

6 Terrible New Tech Bills in Congress

While expressing concern for free speech and privacy, lawmakers are seriously threatening both.


Sen. Josh Hawley (R–Mo.) has taken a lot of heat for a recent proposal to ban autoplay videos in the name of stopping "social media addiction." But Hawley's new "SMART Act" may not even be the nuttiest tech bill to grace Capitol Hill this year.

From the "Bot Disclosure" and "Biased Algorithm Deterrence" Acts, to net neutrality for speech on social media, new ways for the feds to snoop on user data, and a bill that could kill the teen YouTube star, legislative proposals from both Democrats and Republicans display a fundamental ignorance of the internet coupled with a willingness to micromanage even the smallest aspects of digital data and design. Here are six of the worst.

Protecting Children from Online Predators Act of 2019 (S.1916)
Sponsor: Sen. Josh Hawley (R–Mo.)

Hawley's idea of "protecting children from online predators" is to prohibit social media from recommending any videos that feature anyone under age 18. Teen video makers couldn't rely on algorithms to recommend any of their content. Sites like Facebook would be prohibited from recommending family videos featuring children to grandma and grandpa.

Across the board, the bill—introduced in June—would make it illegal for any computer service "that hosts or displays user-submitted video content, and makes recommendations to users about which videos to view," to "recommend any video to a user of the covered interactive computer service if [the company] knows, or should have known, that the video features 1 or more minors."

Biased Algorithm Deterrence Act of 2019 (H.R.492)
Sponsor: Rep. Louis Gohmert (R–Texas)

The so-called "Biased Algorithm Deterrence Act" would discourage web services from displaying content in anything but chronological order. Under Gohmert's bill, any "social media service that displays user-generated content in an order other than chronological order, delays the display of such content relative to other content, or otherwise hinders the display of such content relative to other content" could face huge criminal and civil liabilities.

The bill, introduced back in January, would amend the federal law known as Section 230 to say that doing any of the above or otherwise using algorithms to determine how third-party content is displayed makes a company legally liable for that content. Users would have no choice in how content on the apps and sites they use would be displayed.

Bot Disclosure and Accountability Act of 2019 (S.2125)
Sponsor: Sen. Dianne Feinstein (D–Calif.)

Feinstein's bot bill, introduced last month, declares that in order "to protect the right of the American public under the First Amendment," candidates and political parties would be barred from using any "automated software program or process intended to impersonate or replicate human activity online" (it's not quite clear what precisely that means) and that everyone must disclose to the federal government the use of any such software. The Federal Trade Commission (FTC) would be in charge of setting specific rules for the disclosure of this activity.

For now, the bill lays out this somewhat confusing guidance: the FTC will require "a social media provider to establish and implement policies and procedures to require a user of a social media website owned or operated by the social media provider to publicly disclose the use of any" so-called bots. People running such accounts would have to "provide clear and conspicuous notice of the automated program in clear and plain language to any other person or user of the social media website who may be exposed to activities conducted by the automated program." And companies would have to develop tools to help people disclose bot status, as well as "a process to identify, assess, and verify whether the activity of any user of the social media website is conducted by an automated software program or process."

The result would likely be a huge crackdown on the automated scheduling tools that many organizations and people use in perfectly benign ways all the time, all in the name of avoiding a small number of malicious actors who certainly aren't going to play by the U.S. government's bot-labeling requirements anyway.

Ending Support for Internet Censorship Act (S. 1914)
Sponsor: Hawley, again

This June bill from Hawley is ostensibly about preventing online "censorship," but "ESIC" would actually put the Federal Communications Commission in charge of determining what constitutes politically neutral content moderation and link curation, and it would deny Section 230 legal protections to large companies that don't fit the bill.

If ESIC passes, companies would be discouraged from investigating and filtering out abusive, hateful, threatening, defamatory, or otherwise objectionable content, and would instead be incentivized to simply delete content flagged by any other user, regardless of what rules or norms the content did or did not violate. Check out my recent article on Section 230 for more on how the law works and why Hawley is wrong about it.

To amend section 230 of the Communications Act of 1934 … to stop censorship, and for other purposes  (H.R. 4027)
Sponsor: Rep. Paul Gosar (R–Ariz.)

Gosar's as-yet-untitled bill was introduced on July 25 and still has no full text. But judging from his statements about it, we can expect it to be moderately-to-extremely dumb. In tweeting about the bill, Gosar doesn't even accurately describe the law it's meant to modify, claiming that Section 230 makes some sort of distinction between "platforms," which are allowed "discretion for removing content," and "publishers," who are not allowed this discretion because "they monetize their users' content." This is not accurate.

Gosar goes on to say his bill would tweak language in Section 230 that makes it explicit that companies can "restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable." He says it would revoke the part about "objectionable" material, the very bit that gives companies broader protection for content moderation efforts than they would have if permitted to moderate based only on obscenity, violence, or harassment.

Under Gosar's vision of social media regulation, there would basically be two settings for social-media users: see everything, or stay on a strictly G-rated version of the internet. Users could choose, Gosar tweeted, between "a self-imposed safe space, or unfettered free speech."

Designing Accounting Safeguards To Help Broaden Oversight and Regulations on Data (S.1951)
Sponsor: Sen. Mark Warner (D–Va.)

Warner's bill would define digital entities that profit off of user data as "commercial data operators" and require those with "more than 100,000,000 unique monthly visitors, or users" in the U.S. to regularly provide each user with an individual data valuation estimate. At least every 90 days, companies would have to share with each user "an assessment of the economic value that the commercial data operator places on the data of that user."

The bill would also require these companies to report to the Securities and Exchange Commission the aggregate value of user data, along with information on all data-sharing contracts with third parties and "any other item the Commission determines, by rule, is necessary or useful for the protection of investors and in the public interest."