Where angels fear to tread

Episode 253 of the Cyberlaw Podcast: NewsGuard takes on fake news

The Volokh Conspiracy

Our interview is with two men who overcame careers as lawyers and journalists to become successful serial entrepreneurs, and who are now trying to solve the "fake news" problem. Gordon Crovitz and Steve Brill co-founded NewsGuard to rate news sites on nine journalistic criteria, using, of all things, real people instead of algorithms. By the end of the interview, I've confessed myself a reluctant convert to the effort, despite NewsGuard's treatment of Instapundit, which Gordon Crovitz and I both read regularly but which has not received a green check.

In the news, Klon Kitchen talks about the latest on cyberconflict with Russia: CYBERCOM's takedown of the Russian troll farm during the 2018 midterms. The Russians are certainly feeling abused. They are using US attacks to justify pursuing their "autonomous Internet," and they've sentenced two Kaspersky Lab experts to long jail terms for treason, likely because of their law enforcement cooperation with the United States.

Gus Hurwitz, Klon, and Nick Weaver muse on the latest evidence that information intermediaries still haven't found a way to deal with wayward members of their ecosystems. Amazon marketplace sellers will now have the ability to remove listings they deem counterfeit. Amazon has left the policing of fake paid Amazon reviews to the FTC. And Facebook is being criticized for the human costs of enforcing its content rules with flesh-and-blood moderators. (The failure of Silicon Valley to get a handle on this problem is, of course, the key to NewsGuard's business model.)

Finally, just to give me an excuse to link to this Dr. Strangelove clip, Gus tells us that not even our prosthetic arms are safe from IoT hacking.

Download the 253rd Episode (mp3).

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed!

As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@steptoe.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!

The views expressed in this podcast are those of the speakers and do not reflect the opinions of the firm.



  1. Algorithms are, and always will be, biased by the beliefs of the people who write them.

    1. But at this point, personal intervention is capable of much more detailed bias.

      Organizations like Google exaggerate the extent to which they stick to algorithms; they have whole teams of people whitelisting and blacklisting sites, and tweaking extensive exception lists that are incorporated into the algorithms.

  2. Algorithms are, and always will be, biased by the beliefs of the people who write them.

    1. If you have a known finite list of measurable attributes, then it’s just math – no bias in crunching the numbers.

      Bias comes when choosing what to include in the model, and how to interpret the results of the model.

  3. The thing is, Instapundit is a group blog. While Mr. Reynolds is fairly sane, at night he has Sarah Hoyt post a lot of crazy stuff.

    1. You’re really dragging the Overton Window, Jeremy.
