Technology

Andrew Yang Proposes Making Social Media Algorithms Subject to Federal Approval

The presidential hopeful on Thursday released a plan to regulate tech giants.

Entrepreneur Andrew Yang has run a tech-centered campaign for the Democratic presidential nomination, positioning his Universal Basic Income proposal as a solution to rapid technological change and increasing automation. On Thursday, he released a broad plan to rein in the tech companies that he says wield unbridled influence over the American economy and society at large.

"Digital giants such as Facebook, Amazon, Google, and Apple have scale and power that renders them more quasi-sovereign states than conventional companies," the plan reads. "They're making decisions on rights that government usually makes, like speech and safety."

Yang has now joined the growing chorus of Democrats and Republicans who wish to amend Section 230 of the Communications Decency Act, the landmark legislation that protects social media companies from certain liabilities for third-party content posted by users online. As Reason's Elizabeth Nolan Brown writes, it's essentially "the Internet's First Amendment."

The algorithms developed by tech companies are the root of the problem, Yang says, as they "push negative, polarizing, and false content to maximize engagement."

That's true, to an extent. Just like with any company or industry, social media firms are incentivized to keep consumers hooked as long as possible. But it's also true that social media does more to boost already popular content than it does to amplify content nobody likes or wants to engage with. And in an age of polarization, it appears that negative content can be quite popular.

To counter the proliferation of content he does not like, Yang would require tech companies to work alongside the federal government in order to "create algorithms that minimize the spread of mis/disinformation," as well as "information that's specifically designed to polarize or incite individuals." Leaving aside the constitutional question, who in government gets to make these decisions? And what would prevent future administrations from using Yang's censorious architecture to label and suppress speech they find polarizing merely because they disagree with it politically?

Yang's push to alter 230 is similarly misguided, as he seems to think that removing liabilities would somehow end only bad online content. "Section 230 of the Communications Decency Act absolves platforms from all responsibility for any content published on them," he writes. "However, given the role of recommendation algorithms—which push negative, polarizing, and false content to maximize engagement—there needs to be some accountability."

Yet social media sites are already working to police content they deem harmful—something that should be clear in the many Republican complaints of overzealous and biased content removal efforts. Section 230 expressly permits those tech companies to scrub "objectionable" posts "in good faith," allowing them to self-regulate.

It goes without saying that social media companies haven't done a perfect job of screening content, but their failure says more about the task than their effort. User-uploaded content is essentially an infinite stream. The algorithms that tech companies use to weed out content that clashes with their terms of service regularly fail. Human screeners also fall short. Even if Facebook or Twitter or YouTube could create an algorithm that deleted only the content those companies intended it to delete, they would still come under fire for what content they find acceptable and what content they don't. Dismantling Section 230 would probably discourage efforts to fine-tune the content vetting process and instead lead to broad, inflexible content restrictions.

Or, it could lead to platforms refusing to make any decisions about what they allow users to post.

"Social media services moderate content to reduce the presence of hate speech, scams, and spam," Carl Szabo, Vice President and General Counsel at the trade organization NetChoice, said in a statement. "Yang's proposal to amend Section 230 would likely increase the amount of hate speech and terrorist content online."

It's possible that Yang misunderstands the very core of the law. "We must address once and for all the publisher vs. platform grey area that tech companies have lived in for years," he writes. But that dichotomy is a fiction.

"Yang incorrectly claims a 'publisher vs. platform grey area.' Section 230 of the Communications Decency Act does not categorize online services," Szabo says. "Section 230 enables services that host user-created content to remove content without assuming liability."

Where the distinction came from is somewhat of a mystery, as that language is absent from the law. Section 230 protects sites from certain civil and criminal liabilities so long as those companies are not explicitly editing the content; content removal does not qualify as such. A newspaper, for instance, can be held accountable for libelous statements that a reporter and editor publish, but its comment section is exempt from such liabilities. That's because the paper isn't editing that content—but it can safely remove it if it deems it objectionable.

Likewise, Facebook does not become a "publisher" when it relegates a piece of content to the trash chute, any more than a coffee house would suddenly become a "publisher" if it decided to remove an offensive flier from its bulletin board.

Yang's mistaken interpretation of Section 230 is likely a result of the "dis/misinformation" around the law promoted by his fellow presidential candidates and in congressional hearings. There's something deeply ironic about that.

  1. There’s something deeply ironic about that.

    Irony is dead. It’s buried right next to sarcasm.

    1. Hello.

      All I know is Amazon has the best customer service. BETTER NOT MESS WITH THAT.

      1. How much was that “you had me at poutine” flannel t-shirt, anyway?

        1. I would purchase and wear such a shirt unironically.

  2. I think it is fair to say that to some extent these companies control the minds of many people. Not 24/7, but they can turn 5 minutes into hours through algorythms, followed by influenced purchasing.

    With more information and better human/AI interfacing, it’s not that much of a stretch to see a future in which people are basically tech zombies. While that sounds awful, adding the government to that industry is downright terrifying.

    1. “rhythm” is a word, and so is “algorithm”. I refuse to believe we live in a world where there isn’t a clever portmanteau to be made there, but “algorythm” it is not.

    2. I know I’ve been browsing for a few minutes, and then it turns into hours or days, and the next thing I know I’ve ordered 10K worth of stuff online that I didn’t need.

      Not.

  3. “They’re making decisions on rights that government usually makes, like speech and safety.”

    Government makes decisions on speech? What dictatorship does this idiot live in? Because in my country, the government does not regulate speech.

    1. Should not, but wants to.

    2. “Because in my country, the government does not regulate speech.”

      Not yet. But according to Reason contributor Noah Berlatsky, it should.

      #BringBackBerlatsky

      1. Come on, dude, get new material.

    3. Because in my country, the government does not regulate speech.

      Your imagination doesn’t actually count as a country.

  4. Yang would require tech companies to work alongside the federal government in order to “create algorithms that minimize … information that’s specifically designed to polarize or incite individuals.”

    Oh, FFS! Just ban communication and be done with it!

    1. but… his very proposal is polarizing and inciting me!

      1. You know who else had a very polarizing and inciting proposal?

        1. Immanuel Velikovsky?

    2. Shit, some people get polarized over car brands; there will be nothing left to talk about with rules like that.

      1. When will we finally heal the Coke-Pepsi divide?

        1. Real libertarians drink RC Cola. With rum.

  5. Corporate [GOV] wants full gun-forced control of all media content? Does that run at odds with the old outdated concept called “Freedom of the Press” or something?

    1. Does that run at odds with the old outdated concept called “Freedom of the Press” or something?

      Corporate media owned by anti-Trump billionaires (NYT, Reason, WaPo, The Atlantic, etc.) are, of course, exempt.

  6. Classic totalitarian claim: “Once I am in charge, all (algorithms) will be better.”

    1. Google should hand him the halting function.

  7. Let the government decide what’s truth and what we can see?

    What could possibly go wrong?

  8. How awful. It seems the only bipartisan ideas nowadays are bipartisanly bad ideas.

    1. Has there ever been a good bipartisan idea?

      1. The interstate highway system? Standardized time zones? Uh…sorry, that’s all I’ve got.

  9. Regulation is ultimately elitist. Not being an elitist means that we care more about letting Facebook’s and Google’s customers do as they please than we do about our disgust for the way Facebook and Google treat the privacy of their customers.

    I wish more people chose to avoid using their services and chose the competition instead, but the sad fact is that plenty of people know that Facebook and Google are treating them like garbage but they willingly choose to keep using their products and services anyway. They should be free to do so without an elitist like Yang using the government to override their freedom of choice with his personal preferences.

    Meanwhile, the government needs to be there to punish Facebook and Google when either of them legitimately violates the rights of their customers. In the case of Google acquiring access to millions of people’s medical histories, without those individuals’ knowledge or consent, that appears to be a clear violation of HIPAA and should subject Google to hefty fines and a court order to stop what they’re doing.

    The problem with using clear violations of legitimate laws to punish the misbehavior of big tech is that it doesn’t offer much in the way of opportunity to indulge in elitist justifications for meddling in the freedom of stupid people and forcing our personal preferences on others. I mean, that’s the problem if you’re an elitist. If you’re not an elitist, then that’s a feature and not a bug.

    Yang’s respect for individual autonomy seems to be even harder to find than his list of legitimately good ideas. His ideas are generally awful, but even good ideas become awful when we try to inflict them on others using the coercive power of government.

    Staying away from marijuana might be a good idea for a lot of people, and if Yang wants to persuade them to avoid it, he should be free to do so. Yang wants to launch the drug war on big tech, but he’s really talking about preventing people from making choices for themselves. Fuck Yang.

    1. What competition? Every other social media site I’m aware of is even worse about banning speech they don’t like.

      I tried jumping ship. Tumblr was a hellhole of SJWs but their moderation team gave zero fucks, until of course Apple cracked down on them and that site went up in flames. All the other startups I see are starting with the premise that they’ll ban anyone who disagrees with the creators.

      1. I can only put one link per post, and if I start posting links in quick succession, it’ll start treating me like a spammer.

        If you’re looking for a social media substitute, I’d look at Mastodon and MeWe, one’s more like Twitter and the other is like Facebook.

        Here’s a link to MeWe. Start a page, invite your friends and family, and tell them to stop subjecting themselves to Facebook’s abuse.

        https://mewe.com/

        I started a Slack instance for friends and family, and with all the add-ins, it’s better than Facebook in a lot of ways. There are lots of more privacy-centered alternatives.

        1. If you’re looking for a social media substitute, I’d look at Mastodon and MeWe, one’s more like Twitter and the other is like Facebook.

          Mastodon was created because Twitter and Facebook weren’t leftist and censorious enough. Just have a look at the Mastodon Server Covenant.

          The Mastodon creators are authoritarian leftists. They hate libertarians with a passion. And they will come up with ways of sabotaging any non-leftist use of their software. It’s best not to deal with them at all.

    2. the government doesn’t need to be there to punish them. customers can punish them directly, in the marketplace, if they actually care.

      1. Well, customers can punish them when they violate customers’ rights and break the law by suing them in court, too, and using the government to protect people’s rights when they’re violated by private parties is a legitimate libertarian use of government.

        No new Yang regulation required.

    3. Fuck Yang is the name of my dentist.

  10. As Reason’s Elizabeth Nolan Brown writes, it’s essentially “the Internet’s First Amendment.”

    A highly astute observation in light of the fact that the first amendment, exactly like section 230, grants special legal impunity to certain commercial interests and ensures that certain types of publishers cannot be held responsible for the content of their publications.

  11. I’ve heard some bad ideas for dealing with this (mostly government created) problem.

    But this is by far the worst.

    1. Perhaps the worst part of it is that, much like Section 230, it would mostly serve as another form of insulation/immunity from the effects of any such “government approved” algorithms.

    2. Yes we have to maintain that strict separation that currently exists whereby social media companies collude with only one party.

      1. They aren’t so much colluding with any one party as they are colluding with the entrenched bureaucracy and the media/special interest merry go round.

        Government-Media complex.

  12. Only 19 comments?

    Losers.

  13. Looks like Fist is on an extended coffee break.

    1. I think you might be lost, Rufus. Don’t worry, I won’t tell.

  14. Creating an algorithm to block offensive content shouldn’t be too hard, you could get 99% of it blocked just by banning the words ****, ******, ***, ******, **********, ****(*), ********, and, of course, *****. After that, you’d just have to weed out the few remaining instances of words such as *********, **********, *****, and the like.

    Of course, the ****** in the ******* wouldn’t like it, but they can ******** and ***** for all I care.

    1. With modern machine learning it’s not just particular words that get blocked it’s *** ****** ** ******** *** ***** **** *** **********.

      You can’t even ****** ********* **** *** ****** ******.

      They’ll block that too.

    2. The workarounds for this wind up being much more offensive. For example:

      The severely-tanned individuals that like fried chicken and carbonated fruit-flavored beverages with high alcohol content don’t seem to get along with the long-nosed people with curly hair that are tight with their money. Funny thing is that those with long beards that ride camels and toss those who have same gender relations off of rooftops hate them both while the short ones with narrow eye-openings that are good at math keep to themselves.

      Algorithms are going to miss that.

    3. don’t forget to ban “Alfa Romeo” or “in just a few hours a week”

  15. “To counter the proliferation of content he does not like, Yang would require tech companies to work alongside the federal government in order to “create algorithms that minimize the spread of mis/disinformation,” as well as “information that’s specifically designed to polarize or incite individuals.”

    Wouldn’t an algorithm which minimized the spread of “mis/disinformation” also block almost anything that comes from Mr. Wang? Just asking…

    1. It would block the entire Democrat primary race. Which would be a good thing.

      1. Yep. And it would block a whole lot of Repubs, too. Which is also a good thing.

    2. We’re talking about Mr. Yang, not his Wang.

  16. “As Reason’s Elizabeth Nolan Brown writes, it’s essentially “the Internet’s First Amendment.”

    Since you know it isn’t you have to refer to what someone else claims it is. Crafty.

    The internet doesn’t need its own First Amendment because what those in the US post on the internet is already protected by the First Amendment. Companies don’t have to adhere to the First Amendment when it comes to their products.

    The gov is currently combating military scams on social media, and Facebook is saying it will force users to use real names so someone can’t claim they’re a vet and solicit donations for the legs they Photoshopped out of the pictures. Also, politicians are being impersonated. Facebook can only do this if you provide them all your info and they check it. A scan of your ID probably won’t be enough, because if I use someone’s ID, that person will then have to somehow prove that they’re the real person while I insist that I am. (So there.) Then Democrats will claim even an ID is racist, because their racism assumes blacks don’t have IDs and are unaware of the DMV.

    This is going to be fun.

  17. Andrew Yang wants the ‘it’s a series of tubes’ and ‘what, like with a cloth?’ people reviewing technology issues.

    I was reading Ars Technica the other day and they had an article about the FTC head wanting privacy laws he can enforce. And these people were all for that.

    Never mind GDPR. Never mind how they’re always over there bitching about Ajit Pai and how corrupt and captured by special interests the FCC is.

  18. Wait—I thought the CIA was already writing the algorithms.

  19. I’m ambivalent: Is Yang stupid enough to not understand the implications of his proposal, or evil enough to want them? It’s a toss-up.

    Anyway, this isn’t really bipartisan, because on one side of the aisle the complaint is that these platforms are censoring things they shouldn’t, and on the other side, the complaint is that they’re NOT censoring things they should.

    Both sides would mess with the platforms, but to opposite ends.

  20. Right here is why, no matter how much you hate social media, no matter how much Youtube lies to us about how it curates, blocks, bans content via its algorithm, demanding a government solution is always the singular worst way to deal with it. Trust me, Big Tech haters (of which I am marginally one)… you do not want a federal solution to this crap.

    1. Yup. But that doesn’t mean there’s NOTHING to be done about Youtube/Twitter/Google. If users are getting pecuniary benefit out of these services under the agreement that they adhere to “Terms of Service,” that signifies a contract. If these users are then punished without a clear violation of the Terms of Service, it’s a breach of contract.

      Just stick to making sure tech companies are upholding their contracts and let consumers choose what’s best.

  21. Just so we don’t lose track of how fucking shit Google And Youtube are, Youtube is now censoring and investigating creators for “credible threats to life” when you merely mention someone’s name and discuss publicly published material.

    1. I would love to see a massive, class action lawsuit by creators against youtube for their frustrating TOS violations against their users. For decades, Libertarians have made fun of the DMV and other government institutions for their comically nonsensical bureaucracy. Youtube makes the DMV look like it’s run by the staff at the Ritz Carlton.

      1. I would love to see a massive, class action lawsuit by creators against youtube for their frustrating TOS violations against their users.

        But still on the ‘preserve Section 230’ side of the fence? Seems oxymoronic to me.

        1. As section 230 gives the platforms a “right to moderate” content, I think the question is one of contract. Youtube has a TOS. For the last two or three decades, most users have ignored the TOS for various products and services because it rarely came into play. But now that things like revenue sharing and money are changing hands, the TOS is no longer just a vague document that the company provides for legal cover and that users ignore because everyone is generally clear on what’s meant by the violations.

          The TOS is now a thing that carries real weight. It’s a contract. And if a company violates its own TOS or misapplies it – especially where livelihoods are at stake – this seems like a situation that’s ripe for a civil lawsuit.

          1. As section 230 gives the platforms a “right to moderate” content, I think the question is one of contract.

            Do moderators own the property that they have a congressionally-mandated right to moderate?

            Right to moderate aside, I’m unable to rectify support for an overt liability shield and a preference for massive, class action lawsuit resolution. At the very least you would think the explicit statements of liability would have a chilling effect on any/all lawsuits, no?

            1. Yes, but in my advancing age, I’m reluctant to make grand pronouncements on stuff like this, especially where nuanced legal interpretation is required.

              I’m unable to rectify support for an overt liability shield and a preference for massive, class action lawsuit resolution.

              I think your points are valid and worth considering. But my understanding is the liability shield only relates to content posted by users– making it so the forum operator isn’t responsible for illegal content– which is a reasonable shield– if that’s the extent of the shield. For instance, it would be silly to hold telephone pole companies liable for fliers stapled to their poles. A clumsy analogy, but the best I could come up with in the moment.

              If the liability shield extends to HOW they operate their business in regards to their application of TOS etc., then I would emphatically agree, that’s a fundamentally bad thing.

          2. As has to be pointed out every time somebody says this, Section 230 gives platforms the right to moderate content *in good faith*.

            The very presence of that phrase implies that there’s such a thing as bad faith moderation.

            1. I don’t disagree with this, but I haven’t read the entirety of the law, nor do I purport to understand it fully. I do understand it enough to know it’s not the Internet’s First Amendment, however.

    2. For those who don’t want to click through and watch what might be one of the most laughable, yet enraging, examples of Youtube’s bullshit: they’re literally telling creators that they’re being investigated for “credible threats to life” while telling them they don’t have enough content or context to verify whether they’ve actually violated the TOS, yet creators need to fix the problem that Youtube refuses to articulate in order to get their videos remonetized or unblocked.

    3. And to beat the dead horse, let this be known: Youtube is NOT “organically” curating content that people really like in ways that are “novel and surprising” helping create a fun and informative experience for the end user. IT’S WHAT MADE THEM GREAT!

      That is Youtube of 2005, not youtube of 2019. They are blocking, banning, spinning, filtering, throttling and threatening people who even DISCUSS news stories that are unfavorable to a political narrative. QED.

      1. I think it’s as much a “user” problem as it is a moderation/policy problem.

        What’s happening is they have a flagging system whose purpose is to help Youtube crack down on things that clearly violate the TOS: nudity, minors in inappropriate situations, pirated materials, etc. Users are bombarding this flagging system to discredit materials they dislike. It gets overloaded, and Youtube wants to cull these complaints so it can actually manage its flags, so it tries to mollify these users.

        It’s a difficult problem. They could completely scrap user flagging, but then they’d lose an extremely valuable resource that helps them protect their platform from the stuff they really don’t want. But as long as they have it, people will continue abusing it for petty shit that drains their resources.

        1. Let’s pretend you’re right (I’m willing to entertain that theory). Imagine a system where, if a neighbor complains about activity on your property, you get locked out of your house or business by the city and have to appeal the process. The city sends you a letter telling you that you must correct the violations before you’ll be allowed back in – no violations have been noted in the letter.

          You reach out to the city to get clarification– they don’t respond for several days. Then, when they finally do respond, they say that you’ve violated property rules. Again, no explicit behavior or rule is specified. When you again try to get a clarification so you can, in good faith, remove the offending content, the city stops responding to your calls and you’re permanently locked out of your property.

          While it would be tempting to blame the neighbors for this state of affairs, that would ignore the elephant in the room: The city officials locking you out of your property with a comically Kafka-esque response.

          In this case, the Kafka-esque bureaucracy is Youtube (or Facebook… or Twitter, or Google…)

          1. Example is…a little bit off the mark. You’re looking at this specific case, but it’s more like he was issued a warning, with no real threat of consequences. I’m aware others had their videos forced to private mode, but they seemed to know what it was that caused their videos to be locked.

            I’m not going to say that Youtube/Google are blameless in this. Their implicit bias is causing them to lend more credence to complaints that others might dismiss. But it’s also a case where you get a ton of people flagging videos without giving explicit reasons, and Youtube’s moderators themselves are trying to figure out why certain videos are suddenly drawing a ton of flags.

            It’s not like it’s one neighbor complaining about a neighbor’s noise violation, it’s one neighbor complaining about hundreds or thousands of other neighbors, and there’s a few other neighbors also issuing hundreds of complaints, and they’re bombarding city hall who is desperately trying to figure out what the issue is since many of them are anonymous and vague. City Hall needs a better appeals system to avoid handing out severe consequences, yes, but nosy busybodies have way too much fucking time on their hands to disrupt the process.

            1. It’s not like it’s one neighbor complaining about a neighbor’s noise violation, it’s one neighbor complaining about hundreds or thousands of other neighbors, and there’s a few other neighbors also issuing hundreds of complaints, and they’re bombarding city hall who is desperately trying to figure out what the issue is since many of them are anonymous and vague. City Hall needs a better appeals system to avoid handing out severe consequences, yes, but nosy busybodies have way too much fucking time on their hands to disrupt the process.

              Right, so if you realize your process is being trolled, and you ban/demonetize first, and then… well, let’s be real here, never get around to asking questions later, then that’s on you, not the douchebag trolls. Twitter and copyright strikers will always exist. The question isn’t, how do we make them go away, the question is, how do we deal with them without causing damage to the people that make up our platform and ultimately, ourselves.

              If you’re right, that it wasn’t a moderator relying on an AI bot looking for keywords that banned him, but a petty troll (or trolls) who are initiating a frivolous complaint, then Youtube’s moderation policy is basically a suicide pact. It’s not just killing the creators, it’s killing Youtube.

  22. So you want to put all your Internet algorithms under government control? Now imagine your government becomes like China’s. Or that it’s run by Literal Hitler.

    Any more dumb ideas?

    1. It’s not a dumb idea. As long as the right people are in charge, it’s a brilliant idea.

      1. Clearly you’re not an IT professional.

        1. I’ve been called a hack by my users, many times.

  23. Andrew Yang Proposes Making Social Media Algorithms Subject to Federal Approval

    So why aren’t they subject to Federal Approval via the judicial process? I mean, if my employer can’t fire me because I’m black and the courts can route around their business practices to find out if they’re racist, why can’t they route around and prove that YouTube isn’t discriminatory in its contracting? Why is this in Congress’ hands?

  24. I wholeheartedly agree that giving government any control over the censorship and bias algorithms of privately owned social networks is terrifyingly Orwellian, not remotely within the government’s enumerated powers, and a clear violation of the First Amendment. What people miss is a more fundamental question. Social media companies only have power because their users (aka the electorate) are gullible sheep who vote on issues of how to run a government that effectively has no bounds on its power. If you accept the sad reality that the vast majority of the electorate are unable to think for themselves and are easily led, is democracy such a good idea? And if you accept the sad reality that their votes can have a drastic effect on everybody’s lives, since constitutional limits on the federal government’s power haven’t been close to followed in over a century, is a wholly unconstrained government a good idea? This entire issue is trying to put a shitty, ineffective band-aid, with massive potential for abuse and unintended consequences, on the fact that democracy coupled with an unconstrained government is an awful idea.

  25. Where the distinction came from is somewhat of a mystery, as that language is absent from the law.

    Rule of goats, Billy: whether you’re being willfully ignorant or really do like to fuck goats is a meaningless distinction. Title II of the Communications Act of 1934 distinguishes parties responsible for simply communicating information from parties editing and selectively transmitting information. Prior to that, from a technology standpoint, the distinction was made (progressively less informally) on a more case-by-case basis in courts and at the state and local level.

    Section 230 protects sites from certain civil and criminal liabilities if those companies are not explicitly editing the content; content removal does not qualify as such.

    Funny that just one sentence before you said “as that language is absent from the law,” and in this sentence you fail to show me where the words ‘explicitly’ or ‘editing’ are found in the law. Matter of fact, “A provider of interactive computer service shall, at the time of entering an agreement with a customer for the provision of interactive computer service and in a manner deemed appropriate by the provider, notify such customer that parental control protections (such as computer hardware, software, or filtering services) are commercially available that may assist the customer in limiting access to material that is harmful to minors. Such notice shall identify, or provide the customer with access to information identifying, current providers of such protections.” That would indicate that they are required to at least be able to edit the content and do so explicitly.

    Section 230 was an ill-conceived moralist piece of garbage that neither established nor enshrined free speech on the internet. Reason’s continued insistence that it is something it’s not only serves to discredit their other claims to libertarianism.

  26. Their hatred of Trump has caused Democrats to consider many bad ideas, including this one, which will come back to bite them.

    For example, it would be relatively easy for an internet ‘fact checker’ to conclude that Warren’s numbers don’t add up, and then refuse to run her ads. Another candidate’s ad that promises to ‘get guns off the streets’ could be refused since we all know that the 2A wouldn’t allow that to happen.
    I honestly don’t think Democrats would even be considering this junk had Trump not come along.

    1. Another non-engineer commenting on technology they do not understand.

      1. You’re just another engineer who doesn’t understand how the real world works.

  27. Never heard of the guy. Is he North Korean?

  28. I’m responsible for what I say, nobody else. Commensurate with that responsibility is authority.

    Tech giants overstep their authority when they take responsibility for me.

    Make no mistake about it, it isn’t benevolence. It’s their play for power over our freedom.

    1. You haven’t a clue what an algorithm is or the scope of what he is saying, and it’s axiomatic from the words you chose to write.

      1. What rock did you crawl out from under?

        The algorithms being discussed are the “programming” for automated censorship.

        Neither tech giants nor anyone else in a society with free speech has the authority or the responsibility to censor legal speech, manually or with automation.

        1. The government has the responsibility to tell tech giants that if they want to operate in our society they must adhere to our constitutional rights.

          That or leave their lucrative business to someone who will.

  29. The government gave up, 20 years ago, trying to write software. They now buy whatever is available from the lowest bidder. Government computers run commercial operating systems. Commercial spreadsheets, databases, word processors, all are cheaper than custom ones. And Yang wants the government to approve the design of commercial software? Megadisaster, anyone?

    1. I have been an engineer for 30 years and what he is suggesting is beyond preposterous.

  30. This is why ALL politicians need to shut their stupid mouths when it comes to technology.

    Clearly this feckless moron like every idiot of technology hasn’t a clue what he’s talking about. So this contemptible idiot is proposing MORONS OF GOVERNMENT have mandatory access to code from PRIVATE companies.

    Does this idiot think he’s running for God?

  31. Mr. Yang, a leader in the Nanny State Junior Debating Championship.

    A true feckless fool and jerk.

  32. What is an algorithm? And since today’s students are even more ignorant of math than I am, where will we find regulators to monitor big tech’s algorithms? I know. China!

  34. Sheesh. As much as I dislike Trump on a personal level, the democratic candidates are going insane.
