Andrew Yang Proposes Making Social Media Algorithms Subject to Federal Approval
The presidential hopeful on Thursday released a plan to regulate tech giants.

Entrepreneur Andrew Yang has run a tech-centered campaign for the Democratic presidential nomination, positioning his Universal Basic Income proposal as a solution to rapid technological change and increasing automation. On Thursday, he released a broad plan to rein in the tech companies that he says wield unbridled influence over the American economy and society at large.
"Digital giants such as Facebook, Amazon, Google, and Apple have scale and power that renders them more quasi-sovereign states than conventional companies," the plan reads. "They're making decisions on rights that government usually makes, like speech and safety."
Yang has now joined the growing cacophony of Democrats and Republicans who wish to amend Section 230 of the Communications Decency Act; the landmark legislation protects social media companies from facing certain liabilities for third-party content posted by users online. As Reason's Elizabeth Nolan Brown writes, it's essentially "the Internet's First Amendment."
The algorithms developed by tech companies are the root of the problem, Yang says, as they "push negative, polarizing, and false content to maximize engagement."
That's true, to an extent. Just like with any company or industry, social media firms are incentivized to keep consumers hooked as long as possible. But it's also true that social media does more to boost already popular content than it does to amplify content nobody likes or wants to engage with. And in an age of polarization, it appears that negative content can be quite popular.
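To make that dynamic concrete, here is a deliberately simplified sketch of engagement-based feed ranking. The weights and field names below are invented for illustration; they are not any platform's actual formula.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    shares: int
    comments: int
    angry_reactions: int

def engagement_score(post: Post) -> float:
    # Toy objective: every interaction counts toward ranking,
    # whether the reaction is approving or furious.
    return (post.likes
            + 2.0 * post.shares
            + 1.5 * post.comments
            + 2.0 * post.angry_reactions)

def rank_feed(posts: list[Post]) -> list[Post]:
    # Most-engaging first: already-popular posts get more exposure,
    # which in turn earns them still more engagement.
    return sorted(posts, key=engagement_score, reverse=True)
```

Nothing in a ranker like this distinguishes popular-because-useful from popular-because-enraging; it simply amplifies whatever already draws reactions.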
To counter the proliferation of content he does not like, Yang would require tech companies to work alongside the federal government in order to "create algorithms that minimize the spread of mis/disinformation," as well as "information that's specifically designed to polarize or incite individuals." Leaving aside the constitutional question, who in government gets to make these decisions? And what would prevent future administrations from using Yang's censorious architecture to label and suppress speech they find polarizing merely because they disagree with it politically?
Yang's push to alter 230 is similarly misguided, as he seems to think that removing liability protections would somehow end only bad online content. "Section 230 of the Communications Decency Act absolves platforms from all responsibility for any content published on them," he writes. "However, given the role of recommendation algorithms—which push negative, polarizing, and false content to maximize engagement—there needs to be some accountability."
Yet social media sites are already working to police content they deem harmful—something that should be clear in the many Republican complaints of overzealous and biased content removal efforts. Section 230 expressly permits those tech companies to scrub "objectionable" posts "in good faith," allowing them to self-regulate.
It goes without saying that social media companies haven't done a perfect job with screening content, but their failure says more about the task than their effort. User-uploaded content is essentially an infinite stream. The algorithms that tech companies use to weed out content that clashes with their terms of service regularly fail. Human screeners also fall short. Even if Facebook or Twitter or YouTube could create an algorithm that deleted only the content those companies intended for it to delete, they would still come under fire for what content they find acceptable and what content they don't. Dismantling Section 230 would probably discourage efforts to fine-tune the content vetting process and instead lead to broad, inflexible content restrictions.
Or, it could lead to platforms refusing to make any decisions about what they allow users to post.
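Either way, the arithmetic of moderation at scale is unforgiving. As a rough illustration (both figures below are invented round numbers, not platform statistics):

```python
# Back-of-the-envelope math on content moderation at platform scale.
# Both inputs are illustrative assumptions, not real statistics.
daily_posts = 500_000_000   # assume half a billion pieces of content per day
accuracy = 0.999            # assume a 99.9 percent accurate classifier

errors_per_day = daily_posts * (1 - accuracy)
print(f"{errors_per_day:,.0f} wrong moderation calls per day")
# -> 500,000 wrong calls per day, every single day
```

Even a classifier that is right 99.9 percent of the time would produce half a million wrong calls a day at that volume, each one a potential source of liability in a post-230 world.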
"Social media services moderate content to reduce the presence of hate speech, scams, and spam," Carl Szabo, Vice President and General Counsel at the trade organization NetChoice, said in a statement. "Yang's proposal to amend Section 230 would likely increase the amount of hate speech and terrorist content online."
It's possible that Yang misunderstands the very core of the law. "We must address once and for all the publisher vs. platform grey area that tech companies have lived in for years," he writes. But that dichotomy is a fiction.
"Yang incorrectly claims a 'publisher vs. platform grey area.' Section 230 of the Communications Decency Act does not categorize online services," Szabo says. "Section 230 enables services that host user-created content to remove content without assuming liability."
Where the distinction came from is somewhat of a mystery, as that language is absent from the law. Section 230 protects sites from certain civil and criminal liabilities if those companies are not explicitly editing the content; content removal does not qualify as such. A newspaper, for instance, can be held accountable for libelous statements that a reporter and editor publish, but its comment section is exempt from such liabilities. That's because the paper isn't editing that content—but it can safely remove comments it deems objectionable.
Likewise, Facebook does not become a "publisher" when it designates a piece of content to the trash chute, any more than a coffee house would suddenly become a "publisher" if it decided to remove an offensive flier from its bulletin board.
Yang's mistaken interpretation of Section 230 is likely a result of the "dis/misinformation" around the law promoted by his fellow presidential candidates and in congressional hearings. There's something deeply ironic about that.
There's something deeply ironic about that.
Irony is dead. It’s buried right next to sarcasm.
Hello.
All I know is Amazon has the best customer service. BETTER NOT MESS WITH THAT.
How much was that "you had me at poutine" flannel t-shirt, anyway?
I would purchase and wear such a shirt unironically.
I think it is fair to say that to some extent these companies control the minds of many people. Not 24/7, but they can turn 5 minutes into hours through algorythms, followed by influenced purchasing.
With more information and better human/AI interfacing, it's not that much of a stretch to see a future in which people are basically tech zombies. While that sounds awful, adding the government to that industry is downright terrifying.
"rhythm" is a word, and so is "algorithm". I refuse to believe we live in a world where there isn't a clever portmanteau to be made there, but "algorythm" it is not.
I know I've been browsing for a few minutes, and then it turns into hours or days, and the next thing I know I've ordered 10K worth of stuff online that I didn't need.
Not.
“They're making decisions on rights that government usually makes, like speech and safety."
Government makes decisions on speech? What dictatorship does this idiot live in? Because in my country, the government does not regulate speech.
Should not, but wants to.
"Because in my country, the government does not regulate speech."
Not yet. But according to Reason contributor Noah Berlatsky, it should.
#BringBackBerlatsky
Come on, dude, get new material.
Your imagination doesn't actually count as a country.
Yang would require tech companies to work alongside the federal government in order to "create algorithms that minimize ... information that's specifically designed to polarize or incite individuals."
Oh, FFS! Just ban communication and be done with it!
but... his very proposal is polarizing and inciting me!
You know who else had a very polarizing and inciting proposal?
Santa Claus?
Ray Ban?
Immanuel Velikovsky?
Shit, some people get polarized over car brands; there will be nothing left to talk about with rules like that.
When will we finally heal the Coke-Pepsi divide?
Real libertarians drink RC Cola. With rum.
Marvel or DC?
Corporate [GOV] wants full gun-forced control of all media content? Does that run at odds with the old outdated concept called "Freedom of the Press" or something?
Corporate media owned by anti-Trump billionaires (NYT, Reason, WaPo, The Atlantic, etc.) are, of course, exempt.
Classic totalitarian claim: "Once I am in charge, all (algorithms) will be better."
Google should hand him the halting function.
Let the government decide what's truth and what we can see?
What could possibly go wrong?
How awful. It seems the only bipartisan ideas nowadays are bipartisanly bad ideas.
Has there ever been a good bipartisan idea?
The interstate highway system? Standardized time zones? Uh...sorry, that's all I've got.
Actually, the standardization of the time zones is a classic example of voluntary cooperation undertaken without government involvement.
Regulation is ultimately elitist. Not being an elitist means that we care more about letting Facebook's and Google's customers do as they please than we do about our disgust for the way Facebook and Google treat the privacy of their customers.
I wish more people chose to avoid using their services and chose the competition instead, but the sad fact is that plenty of people know that Facebook and Google are treating them like garbage but they willingly choose to keep using their products and services anyway. They should be free to do so without an elitist like Yang using the government to override their freedom of choice with his personal preferences.
Meanwhile, the government needs to be there to punish Facebook and Google when either of them legitimately violates the rights of their customers. In the case of Google acquiring access to millions of people's medical histories, without those individuals' knowledge or consent, that appears to be a clear violation of HIPAA and should subject Google to hefty fines and a court order to stop what they're doing.
The problem with using clear violations of legitimate laws to punish the misbehavior of big tech is that it doesn't offer much in the way of opportunity to indulge in elitist justifications for meddling in the freedom of stupid people and forcing our personal preferences on others. I mean, that's the problem if you're an elitist. If you're not an elitist, then that's a feature and not a bug.
Yang's respect for individual autonomy seems to be even harder to find than his list of legitimately good ideas. His ideas are generally awful, but even good ideas become awful when we try to inflict them on others using the coercive power of government.
Staying away from marijuana might be a good idea for a lot of people, and if Yang wants to persuade them to avoid it, he should be free to do so. Yang wants to launch the drug war on big tech, but he's really talking about preventing people from making choices for themselves. Fuck Yang.
^This
What competition? Every other social media site I'm aware of is even worse about banning speech they don't like.
I tried jumping ship. Tumblr was a hell hole of SJWs but their moderation team gave zero fucks, until of course Apple cracked down on them and that site went up in flames. All the other startups I see are starting with the premise that they'll ban anyone who disagrees with the creators.
I can only put one link per post, and if I start posting links in quick succession, it'll start treating me like a spammer.
If you're looking for a social media substitute, I'd look at Mastodon and MeWe, one's more like Twitter and the other is like Facebook.
Here's a link to MeWe. Start a page, invite your friends and family, and tell them to stop subjecting themselves to Facebook's abuse.
https://mewe.com/
I started a Slack instance for friends and family, and with all the ad-ins, it's better than Facebook in a lot of ways. There are lots of more privacy centered alternatives.
Mastodon was created because Twitter and Facebook weren't leftist and censorious enough. Just have a look at the Mastodon Server Covenant.
The Mastodon creators are authoritarian leftists. They hate libertarians with a passion. And they will find ways to sabotage any non-leftist use of their software. It's best not to deal with them at all.
the government doesn't need to be there to punish them. customers can punish them directly, in the marketplace, if they actually care.
Well, customers can punish them when they violate customers' rights and break the law by suing them in court, too, and using the government to protect people's rights when they're violated by private parties is a legitimate libertarian use of government.
No new Yang regulation required.
Fuck Yang is the name of my dentist.
A highly astute observation in light of the fact that the first amendment, exactly like section 230, grants special legal impunity to certain commercial interests and ensures that certain types of publishers cannot be held responsible for the content of their publications.
I've heard some bad ideas for dealing with this (mostly government created) problem.
But this is by far the worst.
Perhaps the worst part of it is that, much like Section 230, it would mostly serve as another form of insulation/immunity for the effects of any such "government approved" algorithms.
Yes we have to maintain that strict separation that currently exists whereby social media companies collude with only one party.
They aren't so much colluding with any one party as they are colluding with the entrenched bureaucracy and the media/special interest merry go round.
Government-Media complex.
Only 19 comments?
Losers.
Looks like Fist is on an extended coffee break.
I think you might be lost, Rufus. Don’t worry, I won’t tell.
Creating an algorithm to block offensive content shouldn't be too hard, you could get 99% of it blocked just by banning the words ****, ******, ***, ******, **********, ****(*), ********, and, of course, *****. After that, you'd just have to weed out the few remaining instances of words such as *********, **********, *****, and the like.
Of course, the ****** in the ******* wouldn't like it, but they can ******** and ***** for all I care.
With modern machine learning it's not just particular words that get blocked it's *** ****** ** ******** *** ***** **** *** **********.
You can't even ****** ********* **** *** ****** ******.
They'll block that too.
The workarounds for this wind up being much more offensive. For example:
The severely-tanned individuals that like fried chicken and carbonated fruit-flavored beverages with high alcohol content don't seem to get along with the long-nosed people with curly hair that are tight with their money. Funny thing is that those with long beards that ride camels and toss those who have same gender relations off of rooftops hate them both while the short ones with narrow eye-openings that are good at math keep to themselves.
Algorithms are going to miss that.
Well done
lol. all of it ^^
don't forget to ban "Alfa Romeo" or "in just a few hours a week"
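Since this thread is reinventing the Scunthorpe problem, here's a toy version of the word-list filter (a sketch for illustration, not anyone's production code) and both of its classic failure modes:

```python
import re

BANNED = {"ass"}  # stand-in for the asterisked list upthread

def substring_filter(text: str) -> bool:
    # Substring matching flags innocent words like "classic",
    # "brass", and "assembly" (the Scunthorpe problem).
    return any(word in text.lower() for word in BANNED)

def whole_word_filter(text: str) -> bool:
    # Whole-word matching fixes those false positives but is
    # trivially defeated by "a$$", spacing, or the euphemism
    # workarounds described above.
    return any(re.search(rf"\b{re.escape(word)}\b", text.lower())
               for word in BANNED)

print(substring_filter("a classic brass band assembly"))   # True, all false positives
print(whole_word_filter("a classic brass band assembly"))  # False
print(whole_word_filter("what an a$$"))                    # False, trivially evaded
```

Too strict and you lose the brass bands; too loose and the a$$es walk right through.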
"To counter the proliferation of content he does not like, Yang would require tech companies to work alongside the federal government in order to "create algorithms that minimize the spread of mis/disinformation," as well as "information that's specifically designed to polarize or incite individuals."
Wouldn't an algorithm which minimized the spread of "mis/disinformation" also block almost anything that comes from Mr. Wang? Just asking...
It would block the entire Democrat primary race. Which would be a good thing.
Yep. And it would block a whole lot of Repubs, too. Which is also a good thing.
We're talking about Mr. Yang, not his Wang.
"As Reason's Elizabeth Nolan Brown writes, it's essentially "the Internet's First Amendment."
Since you know it isn't, you have to refer to what someone else claims it is. Crafty.
The internet doesn't need its own First Amendment because what those in the US post on the internet is already protected by the First Amendment. Companies don't have to adhere to the First Amendment when it comes to their products.
The gov is currently using military scams as a pretext to go after social media, and Facebook is saying it will force users to use real names so someone can't claim they're a vet and solicit donations for the legs they Photoshopped out of the pictures. Also, politicians are being impersonated. Facebook can only do this if you provide them all your info and they check it. A scan of your ID probably won't be enough, because if I use someone's ID that person will then have to somehow prove that they're the real person while I insist that I am the real person. (So there.) Then Democrats will claim even an ID is racist, because their racism assumes blacks don't have IDs and are unaware of the DMV.
This is going to be fun.
Andrew Yang wants the 'it's a series of tubes' and 'what, like with a cloth?' people reviewing technology issues.
I was reading Ars Technica the other day and they had an article about the FTC head wanting privacy laws he can enforce. And these people were all for that.
Never mind GDPR. Never mind how they're always bitching about Ajit Pai and how corrupt and captured by special interests the FCC is.
Wait—I thought the CIA was already writing the algorithms.
wow tyrant much?
I'm ambivalent: Is Yang stupid enough to not understand the implications of his proposal, or evil enough to want them? It's a toss-up.
Anyway, this isn't really bipartisan, because on one side of the aisle the complaint is that these platforms are censoring things they shouldn't, and on the other side, the complaint is that they're NOT censoring things they should.
Both sides would mess with the platforms, but to opposite ends.
Right here is why, no matter how much you hate social media, no matter how much Youtube lies to us about how it curates, blocks, bans content via its algorithm, demanding a government solution is always the singular worst way to deal with it. Trust me, Big Tech haters (of which I am marginally one)... you do not want a federal solution to this crap.
Yup. But that doesn't mean there's NOTHING to be done about Youtube/Twitter/Google. If users are getting financial benefit out of these services under the agreement that they adhere to "Terms of Service," that signifies a contract. If these users are then punished without a clear violation of Terms of Service, it's a breach of contract.
Just stick to making sure tech companies are upholding their contracts and let consumers choose what's best.
Agreed. Hence my posts below.
Just so we don't lose track of how fucking shit Google and Youtube are, Youtube is now censoring and investigating creators for "credible threats to life" when you merely mention someone's name and discuss publicly published material.
I would love to see a massive, class action lawsuit by creators against youtube for their frustrating TOS violations against their users. For decades, Libertarians have made fun of the DMV and other government institutions for their comically nonsensical bureaucracy. Youtube makes the DMV look like it's run by the staff at the Ritz Carlton.
I would love to see a massive, class action lawsuit by creators against youtube for their frustrating TOS violations against their users.
But still on the 'preserve Section 230' side of the fence? Seems oxymoronic to me.
As section 230 gives the platforms a "right to moderate" content, I think the question is one of contract. Youtube has a TOS. For the last two or three decades, most users have ignored the TOS for various products and services because it rarely came into play. But now that things like revenue sharing and money is changing hands, the TOS is no longer just a vague document that the company provides for legal cover, and users ignore because everyone is generally clear what's meant by the violations.
The TOS now a thing that carries real weight. It's a contract. And if a company violates its own TOS or misapplies it-- especially where livelihoods are at stake, this seems like a situation that's ripe for a civil lawsuit.
As section 230 gives the platforms a “right to moderate” content, I think the question is one of contract.
Do moderators own the property that they have a congressionally-mandated right to moderate?
Right to moderate aside, I'm unable to reconcile support for an overt liability shield with a preference for massive, class action lawsuit resolution. At the very least you would think the explicit statements of liability would have a chilling effect on any/all lawsuits, no?
Yes, but in my advancing age, I'm reluctant to make grand pronouncements on stuff like this, especially where nuanced legal interpretation is required.
I'm unable to reconcile support for an overt liability shield with a preference for massive, class action lawsuit resolution.
I think your points are valid and worth considering. But my understanding is the liability shield only relates to content posted by users-- making it so the forum operator isn't responsible for illegal content-- which is a reasonable shield-- if that's the extent of the shield. For instance, it would be silly to hold telephone pole companies liable for fliers stapled to their poles. A clumsy analogy, but the best I could come up with in the moment.
If the liability shield extends to HOW they operate their business in regards to their application of TOS etc., then I would emphatically agree, that's a fundamentally bad thing.
As has to be pointed out every time somebody says this, Section 230 gives platforms the right to moderate content *in good faith*.
The very presence of that phrase implies that there's such a thing as bad faith moderation.
I don't disagree with this, but I haven't read the entirety of the law, nor do I purport to understand it fully. I do understand it enough to know it's not the Internet's First Amendment, however.
For those who don't want to click through and watch what might be one of the most laughable, yet enraging examples of Youtube's bullshit: they're literally telling creators that they're being investigated for "credible threats to life," while telling them they don't have enough content or context to verify whether they've actually violated the TOS, but also telling them they need to fix a problem that Youtube refuses to articulate before their videos can be remonetized or unblocked.
And to beat the dead horse, let this be known: Youtube is NOT "organically" curating content that people really like in ways that are "novel and surprising," helping create a fun and informative experience for the end user. IT'S WHAT MADE THEM GREAT!
That is Youtube of 2005, not youtube of 2019. They are blocking, banning, spinning, filtering, throttling and threatening people who even DISCUSS news stories that are unfavorable to a political narrative. QED.
I think it's as much a "user" problem as it is a moderation/policy problem.
What's happening is they have a flagging system whose purpose is to help Youtube crack down on things that clearly violate TOS: nudity, minors in inappropriate situations, pirated materials, etc. Users are bombarding this flagging system to discredit materials they dislike. It gets overloaded, and Youtube wants to cull these complaints so they can actually manage their flags, so they try to mollify these users.
It's a difficult problem. They could completely scrap user flagging, but then they'd lose an extremely valuable resource that helps them protect their platform from the stuff they really don't want. But as long as they have it, people will continue abusing it for petty shit that drains their resources.
Let's pretend you're right (I'm willing to entertain that theory). Imagine a system where, if a neighbor complains about activity on your property, you get locked out of your house or business by the city and have to appeal the process. The city sends you a letter telling you that you must correct the violations before you'll be allowed back in-- no violations have been noted in the letter.
You reach out to the city to get clarification-- they don't respond for several days. Then, when they finally do respond, they say that you've violated property rules. Again, no explicit behavior or rule is specified. When you again try to get a clarification so you can, in good faith, remove the offending content, the city stops responding to your calls and you're permanently locked out of your property.
While it would be tempting to blame the neighbors for this state of affairs, that would ignore the elephant in the room: The city officials locking you out of your property with a comically Kafka-esque response.
In this case, the Kafka-esque bureaucracy is Youtube (or Facebook... or Twitter, or Google...)
Example is...a little bit off the mark. You're looking at this specific case, but it's more like he was issued a warning, with no real threat of consequences. I'm aware others had their videos forced to private mode, but they seemed to know what it was that caused their videos to be locked.
I'm not going to say that Youtube/Google are blameless in this. Their implicit bias is causing them to lend more credence to complaints that others might dismiss. But it's also a case where you get a ton of people flagging videos without giving explicit reasons, and Youtube's moderators themselves are trying to figure out why certain videos are suddenly drawing a ton of flags.
It's not like it's one neighbor complaining about a neighbor's noise violation, it's one neighbor complaining about hundreds or thousands of other neighbors, and there's a few other neighbors also issuing hundreds of complaints, and they're bombarding city hall who is desperately trying to figure out what the issue is since many of them are anonymous and vague. City Hall needs a better appeals system to avoid handing out severe consequences, yes, but nosy busybodies have way too much fucking time on their hands to disrupt the process.
It’s not like it’s one neighbor complaining about a neighbor’s noise violation, it’s one neighbor complaining about hundreds or thousands of other neighbors, and there’s a few other neighbors also issuing hundreds of complaints, and they’re bombarding city hall who is desperately trying to figure out what the issue is since many of them are anonymous and vague. City Hall needs a better appeals system to avoid handing out severe consequences, yes, but nosy busybodies have way too much fucking time on their hands to disrupt the process.
Right, so if you realize your process is being trolled, and you ban/demonetize first, and then... well, let's be real here, never get around to asking questions later, then that's on you, not the douchebag trolls. Twitter and copyright strikers will always exist. The question isn't, how do we make them go away, the question is, how do we deal with them without causing damage to the people that make up our platform and ultimately, ourselves.
If you're right, that it wasn't a moderator relying on an AI bot looking for keywords that banned him, but a petty troll (or trolls) who are initiating a frivolous complaint, then Youtube's moderation policy is basically a suicide pact. It's not just killing the creators, it's killing Youtube.
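The failure mode being described here is basically what naive flag-count thresholding gives you. A toy sketch (the threshold and behavior are invented for illustration, obviously not Youtube's actual pipeline):

```python
from collections import Counter

AUTO_ACTION_THRESHOLD = 100  # invented: flags before automated action

flag_counts = Counter()

def report(video_id: str) -> str:
    # Every flag counts the same, whether it comes from one honest
    # viewer or one member of a coordinated brigade.
    flag_counts[video_id] += 1
    if flag_counts[video_id] >= AUTO_ACTION_THRESHOLD:
        return "demonetized pending review"
    return "queued for review"

# A brigade of 150 accounts trips the same wire that genuinely
# violating content would.
status = "queued for review"
for _ in range(150):
    status = report("controversial_but_legal_video")
print(status)  # -> demonetized pending review
```

As long as the system acts on raw flag volume before a human looks, brigading is indistinguishable from legitimate reporting.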
So you want to put all your Internet algorithms under government control? Now imagine your government becomes like China's. Or that it's run by Literal Hitler.
Any more dumb ideas?
It's not a dumb idea. As long as the right people are in charge, it's a brilliant idea.
Clearly you're not an IT professional.
I've been called a hack by my users, many times.
Andrew Yang Proposes Making Social Media Algorithms Subject to Federal Approval
So why aren't they subject to Federal Approval via the judicial process? I mean, if my employer can't fire me because I'm black, and the courts can root around in their business practices to find out if they're racist, why can't they root around and prove that YouTube isn't discriminatory in its contracting? Why is this in Congress' hands?
I wholeheartedly agree that giving government any control over the censorship and bias algorithms of privately owned social networks is terrifyingly Orwellian, not remotely within the government's enumerated powers, and a clear violation of the first amendment. What people miss is a more fundamental question. Social media companies only have power because their users (aka the electorate) are gullible sheep who vote on how to run a government that effectively has no bounds on its power. If you accept the sad reality that the vast majority of the electorate are unable to think for themselves and are easily led, is democracy such a good idea? And if you accept the sad reality that their votes can have a drastic effect on everybody's lives, since constitutional limits on the federal government's power haven't been close to followed in over a century, is a wholly unconstrained government a good idea? This entire issue is trying to put a shitty, ineffective band-aid, with massive potential for abuse and unintended consequences, on the fact that democracy coupled with an unconstrained government is an awful idea.
Bravo!
Where the distinction came from is somewhat of a mystery, as that language is absent from the law.
Rule of goats, Billy: whether you're being willfully ignorant or really do like to fuck goats is a meaningless distinction. Title II of the Communications Act of 1934 distinguishes parties responsible for simply communicating information from parties editing and selectively transmitting information. Prior to that, from a technology standpoint, the distinction was made (progressively less) informally on a more case-by-case basis in courts and at the state and local level.
Section 230 protects sites from certain civil and criminal liabilities if those companies are not explicitly editing the content; content removal does not qualify as such.
Funny that just one sentence before you said "that language is absent from the law," yet in this sentence you fail to show me where the words 'explicitly' or 'editing' are found in the law. Matter of fact, "A provider of interactive computer service shall, at the time of entering an agreement with a customer for the provision of interactive computer service and in a manner deemed appropriate by the provider, notify such customer that parental control protections (such as computer hardware, software, or filtering services) are commercially available that may assist the customer in limiting access to material that is harmful to minors. Such notice shall identify, or provide the customer with access to information identifying, current providers of such protections" would indicate that they are required to at least be able to edit the content and do so explicitly.
Section 230 was an ill-conceived moralist piece of garbage that neither established nor enshrined free speech on the internet. Reason's continued insistence that it is something it's not only serves to discredit their other claims to libertarianism.
Their hatred of Trump has caused Democrats to consider many bad ideas, including this one, which will come back to bite them.
For example, it would be relatively easy for an internet 'fact checker' to conclude that Warren's numbers don't add up, and then refuse to run her ads. Another candidate's ad that promises to 'get guns off the streets' could be refused since we all know that the 2A wouldn't allow that to happen.
I honestly don't think Democrats would even be considering this junk had Trump not come along.
Another non-engineer commenting on technology they do not understand.
You're just another engineer who doesn't understand how the real world works.
Never heard of the guy. Is he North Korean?
I’m responsible for what I say, nobody else. Commensurate with that responsibility is authority.
Tech giants overstep their authority when they take responsibility for me.
Make no mistake about it, it isn’t benevolence. It’s their play for power over our freedom.
You haven't a clue what an algorithm is or the scope of what he is saying, and it's axiomatic from the very words you chose to write.
What rock did you crawl out from under?
The algorithms being discussed are the “programming” for automated censorship.
Neither tech giants nor anyone else in a society with free speech has the authority or the responsibility to censor legal speech, manually or with automation.
The government has the responsibility to tell tech giants that if they want to operate in our society they must adhere to our constitutional rights.
That or leave their lucrative business to someone who will.
The government gave up, 20 years ago, trying to write software. They now buy whatever is available from the lowest bidder. Government computers run commercial operating systems. Commercial spreadsheets, databases, word processors, all are cheaper than custom ones. And Yang wants the government to approve the design of commercial software? Megadisaster, anyone?
I have been an engineer for 30 years and what he is suggesting is beyond preposterous.
This is why ALL politicians need to shut their stupid mouths when it comes to technology.
Clearly this feckless moron, like every idiot ignorant of technology, hasn't a clue what he's talking about. So this contemptible idiot is proposing that MORONS OF GOVERNMENT have mandatory access to code from PRIVATE companies.
Does this idiot think he's running for God?
Mr. Yang, a leader in the Nanny State Junior Debating Championship.
A true feckless fool and jerk.
What is an algorithm? And since today's students are even more ignorant of math than I am, where will we find regulators to monitor big tech's algorithms? I know. China!
Sheesh. As much as I dislike Trump on a personal level, the Democratic candidates are going insane.