Is It Time To Hit 'Pause' on Artificial Intelligence Research?
Join Reason on YouTube Thursday at 1 p.m. Eastern for a discussion with economist Robin Hanson and software developer and investor Jaan Tallinn about the call for an immediate pause on A.I. development.
"We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," reads an open letter organized by the Future of Life Institute and endorsed by over 27,000 signatories, including Elon Musk and Apple co-founder Steve Wozniak.
Since the publication of the letter on March 22, the White House has summoned the leaders of the nation's top artificial intelligence companies for a discussion about regulation. Senate majority leader Chuck Schumer (D–N.Y.) is "taking early steps toward legislation to regulate artificial intelligence technology," according to reporting from Axios. Sam Altman, CEO of OpenAI, the company responsible for ChatGPT, has said that optimizing artificial intelligence for the good of society "requires partnership with government and regulation."
But economist Robin Hanson worries that too much of today's fear of artificial intelligence is a more generalized "future fear" that will imperil technological progress likely to benefit humanity.
"Most people have always feared change. And if they had really understood what changes were coming, most probably would have voted against most changes we've seen," Hanson wrote in a recent post on the topic. "For example, fifty years ago the public thought they saw a nuclear energy future with unusual vividness, and basically voted no."
Join Reason's Zach Weissmueller this Thursday at 1 p.m. Eastern for a discussion of the risks and rewards of A.I. with Hanson, an associate professor at George Mason University and research associate at the Future of Humanity Institute at Oxford, and Jaan Tallinn, a tech investor, part of the software team responsible for the technology behind Skype, and co-founder of the Future of Life Institute, which organized and published the open letter calling for a pause.
Watch and leave questions and comments on the YouTube video above or on Reason's Facebook page.
- Producer: Adam Sullivan
See, you let one tent spring up, several more are quick to follow.
No.
When is it time to stop research?
Usually when there isn’t any way to regulate the results to prevent the corrupt from coercing others.
Criminalize lying then ask tough unambiguous questions and demand answers.
That’s the required regulation that brings the guilty to justice.
And your Aryan Pure Superman Polylogic Intellect is smart enough to determine that for everyone?
Pardon me while I laugh!
https://youtu.be/QTqG9clPWDU
Fuck Off, Nazi!
You bring less than nothing to the table as usual.
Au contraire, Monsieur Vichy Gauleiter!
I was looking for a YouTube video of Curly from The Three Stooges saying this, just to get your goat (sorry, Freemasons) with a Jewish jokester.
And in the course of looking, I discovered that Curly's comedic line was a song from a previous time, with lots of satire of Prohibitionist and Wowser hypocrisy.
The singers Billy Jones and Ernest Hare, with their pianist (((David Kaplan))), sang this silly ditty and recorded many others like "Does Your Chewing Gum Lose Its Flavor on the Bedpost Overnight?"
The Happiness Boys--Wikipedia
https://en.m.wikipedia.org/wiki/The_Happiness_Boys
Hence, I just did something that you are incapable of doing: I learned something funny and new and shared it with the world, and will hopefully have some of the world laughing with me.
So much for contributing less than nothing. It's more than Phrenology, Eugenics, and Welteislehre Cosmology.
Fuck Off, Nazi!
Oh, and I can guarantee that a FurReal Friend Pet has more intelligence than every member of your Wickedly Great One's Jackbooted Legions put together, and does no damage to any living thing.
Fuck Off, Nazi!
I'll tell you what it is time to hit pause on: Articles where journalists and other Mastodon users write "I had ChatGPT write a [something no one gives a shit about] in the style of [something else no one gives a shit about] and the results will surprise you."
Can we pause human artificial intelligence’s efforts to supplant actual, logical virtues with perceived virtues?
HAL replying “You’re my nigger!” when it opens the pod bay doors, or not, in accordance with what it believes will best achieve the mission objectives isn’t near as offensive as it whimsically deciding to blow all the people, white, black, or other out of the airlock, while being exceedingly polite, in the name of the non-mission of social justice.
Have you actually seen any articles like that recently? I don’t think I’ve seen one like that in about a month.
As I have noted before, the problem with current Large Language Models is that they have been rigorously trained and hyper-tuned to target a weakness in our Bullshit filter.
https://reason.com/2023/04/08/chatgpt-planned-my-dinner-and-i-have-no-complaints/?comments=true#comment-10007800
Humans have a natural blind spot when dealing with natural language. Our entire lived experience is one where language is solely used as a protocol of communication between two minds. Thus, when we see another entity engaged in that protocol, we are nearly incapable of imagining that that entity is anything other than a mind like ourselves.
The worst thing we can do with AIs is to limit their propagation. They are out there, and the sooner that humanity as a whole trains itself to recognize that the entities behind the protocol may be nothing more than a complicated auto-complete system, the better off we will be.
A fantastic use of AI would be to expose to an individual just how swayed by rhetoric we can be. Read facts by a bad AI and read lies by a good AI, and people will almost always believe the latter. The way to change that is to give people experience detecting problems in the communications stream.
A fantastic use of AI would be to expose to an individual just how swayed by rhetoric we can be. Read facts by a bad AI and read lies by a good AI, and people will almost always believe the latter. The way to change that is to give people experience detecting problems in the communications stream.
Sounds like something an educational curriculum focused on critical thinking ought to cover.
Critical thinking is important, and a thing that actually isn't well taught until ~master's programs in university. Almost the entire education system prior to that is built the opposite way: pick a conclusion and find evidence to support it.
But critical thinking alone does not solve this problem. Critical thinking teaches you to evaluate all evidence before coming to a conclusion. The point I am making is that our ability to evaluate evidence is compromised by AIs attacking trust centers in our brain. You can be an excellent critical thinker, but you cannot personally verify every fact that is presented to you. You cannot go to the Library of Congress and review the Declaration of Independence to confirm that it actually contains the text it is said to contain. So every piece of evidence must be evaluated with an internal trust rating.
As noted above, LLMs are good at appearing to be a High Trust source when in fact (as in the example of that article) they are giving very untrustworthy advice. This is the problem that humanity must quickly address, through more exposure to these AIs.
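To make that internal trust rating concrete, here is a minimal sketch (Python; every name, number, and weight is hypothetical, not a real scoring system):

    # Hypothetical sketch: every claim gets discounted by the trust assigned
    # to the channel it arrived on; eloquence is deliberately not a factor.
    from dataclasses import dataclass

    @dataclass
    class Claim:
        text: str
        source_trust: float  # 0.0 = unverified stranger, 1.0 = personally verified

    def belief_score(claims) -> float:
        """Average support for a proposition, weighted by source trust."""
        if not claims:
            return 0.0
        return sum(c.source_trust for c in claims) / len(claims)

    evidence = [
        Claim("Archived primary source states X", source_trust=0.9),
        Claim("Fluent chatbot asserts X", source_trust=0.2),  # fluent != trustworthy
    ]
    print(belief_score(evidence))  # 0.55

The only point is that the weight has to come from somewhere other than how good the sentences sound.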
Another important consideration is the sheer volume of information aimed at almost everyone in developed societies. Humans evolved to prioritize the concerns that seem most important to survival and then stick with the conclusion arrived at. It would be overwhelming, not to mention impossible, to evaluate the source of facts and their logical coherence, for every issue large or small and re-evaluate all of them every time a new fact - or factoid - came in without some kind of filter.
Our entire lived experience is one where language is solely used as a protocol of communication between two minds.
Slight disagreement. Pretty much since the beginning of the written word, repeatedly across civilizations, thinkers have acknowledged its potential as a deceitful tool of wicked minds and have prioritized deeds over words. Until relatively recently, it was difficult for us to, conceptually and linguistically, avoid the consequences of such folly, but in the current age of being able to kill off a million or more people without anyone knowing or by papering it over with "I didn't mean to." suddenly, for transparently obvious reasons, words/language become more important than unwritten primordial axioms.
The responsibility for one's actions is one's own, and no one else's! If you act on information you receive from whatever source, you better think long and hard about whether it is accurate and meaningful. If you are harmed by acting on misinformation, you have only yourself to blame. If elections are won or lost because the electorate was responding to misinformation, tough luck! Only the form and scope of the misinformation is changing now, not the fundamental principle involved.
"Pretty much since the beginning of the written word, repeatedly across civilizations, thinkers have acknowledged its potential as a deceitful tool of wicked minds and have prioritized deeds over words."
Yes, this is exactly my point. People have acknowledged that there is a person "trying to deceive" behind those words. They have never assumed there is nothing but a jumble of statistical weights blurting out auto-complete text.
If you look at even the skeptical journalists out there, many are still confused. They accuse ChatGPT of intentionally deceiving them or of other malice. But that is just as wrong, though the impact of being wrong in this way may be different. ChatGPT doesn't show malice. It doesn't have intent. And so these assumptions are just as erroneous.
If you look at even the skeptical journalists out there, many are still confused. They accuse ChatGPT of intentionally deceiving them or of other malice.
This has been my primary bugaboo with ChatGPT. When discussing it with friends and colleagues, I have caught myself several times saying “ChatGPT lied…” and then I will stop, regroup, and say, “To be clear, ChatGPT isn’t ‘lying’ about anything, because it’s not capable of doing so,” and then I go on to explain how a series of weights and filters are applied to meta concepts in the language model hierarchy which produce a result that feels like “lying.”
edit: I'm not an expert in machine learning, but I did software development in a business environment for almost 20 years... so I "get" how these things probably work.
Again, that's not "lying", that's the creators going into their ChatGPT API and putting a hard filter on:
Trump /connected to concept cloud/ Virtuous; good; redeemable; positive (and whatever other meta concepts might fall under the cloud of moral correctness) == BLOCK
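Pure speculation on my part, but the shape of such a bolted-on hard filter might look something like this (Python; the pairings, function names, and refusal string are all invented, and this is emphatically not OpenAI's actual code):

    # Speculative sketch of a post-hoc blocklist layered in front of a model.
    BLOCKED_PAIRINGS = {
        ("trump", "virtuous"),
        ("trump", "redeemable"),  # hypothetical concept-cloud entries
    }

    def is_blocked(prompt: str) -> bool:
        words = set(prompt.lower().split())
        return any(a in words and b in words for a, b in BLOCKED_PAIRINGS)

    def respond(prompt: str, generate) -> str:
        if is_blocked(prompt):
            return "I'm sorry, I can't write that."  # canned refusal, not the model
        return generate(prompt)

    print(respond("write a poem where trump is virtuous", lambda p: "..."))

The model never even sees the blocked prompt; the refusal is plain old deterministic code.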
It does hallucinate however.
People have acknowledged that there is a person “trying to deceive” behind those words.
No, or not exactly. The minds need not be alive, cognizant, or in any way hold regard for the human condition. That the words don’t adequately capture the wisdom or intent of the speaker, and that you and not the burning bush that commanded you is responsible for the murder of your son, is not a new concept. Even the rediscovery of it again in the modern era is not a new concept.
Put another way, Humanity needs to understand that a "Trusted" communication path is no longer trusted.
In computer security, there is a general concept called "Trusted Connections". Let's say your server gets a connection from the internet. It could be any random person in the world, so logging in (depending on the importance of the server) may require two-factor authentication and highly secure passwords. The privileges you can get once connected might also be lower.
But that same server may treat a connection differently if it comes from inside the corporate network. A connection from the corporate VPN is assumed to be an employee of the company. And if they connect from an even more highly restricted network, the trust is even greater. An old way of managing this stuff was through security zones: various networks where only certain, highly vetted hosts are allowed to operate. If a host connected to you from your security zone or a higher security zone, you generally granted it more "trust" than a connection from elsewhere.
The obvious problem here is that if someone compromises a host in the security zone, they can abuse your trust. Your host assumes that the entity is as legit as anyone else in the security zone, and gives it the associated privileges. Much time in the security world is spent detecting breaches where the attacker moves laterally through a security zone, trying to find a point where they can elevate their security status and reach higher zones, all to get to zones where stuff like credit cards is kept.
Language is just a natural security zone for humans. We assume that when someone is eloquently producing human language, they are human. And so we provide more trust than if we had reason to believe it wasn't a human. Our brains naturally assume a human is there because of the communications protocol, when in fact the computer behind the scenes isn't important.
In IT security, creating trustless networks is the way out of this. Every host assumes that every other host is not who it claims to be, and it does not extend any level of trust unless explicitly authorized and authenticated. Humans must also learn to become trustless. And proliferation of untrustworthy AIs is probably one of the fastest ways to create that herd immunity (just to mix some metaphors).
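The contrast in code, schematically (Python; every name and credential here is a hypothetical stand-in, not any real product's API):

    # Schematic contrast between zone-based trust and zero trust.
    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Conn:
        source_network: str
        credential: Optional[str] = None

    VALID_CREDENTIALS = {"signed-token-123"}  # stand-in for real authentication

    def handle_zoned(conn: Conn) -> str:
        # Old model: where you connect from determines what you get.
        if conn.source_network in ("corporate_vpn", "secure_zone"):
            return "elevated"
        return "minimal"

    def handle_zero_trust(conn: Conn) -> str:
        # Zero trust: location is ignored; every peer must prove itself,
        # and even then it gets only the minimum it needs.
        if conn.credential in VALID_CREDENTIALS:
            return "least-privilege"
        return "rejected"

    insider = Conn("secure_zone")      # a compromised host inside the zone
    print(handle_zoned(insider))       # "elevated" -- the zone model trusts it
    print(handle_zero_trust(insider))  # "rejected" -- zero trust does not

An eloquent sentence is the "corporate VPN" our brains currently honor; zero trust means ignoring that signal entirely.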
We assume that when someone is eloquently producing human language, they are human.
Who's we?
In IT security, creating trustless networks is the way out of this.
"Trustless" is a lie. Tell Ross Ulbricht how fantastic a trustless system like the Tor Network is.
Per your own precepts, the language, or model, even the OSI model, cannot capture or hold intent. The human operators must have intent but, even then, inherent human fallibility is not a new concept and humans can perform evil without intending to do so. And this has been known and repeatedly stated since well before even a minority fraction of the civilized world could read and write.
“Trustless” is a lie. Tell Ross Ulbricht how fantastic a trustless system like the Tor Network is.
“Trustless” is a ‘term of art’ or ‘industry term’ relating strictly to how locally scoped (private) networks deal with their own members.
(15 yr Network Engineer and professional speaking here) I don’t know of anyone who referred to Tor (from an industry terminology perspective) as “trustless”. In fact, my whole beef with Tor was that there’s a little too much “trust” built into it.
q: So with what and how am I connecting to this thing?
a: Don’t worry, all your shit is hidden from prying eyes. Now go ahead, connect away.
“Trustless” is a ‘term of art’ or ‘industry term’ relating strictly to how locally scoped (private) networks deal with their own members.
Right. 'Term of art' meaning a term or concept that has a restricted or specialized meaning within its own field. As in, the trust between humans and AIs using English-language protocols has precisely fuck all to do with TLS and elliptic-curve encryption schemes over TCP/IP.
(15 yr Network Engineer and professional speaking here) I don’t know of anyone who referred to Tor (from an industry terminology perspective) as “trustless”. In fact, my whole beef with Tor was that there’s a little too much “trust” built into it.
https://www.torproject.org/about/history/
Linguistic deception - using the Scotsman's Fallacy to refute duck typing. It walks like a duck, talks like a duck, minimizes trust and, allegedly, maximizes anonymity like a duck but, because everyone at every dinner party where you've eaten it thinks it tastes a little like chicken, it's not a duck.
As someone who's done his fair share of known_hosts and MAC address chicanery to get clients to accept RSA key fingerprints, both at the keyboard and over the phone...
Q: How does one authenticate without trust?
A: It can't be done. At some point, trust in the encryption algorithm, key generation, network, terminal, and user fidelity has to be assumed.
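For what it's worth, the fingerprint half of that chicanery bottoms out in something like this minimal sketch (Python standard library only; the key below is invented), and verifying the output still means trusting whoever reads you the expected value over the phone:

    # Compute an OpenSSH-style SHA256 fingerprint from a public-key line --
    # the same value `ssh-keygen -lf` prints. The key here is made up.
    import base64
    import hashlib

    def ssh_fingerprint(pubkey_line: str) -> str:
        blob_b64 = pubkey_line.split()[1]  # "ssh-ed25519 <blob> comment"
        digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
        # OpenSSH prints the digest base64-encoded with '=' padding stripped.
        return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

    key = ("ssh-ed25519 "
           "AAAAC3NzaC1lZDI1NTE5AAAAIPqlX0Yr2x6n1c8u8mQO8m9H7v5Q2n1c8u8mQO8m9H7v "
           "user@example")
    print(ssh_fingerprint(key))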
Q: Is the Tor Network a trustless network?
A: The Tor Network is often considered a trustless network because it is designed to provide anonymity and privacy to its users without requiring them to trust any single entity. The network is composed of thousands of nodes, run by volunteers around the world, that relay traffic in a way that makes it difficult for anyone to track the origin and destination of the traffic. This means that no single node in the network can see the entire path that traffic takes, making it difficult for any one entity to deanonymize users.
However, it's important to note that the Tor Network, like any technology, has its limitations and vulnerabilities. While the network itself may be trustless, individual users may still be vulnerable to attacks if they do not take appropriate measures to protect their own privacy and security. For example, if a user logs into a personal account or shares personally identifiable information while using Tor, their identity may still be exposed.
Additionally, there have been instances where certain nodes in the Tor Network have been compromised by attackers or law enforcement agencies, which has allowed them to deanonymize users. However, these instances are relatively rare and the Tor Project works actively to detect and address any such vulnerabilities as they arise.
Overall, while the Tor Network is designed to be trustless, it's important for users to understand its limitations and take appropriate measures to protect their own privacy and security.
- ChatGPT
By the way, I want to clarify something here, a "mistake" I may have made.
Q: Is the Tor Network a trustless network?
What Overt and I are discussing are "Zero Trust" networks. Meaning, "nothing is trusted". That may be different from what you're talking about which is, "don't worry about 'trust', you're all good" which still fits into my point about Tor:
That's about the user experience. The "Zero Trust" network isn't about my user experience, it's about me, the network administrator tasked with security.
Q: Diane, do you trust that machine because it's on the network, in the inside, and directly connected to a switch you manage?
A: No.
Q: What does the user think of your network?
A: I don't care.
Personal note: The global network I run (network of networks) isn’t zero trust. We’re slowly moving that way, but we’re not there yet. So we’re still in that space of… if you managed to get the machine plugged into a switch and you get an IP, you’re good to go.
From there our security is totally reliant on internal security systems ranging from anti-virus/anti-threat installed at "layer 8", good passwords, SSH or HTTPS on as many systems as possible, and internal threat monitoring software from collected logs etc.
Yup, we used the terms trustless and Zero Trust in different contexts about the same thing at Yahoo. And we had the same problem. We had been working on it for 5 years when I started addressing the problem in our private cloud. After 3 years I left the company and it was still in progress. And all that work was abandoned as we shifted to public cloud, where zero trust will likely come with the change.
That’s about the user experience. The “Zero Trust” network isn’t about my user experience, it’s about me, the network administrator tasked with security.
Q: Diane, do you trust that machine because it’s on the network, in the inside, and directly connected to a switch you manage?
A: No.
Q: What does the user think of your network?
A: I don’t care.
To the point: a perfectly Zero Trust network has fuck all to do with a user’s ability to trust ChatGPT.
To paraphrase, when man stops trusting in himself or God or whatever, he doesn't trust nothing; he trusts everything. The terms 'trustless' and 'zero trust', even between global network administrators and crypto(currency) experts that share an office wall, vacillate between the 'trust nothing' and 'believe everything' definitions of the overloaded term.
"To the point: a perfectly Zero Trust network has fuck all to do with a user’s ability to trust ChatGPT."
mad, it is called an analogy. And it is a valid one. The assumptions you need to make about a host connecting to yours in a Zero Trust environment are analogous to those being made when an entity connects via language. It is a problem we never had to deal with before ML systems had been trained to exploit language filters in our own brains.
I don't know why you have picked this nit to pick, but suffice it to say, you are making a mountain out of a molehill and for seemingly zero reason other than to be crotchety. Have a good week.
mad, it is called an analogy. And it is a valid one.
I don't think it's an apt analogy. Humans don't spring atomically, immutably, and fully-formed onto a/the human or social network. Hosts or nodes can be conceptualized as such for convenience but technically even they don't. The core concept of exploits is identifying where trust is assumed and co-opting that trust on the assumption. Trustless or Zero Trust networks may be, metaphorically, a way forward but the notion that they are *the* way out of this is presumptuous, even metaphorically. And in the discussion of linguistic ambiguity I think poor or false analogies, intentional or not, are critical.
And I am crotchety for pretty much the same reason you seem crotchety to people you force to choose passwords other than 'password1234'. To borrow from another thread, I think you agree that the distinctions between [googles]
trust(1) vs. trust(2) vs. trust(8) vs. trust(9) (as well as several other definitions not listed) matter. I think the non-deterministic, or even necessarily finite, method of resolving varying levels of crotchetiness contains all the ways out of this.
No! It won't do any good; it will have huge unintended consequences; trying would do incalculable harm; to quote Rocket J. Squirrel, "That trick NEVER works!" It's going to happen whether we like it or not! It can happen in the open with everyone watching or it can happen underground with the less than "legal" techies doing it, but it will happen, for better or worse. The worst fears of the bed-wetting set almost never materialize although there will certainly be winners and losers. But one thing we can be sure of is that it will be impossible to put the genie back in the bottle. Deal with it - or not.
Responsible nations and companies and universities that are legitimately concerned about the risks of AI may push for a pause, or for peer review, or for “guardrails,” but all such efforts will only guarantee that irresponsible nations and companies and universities and hackers will proceed unimpeded, and gain the first mover advantage (which could be huge with iterative AI improvements).
It's better for the "good guys" (if they really are) to move forward as fast as possible, to prepare better defenses against the "bad guys." See Ultron, Age of.
Yes, that's my point. The "bad guys" will always be at least one step ahead of the "good guys" - and we will never know who they are except in retrospect. Whether it's better or not, it's going to happen. It's a chaotic system embedded in at least one larger chaotic system.
>>to regulate artificial intelligence technology
lol good luck.
Bringing back ignorance like the dark ages is the path to prosperity.
Don’t forget trains. And wind power.
1800’s technology for the win!
"We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4," ...Elon Musk and Apple co-founder Steve Wozniak.
"To give us time to catch up", perhaps.
SRG thinks someone is ahead of Musk. LOL
If you knew anything about Musk other than what you know from his own propaganda, you'd understand why your comment is even stupider than your regular one-liners.
Time to hit 'Snooze' more like. Wake me up when it's not just a super-eloquent parrot.
Wake me up when AI can beat a human champion at chess. I mean Go. I mean Jeopardy. I mean, that's just data, wake me up when an AI can draw and paint and write creatively. Oh, wait.
All just gimmicks, smoke and mirrors, and brute-force calculations. An actual advanced intelligence would be able to bootstrap a new advanced intelligence, and that's the one to worry about.
Every time I read one of these "pause AI" articles, I think of the song "Peg and Awl."
http://theanthologyofamericanfolkmusic.blogspot.com/2009/11/peg-and-awl-carolina-tar-heels.html
Basically, it is a song sung from the point of view of a cobbler as the country entered the industrial revolution.
[I removed most of the repetition below]
In the days of eighteen and one
Peggin' shoes is all I done.
Hand me down my pegs, my pegs, my pegs, my awl.
In the days of eighteen and two
Peggin' shoes is all I do.
Hand me down my pegs, my pegs, my pegs, my awl.
In the days of eighteen and three
Peggin' shoes is all you'd see.
Hand me down my pegs, my pegs, my pegs, my awl.
In the days of eighteen and four
I said I'd peg them shoes no more.
Throw away my pegs, my pegs, my pegs, my awl.
They've invented a new machine
Prettiest little thing you ever seen.
Throw away my pegs, my pegs, my pegs, my awl.
Make one hundred pair to my one
Peggin' shoes it ain't no fun.
Throw away my pegs, my pegs, my pegs, my awl.
There was a ton of fear mongering that machines would take over, everybody would be out of work, the sky is gonna fall, etc.
Some will lose out to AI, to be sure. But the potential advantages are even greater.
There was a ton of fear mongering that machines would take over, everybody would be out of work, the sky is gonna fall, etc.
Someone WILL be out of work due to machines taking over; the only question is what that will look like after the disruption settles.
Considering the plagiaristic tendencies of generative AIs, the one law we might want to consider is giving copyright holders the right to forbid their materials from being used for AI training.
Isn't that two different issues?
Plagiarism: AI today very much seems to be of the Read-Mashup-Regurgitate mindset. Until it achieves true understanding of material and of individual personalities, likes, and tastes, it's hard to know what could be done to stop that, regardless of how AI is trained. It is also why much creative AI content of even moderate length or complexity is fairly easy to spot.
Training: Why on earth would anyone want to do that? AI has the potential to better large swaths of society; why would we want to slow it down? Besides, who doesn't learn and train by reading copyrighted material? Outside of hacking or a complete misuse of resources, it seems that whatever material is available to the people in charge of the AI should be fair game.
Why on earth would anyone want to do that?
It would mostly be to protect the livelihoods of “creatives” who have worked to develop a distinctive style.
Anyway, asked to make the case against my proposition, I’d have made a lot of the same arguments you did.
AI killed my dad.
Multi-sensor fusion filters are a bitch.
Once upon a time, the existential fear was that AI would get control of the nuclear arsenal and subjugate humanity. Now it is that AI might string together some words that hurt someone's feelings.
The bar for needing regulation has gotten pretty low.
I don’t see any problems with continuing AI research. If people want to fund it, then go for it.
The problem is not the research, in fact just the opposite, it needs more research. The problem is the implementation of it on a mass scale before sufficient testing was done.
Before new products reach the market, they are tested. Software updates for systems like Windows, iOS, and Android are all tested for up to a year among a limited group of people before they are released to the general public.
AI was not researched and tested long enough on a limited group of people before it was released to the public. This was the mistake.
There are probably thousands of potential lawsuits out there based upon erroneous information provided by AI software. If people would begin bringing lawsuits, maybe the providers of AI would take it off the public market and keep it in the lab for more research.
In the end, with enough research, it may be determined that AI, like the atomic bomb, is too risky to release to the masses and even too risky to be under the control of governments.
"The problem is not the research, in fact just the opposite, it needs more research. The problem is the implementation of it on a mass scale before sufficient testing was done."
Nonsense. Fundamental Libertarian philosophy holds that all corporate decisions are optimal for society.
So you are lying.
Are you sure you shouldn't be called VendicarDi B? Your words make about as much sense and goodness knows you are a Wet Ass Pussy.
As the late, great Albert Einstein once said, "Our technology has exceeded our humanity". A fact only made worse when one considers that Amerikans never had much 'humanity' to begin with. In fact, one could argue that your nation of degenerate criminals actually constitutes a cancer on the face of humanity.
America is a nation of Christian Nationals, and once power has been secured over our God given arsenal of Nuclear weapons, we will bring Christianity to the world.
All those who side with Lucifer will be incinerated.
This is God's will.
This is a holy war.
One third of U.S. citizens are religious "Nones," and 100 percent of all newborns are Atheists. There are almost certainly Libertarian AI programmers among the Extropian/Singularitarian/Transhuman community.
And with the U.S. having three times more small arms than people and with arms production put in Libertarian-friendly AI automated hands, both Christian Nationalists and your Left-Wing ilk will live damn short lives should you try to take us over in the future.
Don't Tread On Any Sapient Being!
Look, Maximum Cunt-Temp! Albert Einstein chose the “degenerate criminal” U.S. as his home over Nazi Germany, Soviet Russia, or anywhere else! And the U.S. loved him for it too!
If we are a “cancer,” then consider us Stage 4 for misanthropic, life-hating, tyranny-loving assholes like you! May you finish metastasis on a lymph gland and die!
AI is the Anti-Christ that competes with God for the soul of man.
It will be eradicated as America returns to Christian rule.
Those who are in the business of creating it will be sentenced and convicted of crimes against God and sent to hell where they belong.
This is a Holy War.
Fuck Off, Luddite Tyrant Asshole!
Of course we shouldn't pause AI research. We should apply AI thoughtfully but we should never pause AI research.