Not Even Artificial Intelligence Can Make Central Planning Work
Revolutionary AI technologies can't solve the "wicked problems" facing policy makers.


The term wicked problem has become a standard way for policy analysts to describe a social issue whose solution is inherently elusive. Wicked problems involve many causal factors and complex interdependencies, and they offer no way to test all of the possible combinations of plausible interventions. Often, the problem itself cannot be articulated in a straightforward, agreed-upon way. Classic examples of wicked problems include climate change, substance abuse, international relations, health care systems, education systems, and economic performance. No matter how far computer science advances, some social problems will remain wicked.
The latest developments in artificial intelligence represent an enormous advance in computer science. Could that technological advance give bureaucrats the tool they have been missing to allow them to plan a more efficient economy? Many advocates of central planning seem to think so. Their line of thinking appears to be:
- Chatbots have absorbed an enormous amount of data.
- Large amounts of data produce knowledge.
- Knowledge will enable computers to plan the economy.
These assumptions are wrong. Chatbots have been trained to speak using large volumes of text, but they have not absorbed the knowledge contained in the text. Even if they had, there is knowledge that is critical for economic operations that is not available to a central planner or a computer.
The Promise of Pattern Matching
The new chatbots are trained on an enormous amount of text. But they have not absorbed this data in the sense of understanding the meaning of the text. Instead, they have found patterns in the data that enable them to write coherent paragraphs in response to queries.
Loosely speaking, there are two approaches to embedding skills and knowledge into computer software. One approach is to hard-code the sort of heuristics that a human being is able to articulate. In chess, this would mean explicitly coding formulas that reflect how people would weigh various factors in order to choose a move. In loan underwriting, it would mean spelling out how an experienced loan officer would regard a borrower's history of late credit-card payments in order to decide whether to make a new loan.
The other approach is pattern matching. In chess, that would mean giving the computer a large database of games that have been played, so that it can identify and distinguish positions that tend to result in wins. When the computer then plays the game, it would select moves that create positions that fit a winning pattern. In loan underwriting, pattern matching would mean looking at a large historic sample of approved loans to find characteristics that distinguish the borrowers who subsequently repaid the money from those who subsequently defaulted. It would then recommend approving loans where the credit report resembles the pattern of a borrower who is likely to repay.
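To make the contrast concrete, here is a minimal sketch in Python of the two approaches to loan underwriting. The feature names, thresholds, and tiny "historical" dataset are invented for illustration; real underwriting systems use far richer inputs and far larger samples.

```python
# Minimal sketch contrasting the two approaches to loan underwriting.
# Feature names, thresholds, and training data are invented for illustration.

# Approach 1: hard-coded heuristics an experienced loan officer might articulate.
def heuristic_approve(late_payments: int, debt_to_income: float) -> bool:
    """Approve only if the borrower has few late payments and a modest debt load."""
    return late_payments <= 2 and debt_to_income < 0.4

# Approach 2: pattern matching on historical outcomes.
# Each row is ((late_payments, debt_to_income), label); label 1 = repaid, 0 = defaulted.
history = [((0, 0.2), 1), ((1, 0.3), 1), ((4, 0.6), 0), ((3, 0.5), 0), ((0, 0.7), 0)]

def pattern_approve(late_payments: int, debt_to_income: float, k: int = 3) -> bool:
    """Approve if the applicant most resembles past borrowers who repaid
    (a k-nearest-neighbors vote, standing in for large-scale pattern matching)."""
    def distance(features):
        lp, dti = features
        return abs(lp - late_payments) + abs(dti - debt_to_income) * 10  # crude scaling
    nearest = sorted(history, key=lambda row: distance(row[0]))[:k]
    repaid_votes = sum(label for _, label in nearest)
    return repaid_votes > k / 2

print(heuristic_approve(1, 0.35))  # True: passes the articulated rules
print(pattern_approve(1, 0.35))    # True: resembles past borrowers who repaid
```

The first function encodes rules a person can state; the second never states a rule at all, it only measures resemblance to past cases.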
Human beings use both pattern matching and explicit heuristics. An experienced chess player will not try to calculate the advantages and disadvantages of every single possible move in a position. Instead, the player will immediately recognize a pattern in the position, and this will intuitively suggest a few possible moves. The player will then make a more careful analysis to choose from among those moves. In speed chess, a player relies more on pattern recognition and less on heuristics and careful thought.
If you are on a hike, you may instinctively flinch when you see something that resembles the pattern of a snake. But then you will stop and reason about what you see. If it is not moving, you may conclude that it is merely a stick.
In American football, the quarterback may call a play based on careful reasoning about what the defense is likely to do in a situation. But once the play starts, the quarterback has to make instantaneous decisions based on what his instinct tells him about what the defense is doing. For these decisions, the quarterback is pattern matching.
We tend to pride ourselves on our ability to use heuristics and careful reasoning. When we examine our own thought processes, we do not think of ourselves as mere pattern matchers. But the latest advances in computer science rely heavily on pattern matching. ChatGPT has studied an enormous corpus of text in order to find patterns in how words are used in relation to one another, without having been given any instruction about what the words mean. Many experts, who assumed computers would have to be programmed to know the meaning of words, are surprised that this pattern matching works as well as it does. When you type a comment or a question into ChatGPT, not only will it respond by putting words in proper order; the response is usually meaningful, relevant, and appropriate.
It is almost mysterious how this happens. To a chatbot, a word is a mere "token," like a tiny square of cloth with a particular color. All it knows is which squares of cloth tend to appear near each other in the patterns that are in its training dataset. One at a time, it places squares of cloth in a sequence, and when the sequence is read as words it makes sense to a human reader.
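For readers who want to see the bare mechanics, here is a toy sketch of that idea: a model that learns nothing about meaning, only which tokens tend to follow which in its training text, and then places tokens one at a time. Real chatbots use transformer networks trained on vastly larger corpora; the corpus and code below are illustrative assumptions only.

```python
# A toy illustration of next-token prediction from co-occurrence patterns alone.
# The model never learns what a word *means*, only which tokens tend to follow which.
from collections import defaultdict, Counter
import random

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count, for each token, which tokens follow it in the training text.
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

def generate(start: str, length: int = 6, seed: int = 0) -> str:
    """Place one token after another, each chosen from the observed patterns."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        candidates = follows[out[-1]]
        if not candidates:
            break
        tokens, counts = zip(*candidates.items())
        out.append(random.choices(tokens, weights=counts)[0])
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug ."
```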
Pattern matching also works with images. You can give a computer a prompt to draw an image; based on the patterns it finds, it will produce an image that follows the instructions in the prompt. The same pattern-matching technique can be applied to working with computer code, sounds, and video.
A Natural Language Revolution
These new tools revolutionize the way that people and computers communicate, because now computers can respond to our language. Before, we had to learn the computer's language. The first computers only understood "machine language," consisting of sets of zeroes and ones. An improvement was provided by "assembly language." Beyond assembly language were "programming languages," such as COBOL, FORTRAN, and BASIC.
About 40 years ago, most of us began communicating via the "user interface." We learned to manipulate a cursor and click on a mouse. Later we learned to use gestures on a phone.
With ChatGPT, we can communicate with a computer using "natural language." We type something in English, and we get a response in English. This is a superpower, and we are just starting to learn how to take advantage of it.
I wanted to be able to judge essays based on how well they address differing points of view. Can a computer do this for me? If I had to design, code, and test a program to do so, it would take months of work. But after spending a few hours experimenting with and refining prompts, I can get ChatGPT to perform this essay-grading task.
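As a rough illustration of what that prompt-driven workflow can look like, here is a sketch using the OpenAI Python client. The model name, rubric wording, and scoring scale are placeholders, not the prompts I actually settled on.

```python
# A sketch of prompting a chatbot to grade essays on how well they address
# differing points of view. Model name, rubric, and scale are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RUBRIC = (
    "You are grading an essay. Score it from 1 to 5 on how well it "
    "acknowledges, fairly states, and responds to points of view that differ "
    "from the author's own. Return the score followed by a two-sentence "
    "justification."
)

def grade_essay(essay_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; any capable chat model would do
        messages=[
            {"role": "system", "content": RUBRIC},
            {"role": "user", "content": essay_text},
        ],
    )
    return response.choices[0].message.content

# Example use:
# print(grade_essay(open("essay.txt").read()))
```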
Pattern matching also lets computers respond to and mimic the emotional content of human writing. Chatbots do not merely retrieve information—for that purpose they may actually be less reliable than a Google search. They respond in paragraphs, using human phrasing and tone.
Because they communicate in natural language, ChatGPT and its relatives can do impersonations. Economist and public intellectual Tyler Cowen used a chatbot to conduct an "interview" with Jonathan Swift, the early 18th century satirist. Prompted to impersonate Swift, the chatbot responded to questions from Cowen. For example, Cowen asked about Swift's attitude toward the Church of Ireland. The chatbot responded, in part: "Ah, dear reader, it is indeed true that I, Jonathan Swift, harbored a certain degree of skepticism towards religious doctrine. Yet, one must not confuse this skepticism with a lack of faith or an aversion to the church altogether. Indeed, I was ordained a priest in the Church of Ireland, and throughout my life, I served in various capacities within its structure.
"Now, you may wonder how a man of my disposition could reconcile these seemingly contradictory stances," it continued. "The answer, I daresay, lies in the understanding that the church and the state must work in concert to maintain order and stability within society. The Church of Ireland, as an established state church, served as an important institution that promoted social cohesion and provided moral guidance to the people."
Chatbots can impersonate celebrities, therapists, companions, and romantic partners. They are able to connect to us emotionally to an extent that far exceeds any previous computer programs. The implications of this advance may be profound.
A few decades ago, math teacher Jaime Escalante became famous for motivating teenagers in a high-poverty school to learn calculus at an Advanced Placement level. His demanding, confrontational style and striking accomplishments were immortalized in a film, Stand and Deliver. Imagine being able to clone Escalante and put him in classrooms across the country.
Or consider the problem of training a robot. Today that involves working in computer code, but within a few years we should be able to communicate with robots using natural language.
Customer support calls are another area with obvious potential. All of us have experienced the frustration of menu systems ("If you are calling about x, press 1"). Thankfully, those systems may soon be obsolete. Instead, a chatbot can quickly catch on to the customer's question or respond sympathetically to the customer's complaint.
Some enthusiasts see chatbots becoming lifelong companions. Futurist Peter Diamandis has predicted that "you'll ultimately give your personal AI assistant access to your phone calls, emails, conversations, cameras…every aspect, of every moment, of your day. Our personal AIs will serve (and we may become dependent upon them) as our cognitive collaborators, our on-demand researcher, our consigliere, our coaches…giving us advice on any and all topics that require unbiased wisdom."
Venture capitalist Marc Andreessen has argued similarly that within a few years every child will grow up with a personal chatbot as a lifelong partner. Your personal chatbot would have the ability to understand your abilities and desires. It would be able to motivate you, coach you, train you, and serve you.
It is too early to know which of these forecasts will actually pan out and which will fail to materialize, let alone what unexpected uses will appear out of nowhere. This is reminiscent of the World Wide Web circa 1995, when many of us anticipated rapid disruptions in education or the real estate market that have yet to occur. Meanwhile, nobody was predicting real-time driving directions or podcasting.
Limited Knowledge
Chatbots use pattern matching to provide coherent, relevant responses. But that does not mean that they have encyclopedic knowledge. The answers that a chatbot gives are not necessarily wise. They are not even necessarily true.
I have written several papers on the 2008 financial crisis, in which I make a case for what I believe were the most important causal factors. But when I asked ChatGPT to summarize my views on the crisis, it included explanations that are favored by other economists but not me. That is because the chatbot is trained to identify word patterns without knowing what the words mean.
Some knowledge is not available in any corpus of data. For example, we cannot predict how an innovation will play out.
As of this writing, Apple has introduced a revolutionary product it calls the Vision Pro. No one knows exactly how this product will be used, or whether it will be successful. This knowledge will emerge over time, with the market providing the ultimate judgment. As economist Friedrich Hayek wrote, market competition is a discovery procedure. Even if a computer possessed all of present knowledge, it could not replace this discovery procedure.
Central Planning Still Won't Work
Economic organization is a wicked problem. Your intuition might be that the best approach would be for a department of experts to determine what goods and services get produced and how they are distributed. This is known as central planning, and it has not worked well in reality. The Soviet Union fell in part because its centrally planned economy could not keep up with the West.
Some advocates of central planning have claimed that computers could provide the solution. In a 2017 Financial Times article headlined "The Big Data revolution can revive the planned economy," columnist John Thornhill cited entrepreneur Jack Ma, among others, claiming that eventually a planned economy will be possible. Those with this viewpoint see central planning as an information-processing problem, and computers are now capable of handling much more information than are individual human beings. Might they have a point?
F.A. Hayek made a compelling counterargument. In a famous paper called "The Use of Knowledge in Society," first published in 1945, Hayek argued that some information is tacit, meaning that it will never be articulated in a form that can be input to a computer. He also argued that some information is dispersed, meaning that it is known only in small part to any one person. Given the decentralized character of information, a market system generates prices, which in turn generate the knowledge necessary to efficiently organize an economy.
A central computer is not going to know how you as an individual would trade off between two goods. You may not be able to articulate your preferences yourself, until you are confronted with a choice at market prices. The computer is not going to know how consumers will respond to a new product or service, and it is not going to know how a new invention might change production patterns. The trial-and-error process of markets, using prices, profits, and losses, addresses these challenges.
Economists have a saying that "all costs are opportunity costs." That is, the cost of any good is the value of what you have to forgo in order to obtain it. In other words, cost is not inherent in the nature of the good itself or how it is produced. It is impossible to know the cost of a good until it is traded in the market. If central planners do away with the market, then they will not have the information needed to calculate costs and make good decisions. Forced to use guesswork, planners will inevitably misallocate resources.
In a market system, bad decisions result in losses for firms, forcing them to adapt. Without the signals provided by prices, profits, and losses, a central planner's computer will not even be aware of the mistakes that it makes.
Learning From Simulations
The problem of organizing an economy is too wicked to be solved by computers, whether they use pattern matching or other methods. But that does not mean that advances in computer science will be of no help in improving economic policy.
New software tools can be used to create complex simulations. The tools that gave us chatbots could be used to create thousands of synthetic economic "characters." We could have them interact according to rules and heuristics designed to mimic various economic policies and institutions, and we could compare how different economic policies affect the outcomes of these simulations.
Among economists, this technique is known as "agent-based modeling." So far, it has been of only limited value, because it is difficult to create agents that vary along multiple dimensions. But it may be improved if we can use the latest tools to create a richer set of economic characters than modelers have used in the past. Still, such improvements would be incremental, not revolutionary; they would not permit us to hand off the resource-allocation problem to a central computer.
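To give a flavor of what such a simulation involves, here is a stripped-down sketch: a population of synthetic consumers with private reservation prices, a fixed daily supply, and a comparison between a price that adjusts through market-style feedback and a price fixed by fiat. Every number and rule in it is an illustrative assumption, not a calibrated model.

```python
# A stripped-down agent-based simulation. Consumers hold private (unobservable)
# reservation prices; we compare a feedback-adjusted price with a fixed one.
# All numbers and rules are illustrative assumptions.
import random

random.seed(42)
N_CONSUMERS = 1000
consumers = [random.uniform(1.0, 10.0) for _ in range(N_CONSUMERS)]  # reservation prices
DAILY_SUPPLY = 500

def trades_at(price: float) -> int:
    """Units sold: a consumer buys only if the good is worth at least the price to them."""
    demand = sum(1 for r in consumers if r >= price)
    return min(demand, DAILY_SUPPLY)

def simulate_market(days: int = 50) -> float:
    """Seller nudges the price up when it sells out, down when units go unsold."""
    price = 2.0
    for _ in range(days):
        sold = trades_at(price)
        price *= 1.05 if sold == DAILY_SUPPLY else 0.95
    return price

market_price = simulate_market()
planned_price = 2.0  # a fixed price chosen without knowledge of reservation values

for label, price in [("market", market_price), ("planned", planned_price)]:
    sold = trades_at(price)
    unmet = sum(1 for r in consumers if r >= price) - sold  # would-be buyers rationed out
    print(f"{label}: price={price:.2f} sold={sold} rationed={unmet}")
```

Even in this cartoon economy, the fixed price leaves hundreds of would-be buyers rationed out, while the feedback rule gropes toward a price that roughly clears the market; richer synthetic characters would make the agents more lifelike, but not remove the planner's information problem.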
The latest techniques for using large datasets and pattern matching offer new and exciting capabilities. But these techniques alone will not enable us to solve society's wicked problems.
This article originally appeared in print under the headline "Wicked Problems Remain."
I am reminded of a Clarke law: "When a distinguished but elderly scientist states that something is possible, he is almost certainly right. When he states that something is impossible, he is very probably wrong."
Kling is arguing on the basis of current AI and its known limitations - and indeed, of the known limitations of attempts at real-world central planning that make market economies so clearly preferable. But we know why central planning doesn't work, and implicitly arguing that unlimited processing power supporting a system constantly learning (hence improving the modelling with each generation of agents) won't solve the problem reminds me a little of that idiot Koestler's article decades ago about why computers would never beat humans at chess.
But we know why central planning doesn’t work…
What could anything do to change that?
When the right people control disinformation, what will change is what you are told is working. When free markets have been limited to kids in a sandbox, central planning will be best simply because there is nothing to compare it with.
Like roads, dams, and other infrastructure. Tell people that those can be built by private companies, and they scoff at such fantasies. Tell them that used to be how almost all roads were built, and they call you a liar. I have seen it in comments here, absolute refusal to even countenance such radical disinformation, to even look up provided references and learn a little. Private roads and dams, what are you smoking!
Extend that to free markets even now, and no no, impossible, government must dictate vaccines and which hospitals are allowed to own MRI machines and how Apple and Android mess with their customers. These problems are simply not amenable to free markets, doncha know!
The more government controls, the less plausible free markets seem. Eventually it will be limited to kids in sandboxes and poker games.
Like roads, dams, and other infrastructure. Tell people that those can be built by private companies, and they scoff at such fantasies
That's a different thing. One can conceive of a system where a government decides a dam needs to be built somewhere, and then puts the project to tender from private companies and the lowest bidder builds it.
I tend to think more in local terms. I walk down my local main street and see numerous shops offering a variety of goods and services (this being Long Island, there are plenty of nail salons and hairdressers). And some shops and restaurants have been around forever and some close in a year or two and another one springs up. To suppose that at present we have the ability to determine which shops should open and which close is a fantasy. Rather than have some central or even Nassau County bureaucrat or committee deciding which shops should be there - and what prices they should charge - it is so much easier, simpler, and more rational to leave it to individual entrepreneurs to decide - and take the risk - and let consumers "vote" on what shops they want and how much they're willing to spend.
But that is a limitation of current knowledge, not a theoretical ultimate limitation that takes no account of processing power or information available.
If you think merely adding a few orders of magnitude of processing power can solve such problems, you have a very weak grasp on numbers in general.
Cmon. He did basic subtraction just a few days ago.
The problem remains that government decision making responds to political pressure rather than to economic justifications. This means that people will prefer to have the costs of fulfilling their own wants and needs socialized through government...because who doesn't want someone else to bear the cost of their choices?
Feeding at the trough becomes popular, so popular that fewer remain to fill the trough.
It's probably the oldest and most accurate criticism of pure democracy there is.
Eventually, people discover they can vote themselves largesse from the public coffer and as a result the democracy becomes a totalitarian regime.
Tytler was talking about this in the late 1700's, and so far I don't see that he was wrong.
“Government is the great fiction where everyone endeavors to live at the expense of everyone else.”
-Bastiat
Have you thought about reading one of his books rather than just looking up quotes? Just curious.
Bastiat was wrong.
"Government is the means by which we place the retaliatory use of force under objective law." - Ayn Rand
https://fee.org/articles/the-nature-of-government-by-ayn-rand/
I wonder if the grey box commented about Bastiat or made a personal attack.
No, I don’t wonder.
Poor sarc.
Nobody brags about his mute list like Sarc. If only his victims cared it would be perfect.
Victims? Some wear it like a badge of honor. Alphabetroll is pissed that I haven't muted him. While others leave pathetic pleas for attention. They all know who they are.
You wear victimhood as a badge of honor. Need to think if this is worth bookmarking.
Remember when you raged when Ken muted your dumb ass? Lol.
Well, if you're curious, he was talking about Bastiat books. Sort of.
Speaking of kids in sandboxes, I heard a depressing tale about Junior Achievement from my wife. She is a retired teacher, old enough to mostly predate the homogenized political lesson plans and stubborn enough to subvert them. She now volunteers with schools and kid groups.
Last month she spent time with a second grade Junior Achievement group. One of the activities required kids to divide into groups to analyze and promote some town planning (first red flag). As she saw the activity, one option included the "creation" of a toy store (second red flag). At the end, all the kids could vote on what the "town" would choose to implement (third red flag).
I fear if what she told me represents JA in the 21st century, teaching kids that civic groups and government make decisions about business enterprise, then we are fucked.
All infrastructure is built by private companies. Much of it is paid for by government, but government doesn’t build it. Government doesn’t create anything of value. It has only one tool: force. It can use force to take money from people and use it to pay for roads. That’s not the same as building them.
It can use force to take money from people and use it to pay for roads.
What about paying for illegal immigrants?
There are two types of force: initiatory and retaliatory. The first is immoral, the second moral. Proper government holds a monopoly on the retaliatory use of force. Government initiating force is tyranny.
Resolve the reasons. It's not a theoretically impossible task. It has been a practically impossible task. The latter does not imply the former.
Why would this be a desirable thing or even a goal?
There are mathematical problems which can't be calculated even if every atom in the universe was cranking out trillions of calculations per second.
This is one of them. There simply isn't enough compute power possible.
And that's not even counting how you make computers imagine things which haven't been invented yet.
Socialism and central planning are only theoretically possible in a static unchanging society, where no one dies or gets sick, where weather never interferes, where nothing breaks, and where no one ever pontificates on better ways to do anything which would require fewer workers, factory tools, or other resources.
Close but no. Even if there was enough computing power the necessary information is impossible to acquire.
If the information exists, it is possible to document it, write it down, acquire it.
I’m not smart enough to summarize Hayek for you, but that was the basis for his Nobel in economics. Other economists have built upon it.
Nobody believes you read more than quotes from Hayek.
Also. Krugman has a Nobel in economics.
Would love to see your summation of Hayek. Could be hilariously entertaining. Bet it is just wiki posts though.
I think an issue lies in the fact the current technologies could probably manage the economies of the pre consumer age.
Like we could probably go back in time to before the global economy, and make things more equitable. Spread prosperity to everyone. With the computing power and management and manufacturing we have now, but the demand of 150 years ago.
Unfortunately for the progressive utopians, I think we’re still 80+ years behind what will end up getting millions (billions?) killed. Maybe 1000 years from now we’ll be able to shrink that to 10 years, but IMO it’ll always be hindsight.
Couldn't even do that. You might be able to assign work among a small group, say ten people, but that would make no allowances for changing tastes, curiosity coming up with better methods to shape rocks, or what kind of wood is best in which season.
All central planning is good for is robots, and it still makes no allowance for breakdowns.
Have you ever read I, Pencil?
Making a pencil is one of those things that no single person can ever learn. I doubt AIs can ever juggle the millions of variables to make better or cheaper pencils. And pencils are incredibly simple, as manufacturing goes.
Then we get into raw innovation, and computers fall flat.
No, AIs and computers will never make central planning better than free markets.
Argument from personal incredulity, I think
(He hasn’t read it)
And neither have you.
My response is to the last line of his post, which presumably makes his point.
And neither have you.
So I was correct.
Well Alphabet put a link right in his post, and it's shorter than the article above, so you can read it and have one up on everyone who didn't.
What’s funny is that there are any number of superficial arguments one can make about I, Pencil, but I have never heard anyone so blatantly admit they refuse to read what they argue against.
Umm... sarc readily admits it. So does jeff. Seems to liberaltarian standards.
How would you know it makes my point if that's all you read?
Low-rent skull, that's what you have. State good, collectivism good, central planning good. Jumping to conclusions, good.
Thinking, not so good.
He's a DNC bot. It's what he's been cult programmed to do.
He is a """""""classical liberal""""""". One air quote wasn't enough.
No, you don't think. You assert. You rant. You cherry pick. But think? You do everything but think.
Well, “ARNOLD KLING”, prove you’re not a chatbot! ;o)
One good flag: “These assumptions are wrong,” though many of the commenters think they’re right.
Having learned a dozen different programming languages, technical manuals for computers, component functionality and microprocessor capabilities, I think it correct to say:
We do not have Artificial Intelligence anywhere; we have Natural Stupidity. As you say, no program (no matter how fast) and no database (no matter how large) actually *knows* anything. It only does precisely what the programmer (naturally) requires and the machinery (stupidly) allows.
OK, I’ll grant that pattern matching is a nice thing, but that’s nothing more than a fast, expansive Search: facility. What Chatbots are *really* good at is plagiarizing Wikipedia text, all created by thousands of human experts and novices.
Coupled non-linear system with incomplete knowledge. Good luck.
WRONG. The issue has NOTHING to do with AI. The issue is that central planning is inherently incapable of success. The article is ABSOLUTELY NOT an argument on the basis of AI, as it currently exists, or as it may be developed.
Your alleged reasoning doesn't rise to the level of a 4-year-old's.
'Could that technological advance give bureaucrats the tool they have been missing to allow them to plan a more efficient economy? Many advocates of central planning seem to think so. Their line of thinking appears to be:
'1. Chatbots have absorbed an enormous amount of data.
2. Large amounts of data produce knowledge.
3. Knowledge will enable computers to plan the economy.'
No, their actual thinking is more like this:
1. I know more than the peasants.
2. Therefore I should be in charge of the peasants' lives.
3. Shut up and fulfill your work quota.
AI is just a shiny distraction.
After decades of mocking Intelligent Design, most of these self declared elites proclaim Intelligent Design.
“Whoops”
“Since the boxes were seized and stored, appropriate personnel have had access to the boxes for several reasons, including to comply with orders issued by this Court in the civil proceedings noted above, for investigative purposes, and to facilitate the defendants’ review of the boxes,” Smith’s team wrote in a new court filing to U.S. District Judge Aileen Cannon.
.
“There are some boxes where the order of items within that box is not the same as in the associated scans,” the prosecutors wrote.
.
Smith’s team in a footnote also conceded it had misled the court about the problem by previously declaring that the evidence had remained in the exact state it had been seized.
.
“The Government acknowledges that this is inconsistent with what Government counsel previously understood and represented to the Court,” the footnote said.
…
“Prosecutors and investigators should never tamper with or alter evidence in their possession, including the order of documents in a box because one never knows what may become relevant or crucial to a court or jury later in a case,” Harvard Law Professor Emeritus Alan Dershowitz said.
.
Prominent defense attorney Tim Parlatore, who worked on Trump’s team earlier in the classified documents case but no longer is involved, said ”this admission is stunning on multiple levels.”
.
He said the revelation “reinforces the incompetence” of prosecutors “in conducting basic criminal investigations and prosecutions that I observed when I was on the team.
https://justthenews.com/politics-policy/all-things-trump/trump-whodunnit-prosecutors-admit-key-evidence-document-case-has
So Jack Smith lied to the judge. Lied to the defendant. Tampered with evidence. And possibly corrupted the order of documents that Trump's defense team said was chronological when stored, and they didn't know it had classified materials.
This after a recent admission GSA forced Trump to accept boxes to Mar A Lago with the NARA requests for documents occurring mere weeks after GSA forced delivery.
Also more evidence of the WH, namely Biden's chief of staff, meeting with the prosecution team.
Julie Kelly
@julie_kelly2
NEW: DOJ's degenerate midget Jay Bratt--who led investigation into classified docs then moved to Jack Smith's team after he was appointed--met with aide to WH chief of staff Ron Klain in Sept. 2021
.
BEFORE any alleged classified docs were found.
.
Newly unsealed motion:
https://twitter.com/julie_kelly2/status/1786380401809748040
So much for "independent" council.
"This after a recent admission GSA forced Trump to accept boxes to Mar A Lago with the NARA requests for documents occurring mere weeks after GSA forced delivery."
This is impossible. Jeff and Sarcasmic swore that it was just NARA asking nicely and TRumpelstiltskin refusing to cooperate.
Does this mean Sarcasmic was tricked again for the 1,465,872 time?
"Jack Smith lied to the judge, lied to the defense and tampered with evidence"
Just a reminder that Mike Nifong is still sitting in prison for slightly less.
Sarc will be by shortly to demand we provide the evidence that government has and refuses to release under threat of prosecution if you ask.
If not for this judge releasing information he would still be claiming government is honest and nothing to see.
Dershowitz: 'I Kept My Underwear On' During Massage At Epstein's Mansion
Ah, that always makes any subsequent legal observations invalid.
Underwear on is fine. If the underwear was on, as he claims, everything is good, even at Epstein's mansion. It's those underwear off massages that we have to worry about.
What does that have to do with the statement that "Prosecutors and investigators should never tamper with or alter evidence in their possession"?
That depends. Those prosecutors and investigators, underwear was on or off.
He failed to point out that his daily underwear is closer to one of those harnesses with the ball and cock rings attached.
It kind of sounds like Smith has committed a felony under NY law, doesn't it?
What’s up Peanuts?
“Woodstock for Capitalists” is going on this weekend and Buffett told his club that all individual and payroll taxes could be eliminated if the largest 800 US corporations paid the taxes that Berkshire did this past year ($5 billion).
https://www.cnbc.com/
It isn’t clear to me if he meant the same rate as Berkshire.
#WarrenRealCapitalist-DonnieRealConMan
Obviously, some in the S&P 500 didn't even have $5 billion in revenue.
What’s up, pedophile?
Buffett told his club that all individual and payroll taxes could be eliminated if the largest 800 US corporations paid the taxes that Berkshire did this past year ($5 billion).
What a fantastic deal for Google, Amazon, Walmart and Apple, and what a shitty deal for Visteon, Graham Holdings, Resolute Forest Products and Caleres.
Buffett absolutely ♥s corporatism, and who can blame him, and Buttplug would too if he could understand it.
tl;dr Central planning fails because the information required can’t be known. Even if an AI could process the information it couldn’t acquire it because most of it exists only inside people’s heads. Even the folks in these comments who claim to know what people think based upon what they don’t say couldn’t provide the information.
"Even if an AI could process the information it couldn’t acquire it because most of it exists only inside people’s heads. "
A computer chess engine like Stockfish doesn't know what's inside its opponents' heads. It doesn't have to, and it still outperforms the most skilled humans. Stockfish works by analyzing a large number of past choices from which future choices are supposedly implied. It's not guided by intuition or gut feelings.
"Even the folks in these comments who claim to know what people think"
It's never a sincere claim. It's virtue signalling and cheap rhetorical trickery.
Chess isn’t a great example. I could probably write a winning chess program using recursion and a shitload of memory.
"I could probably write a winning chess program using recursion and a shitload of memory."
I doubt it. There are more possible chess games than atoms in the universe. That's why Stockfish and other engines use the methods they do. Go (Weiqi, Baduk) has even more possibilities; it is played on a 19×19 grid.
"There are between 10^78 to 10^82atoms in the observable universe. That’s between ten quadrillion vigintillion and one-hundred thousand quadrillion vigintillion atoms. Which is a lot. But...amazingly, there are even more possible variations of chess games than there are atoms in the observable universe.
This is the Shannon Number and represents all of the possible move variations in the game of chess. It is estimated there are between 10^111 and 10^123 positions (including illegal moves) in Chess. (If you rule out illegal moves that number drops dramatically to 10^40 moves. Which is still a lot!)"
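For anyone curious where figures of that size come from, the back-of-the-envelope arithmetic is short, assuming the standard rough figures of about 35 legal moves per position and games of about 80 half-moves:

```python
# Back-of-the-envelope reproduction of the Shannon-style estimate quoted above.
# Roughly 35 legal moves per position over about 80 plies (half-moves); both
# figures are standard rough assumptions, not exact counts.
import math

branching_factor = 35
plies = 80
game_tree_size = branching_factor ** plies
print(f"~10^{math.log10(game_tree_size):.0f} possible games")  # ~10^124

atoms_in_universe = 10 ** 80
print(game_tree_size > atoms_in_universe)  # True: the game tree dwarfs the atom count
```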
Is that why eight bit chess programs can kick your ass?
Is this projection again?
Weird use of 8 bits as if that would matter. Did you actually take programming? You can represent however many bits you want even in 8 bit. Just use multiple memory addresses. Memory was always a bigger driver, not bit size.
Just weird that the greatest programmer in Maine seems so ignorant of programming.
And Cuban sandwiches
Lol. No you couldn't. Recursion has nothing to do with the solution. In fact you would use some sort of sorting algorithm for subsequent moves, not recursion. Recursion would lead to an infinite loop as there is no final end solution in any chess game.
"You may not be able to articulate your preferences yourself, until you are confronted with a choice at market prices. "
I'm assuming with AI, it's the computer that makes the choices, just as with Youtube's autoplay. It draws on a rich history of previous personal choices, and the user's response to AI, either accepting them or rejecting them. A market requires people to make the choices, and an informed choice requires time and effort from the chooser. With AI making the choices, this time and effort can be put to more productive use.
Except that AI can’t know what people think.
It has to rely on what people have thought in the past. Just like a computer chess engine. It doesn't know what its opponents think. It doesn't need to. It analyses thousands or millions of games that were played in the past. They are essentially unbeatable, and have been since the 1990s when Deep Blue beat Kasparov.
Which is why AI doesn’t come up with anything new. It just mashes shit up. It doesn’t know wants and desires. It can’t because those things only exist in people’s minds.
"Which is why AI doesn’t come up with anything new. "
It comes up with new combinations of the old.
While people come up with new wants and needs.
You guys have a bunch of great mantras right there. Do they work well as sleep aids, too?
It has to rely on what people have thought in the past
When AI starts making all the choices, the past remains frozen, and no progress can be made.
Wow. An intelligent comment. I’m impressed.
I thought I was muted.
He's talking about himself. I don't think he's tried muting himself yet.
You sound like a little girl in the back seat on a lousy movie.
"When AI starts making all the choices, the past remains frozen, and no progress can be made."
The past is never frozen. The data it holds accumulates with time. I doubt the AI will ever make all the choices. Humans will be given the option of over-riding the AI choices, just like we do with Youtube's autoplay algorithm. I believe progress can still be made. There's a famous quote from Bernie Sanders, something like 'nobody needs 23 different deodorants.' Making an informed choice between 23 deodorants, comparing the ingredients, effectiveness, price, consistency, fragrance, etc would gobble up all the waking hours, especially if the process is repeated for toothpaste, conditioner, body wash, gel, soap, detergent, and that's just a handful of cleaning products. The AI is capable of making adequate choices in a fraction of a second.
With AI making the choices, this time and effort can be put to more productive use.
Therefore, no more human choices to learn from. Progress stops.
Humans making informed choices takes enormous amounts of time and effort. That's what Bernie's comment tells us. An AI can make an informed choice in a fraction of the time. Humans making informed choices comes at an opportunity cost. Removing that opportunity cost by allowing AI to make choices frees up time for humans to spend it more productively, by dreaming up innovations - progress marches on as humans are spared the drudgery of comparisons.
dreaming up innovations, creating, inventing all require making choices.
If AI make all the choices for us, we become but zoo animals.
We've always made choices and this will continue. You can choose between making comparisons between thousands of different consumer products on the market, or letting an AI do it for you, and instead devoting the time and effort saved to more productive pursuits, like innovating.
Thanks for playing, better luck next time!
I'll admit I didn't have "Obey your robot overlords!" on my 2024 Reason comments bingo card.
Wow. Decided to check and there's some kind of gray box orgy going on up there. Yikes!
If you want to see how ignorant sarc is with programming and, well, life, it is kind of amusing.
I just told chatGPT that the only language I understand is ebonics, and it's the single funniest conversation I've ever had in my entire life.
I THINK you may have just stumbled across something that even MY un-woke ass believes might-could be no-shit racist.
Oh, and not only racist, but technically wrong.
It be blendin’ English wit’ African languages
I suspect it's wrong. Ebonics, like creoles and pidgins, is a lot about dropping final consonants and simplifying the copula ('to be,' for example). That's done accurately by the AI, although 'street talk' and 'it be' could maybe do with some dropping. And no transpositions - aks for ask, for example - one of the most charming aspects of black speech. And the vocabulary, it's identical to white American speech. Maybe 'carryin the soul' should be totin the soul, to blacken up the vocabulary a bit.
Sounds like pluggo.
Needs a few more “an shit, you know what I’m sayin’?”s.
Shut the fuck up Uncle Tom
Wait, you think Rick James is black, or do you not know what an Uncle Tom is? Or perhaps it’s screen names that confuse you?
Lol. What an idiot.
'Guns' (Gov-Guns) cannot run economies?
They run ganglands of death, violence, and poverty.
'Guns' only asset to humanity is to ensure Individual Liberty and Justice for all.
It's as simple as 1,2,3 yet modern brain-washing from [Na]tional So[zi]alist[s] has confused the very premise of human intelligence. Nothing; nothing at all separates 'government' from any other organization except their legal use of 'Gun-Force'.
Ever wonder why anyone cares what schools were attended, what GPA was earned, what papers were published in peer reviewed journals?
Or how senior someone is and whether or not they are respected in their vocation?
We screen prospective candidates for important positions - we just don't let anyone become a Supreme Court Justice who "feels like it" (and most of them can explain what is a woman). We wouldn't pay attention to Jordan Peterson if he were just a grumpy Canadian convenience store owner.
This process of checking out "experts" before we place them in important roles is called "vetting". We do it all the time in everyday life.
Do you know why we pay more attention to a 65yr old's advice than the noise from a 20yr old's pie hole? Because the senior citizen likely knows much more than the college kid with their preferred pronouns written on their shirt. Older people generally know more because they've lived more.
Who is vetting the AIs in use?
Let's say someone custom designs and trains an AI application to decide who to parole. The idea is to reduce the manpower & time (cost) to make thousands of decisions. Guaranteed that the people using the AI application will NOT understand it, nor will they be able to evaluate its performance. If the AI has a bias, how will it be detected? How will the errors be rectified? (This exact scenario happened).
Again, who is vetting the AI?
The media, as usual, has got it wrong. The first problem we face with AI isn't SkyNet (Terminator); the first problem is vetting the AI applications before they are in the hands of unsophisticated users.
We pay attention to Dr. Jordan Peterson because of his education, his teaching, his publishing, his clinical work and his public speaking. About 35 years of "work experience" we can reference and review. Peterson, like him or not, is vetted.
Finally, who is vetting the AI?
The economy is a three-body problem; there are just way too many variables for it to be centrally controlled, even by AI.
I'm a columnist and just had my first contact with AI.
Garbage in, garbage out. Feed Woke BS into your AI, and new thought must be censored as it "provokes" and "could harm political discourse."
Also, hyperbole is verboten. You can no longer say, "I told you a million times to close the door" unless you actually told your kid 1,000,000 times to close the door.