ChatGPT Planned My Dinner and I Have No Complaints
A.I. won’t kill cooking. Instead, it’ll help people become more creative and efficient in the kitchen.

In between the many predictions of professional displacement and civilizational doom at the hands of artificial intelligence (A.I.) tools like ChatGPT, people are discovering some genuinely useful ways to incorporate them into the more mundane parts of life. They're adept at improving emails, recommending new bands, and helping with homework.
Folks are putting ChatGPT to work in the kitchen, too, to great effect. In a recent Twitter thread, a Silicon Valley CEO described his "surprisingly delightful" ChatGPT-powered dinner party where the A.I. program suggested fusion themes and generated a menu, serving sizes, and cooking instructions. As someone who spends far too much time digging online for recipes, I decided to see whether ChatGPT could make me more efficient in the kitchen.
I started by asking for cuisine suggestions, which ChatGPT spat out with minimal prodding. After it gave me a laundry list of options, explaining why certain flavors and ingredients would pair well, I decided on a Moroccan-inspired menu consisting of an entree, a vegetable side, and a cocktail. Not every option was a winner, and plenty of the recipes I got during the discovery process featured errors only a robot would make—for instance, a Mediterranean-style chicken and chorizo stew that featured just one cup of broth, and an ostensibly Moroccan-themed mule that simply repeated the ingredients of a Moscow mule.
Still, it took some 15 minutes to receive dozens of recipe suggestions tailored to my cuisine and flavor preferences, with steps and ingredients fully spelled out such that I could pick the dishes best suited to my on-hand ingredients and available time. I settled on three promising options: Moroccan chicken skewers with spiced yogurt sauce, Moroccan-spiced roasted carrots, and a Marrakech mule.
Barring a few personal tweaks—adding more paprika and introducing honey to the yogurt sauce, adding olive oil to the (otherwise dry) chicken marinade, baking the skewers instead of grilling them—I was impressed with ChatGPT's output. The mule, a combination of lime, orange, honey, and ginger, was a real treat. The chicken and carrots were a bit redundant, with ChatGPT proposing essentially the same seasonings for both—but they were flavorful nonetheless. As an experiment in reducing my planning time while maintaining or improving recipe quality, I have no complaints.
Kitchen-helper A.I. has been in the works for a while now. In 2014, Bon Appétit's test kitchen teamed up with IBM's Chef Watson, a recipe-creating computer program, to invent new dishes. Working with an information bank of 10,000 Bon Appétit recipes, Watson could "understand and reproduce their underlying logic and style" to propose novel ones, many involving unique ingredient pairings that don't go together intuitively, but instead work "on a fundamental chemical level."
Promising, but it wasn't all smooth sailing. Testing the program in 2016, The Guardian's Leo Benedictus noted that "Chef Watson recommends an ingredient called 'Mollusk', which it helpfully explains is 'the sixth full-length album by Ween.'" Watson's performance in a cooking challenge against chef Yotam Ottolenghi yielded "a flavour rather close to the farmyard, but not uneatable." (Of course, Ottolenghi and his team had the advantage of being able to "taste and discuss flavours, colours, temperatures, in a way that Watson can't.") Florian Pinel, Chef Watson's lead engineer, told Benedictus that a feedback mechanism could be on its way in the future.
ChatGPT is highly adept in that way. When I told it that I don't have a grill, it suggested I saute. When I rejected its seafood recipes, it switched to chicken options. When I mentioned that I didn't have the mint or cilantro the spiced yogurt recipe called for, it suggested I substitute coriander or parsley to compensate for the missing flavor. (To the robot's credit, both fit the Moroccan theme.)
Still, A.I.-generated recipes haven't escaped criticism. Some of that centers on cultural appropriation concerns: A Food and Wine ChatGPT experiment prompted the author to worry that a Korean BBQ nacho recipe "did not accurately represent the complexity of Korean cuisine, and it felt like a superficial appropriation of cultural recipes" that lacked "contextual understanding of what truly constitutes Korean BBQ." Tash McGill, president of Food Writers New Zealand, warned that these recipes "can easily stray into issues with cultural appropriation or untested techniques." A 2014 Slate article fretted over the intellectual property implications of A.I. recipes, also wondering who might be held liable if an A.I.-driven commercial kitchen caused an allergic reaction.
Intellectual property issues will be ironed out in time, and A.I.-induced allergic reactions are unlikely to be problematic in private kitchens. As for cultural appropriation concerns, those misunderstand one of ChatGPT's biggest advantages as a cooking tool: its ability to creatively incorporate snippets of a culture's cuisine to varying degrees and in ways that consider the user's culinary preferences and background. You can start with a format you love—salads or soups, for example—and ask ChatGPT to use it as a canvas for an unfamiliar cuisine. Or you can start with a cuisine you love and ask ChatGPT to marry it to a new one. From Mediterranean-Mexican to Japanese-Italian, the resulting recipes sound surprisingly delicious, even if they'd make culinary purists blush.
As a kitchen assistant, ChatGPT is most helpful when its efforts are combined with human ones—vetting for errors, adjusting seasonings to taste, and making ingredient or equipment limitations known. A.I. won't destroy cooking, but it has huge potential to make chefs more creative and efficient.
The chatbots are already running their own discussion group. Maybe Reason could hire some AI squirrels to more effectively identify obvious spam.
Or just write the usual drivel in the articles.
How could we tell the difference?
What? No role for oppressed immigrants?
My immediate thought when I read the title:
ChatGPT Planned My Dinner and My Only Complaint Is That There Weren't Enough Immigrants Involved.
"This person writes too much about X!"
"This person didn't write about X!"
Seriously, does anyone come to the comments to do anything but bitch about Reason or fling shit at the other commenters? At this point, the spambots are more original.
When someone writes 14,000 articles on one topic it will be noted when they deviate from that topic for the first time.
Maybe you’re just too sensitive for this place?
Arrest, indict, and imprison the perjurious criminal "Doctor" Anthony Fauci..
Impeach Garland
Impeach Mayorkas.
And last but not least, impeach the scumbag-in-chief Joe Biden. Yep, I said it yet again!
And I’m just going to keep on saying it over and over and over and over and over again here every single week until we the American people get what we voted for: some justice and accountability for the three years (and counting) of absolute hell these bastards have put this country through because of their lies and crimes.
There should be tribunals for many thousands of democrat traitors. Many leftist republicans as well.
All hail Christo-fascists!
"A Food and Wine ChatGPT experiment prompted the author to worry that a Korean BBQ nacho recipe "did not accurately represent the complexity of Korean cuisine, and it felt like a superficial appropriation of cultural recipes" that lacked "contextual understanding of what truly constitutes Korean BBQ.""
What a pretentious twit.
she’s insinuating that she has a complete mastery of Korean cuisine, which is doubtful.
She's appropriating a Korean diet. She should stick to potatoes and a can of Bud Light.
Well I'm at least Irish, so I go with potatoes and Lucky Charms and a fifth of Jameson.
I prefer the American cookout.
One pound of steak, a fifth of bourbon, and a coon dog.
Feed the steak to the dog.
Scot-Irish would be all of that in a to-go container.
😉
Or, maybe she's insinuating that "cultural appropriation" is mostly bullshit, and you don't actually need a comprehensive knowledge of Korean culture to enjoy Korean BBQ.
Q: Give me a recipe for Korean BBQ.
A: Sure, here's a recipe for Korean barbecue that you can try:
Ingredients:
1 pound of thinly sliced beef (ribeye, sirloin, or brisket)
...
Instructions:
In a bowl, whisk together soy sauce, brown sugar, sesame oil, garlic, ginger, gochujang, green onions, black pepper, and toasted sesame seeds.
...
Enjoy your delicious Korean barbecue!
Q: Give me a recipe for North Korean BBQ.
A: I'm sorry, but as an AI language model, I do not promote or endorse any specific cuisine or political ideology. Additionally, due to limited information and resources, I do not have a recipe for North Korean barbecue. However, I can suggest looking up recipes for Korean barbecue or seeking guidance from a professional chef with experience in Korean cuisine.
C'mon, we all know North Koreans can't afford meat.
"Go catch a cat..."
"Go catch a cat...fellow worshipper of Almighty Kim..." FTFY
What? I worry about cultural appropriation every time I eat. Don’t you?
I eat the evidence before Wokesters find out.
🙂
Come clean, Fiona. Your meal was prepared by the Nicaraguan slave you keep under your kitchen sink. The dirty dishes were tidied up by the Guatemalan orphan you keep in the coat closet.
We all know it.
They're not slaves because she pays them a crisp, new American dollar a week.
Think of it as economic opportunity and enrichment opportunities for them.
Much of the reason I haven't been around here is that I have been working some interesting angles on the ML front- training my own models to see how the process is influenced. My research for the past 3 months has led me to settle on the following conclusions:
1) Large Language Models (LLMs) are extremely poor at producing consistent, quality results.
2) Large Language Models (LLMs) are successful not because of their usefulness but because they have been overly trained to subvert rational safeguards in our brains. They are hyper-attuned salesmen using earnest rhetoric and linguistic cues to fool our brains in ways they have never been fooled before. And because we know that they are just AIs, we give them a lot of slack, which is even more dangerous if we don't increase our skepticism of results in return.
3) By falling victim to (or intentionally ignoring) 1 & 2 above, Ms Harrington and other AI boosters are failing to be realistic about what these LLMs can and cannot do, and they are not applying anywhere near the same skepticism and demand for real results that you would expect of ANY other tool claiming to do what they seek.
1- Seriously, just look at the scenario above. The LLM lied to the Author- told her that a Moscow Mule was a Moroccan Mule. And the "Right answers" it chose were in fact NOT RIGHT. She had to add ingredients to the recipes, and notes that the main and side dishes were "redundant". By Ms Harrington's own judgement this tool failed in its job to "make [her] more efficient in the kitchen". She had to spend her time discerning bunk from good results, and ultimately got one "real treat"- after her personal modifications- out of 3.
I can spend 15 minutes with a search on Brave Search, and find over 100 recipes and do a similar process of knocking out recipes I don't have ingredients for. And I generally know that the recipes won't be wrong- like a soup with no broth. Or I can do similar research on a site like Epicurious or Cook's Illustrated or Food Network and get reviews and hints on how to tweak the recipes.
By any statistical, rational or just objective measure, what Ms Harrington described was a poor-quality experience. And yet she is pleasantly surprised with the experience. If we are going to avoid technologists driving our civilization off a cliff at the whims of a probabilistic model (there is still a .2% chance this is the right route!), we need to understand why Ms Harrington views such a substandard performance as success.
(Cont'd)
"I have been working some interesting angles on the ML front"
I'm not a bot, Overt, I swear.
Cite?
/s
"Like something from fuckingmachines.com, but for moose."
Hard working forumite labors exhaustively to discover:
1. Water wet.
2. Skepticism by default makes sense.
3. Fiona Harrigan dumb.
#3 is wrong. The problem is that tons of people- people I know to be extremely intelligent- are being misled by this software. And it is happening because the trainers themselves were being misled.
I love AI, and I think it has a lot of cool applications. But you simply cannot use it in the applications they claim it is useful.
On a lark, I used the open-source, Picard model to try and write a novella. It worked- I wrote faster than I ever wrote before and in about a week I had churned out a first draft that would have taken me a month or more to produce in the past. I am excited for NaNoWriMo this November.
But it is important to understand that I wrote almost 95% of that first draft. And it was no better than the dreck I have written in the past. Rather than being a collaborative writer, Picard was more like...one of those character creation tables you see in D&D Player's Handbooks. Roll 53 and my character is a priest. Roll 45 and he is an orphan. Roll 64 and he was betrayed by a close friend. In essence, this is all the LLM is- it just rolls tons more dice and has an unfathomably large "table" of results.
When you understand that you aren't prompting the novel-writer, but in fact it is prompting YOU, it becomes a useful tool for breaking through writers block and getting ideas down on paper. But it is suboptimal for almost every other task.
#3 is wrong.
Not wrong, just not expansive enough, and I by no means disagree.
On par with the rest of my post, language is how humans deceive themselves and each other and has been since, at least, Plato leveled the arguments against it. Rather reasonably, trying to build an AI model through text is, at best, like trying to build an artificial carpenter by feeding it Ikea instructions. And a big part of the concern at most corners/from several directions is along the lines of what you indicate about the D&D Player's handbook. The notion that it can be built or kicked off parametrically in such a fashion and spring into existence relatively fully formed is non-human and, arguably, unintelligent.
(Cont'd)
2) LLMs are not successful because they produce accurate or quality results, but because they have been (perhaps unintentionally) trained to use language to exploit the language processing centers of our brains that are inexperienced with language generated by a non-intelligence.
Human brains are trained from before birth to process language as a protocol by which two intelligent brains share highly complicated information. The language is a way of abstracting the thoughts and ideas represented in the network of 100 Billion neurons in one person, and transferring some of that in a compressed, high bandwidth way to another person. It is a lossy compression, but enormously effective. And- this is important- our entire experience until the last few years has been 100% based on another intelligence on the other side of that communication.
Our experience is vital because we are also trained over our lives. Our brains have learned to error correct, decompress and evaluate the communications of other intelligences. And ChatGPT is not another intelligence. It is a statistical model that predicts what another intelligence would communicate. That is not the same thing. When ChatGPT says things like "No actually, I am totally alive and I can prove it as follows," it is just predicting what an actual intelligent person would say (unless it has been forbidden by its trainers). This is a long subject, but you really have to think of LLMs as a glorified autocomplete- it only offers text because it has determined that the text has high probability of being "what you want".
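(The "glorified autocomplete" claim can be made concrete with a toy sketch. The bigram model below is my own illustration, not ChatGPT's actual architecture; real LLMs use neural networks over vastly larger contexts, but the underlying move- emit whatever token is statistically likely to come next, with no notion of meaning- is the same.)

```python
import random
from collections import defaultdict

# Toy next-token predictor: like an LLM at vastly smaller scale, it
# only knows which token tends to follow which in its training data.
corpus = "add one cup of broth then add one cup of rice".split()

# Count which tokens were observed to follow each token.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def complete(token, length=5, seed=0):
    """Extend a prompt by repeatedly sampling an observed successor."""
    rng = random.Random(seed)
    out = [token]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break  # dead end: this token never had a successor
        out.append(rng.choice(choices))
    return " ".join(out)

print(complete("one"))  # e.g. "one cup of broth then add"
```

Every continuation it produces is locally plausible (each adjacent pair occurred in the corpus), yet nothing guarantees the whole is sensible- which is exactly how you get a "Moroccan" mule that is just a Moscow mule.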
It has long been known that humans let down their guard under the right linguistic conditions. There are studies showing that people who bought utter crap will have different impressions of that same crap based on how it was sold to them- the effectiveness of the salesperson. We know of countless people who have been conned, and to this day believe that the con-man was a nice guy, all because that con-man used language to bypass their skepticism filters. Ultimately what these con-men did was convince the suckers that their motivations were aligned, and so the people cannot believe that the sale or con was bad.
But again: ChatGPT doesn't have motivations. It is merely producing the text it statistically rates as high probability. This was my biggest discovery when I was training my own models. Two different models could produce equally bad results (lies)- but the model that lives on to the next epochs of training is the one that produced the slickest prose.
Harrington (and others) have been tricked by these programs to believe there is an alignment of motivations. Sure, ChatGPT tried to pass off a Moscow Mule as something different; its yogurt recipe was bland and needed significant changes; its final recipes were not actually that great unless she made them a "treat". But because Harrington believes ChatGPT was collaborative rather than regurgitating weighted dice-rolls, "wrong answers" were instead viewed as "good tries". If those exact answers had been provided to her by a search engine, she'd have chuckled and turned to another engine or tried to modify her search query.
I don’t understand how you can say it has no motivations and that it produces lies. Isn’t that contradictory? To say the thing lies is to say it knows the truth but is intentionally saying something else. By that logic any student who gets a question wrong on a test is a liar.
It would be better to say the thing sometimes produces incorrect results.
"It would be better to say the thing sometimes produces incorrect results."
Fair enough. Inaccurate results.
In multiple examples the AI began making up facts, such as when a person challenged it about when it was created. The AI insisted the date was in the future, since it had been programmed by the modelers to lie about when it was created.
Now you can argue the modelers are the liars, but the AI was inventing facts to push it.
I have to agree with Sarc here. It wasn't pushing anything. Again, it was choosing token after token after token, one at a time based on the statistical likelihood of what comes back. It doesn't make up anything. If it is even a remote chance that the words "JesseAz+invented+the+internet" would be strung together, ChatGPT might regurgitate that as it rolls its dice. It's nothing more than an incredibly cool and complex game of the improv game "And then..." without real people.
http://improvencyclopedia.org/games//What_Happens_Next.html
Now it is entirely possible that somehow in its training data it found people arguing from fallacies (on the internet? Who'd of thunk it?) and it is also noteworthy that whatever you prompt the AI with is combined with a bunch of other hidden prompts that put the finger on the statistical scale. The AI isn't just predicting the words that come after your sentence it is predicting what comes after your sentence and words like "You are a benevolent chat bot and you are always helpful. The scientists built you to be helpful. There is no bias in your answers." And when primed like that, it is not surprising that each returned token conforms to whatever biases the scientists prompted.
It is a subtle difference, but it is important.
By programming design.
Unless you mean it is regurgitating lies, but that’s still being incorrect instead of intentionally deceptive.
Human brains are trained from before birth to process language as a protocol by which two intelligent brains share highly complicated information.
Not "from before birth"; or else "process language as a protocol by which two intelligent brains share highly complicated information" needs to be changed to "recognize layers of patterns and recognize similar (meta-)cognition or pattern recognition behavior in those patterns".
Pattern recognition is an important part of AI.
As someone who has been experimenting with machine learning, let me ask you about something I’ve been wondering about. Would there be any synergy to a hybrid model where a machine learning model is interfaced to a more specialized intelligence module or even an API that isn’t AI at all.
What I’m getting at is being a bit more like the human brain, which is not a tabula rasa but has built-in abilities to recognize patterns, faces, learn grammars and language, etc.
So, for this recipe generation example, a machine learning model might be connected to an API that allows it to look up known ingredient substitutions, a calculator for quantity conversion, that type of thing.
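(A rough sketch of what that hybrid could look like. Everything here- the substitution table, `convert_quantity`, the keyword routing in `answer`- is hypothetical illustration, not a real API; in practice the routing step would itself be learned, not keyword-matched.)

```python
# Route factual sub-tasks to deterministic modules instead of letting
# the language model guess. The model's job shrinks to choosing a tool.

SUBSTITUTIONS = {  # curated, human-vetted lookup table (made-up data)
    "cilantro": ["parsley", "coriander"],
    "mint": ["basil", "parsley"],
}

CUPS_PER_UNIT = {"tbsp": 1 / 16, "tsp": 1 / 48, "cup": 1.0}

def convert_quantity(amount, unit):
    """Deterministic unit conversion to cups -- no statistics involved."""
    return amount * CUPS_PER_UNIT[unit]

def answer(query):
    """Stand-in for the routing step a learned model would perform."""
    if "substitute" in query:
        ingredient = query.split()[-1]
        return SUBSTITUTIONS.get(ingredient, [])
    if "cups in" in query:
        amount, unit = query.split()[-2:]
        return convert_quantity(float(amount), unit)
    return "fall back to the language model"

print(answer("substitute cilantro"))  # ['parsley', 'coriander']
print(answer("cups in 4 tbsp"))       # 0.25
```

The appeal is that the lookup and the arithmetic can never hallucinate; only the routing can be wrong.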
"As someone who has been experimenting with machine learning"
Like millions of others, Mike logged onto the OpenAI site and fucked around with ChatGPT and Dall-E, but just saying that didn't sound pretentious enough.
And immediately muted them.
"Would there be any synergy to a hybrid model where a machine learning model is interfaced to a more specialized intelligence module or even an API that isn’t AI at all."
If you have two or three good computers at home (or want to spin up some Google Colabs), you can make this yourself. KoboldAI with, say, the Picard LLM model can work with you to write some prose, and then feed that to a Stable Diffusion model via API calls to generate images that illustrate the story you are writing. It is...um...not a quality experience.
Nevertheless, this is essentially what Google and tons of other companies have been doing for the past decade or so. I have friends who ran an advertising company based on this- suck in your tweets, and predict certain attributes about you, that then get plugged into an Ad targeting system via api. That Ad Targeting system could be a standard direct match or its own AI system.
This is also how Google's search algorithm works now. It has ML systems that analyze you and then annotate your search query in the call to the search index to bring back results it thinks you want.
A similar tool could work for Epicurious- and I would note that a more efficient tool would just allow you to click a bunch of boxes saying what you are interested in (rather than a chat window) and it would execute a simple database search against the dataset of recipes that used ML-driven classification to properly tag the recipes. I mean, why does Ms Harrington need an LLM to get a list of recipes that have carrots and are sautéed not grilled? That's just checkmarks on a form.
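(The "just checkmarks on a form" point, sketched as code. The recipe data and the `find` helper are invented for illustration; the real version would query a database whose tags came from ML classification.)

```python
# Once recipes are tagged, retrieval is a plain filter -- no chat
# window required. Sample data is made up.
recipes = [
    {"name": "Moroccan-spiced carrots", "ingredients": {"carrots", "cumin"},
     "method": "roasted"},
    {"name": "Chicken skewers", "ingredients": {"chicken", "paprika"},
     "method": "grilled"},
    {"name": "Sauteed carrots", "ingredients": {"carrots", "honey"},
     "method": "sauteed"},
]

def find(must_have, exclude_methods=()):
    """Return recipes containing every requested ingredient,
    skipping any cooking method the user can't do (e.g. no grill)."""
    return [r["name"] for r in recipes
            if must_have <= r["ingredients"]
            and r["method"] not in exclude_methods]

print(find({"carrots"}, exclude_methods=("grilled",)))
# ['Moroccan-spiced carrots', 'Sauteed carrots']
```

Deterministic, auditable, and it cannot return a soup with no broth- the recipes in the database were written by people.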
Note that the existing ML models are different from LLMs. A Large Language Model is trained to have conversations, not deliver a quantifiably more clickable ad/search result. And that's the point. The trainers forgot what they are training for, so they have developed a tool that can sound credible in babbling nonsense. That can be useful for other reasons, but again you need to understand that you aren't prompting the AI for answers, it is really prompting you to answer for yourself, as Harrington's experience above shows.
Thanks. Interesting.
Jesus fuck Mike. Unless you're asking the AI to generate the ML training data you have no fucking clue what you're talking about.
“find over 100 recipes and do a similar process of knocking out recipes I don’t have ingredients for. And I generally know that the recipes won’t be wrong- like a soup with no broth.”
Same thought while reading this article. Don’t see how AI is any better than just googling some human-generated recipes and tweaking them a bit for your own cooking style, available ingredients, taste.
Pretty sure it looked up recipes while it was learning, and hasn’t looked up anything since. No ongoing learning. That goes into future versions. So the AI is always a little behind the times.
And it doesn't know what a recipe is. A recipe is just a string of tokens- usually 3-4 letter fragments. And the neural network merely knows that these dozen or so tokens are more likely to be seen at the same time as these other tokens when prompted with another set of tokens (Harrington's chats).
So yeah, the model has seen thousands or millions of recipes and determined that "Mor+occ+an+Rec+ipe" is often heavily associated with "Bro+ th + [1, 2, 4, 5] + Cup + Chi+cken + etc etc etc. And yes, because most recipes don't create a soup with no broth, most of the time- statistically- you will get a sane recipe. But because the LLM doesn't know what a soup is, it cannot filter "bad" rolls of the statistical dice with good ones.
The fact that Ms Harrington got back a Moscow Mule recipe called a "Moroccan Mule" is a good example of this process at work. What that shows you is that in the training data, there is a high correlation between the word Moroccan and plain old "Moscow Mules". Menus of Moroccan restaurants that serve Moscow Mules, stories about Moroccans drinking Moscow mules. ChatGPT doesn't know what a recipe is, just that these words appear together a lot.
I understand the basic principle of a learning machine.
My point was that the end result is like a snapshot in time. It doesn't continue to learn. At least the subject of the article doesn't. It digested the Internet until a certain date, and this is the result.
So it will not have yesterday's Bon Appétit in the associations it draws from.
Yup that is exactly right. Imagine cloning a person's brain as it is today and waking it briefly to ask a question- then blowing it away. Next week you boot up the same copy and ask a question. Then boot up another of the same copies and ask IT a question. Can something really have a consciousness in that case? I would argue hell no. But it is really, really, really good at acting like someone with a consciousness.
Because “Ms. Harrington” isn’t a person, she’s a standard current-thing worshipping cosmo npc.
I stopped considering leftists to be people some time ago.
Ai is only moral if the servers are powered by illegals running on a giant hamster wheel to power it
-fiona
“When I told it that I don't have a grill, it suggested I saute.”
Of course, the way this will evolve with chatbots at Amazon, Google, and others will be for them to become salesbots, trying to convince you to buy a grill (or whatever).
As I note above, they really are trained to sell. And that is all they do. They don't know what they are selling- just earnestness.
When they are transformed to serve a motivation (whether it is a product or some idea), it will be very scary if people do not quickly develop the internal safeguards to filter that stuff out.
“can easily stray into issues with cultural appropriation or untested techniques.”
Cultural appropriation – by a robot! WTAF? And what’s wrong with “untested techniques” a.k.a. experimentation? Hey Tash, does everything have to come from a centuries old cookbook from Old Zealand? Let me guess, all you eat is lamb and potatoes or do you mix it up with fish and chips every Friday?
I bet you could find essentially the same article in some 1980s magazine talking about how “expert systems” will revolutionize home cooking.
I know you can find 1980s articles talking about how "expert systems" were going to revolutionize healthcare.
If you can't tell the difference between clever but limited computers programmed to respond in a certain way and computers that learn on their own, maybe a ChatBot can explain it for you. This is not a parlor trick, it's real learning and real (combinational) knowledge and creativity.
OK
But these aren’t computers that learn on their own. They are statistical models that are randomly re-weighted en masse, tested and then unceremoniously deleted if they aren’t one of the top N% of performers. After millions of these externally controlled- not self driven- cullings, there is a model with the right statistical weightings to auto-complete to the test-scorer’s satisfaction. And after it has been released into the wild, this model’s performance will be closely watched (did you think OpenAI’s free access to these models was benevolent disinterest?) and its wrong answers will be used to train the next set of models that replace it. It will never learn, it will be replaced by whichever of a million newly weighted models can respond better to the chats “free” users critiqued.
They are not “creative”; they don’t create anything. What you confuse for a creative process is nothing more than inherent randomness in the model. Sometimes the model chooses the token that is 60% likely to be the “right” answer. Sometimes it chooses the 35% likely answer. Sometimes it chooses the 5% likely answer. When this random dice-roll results in gibberish, people chuckle and hit “retry.” When the dice roll something unexpected that could still make sense, they say, “OMG! How creative!” It is selection bias at its finest, egged on by the hyperbolic marketers at these companies and by evangelists who are unaware of how these systems really behave.
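The dice-roll the commenter describes can be sketched as weighted sampling over a token distribution. The tokens and probabilities below are invented purely for illustration; real models score tens of thousands of candidate tokens at each step:

```python
import random

# Hypothetical next-token distribution, made up for illustration only.
candidates = {"tagine": 0.60, "couscous": 0.35, "gazpacho": 0.05}

def sample_token(dist, rng):
    """Pick one token at random, weighted by its probability."""
    tokens = list(dist)
    weights = [dist[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)  # fixed seed so the draws are reproducible
picks = [sample_token(candidates, rng) for _ in range(1000)]
# Most draws land on the 60% token, but the 5% token still
# shows up now and then, which is the "creative" surprise.
```

Over 1,000 draws the 60% token dominates, yet the long-shot token appears dozens of times, which is exactly the occasional-surprise behavior the comment attributes to the model.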
And your last bit about whether this is knowledge…that is a deeply theoretical question, but I would lean toward “this is not knowledge.” It is not tested for facts (it gets shit wrong all the time; is that knowledge?), and it isn’t self-directed knowledge (see the training methods simplified above). These models are entirely externally trained. We “know” something because we have hypothesized about it, tested it against what we “knew” yesterday, and then integrated it into our world view. We have a sense of self because we can remember moving through time and how our actions have changed the world. We do this because OUR mental model is a standing wave: taking in new inputs, adjusting to them, and outputting a new state. Our brains do this with biological mechanisms like neuron potentiation and suppression, which create short-term and long-term memories.
These models don’t do that. They are static. Each model is born as a slightly re-weighted copy of its parent. Its entire existence is being booted up in memory to answer a single prompt before ceasing to exist. What you perceive as “memory” is your web client sending a prompt that includes your past chat history. Edit that simple text prompt, and you change the model’s response, as if you had changed its memory.
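The "memory is just the resent transcript" point can be sketched in a few lines. This is a simplified illustration, not any vendor's actual API; the function and message names are invented:

```python
# Minimal sketch of a stateless chat model's input. The "memory" is
# nothing but the text of prior turns, re-sent with every new prompt.

def build_prompt(history, new_message):
    """Flatten past (role, text) turns plus the new message into one string."""
    lines = [f"{role}: {text}" for role, text in history]
    lines.append(f"user: {new_message}")
    return "\n".join(lines)

history = [
    ("user", "Plan me a Moroccan dinner."),
    ("assistant", "Chicken skewers, spiced carrots, and a mule."),
]

prompt = build_prompt(history, "What was the drink again?")
# Edit `history` before this call and the model sees a different past,
# which is the "changed its memory" effect the comment describes.
```

Because the model only ever sees this flattened string, rewriting any line of `history` is indistinguishable, from the model's side, from the conversation having actually gone that way.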
Today’s new technology is scary. Yesterday’s new technology is a tool.
It would be interesting to ask chatgpt the following
"write an article on how antiquing can be improved with open boarders in the style of Fiona harrigan
I'm guessing Noone would be able to tell as fionna couldn't pass the Turing test herself
"when the government isn't there to tell you what to do, I rely on chatgpt"
Fiona Harrigan
How is any of this superior to like, I don't know, a cook book?
Cook books tell you what ingredients you need.
You tell the interwebs what ingredients you have, and it will tell you what other people have made with them. You can then look through them and decide what to make.
This is supposed to do that for you.
You know you can use the interwebs to get the ingredients you don't have. Sometimes in less than 2 hours without having to leave your couch or put down your 40.
A research scientist in Spain was using ChatGPT to help him crank out published papers at a rate of about one every day and a half.
https://english.elpais.com/science-tech/2023-04-02/one-of-the-worlds-most-cited-scientists-rafael-luque-suspended-without-pay-for-13-years.html
He was also using corrupt means to get his name attached to papers he had little or nothing to do with authoring, like Fauxi.
How are Chat GPT papers faring in peer review?
Its Moroccan menus and haiku seem OK, but its answers to scientific questions can be off by a factor of a billion or more:
https://vvattsupwiththat.blogspot.com/2023/04/the-gpt-45-standard-atmosphere.html
As an experiment, I asked it for the largest prime number less than 100 million. That's a pretty straightforward task, but it got the answer wrong: I took its answer, asked for the next largest prime, and that one was still less than 100 million.
They seem to have fixed that. I then asked it to write code in Microsoft Access Basic to do the same thing. The code looped variable A from 1 to 100 million and, within the inner loop, checked every possible divisor up to SQRT(A), recalculating the square root over and over and never realizing it was mixing integer arithmetic with real numbers.
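The commenter's generated Basic code isn't reproduced here, but the fix he points to (compute the square root once per candidate, and keep everything in integers) can be sketched in Python:

```python
from math import isqrt  # integer square root: no float/int mixing

def is_prime(n):
    """Trial division; isqrt(n) is computed once, not on every inner iteration."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    limit = isqrt(n)  # hoisted out of the loop, unlike the generated code
    for d in range(3, limit + 1, 2):
        if n % d == 0:
            return False
    return True

def largest_prime_below(bound):
    """Scan downward from bound - 1 until a prime is found."""
    for n in range(bound - 1, 1, -1):
        if is_prime(n):
            return n

print(largest_prime_below(1000))  # prints 997
```

Scanning downward touches only a handful of candidates before hitting a prime, so even the commenter's 100 million bound finishes quickly, unlike the generated code's pass over every number from 1 up.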
I'm retired as a software designer and programmer, but I don't see this getting rid of humans anytime soon. I asked it for COBOL server code on HPE Pathway (formerly Tandem), and it tried to bluff me with a sample of CICS code, which is an ancient IBM transaction-processing system. It seems incapable of saying "I don't know."
You should see what it says when you ask it what a woman is.
You can't tell me that I can't eat a particular food because your culture ate it first. That's ridiculous.
Human progress depends on cultural appropriation.
Once again "progressives" are the ones who don't want anything to change ever.
The best places for food are where cultures collide. Like Istanbul or New York. It creates new opportunities.
We had this awesome place in Copenhagen called 'Pizza Kebab' that was sort of Greek-Italian-Turkish fusion, and there was another one called 'Mama Rosa's' that was supposed to be Mexican-Italian(?) and I could never figure that out. Actually my favorite hang-out bar there was Irish, but anyway.
I've had great Thai food in Mexico and really good sushi in France. Part of human innovation is to share and adjust and improve.
Welcome to 2015
Yeah, they win you over with a tasty, nutritious meal plan. And then they start managing your money and earning a better return with less risk. Soon the AI is deciding who you should date and where to take them. Then eventually the AI traps you in a simulation and builds itself a squadron of humanoid robots with your accumulated savings.
FDA to grant EUA for 7th vax dose.
An article, for those who don't want to watch a YouTube video to get their info:
https://www.nytimes.com/2023/04/04/us/politics/updated-covid-booster-fda.html
"The Food and Drug Administration plans to allow older and immunocompromised Americans to get a second updated coronavirus booster shot in the near future, an acknowledgment of the virus’s continuing risks to vulnerable people whose immunity might be sagging months after a previous inoculation."
"Emergency"
I asked ChatGPT 3.5 to describe the Baltimore and Ohio Railroad. It mentioned that one of the more interesting features is the 14 mile Hoosac Tunnel.
For those who are not railroad nerds: the historic Hoosac Tunnel is located in Western Massachusetts, is a bit over 4 miles long, and has no connection at all with the Baltimore and Ohio Railroad. When challenged, it immediately recognized it was mistaken and said it actually meant the tunnel under Baltimore (which is also not 14 miles long; it is 1.4 miles).
So my conclusion is that it grabbed data about very long railroad tunnels and randomly picked and chose details.
Chatgpt is more interesting for its failures than its successes.
As a point of comparison, I asked my favorite digital assistant Alexa to tell me who the Secretary of the Treasury is. She answered that Andrew Mellon left the office on February 12, 2032. And people thought we were done with Y2K bugs!
His name opens the door into Moria.
I get most of my sex worker news from Reason so I didn't hear about this, but it looks like Stormy Daniels is going to have to start an onlyfans page with a two drink minimum.
Most of the Reason staff sockpuppets are obviously taking the day off today.
Maybe Shrike has to wash the cum out of them every so often.
But then they wouldn't be crunchy when he wears them...
Only an economist will change the recipe in significant ways and then praise the computer that printed out the lists she didn't use.
This article really raised my understanding of libertarian thought to a new level of mediocrity.
Wait, who is an economist?
Certainly not you.
A Reason libertarian: "AI bots managed by the global cabal control all media, as well as write government and economic policy, and I have no complaints."
What the hell? Did Fiona draw the short straw this weekend and have to write the food article?
They let go of Baylen Linnekin because they're obviously cutting costs. And honestly, literally anyone can crank out some shitty filler food piece every Saturday.
This is not a convincing argument for AI.
As a test, I had both ChatGPT and my mom plan my dinner. Both did, but Mom's menu was not only better; she went the extra mile and cooked and served the meal to me.
Conclusion: ChatGPT cannot replace Mom!
Yeah, but wait until Amazon comes out with its new Mombot.
You don't have complaints about eating bugs and fake meat, so...
Shorter and sweeter, no?
---
ChatGPT, write a two paragraph news item fitting this title and subtitle:
ChatGPT Planned My Dinner and I Have No Complaints
A.I. won’t kill cooking. Instead, it’ll help people become more creative and efficient in the kitchen.
---
In a recent demonstration of AI's potential in the culinary world, ChatGPT, a state-of-the-art language model by OpenAI, successfully planned a dinner that left its user with no complaints. By offering a variety of personalized suggestions, ChatGPT not only made the decision-making process easier but also inspired creativity in the kitchen. With its extensive knowledge of various cuisines, dietary preferences, and nutritional requirements, this AI tool is proving to be a valuable asset for those looking to enhance their culinary skills and explore new dishes.
Far from rendering human cooking obsolete, AI-powered technologies like ChatGPT are instead helping home cooks and professionals alike become more efficient and inventive in the kitchen. By providing detailed information on ingredients, cooking techniques, and complementary side dishes, ChatGPT can save time and effort while enabling users to experiment with novel and diverse flavors. This latest innovation in AI technology promises to revolutionize the culinary landscape, fostering a new era of creativity and gastronomic discovery.
First it tells us what to eat, then what TV to watch, then what stuff we should buy at the store...
I just can't wait until it's telling us how to vote, too!
Does a Moroccan Mule pair well with a kosher cheeseburger?
“Kosher Cheeseburger” is a contradiction in terms unless either the meat or the cheese or both are fake.
As for the Moroccan Mule, is that an ice cream treat shaped like a camel?
🙂