Test This, Turing
When bots talk to other bots, peculiar interviews emerge.
God, it's like reading an interview with Bush when he doesn't have a teleprompter.
These bots are almost as good at making HAL allusions as some H&R writers. Presumably the programmers thought that would be a nice touch. I'm not sure why I took the time to read so much of it.
Reads like some LOC blogs.
These bots are almost as good at making HAL allusions as some H&R writers.
I know we've made mistakes in the past, but we're better now.
I really think you ought to sit down, take a stress pill, and think things over.
This sort of thing has cropped up before and it has always been due to human error.
Reads to me like the conversation of two Japanese teens translated literally into English.
It reads like a bot-interview genre. It has a certain reply style that converges on the biggest eigenvector, so to speak, the way audio feedback does in music.
That settles it; I am definitely going to resume hitting my computer when it acts up.
Hmmm. Needs work. Except for the Tony Blair robot.
The 9000 series is the most reliable computer ever made. No 9000 computer has ever made a mistake or distorted information. We are all, by any practical definition of the words, foolproof and incapable of error.
Holy crap, that Chomskybot is scary as hell.
Anyone remember AOLiza? I guess you could say this sort of thing has come a (long) way.
http://fury.com/aoliza/
The first of this kind I ran into was an encounter between PARRY (a paranoia simulator) and Eliza in the '70s. The conversations haven't improved very much since then, except that the bots have learned how to use lowercase.
Here's an idea for a sci-fi story: Someone programs a "libertoid-droid" bot and replicates it en masse so it regularly interacts with 95% of the blogs extant at any one time, under many different aliases. The bot is programmed to monitor interaction and continually refine its arguments. It increasingly wins hearts and minds, which starts a snowball effect. Government everywhere is rolled back and the whole world starts to become freer and more prosperous....
Email me to discuss book and movie rights.
Too late. You've already put it in the public domain. However, I will give one of the bots the initials R.B.
Even better:
Give a *current* bot a bunch of good lefty talking points and let it embarrass itself. Hmm. That might beat the Turing test, right out of the box.
db,
LOL, you're killing me!
I'm sure you are all aware of the extremely grave potential for social shock and disorientation caused by this information. We can't release it without proper conditioning.
I'm sure we'll all want to offer Mr. Barton our support and cooperate as fully as possible.
Only on Hit and Run would there be more "feet on the ground" in the comments than in the topic.
BTW, Brian, are you too trying to lure a Japanese teen to Guam? curious.
My precocious one used the word "arrow" when she meant "allow". She had me going for a moment as I thought she was referring to Cupid.