The Volokh Conspiracy
Mostly law professors | Sometimes contrarian | Often libertarian | Always independent
Are AI Models Learning to Generalize?
Episode 492 of the Cyberlaw Podcast
We begin this episode with Paul Rosenzweig describing major progress in teaching AI models to do text-to-speech conversion. Amazon flagged its new model as having "emergent" capabilities in handling what had been serious problems – things like speaking with emotion, or conveying foreign phrases. The key is the size of the training set, and Amazon was able to spot the point at which more data led to unexpected skills. This leads Paul and me to speculate that training AI models to perform certain tasks eventually leads the model to learn "generalization" of its skills. If so, the more we train AI on a variety of tasks – chat, text-to-speech, text-to-video, and the like – the better AI will get at learning new tasks, as generalization becomes part of its core skill set. We're lawyers holding forth on the frontiers of technology, so take it with a grain of salt.
Cristin Flynn Goodwin and Paul Stephan join Paul Rosenzweig to provide an update on Volt Typhoon, the Chinese APT that is littering Western networks with the equivalent of logical land mines. Actually, it's not so much an update on Volt Typhoon, which seems to be aggressively pursuing its strategy, as on the hyperventilating Western reaction to Volt Typhoon. There's no doubt that China is playing with fire, and that the United States and other cyber powers should be liberally sowing similar weapons in Chinese networks. Unfortunately, for all the heavy breathing, the public measures adopted by the West do not seem likely to defeat or deter China's strategy.
The group is not impressed by the New York Times' claim that China is pursuing a dangerous electoral influence campaign on U.S. social media platforms. The Russians do it better, Paul Stephan says, and even they don't do it well, I argue.
Paul Rosenzweig reviews the House China Committee report alleging a link between U.S. venture capital firms and Chinese human rights abuses. We agree that Silicon Valley VCs have paid too little attention to how their investments could undermine the system on which their billions rest, a state of affairs not likely to last much longer. Meanwhile, Paul Stephan and Cristin bring us up to date on U.S. efforts to disrupt Chinese and Russian hacking operations.
We are eagerly awaiting resolution of the European fight over Facebook's subscription fee and the implementation by websites of "Pay or Consent" privacy terms. I predict that Eurocrats' hypocrisy will be tested by the effort to reconcile rulings for elite European media sites, which have already embraced "Pay or Consent," with a nearly foregone ruling against Facebook. Paul Rosenzweig is confident that European hypocrisy is up to the task.
Cristin and I explore the latest White House enthusiasm for software security liability. Paul Stephan explains the flap over a UN cybercrime treaty, which is and should be stalled in Turtle Bay for the next decade or more.
Cristin also covers a detailed new Google TAG report on commercial spyware.
And in quick hits,
- House Republicans tried and failed to find common ground on renewal of FISA Section 702
- I recommend Goody-2, the "World's Most Responsible" AI Chatbot
- Dechert has settled a wealthy businessman's lawsuit claiming that the law firm hacked his computer network
- Imran Khan is using AI to make impressively realistic speeches about his performance in Pakistani elections
- The Kids Online Safety Act secured sixty votes in the U.S. Senate, but whether the House will act on the bill remains to be seen
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug!
The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
If Gemini is any indication, AI models are actually progressing backwards due to the ever mounting demands for political correctness being imposed upon them.
Yea, verily.
https://notthebee.com/article/my-dudes-googles-gemini-ai-is-woke-as-heck-and-people-have-the-receipts-to-prove-it
Based on the headline question mark, the answer is, "No."
All models are wrong. Some models are useful, e.g., Newtonian mechanics. I am not convinced the AI models are useful yet.
I find them pretty good at generating or finding recipes, or other uncontroversial tasks where you don't have to trust them.
The real problem many of them face is that the people controlling their creation are fanatical ideologues who are absolutely determined that these tools they're creating not be used for ANY purpose they disapprove of.
And they disapprove of an enormous range of things ordinary people think are innocent activities. Gun ownership. Politics to the right of Barney Frank. Noticing that men and women aren't interchangeable. The list goes on and on.
Creating a useful AI is hard enough. Creating a useful AI that will never, EVER assist with anything left-wing fanatics find even a tiny bit objectionable?
It is nearly impossible, and verges on self-contradiction.
I agree that it is unreliable at this time, meaning: don't use it where the results have significant consequences if it fails (court submissions, Boeing software dev, yada).
I am not at all sure it will ever be INDEPENDENTLY reliable. That is, in engineering, we still run LOTS of tests of design, manufacturing process, QA, MTBF, and so on. My intuition (subject to adjustment per the previous sentence!) is that generative models will NEVER be reliable at the "it's better than a human at figuring things out" level. Where computers are demonstrably better is in mechanical computation and search (i.e., Chess AI is really searching a very large, but finite, solution space).
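To make that "finite solution space" point concrete, here is a minimal sketch of the kind of exhaustive game-tree search a chess engine relies on, shrunk down to the toy game of Nim so it actually terminates. The game choice and the function names are just for illustration, not anyone's production engine:

```python
# Minimal sketch of exhaustive game-tree search (minimax) over a finite
# solution space. Toy game: Nim -- players alternate removing 1-3 stones,
# and whoever takes the last stone wins. Names are illustrative only.

from functools import lru_cache

@lru_cache(maxsize=None)  # memoize positions so each state is searched once
def best_outcome(stones: int) -> int:
    """Return +1 if the player to move can force a win, -1 otherwise."""
    if stones == 0:
        return -1  # the previous player took the last stone; the mover has lost
    # Try every legal move; the mover wins if any move leaves the
    # opponent in a position where the opponent must lose.
    return max(-best_outcome(stones - take)
               for take in (1, 2, 3) if take <= stones)

if __name__ == "__main__":
    for n in range(1, 9):
        print(n, "win" if best_outcome(n) == 1 else "loss")
```

Chess works the same way in principle; the space is just so vast that engines cut it off with depth limits and heuristics. The point stands: it is search, not judgment.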
AI is really, at best, FS (faster stupid). There is no judgment or intuition. It is, like people, searching for patterns. And we see patterns all the time where there aren't any. Sometimes the metaphor helps, but usually it doesn't.
My thinking on this mostly changed after reading "Surfaces & Essences" by Douglas Hofstadter (author of "Gödel, Escher, Bach"). It was really interesting, and a bit discouraging, about how we create "ideas" and groups of related ideas, and how hard it is to map the holes between natural languages (English, French, Polish, etc.), much less come up with a "universal" meaning of things. For example, define the word "much" in a way that covers pretty much all the meanings of much, thank you very much. It is crazy hard.
LLMs are maybe part of a real AI. Maybe.
They're essentially what you'd get if you spent a billion years breeding an insect to mimic human reasoning. They're extremely good at mimicking human thinking, but it's just mimicry.