This Bill in New York State Would Protect Lawyers From AI Competition
And a committee in the state Senate just unanimously approved it.
If you're a New Yorker in trouble with the law, it might soon be impossible for you to consult your favorite chatbot for legal advice.
Last week, the New York state Senate Internet and Technology Committee unanimously passed Senate Bill S7263. The bill would hold AI companies liable for harm caused by chatbots performing tasks that, if carried out by a human, would constitute the unauthorized practice of a licensed profession, such as providing medical diagnoses or legal counsel.
The bill would also require chatbot deployers, such as OpenAI, Anthropic, and xAI, to "provide clear, conspicuous and explicit notice to users that they are interacting with an artificial intelligence chatbot program." However, doing so does not allow these companies to disclaim responsibility for the outputs of their chatbots.
Sen. Kristen Gonzalez (D–Queens) introduced the bill last May alongside six others included in the Internet and Technology Committee's AI legislative package. Gonzalez, who chairs the committee, described the package as "tackl[ing] the urgent need to protect the workforce from their companies' use of AI." Despite this comment, Gonzalez frames S7263 itself as protecting the public, not workers.
In the bill's justification section, Gonzalez cites a warning from the American Psychological Association to the Federal Trade Commission that chatbot therapists could drive vulnerable people to harm themselves or others. While Gonzalez highlights the possible risk of using chatbots for psychological therapy, she conveniently ignores studies that have found that companion chatbot use is associated with substantial reductions in anxiety, depression, and loneliness.
S7263, as currently written, would not just apply to the licensed professions of psychology and mental health services, but to medicine, veterinary medicine, dentistry, physical therapy, pharmacy, nursing, podiatry, optometry, engineering, architecture, and social work as well.
Taylor Barkley, director of public policy at the Abundance Institute, tells Reason the ban is "shortsighted at best and protectionist at worst." While "these are all professions and services that require accuracy and accountability…AI systems increase quality and lower cost in all these areas."
S7263 would also hold chatbot deployers liable for chatbots that engage in the practice of law or appear as an attorney-at-law, which includes not only representing clients and handling formal legal matters, but also merely offering legal advice.
Kevin Frazier, a technology and AI research fellow at the Cato Institute, tells Reason that the bill unduly limits access to information in a manner that is not only unconstitutional, but also "contrary to both democratic values and a free market economy." Frazier recognizes that "AI use can lead to poor outcomes," but this "will always be the case with respect to any source of information, whether it's pulled from a page or copied from a chatbot." Just as nobody would demand libraries remove resources on the law or mental health, doing so in the case of AI "is unwise and counterproductive," says Frazier.
Despite its obvious drawbacks, the bill has enjoyed support from lawmakers who are pessimistic about AI's impact on the labor market.
Sen. Michelle Hinchey (D–Kingston), who co-sponsored the bill and is a member of the progressive Working Families Party, said AI "threatens to deepen economic inequality and destabilize our economy by eliminating entire job sectors overnight, if left unchecked." Hinchey is making sure that she won't be accused of leaving AI unchecked: In addition to voting for Gonzalez's bill, Hinchey introduced the Workforce Stabilization Act, which would require employers to conduct labor impact assessments and apply for permission to incorporate AI into their businesses, and would impose a "worker displacement surcharge" on those that do.
Unsurprisingly, Mario Cilento, president of the New York State AFL-CIO, the state affiliate of the largest federation of trade unions in the United States, supported the package on the grounds that "AI is…not a replacement for human judgment or jobs." The bill still has a long way to go before being signed into law; it hasn't been passed by the full Senate, and its companion bill in the state Assembly hasn't even made it out of committee. Still, the fact that such protectionist, anti-AI legislation is advancing at all is concerning not only for New York's economic growth but, given that the state has the third-largest economy in the United States, for the entire country.