California

A California Law Designed To Protect Children's Digital Privacy Could Lead to Invasive Age Verification

While the California Age-Appropriate Design Code Act was hailed as a victory for digital privacy, critics warn of a litany of unintended consequences.

California Gov. Gavin Newsom (D) signed the California Age-Appropriate Design Code Act last month. The law requires online businesses to create robust privacy protections for users under 18.

However, critics of the law have raised concerns about its vague language, which leaves unclear which kinds of businesses might be subject to its constraints and what specific actions companies must take to comply.

For instance, due to the law's strict age requirements, online businesses may resort to invasive age verification regimes—such as face-scanning or checking government-issued IDs. While digital privacy protections are important, particularly for children, California's Age-Appropriate Design Code Act could have unintended consequences in its vague, sweeping attempt to accomplish that end.

The law applies to any "business that provides an online service, product, or feature likely to be accessed by children" and mandates that these businesses act in the "best interests of children." Tech companies must conduct a rigorous Data Protection Impact Assessment, judging whether their products have the potential to "harm" children. Further, businesses must estimate the age of child users, configure children's default settings to have a high level of privacy, and provide and enforce privacy policies and other information in child-friendly language. The law also mandates that businesses make it obvious to children when they are being monitored or location-tracked (by a parent, guardian, or any other consumer) and provide accessible means for children or their guardians to report privacy concerns.

Businesses are banned from collecting, using, selling, or sharing children's personal information—including geolocation—without a "compelling reason" or gathering information estimating the child's age range in a way that is not "proportionate" to the "risks" of the online service. The law does not clarify what makes a "compelling reason" for collecting information or how businesses should gauge whether their data collection practices are "proportionate" to the apparent "risks" of their product.

Further, the law says businesses are prohibited from using "dark patterns" to encourage children to share unnecessary personal information or take actions that the company should know are detrimental to the child's mental and physical health.

Finally, the law prevents businesses from using children's personal information in a way it "knows, or has reason to know, is materially detrimental to the physical health, mental health, or well-being of a child."

Businesses that flout the law face fines of up to $2,500 per affected child for each "negligent" violation and up to $7,500 per affected child for each "intentional" violation. However, the bill contains a "right to cure," giving noncompliant companies 90 days to fix their violations and avoid financial penalties.

Critics worry that the vague language of the law leaves businesses with no clear path to compliance. "The requirement that companies consider the 'best interests' of children is incredibly difficult to interpret," TechNet, a tech sector lobbying group, and the Chamber of Commerce wrote to California legislators.

"As currently written, we believe that the standard businesses are asked to consider—'the best interests of children'—is too vague and perhaps incoherent," wrote Electronic Frontier Foundation Senior Legislative Activist Hayley Tsukayama in a letter to Gov. Newsom. "No service provider that operates with any kind of scale can make such decisions for an individual child unless there is a specific case, incident, or set of facts. Any service provider that operates with any kind of scale will face many different groups of children with different vulnerabilities. There will always be reasonable disagreements about what's 'best' in this larger context."

Further, many critics are concerned that the bill could lead to complex and invasive age verification schemes. "Post-AADC, users will first be required to prove their age before they can visit any new site—even if they just plan to visit for a second, and even if they never plan to return," wrote Eric Goldman, a professor at Santa Clara University Law School, in The Capitol Weekly about the measure. "The actual process of age authentication usually involves either (1) an interrogation of personal details or (2) evaluating the user's face so that software can estimate the age. Neither process is error-free, and either imposes costs that some businesses can't afford. More importantly, the authentication process is highly invasive."

Instagram has already adopted such a scheme. In June, the site announced that users who attempt to change their birthdate from under 18 to over 18 must prove their age in one of three ways: by uploading a photo of a government ID, by submitting a "video selfie" to be reviewed by a facial analysis company, or by having three adult mutual followers "vouch" that they are over 18. Ironically, online businesses could soon enact invasive age-verification requirements to comply with what is supposedly a digital privacy law.