Artificial Intelligence

Instagram Announces AI Age Verification on New Teen Accounts

Voluntary AI age verification is preferable to federally mandated verification at the operating system level.


Instagram announced that AI age verification will be used to determine which users are teens. The goal is to place 13- to 17-year-old users in newly announced teen accounts. If Meta's artificial intelligence accurately distinguishes adults from teens, it will be a win for user safety and privacy relative to state-mandated age verification at the smartphone level.

Instagram, a Meta company, has recognized that "young people can lie about their date of birth" since its March 2021 report. Since then, Instagram has been developing and deploying "artificial intelligence and machine learning technology" to identify those users who falsely represent themselves as older—and younger—than they really are. The earliest application of this technology was "alert[ing] the [teenage] recipients within their DMs" when contacted by adults who interact disproportionately with users under 18 years old.

In June 2022, Instagram partnered with British age-verification company Yoti to verify the ages of users who attempt to change their age from under 18 to 18 or older. As of December 2022, Meta reported that "96% of the teens who attempted to change their birthdays from under 18 to 18 or over on Instagram [were stopped from] doing so." Yoti was able to do this by using stills from video selfies provided by Meta to approximate a user's "age based on [the user's] facial features."
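Yoti's models are proprietary, but the pipeline it describes—pull a still frame from the video selfie, isolate the face, and run it through an age-estimation model—can be sketched in rough form. Everything below is illustrative: the frame grab and face crop use OpenCV, and the age model itself is a placeholder, since Yoti's is not public.

```python
# Illustrative pipeline sketch only; Yoti's actual models and API are proprietary.
import cv2  # OpenCV, for frame capture and face detection

def frame_from_selfie(video_path: str):
    """Grab a single still frame from the video selfie."""
    capture = cv2.VideoCapture(video_path)
    ok, frame = capture.read()
    capture.release()
    if not ok:
        raise ValueError("could not read a frame from the video selfie")
    return frame

def crop_face(frame):
    """Detect and crop the largest face using OpenCV's bundled Haar cascade."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face found in the selfie frame")
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    return frame[y:y + h, x:x + w]

def estimate_age(face_image) -> float:
    """Placeholder for an age-regression model; Yoti's model is not public."""
    raise NotImplementedError("substitute a trained facial-age model here")

# Usage (hypothetical file name):
# frame = frame_from_selfie("selfie.mp4")
# age = estimate_age(crop_face(frame))
```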

To allay privacy concerns, Meta emphasizes that Yoti "technology cannot recognize [user] identity" and deletes the image immediately after verification. Presently, Instagram users can use Yoti, upload government-issued identification documents, or ask mutual friends to verify their age when attempting to change it.

Meta's latest step to ensure the veracity of reported user ages is its AI-enabled adult classifier, which will be deployed "to proactively find teens that have misrepresented their age [when creating their account] and put them in Teen Account settings," explains Instagram spokeswoman Stephanie Otway in an email. Meta explains in its report published Tuesday how Instagram will use AI trained on "profile information, when a person's account was created, and interactions" to better estimate a user's real age.
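Meta has not disclosed how the adult classifier is built, but the approach it describes—supervised learning over account-level signals such as profile details, account age, and interaction patterns—can be sketched as follows. The feature names, the gradient-boosted model, and the confidence threshold are all assumptions for illustration, not Meta's implementation.

```python
# Illustrative sketch only: Meta has not disclosed its model. The features,
# the gradient-boosted classifier, and the 0.9 threshold are assumptions.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical per-account features: [account_age_days, bio_mentions_school,
# share_of_followers_under_18, avg_follower_age, night_activity_ratio]
X_train = np.array([
    [3650, 0, 0.05, 34.0, 0.10],   # labeled adult
    [4100, 0, 0.02, 41.0, 0.05],   # labeled adult
    [200,  1, 0.60, 15.5, 0.45],   # labeled teen
    [350,  1, 0.55, 16.2, 0.40],   # labeled teen
])
y_train = np.array([1, 1, 0, 0])   # 1 = adult, 0 = teen

model = GradientBoostingClassifier().fit(X_train, y_train)

def route_account(features):
    """Keep standard settings only if the model is confident the user is an
    adult; otherwise default to teen settings (threshold is an assumption)."""
    p_adult = model.predict_proba([features])[0, 1]
    return "standard account" if p_adult >= 0.9 else "teen account settings"

print(route_account([180, 1, 0.5, 16.0, 0.5]))   # likely routed to teen settings
```

The design choice worth noting is the asymmetry: an uncertain prediction falls back to the more restrictive teen settings, so classification errors tend to over-restrict adults rather than under-protect teens.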

Instagram's AI age verification is slated for early 2025 in the U.S. as Meta works "diligently" to ensure its AI models are accurate. If Meta's deployment of its AI age-verification technology is too slow or ineffective, the federal government may intervene. The Social Media Child Protection Act, introduced in February 2023, would require social media platforms to force users "to provide a valid identity document issued by the Federal Government or a State or local government, such as a birth certificate, driver's license, or passport" to prohibit those 16 years old and younger from using the platform.

Social media is under increasing scrutiny from lawmakers, as evidenced by a House panel advancing the Kids Online Safety Act on Wednesday and Meta CEO Mark Zuckerberg apologizing to parents of online harassment victims before the Senate Judiciary Committee earlier this year. Feeling the pressure, Meta advocates for federally mandated age verification at the smartphone or app store level in lieu of "comprehensive industry standards or harmonized regulation." Let's hope its private standard is effective, lest federal and state governments continue to impose requirements by law.