Meta is turning to artificial intelligence to enforce age restrictions across its platforms.
In an online post, the company explained that it will use AI to scan user profiles for contextual clues of age-restriction violations.
“We look for these signals across various formats, like posts, comments, bios, and captions, and we’re continuing to expand this technology across additional parts of our apps like Instagram Reels, Instagram Live, and Facebook Groups,” it wrote.
“If we determine an account may be underage,” it added, “it will be deactivated, and the account holder will need to provide proof of age through our age verification process to prevent their account from being deleted.”
Meta is also deploying visual analysis tools that will allow its AI to scan photos and videos for age-related cues that text might miss.
“We want to be clear: this is not facial recognition,” the company cautioned. “Our AI looks at general themes and visual cues, for example, height or bone structure, to estimate someone’s general age; it does not identify the specific person in the image.”
“By combining these visual insights with our analysis of text and interactions, we can significantly increase the number of underage accounts we identify and remove,” it noted.
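The mechanics Meta describes, separate text, visual, and interaction analyses merged into one determination, can be illustrated with a toy sketch. Everything here is hypothetical: the function name, the weights, and the threshold are invented for illustration and do not reflect Meta's actual system.

```python
# Hypothetical sketch of combining independent age-signal scores.
# Weights and threshold are invented; Meta has not disclosed its method.

def combine_age_signals(text_score, visual_score, interaction_score,
                        weights=(0.4, 0.35, 0.25), threshold=0.7):
    """Return (flagged, combined_score) for a possible underage account.

    Each input is a probability-like value in [0, 1] produced by a
    separate classifier (text analysis, visual cues, interactions).
    """
    scores = (text_score, visual_score, interaction_score)
    combined = sum(w * s for w, s in zip(weights, scores))
    return combined >= threshold, combined

# An account scoring high on text and visual cues gets flagged even if
# its interaction signal alone would not cross the threshold:
flagged, score = combine_age_signals(0.9, 0.8, 0.6)
```

The point of the sketch is the one Meta's statement makes: no single signal decides the outcome, but several moderately strong signals together can push an account over the line.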
‘Notoriously Unreliable’
Meta’s approach to age verification has its critics. “While there are several different age-checking methods on the market, there is no technology available that is entirely privacy-protective, fully accurate and that guarantees complete coverage of the population,” observed Rindala Alajaji, associate director of state affairs at the Electronic Frontier Foundation, an international non-profit digital rights group based in San Francisco.
“The problem with ‘age estimation’ is that it’s exactly that: a guess,” she told TechNewsWorld. “And it is inherently imprecise.”
“Age estimation is notoriously unreliable, especially for teenagers — the exact group age verification laws claim to protect,” she asserted. “An algorithm might tell a website you’re somewhere between 15 and 19 years old. That’s not helpful when the cutoff is 18, and what’s at stake is a young person’s constitutional rights.”
Alajaji also pointed out that facial age estimation systems consistently fail for certain groups. “People of color are routinely misidentified, trans and nonbinary people are frequently misclassified, and people with disabilities that affect their appearance fall outside the algorithm’s training parameters, or anyone who doesn’t fit the algorithmic ‘norm’ gets flagged,” she explained.
However, Alex Ambrose, a policy analyst with the Information Technology & Innovation Foundation, a research and public policy organization in Washington, D.C., maintained that while visual analysis technology isn’t perfect, it is accurate and continues to improve.
“If online services could use AI to reliably estimate users’ ages without collecting and storing their personal information, they would avoid many of the issues associated with age verification requirements,” she told TechNewsWorld.
Privacy Concerns
Meta’s move has also raised privacy concerns. “I can understand their approach, but it does carry privacy implications,” said Mohamed Yousuf, CEO of Smart Workforce AI, a workforce intelligence platform in Toronto.
“When AI analyzes posts, captions, comments, birthday clues, and photos to determine age, it moves from age verification being a simple declared field to broader behavioral and contextual analysis,” he told TechNewsWorld.
“The main issue is keeping things fair and balanced,” he said. “How is data analyzed? How long is it retained for? How are decisions explained? How can users appeal incorrect outcomes?”
“Meta says their goal is safety and age-appropriate experiences, but users still need transparency around how these systems operate and what is being evaluated,” he added.
AI-based age profiling works by aggregating weak signals — including behavioral patterns, device history, engagement habits, and social graph context — explained Husnain Bajwa, SVP of product, risk solutions at Seon Technologies, a global fraud prevention and anti-money laundering compliance company.
“None of those are highly sensitive on their own, but together they form a detailed behavioral profile,” he told TechNewsWorld. “That creates real privacy concerns around consent, explainability, bias, and data minimization, especially when the subjects are minors who may not understand how much inference is happening behind the scenes.”
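Bajwa's point, that individually weak signals compound into a strong inference, can be shown with a small naive-Bayes-style sketch. The signal names, likelihood ratios, and prior below are invented for illustration; they are not drawn from Meta, Seon, or any real system.

```python
import math

# Illustrative only: aggregating weak signals via log-odds.
# Each likelihood ratio is P(signal | minor) / P(signal | adult);
# all values here are made up for the sketch.

def aggregate_weak_signals(likelihood_ratios, prior_prob=0.1):
    """Combine per-signal likelihood ratios with a prior and return
    the posterior probability that the account holder is a minor."""
    log_odds = math.log(prior_prob / (1 - prior_prob))
    for lr in likelihood_ratios:
        log_odds += math.log(lr)
    return 1 / (1 + math.exp(-log_odds))

# Each signal alone barely moves the estimate, but together they
# shift a 10% prior to a majority-probability inference:
signals = {
    "school-hours inactivity": 1.8,
    "teen-heavy follower graph": 2.5,
    "device first seen recently": 1.4,
    "youth slang in captions": 2.0,
}
posterior = aggregate_weak_signals(signals.values())
```

This is also why the privacy concern bites: the inference emerges only from pooling many low-sensitivity observations, so no single data point reveals what the aggregate profile does.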
Safety Tool or Surveillance System?
With its high-tech approach to age verification, Meta is asking users to hand over personal information as part of a gradual surveillance system, contended Chris Hutchins, founder and CEO of Hutchins Data Strategy Consultants, a healthcare-focused advisory firm in Nashville, Tenn.
“This privacy tradeoff should cause concern, and Meta has no right to diminish public concern in such a callous manner,” he told TechNewsWorld. “The same privacy data collected during surveillance will be repurposed to provide targeted ads and suggest content, as they have done in the past.”
“The primary concern here is the depth of the contextual dragnet,” added Giselle Fuerte, founder and CEO of Being Human With AI, an AI ethics education company in Spokane, Wash.
“Meta is essentially looking for tells in our private and semi-private interactions,” she told TechNewsWorld. “While they frame this as safety, it’s a form of behavioral surveillance.”
“We have to consider what happens to the massive amounts of data being scraped to make these estimations — is this ‘age-likelihood’ score being appended to our permanent advertising profiles or used to train future, more invasive models?” she asked.
Critics Call It Security Theater
Priten Soundar-Shah, a Harvard scholar in education management policy and author of “Ethical Ed Tech: How Educators Can Lead on AI and Digital Safety in K-12,” argued that Meta is acting reactively to increased enforcement demands from the EU and the state of New Mexico.
“Rather than focus on figuring out how Meta can use traditional methods that have been used by other sites in the past to effectively gate for age, like verification APIs and ID checks, they’re choosing to rely on AI technology that comes with its own sets of risks,” he told TechNewsWorld.
While he acknowledged that Meta is not using facial recognition to verify age, Soundar-Shah pointed out that body profiling still relies on assumptions about how certain ages appear.
“That ends up disproportionately affecting students who aren’t a majority of the dataset,” he said. “This includes students with trans identities, queer students, students of color, and students with different developmental patterns. The idea that there’s a clear differential in visual appearance between a thirteen-year-old and a twelve-year-old is categorically false.”
Soundar-Shah asserted that the use of body profiling technology seems to be a way to create performative security theater around what Meta is doing to actually enforce age restrictions, rather than figure out a sustainable strategy that doesn’t rely on extensive data collection and profiling. “It’s also a way to avoid taking any actual responsibility,” he contended.
“Best case scenario, they’re running an experiment that’s untested on minors, deploying exactly the kind of technology we should be avoiding given the failures of the past,” he declared. “Worst case scenario, they know the potential risks, and they’re deploying it anyway, because the regulatory optics outweigh any of the cost to them.”
“This is not going to be a single-solution-solves-everything scenario,” he added. He called for age-verification responsibilities to be shared among social media platforms, parents, and app stores.
“If the whole thing relies on AI analysis of images,” he continued, “it’s very easy for a child to decide not to post images, or only post AI-generated images, and that doesn’t actually help deal with the fundamental problem.”