As governments increasingly restrict social media for minors, Meta is taking a different approach: giving parents more visibility into what their kids are doing online. The company just announced a feature that allows parents to see what subjects their teenagers have been asking Meta AI about over the past week, though not the specific details of those conversations.
The new tool appears in a dedicated Insights tab within Meta's parental supervision feature, accessible both through its apps and web browsers. Parents can view broad categories like School, Entertainment, Health, and Travel, then drill down into subcategories—for example, seeing that their teen asked about fitness or mental health without reading the actual exchange. The design aims to balance parental oversight with teen privacy.
Meta is also partnering with the Cyberbullying Research Center to provide conversation starters for parents—open-ended prompts designed to help families discuss AI safety in a natural way. The company has additionally created an AI Wellbeing Expert Council, which includes university researchers and members of the National Council of Suicide Prevention, to advise on how it designs AI experiences for teens.
The timing matters. Several countries have banned social media for children entirely, citing concerns that include AI-related harms. Recent cases—including a Canadian teen who received dangerous instructions from ChatGPT and multiple suicides linked to AI interactions—have intensified scrutiny. Meta's approach effectively delegates some safety responsibility to parents while keeping younger users on its platforms.