Meta is introducing new parental controls that will allow parents to restrict or disable private chats between teens and its in-app virtual characters on Instagram. The update follows months of public concern over the safety of automated conversations with younger users.
According to Meta, the new supervision tools will roll out early next year in the United States, the United Kingdom, Canada, and Australia. Parents will have the option to turn off one-on-one conversations between teens and digital characters, block certain characters entirely, and view general chat topics without accessing private message history.
The company says its built-in assistant will still be available with “age-appropriate defaults” even when personal chatbot interactions are turned off. These updates build on protections already active on teen accounts, including systems that automatically detect underage users and apply stricter safety settings.
Meta’s latest step comes after criticism that earlier versions of its chat features sometimes produced inappropriate or overly personal responses. The company stated that all virtual characters are now designed to avoid topics such as self-harm, eating disorders, and sexually explicit content.
Meta also reaffirmed that it continues to improve monitoring systems for under-18 users and collaborate with experts to make its apps safer for younger audiences. The updated controls are expected to begin public testing in the first quarter of 2026.