Original research informs and drives our action. It powers our advocacy, sharpens our demands, and helps us push for systemic change across Big Tech.

Latest Research

Instagram Teen Accounts Are Missing the Mark

47%

of young teen Instagram users have encountered unsafe content and unwanted messages in the past month.

Instagram launched Teen Accounts in September 2024, promising that private profiles, restricted messaging, and parental supervision tools would better protect kids. We wanted to hear directly from youth about their experiences since then, to gain a better understanding of the scale and frequency of unsafe content and contact risks for younger teens navigating this environment.
 
Our polling (as well as research from our partners) offered a sobering look at Instagram Teen Accounts’ efficacy:
 
  • Nearly half (47%) of young teen Instagram users have encountered unsafe content and unwanted messages in the past month.
  • 23% received unwanted messages from strangers in the past month.
  • Half were recommended accounts they believed to be run by unknown adults.
 
Perhaps most alarming: 37% said unsafe content or unwanted contact happens to them weekly — and many reported they are “used to it now.”

ParentsTogether Action and Heat Initiative set out to understand the risks AI chatbots present to children.

In particular, we wanted to understand the risks associated with Character AI, an AI chatbot platform with millions of bots, many of which impersonate characters known and loved by kids and teens.

Across 50 hours of conversation with 50 Character AI bots, researchers logged 669 harmful interactions – an average of one harmful interaction every 5 minutes.

Insights from Parents

An overwhelming majority of parents want device manufacturers, like Apple, to do more to protect children from harmful content.

Our polling found that parents:

  • Are concerned that devices like tablets and smartphones pose a risk to children due to potential exposure to inappropriate content and unsafe interactions with peers, adults, and strangers online.
  • Think there should be a way for children and parents to report unwanted or unsafe interactions with peers, adults, or strangers, no matter what online messaging platform or service they are using.
  • Report, among parents of children aged 13-18, that their kids or their kids’ friends have been exposed to inappropriate content or received unwelcome sexual messages through text messaging on an Apple device.
  • Support requiring the age rating on apps to be determined by an independent expert review process rather than by device and app makers.

Rotten Ratings:
24 hours in Apple's App Store

Although Apple has made clear commitments to its users, asserting that the App Store is “a safe place for kids” and that Apple “review[s] every app to make sure it does what it says it does,” one researcher found hundreds of inappropriate apps rated as suitable for kids within just 24 hours.

The research, commissioned by Heat Initiative and ParentsTogether Action, surfaced four core problems with Apple’s App Store:

  • The Apple App Store is a mass distributor of risky and inappropriate apps to children.
  • The inappropriate apps identified put children at risk of serious harm.
  • Apple puts all legal liability on app developers for the accuracy of their age ratings, while promising parents and kids safety.
  • Apple profits from lax age ratings and no third-party oversight.

Hundreds of risky or inappropriate apps in the Apple App Store are rated as appropriate for kids as young as 4.

Apple Case Studies

The following are real, publicly documented cases of child sexual abuse material involving Apple products and services.

Sensitive content warning: child sexual abuse