In 2021, a whistleblower at Facebook shared internal research with Congress. The research showed that Instagram knew its platform was harmful to teenage girls' body image — and had documented this for years. One internal slide read: "We make body image issues worse for one in three teen girls." The research had been shelved, not acted on.

The algorithm that served those girls content wasn't malfunctioning. It was working exactly as designed.

The Business Model Your Child Is Part Of

Social media platforms make money from attention. The longer a user stays on the platform, the more ads they see, the more valuable they become to advertisers. This creates a straightforward incentive: build a system that maximizes time spent, regardless of whether that time benefits the user.

Algorithms optimize for engagement. And the content that reliably produces the most engagement — the most clicks, most shares, most time spent — is content that provokes strong emotion. Outrage works. Anxiety works. Aspiration tinged with inadequacy works exceptionally well. These are not bugs. They are features, produced at massive computational scale.

Children and teenagers are particularly high-value targets for this system. On average, they spend more time on platforms than adults do. They are in developmental stages where identity, belonging, and social comparison are already heightened concerns. And they are, by definition, still developing the critical evaluation skills that might protect them from these dynamics.

What a Child's Algorithm Learns

It starts quickly. Research published in 2022 on TikTok's algorithm found that new users could be funneled into extreme content within forty minutes of their first session, based on nothing more than small behavioral signals — a pause, a replay, a swipe-away.

The algorithm learns what keeps your child watching. It learns which emotions correlate with longer sessions. It builds a model of your child — their anxieties, their aspirations, their social vulnerabilities — and it serves content optimized for that model. Not because anyone decided to target your specific child, but because the system was built to find and exploit the psychological signature of every user, at scale.

A child who pauses on videos about body transformation gets more body transformation content. A child who watches videos about feeling left out gets more content about social exclusion. A child who engages with increasingly extreme political content — even with skeptical disbelief — gets more extreme content, because the algorithm doesn't measure agreement, only attention.
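If you want to see that loop in the barest possible terms, here is a toy sketch in Python. The topics, scores, and update rule are all invented for illustration; no real platform is anywhere near this simple. But the loop itself (serve something, measure how long it holds attention, reweight, serve again) is the one described above.

```python
# A deliberately simplified sketch of an engagement-driven feed, for illustration
# only. The topic names, weights, and update rule are invented for this example;
# real recommender systems are vastly more complex.

import random

# The system's "model" of one user: how much attention each topic has earned so far.
engagement_scores = {
    "body transformation": 1.0,
    "feeling left out": 1.0,
    "hobbies": 1.0,
    "comedy": 1.0,
}

def pick_next_video(scores):
    """Choose the next topic, weighted toward whatever has held attention before."""
    topics = list(scores)
    weights = [scores[t] for t in topics]
    return random.choices(topics, weights=weights, k=1)[0]

def record_watch(scores, topic, seconds_watched):
    """Update the model. Note what is measured: time spent, not enjoyment,
    not agreement, not whether the video was good for the viewer."""
    scores[topic] += seconds_watched / 10

# Simulate a short session in which the viewer lingers on one kind of content.
for _ in range(20):
    topic = pick_next_video(engagement_scores)
    watched = 30 if topic == "body transformation" else 5  # the pause, the replay
    record_watch(engagement_scores, topic, watched)

print(engagement_scores)  # the lingered-on topic now dominates future picks
```

Run it and the lingered-on topic quickly dominates the scores, which is the whole point: a pause measured in seconds is enough to tilt everything that comes next.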

Talking About the Algorithm Without Sounding Like a Lecture

Here's the challenge: everything above is true, and delivering it as a parental warning produces exactly the response you'd expect — a teenager who mentally categorizes it alongside every other adult concern they've been trained to filter out.

What works better is making the algorithm visible rather than explaining it.

Watch something together and ask: "Why do you think that came up?" Invite your child to be the expert on their own feed. Ask them to predict what will show up next based on what they've been watching. Make the invisible machinery visible through curiosity, not instruction.

One useful exercise is what some educators call the "audit." Ask your child: if someone looked at only your feed from the last week, what would they think you care about? What would they think makes you feel bad about yourself? This kind of reflection externalizes the algorithm's model and makes it something to examine rather than something to be absorbed by.

Critical Literacy Is a Skill, Not an Attitude

"Just think critically about what you see online" is advice so vague it borders on useless. Critical literacy about digital content is a concrete skill set: understanding that content is produced with intention, that feeds are curated by systems with specific goals, that virality and truth are not correlated.

These skills aren't automatically acquired through exposure to the internet. They have to be taught, preferably through practice rather than explanation. Every time you watch something together and ask "What are they trying to make you feel?" you're giving a small lesson in media literacy. Every time you point out that a piece of content got millions of views without being accurate, you're calibrating a detection instrument.

Over time, these small lessons build into something durable. A child who has learned to ask "What does this feed want from me?" is genuinely better protected — not because they never get drawn in, but because they have a way to name what's happening and step back.

The Asymmetry

The system your child is using was built by some of the most sophisticated engineers and behavioral scientists in the world. It runs on billions of dollars of investment and has been refined against trillions of data points. It knows more about your child's behavior patterns than your child does.

This is not a case for panic. It is a case for honesty. You cannot out-restrict the algorithm. You cannot explain it away. What you can do is build, alongside your child, the capacity to see it clearly — and to understand that what the algorithm wants is not necessarily what they want for themselves.

That capacity is worth more than any parental control setting. It travels with them everywhere the algorithm goes.