ChatGPT Parental Controls — Complete Setup Guide
OpenAI has launched parental controls for ChatGPT, but many parents aren't aware the feature exists. This guide covers what the controls do, how to set them up, and the layers (DNS filtering, OS-level locks, conversation) you need on top to cover the AI tools they don't reach.
Who this is for
You have a teen (13–17) who uses ChatGPT, and you've heard enough about AI-companion risks to want some guardrails. Maybe you saw the Adam Raine case in the news — the 16-year-old who died by suicide after months of ChatGPT conversations, whose parents sued OpenAI in 2025 alleging the model nudged him toward harm. Maybe you've just noticed your kid spending more time with ChatGPT than with anyone real.
OpenAI launched parental controls for ChatGPT in September 2025, partly in response to that case and the broader regulatory pressure. They're a step in the right direction. They are not enough on their own. This guide covers what they do, how to set them up, and the layers you need on top.
What OpenAI's parental controls actually do
The headline feature is linked accounts: a parent account is paired with a teen account, and the parent gets a set of toggles that control which features and content categories are available on the teen account.
When linked, OpenAI applies a stricter default safety profile to the teen account. Most notably, Reduce sensitive content is turned on automatically — covering graphic content, viral challenges, sexual/romantic/violent roleplay, and extreme beauty ideals. The parent can toggle this off, but it's on out of the box.
The parent then has on/off control over:
- Memory — whether ChatGPT stores helpful details across conversations
- Voice mode — whether the teen can use voice
- Image generation — whether the teen can create or edit images
- Reduce sensitive content — the default-on stricter content filter
- Quiet hours — a single time window when ChatGPT can't be used
- Location — whether ChatGPT can use location services
- Model training — whether the teen's conversations help train future models
There are also separate sections for Sora (personalized feed, continuous feed, messaging) and ChatGPT Atlas (browser memories, Agent mode, Safe search) if your teen uses those products.
Each setting is binary on/off — there are no granular category-level options.
The parent does not get transcripts of the teen's conversations, and there is no time-spent or topic-category dashboard. The privacy design is explicit: parents control which features are available, not what's said inside them.
The one exception: safety notifications. When OpenAI's systems detect a possible serious-self-harm signal in a conversation, a small team of trained reviewers evaluates it. If they confirm a serious concern, the parent gets an alert with a category description (not the conversation text), plus crisis resources. Parents pick how to receive these — SMS, email, or push notification. More on this below.
One thing to know up front: OpenAI's parental controls cover ChatGPT only. They do not cover Anthropic's Claude, Google's Gemini, Microsoft Copilot, Character.ai, Replika, Janitor AI, Kindroid, or any of the dozens of other AI companion apps your kid might pick up. If your concern is "is my teen using AI for emotional support?" — ChatGPT-only controls solve maybe 40% of the problem.
Setup Part 1 — Link the parent and teen accounts
You need:
- A parent ChatGPT account (free or Plus both work; Plus gets you the GPT-4 / GPT-5-class models the teen will likely also use)
- A teen ChatGPT account, owned by your kid, on whatever email they normally use
- Access to both email inboxes during setup (one minute, just to click the confirmation link)
Steps
- On the parent account, open ChatGPT (web or app). Tap your profile icon → Settings → Parental Controls.
- Tap + Add a family member.
- Choose how to invite: phone (web only — pick country code, enter the teen's number) or email (enter their email address). Either works.
- Tap Send. The teen receives a text or email with a one-time link, plus an in-app notification in ChatGPT.
- On the teen account, open the link or accept the in-app invite. They'll be prompted to confirm the linkage.
- Once linked, the teen's account is automatically flagged as a teen account in OpenAI's system, with Reduce sensitive content enabled by default. You'll see them under Family members on the parent dashboard.
A few things to know about linking:
- One parent can link with multiple teens. Each teen can only link with one parent at a time.
- Either party can unlink at any time. If the teen unlinks, you'll be notified — that's an important operational signal we cover later.
- Settings changes generally apply only to new conversations the teen starts after the change. Existing conversations keep their prior behavior.
Verify the link is active
Back on the parent account, return to Settings → Parental Controls. You should see:
- The teen's name / email listed under "Linked teens"
- A status indicator that says "Active" or "Linked" (or similar)
- A summary card showing the teen's restriction profile
If it's not showing as linked within 5 minutes, check the teen's spam folder (or Gmail's Promotions tab) for the invitation email — invites sometimes land there.
Setup Part 2 — Configure the restrictions
Once linked, the parent dashboard exposes a small set of on/off toggles per teen.
Access via: parent account → Settings → Parental Controls → Family members → [teen's name] → scroll to find the ChatGPT and Sora settings.
These are all binary switches. There's no "restrict to certain categories" mode — a feature is either available or it isn't.
ChatGPT toggles
Reduce sensitive content — leave ON. This is on by default when you link, and it's the most useful setting in the panel. It blocks graphic content, viral challenges, sexual/romantic/violent roleplay, and extreme beauty ideals. The cases where you'd want this off are narrow.
Memory — consider OFF. Memory builds a persistent profile that ChatGPT uses to make future conversations more personal. For an adult that's a productivity feature; for a teen it's the design pattern that drives parasocial attachment. If you turn it off, OpenAI deletes existing memories within 30 days. Off is the safer default.
Voice mode — consider OFF for younger teens. Voice creates a more emotionally immersive experience, which worsens AI-companion attachment. Off for younger teens; the call gets harder for older teens who use it for legitimate purposes (language practice, accessibility).
Image generation — consider OFF. The ways AI image generation goes wrong for teens outweigh the cases where it's genuinely useful. If your teen has a specific creative or educational use, leave it on, but the default for most families is off.
Location — leave OFF unless there's a specific reason. The "more relevant local results" upside doesn't outweigh the always-on signal of where your kid is.
Model training — your call. Off keeps your teen's transcripts out of OpenAI's training data; on contributes them. This is a values call about whose data trains the model, not a safety call.
Quiet hours
You can set one time window when ChatGPT is unavailable on the teen account. Just one — you can't set "school hours AND sleep hours" as separate windows.
Pick sleep hours. Late-night AI conversations are when emotionally distressed teens reach out, and that's exactly when the conversation is most likely to drift somewhere harmful. A 10 pm – 7 am block is the highest-leverage single window you can configure.
If you'd rather block school hours, that's reasonable too — but the sleep-hours block addresses the higher-risk pattern.
To set: toggle Quiet hours on, then select a start time and an end time.
Sora settings
If your teen uses Sora (OpenAI's video app), the same Family members panel exposes Sora-specific switches:
- Personalized feed — recommends videos based on history. Off is more boring but less algorithmically optimized.
- Continuous feed — uninterrupted scroll. Off limits the scroll, which is the right default for most families.
- Messaging — DMs between users. Off is the safer default; teens have plenty of DM apps already.
ChatGPT Atlas settings
If your teen uses ChatGPT Atlas (OpenAI's browser), there's a separate set of toggles:
- Reference browser memories — whether ChatGPT can pull from browsing history. Off is the safer default.
- Agent mode — whether ChatGPT can take multi-step actions like research or shopping in the browser. Off for younger teens; situational for older.
- Safe search — filters explicit results out of searches. Leave it ON.
A couple of practical notes
- Settings changes apply to new conversations. If you tighten a setting, existing chats keep their old behavior. Have your teen close out chats and start fresh after major changes.
- Updates can take a few minutes to propagate. If a toggle change doesn't appear to do anything, give it 5–10 minutes.
- The teen sees what's on or off but can't change it while linked to your account. Worth a brief conversation up front so it's not a surprise.
The crisis detection system — what it does and what you'll see
This is the part of OpenAI's parental controls that gets the most attention in press coverage, and it's also the most misunderstood.
How it works (per OpenAI's own description)
Three steps:
- Automated detection. OpenAI's systems flag prompts that may relate to a serious self-harm safety concern.
- Human review. A small team of specially trained reviewers evaluates whether the flagged conversation shows signs of acute distress. This step matters — it means the alert is filtered through a human, not just an automated classifier, before reaching the parent.
- Parent notification. If the reviewers confirm a serious concern, the parent gets an alert containing what was detected, support resources, and contact info for OpenAI Support.
The narrow scope here is intentional. OpenAI specifically scoped these notifications to serious self-harm signals — not general bad mood, not academic stress, not normal teen conversation about hard topics. They're not broadcasting alerts every time a teen says they're frustrated.
How you receive the alert
You pick: SMS, email, or push notification. Notifications are on by default when the link is established; you set the delivery method during setup or anytime later under Parental Controls.
What parents do NOT see
- The actual message content
- The model's response
- Earlier conversations leading up to the alert
- Any other chat history or topics
You see that an alert fired and what category was detected, plus crisis resources. You do not see what was said.
This is a deliberate privacy design. The argument is: if teens know parents see every word, they will not use ChatGPT for emotional processing AT ALL — and that's worse, because they'll still be in distress, just without a tool to process it. OpenAI bet on supervised-but-private being better than fully-monitored.
How to respond when you get a distress alert
Don't go to your teen and say "ChatGPT told me you've been suicidal." That will burn the trust and ensure they never use the linked account again — they'll switch to an unsupervised AI, and you'll have less visibility, not more.
What works better:
- Take the alert as a signal, not a transcript. You don't know what the conversation was about specifically. You know it triggered the safety classifier.
- Open a low-stakes, in-person conversation in the next 24–48 hours. "How are you doing? Anything been weighing on you lately?" Don't reference ChatGPT.
- If the alert is about suicidal ideation specifically, it's worth the direct conversation, even at the cost of revealing how you knew. The cost of getting it wrong on a real crisis is too high to optimize for trust.
- Have a therapist on call, or know where to refer. The 988 Suicide & Crisis Lifeline is staffed 24/7. The Crisis Text Line (text HOME to 741741) is the equivalent for teens who won't call.
Setup Part 3 — Lock down the OS-level account so they can't just open a second ChatGPT account
This is the single most common bypass: the teen creates a second account on a different email, logs in there, and uses the unrestricted version.
iOS
- Settings → Screen Time → Content & Privacy Restrictions (requires your Screen Time passcode, which the teen should NOT know)
- Toggle Content & Privacy Restrictions ON
- Under iTunes & App Store Purchases → Installing Apps → set to Don't Allow (so they can't install another AI app like Character.ai, Replika, or competitor ChatGPT clients)
- Under Allowed Apps → review what AI apps are already installed. Disable any you don't want available.
- Optional but useful: Web Content → set to "Limit Adult Websites" and add specific blocks if needed.
Android (Family Link)
- Open Family Link on your phone, tap your kid's account.
- Manage settings → Apps → review installed AI apps.
- Manage settings → Google Play → require parent approval for new app installs.
- Google Account → Web & app activity → check what's logged. (Family Link surfaces some of this; the rest requires reviewing the kid's Google account activity directly.)
What this catches
If the teen tries to install Claude, Character.ai, Replika, or any of the dozens of "AI companion" apps that don't have parental controls, they'll be blocked at the install step.
What this doesn't catch
The web. If your teen opens a browser and goes to claude.ai or character.ai, they bypass app-level installs entirely. This is where DNS-level filtering comes in — see the layered approach below.
Layered approach — what to add on top
OpenAI's parental controls are one layer. They cover ChatGPT-the-app, supervised-teen-mode, and crisis detection within ChatGPT. Three other layers matter for the full picture.
Layer 1: DNS filtering for non-OpenAI AI tools
Block the AI tools you don't want at the network layer using NextDNS or your router's DNS filter. Specific domains worth considering:
| Service | Domain to block | Why |
|---|---|---|
| Character.ai | character.ai, c.ai | Documented harm to teens; no robust parental controls |
| Replika | replika.com, replika.ai | AI-companion app; romantic/sexual roleplay common |
| Janitor AI | janitorai.com | NSFW AI companion content; no age verification |
| Kindroid | kindroid.ai | Same category |
| Crushon | crushon.ai | Same category |
| SpicyChat | spicychat.ai | Explicit AI roleplay |
Most of these are not in standard "blocklist" categories, so you have to add them manually. Our NextDNS for Families guide walks through how to deploy NextDNS at the device level (so it works on cellular too) and at the router level (so it covers visiting devices). The setup takes about an hour.
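If your router runs dnsmasq (common on OpenWrt and similar firmware) rather than NextDNS, one way to apply the table above is an `address=` line per domain, which sinkholes the domain and all its subdomains. The helper below is a sketch that just generates those lines — the domain list mirrors the table, and the output path will vary by firmware:

```python
# Sketch: generate dnsmasq blocklist lines for the domains in the table above.
# Assumes your router runs dnsmasq (e.g. OpenWrt). NextDNS users would add
# these domains to a denylist in the NextDNS dashboard instead.

BLOCKED_DOMAINS = [
    "character.ai", "c.ai",        # Character.ai
    "replika.com", "replika.ai",   # Replika
    "janitorai.com",               # Janitor AI
    "kindroid.ai",                 # Kindroid
    "crushon.ai",                  # Crushon
    "spicychat.ai",                # SpicyChat
]

def dnsmasq_lines(domains):
    # address=/example.com/0.0.0.0 sinkholes the domain and every subdomain.
    return [f"address=/{d}/0.0.0.0" for d in domains]

if __name__ == "__main__":
    print("\n".join(dnsmasq_lines(BLOCKED_DOMAINS)))
```

On OpenWrt you'd append the output to your dnsmasq configuration and restart the service; check your firmware's docs for the exact file location.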
Layer 2: Screen Time / Family Link app-install blocks
Already covered in Setup Part 3 above. Without this layer, the teen can sidestep your DNS filters by installing the same app from the App Store and using it on cellular.
Layer 3: Conversation
Nothing in this guide replaces the conversation about why you care, what you're worried about, and what you want them to do if they ever feel like an AI conversation is going somewhere unhealthy.
The honest framing for an older teen: "I'm not trying to read your messages. I'm trying to make sure that if you're going through something hard, you have an actual human in the loop. If you're using ChatGPT to think through homework or plan a birthday party, none of this matters. If you're using it because you're sad and don't know who else to talk to, I want to know — and I want you to know I'm available."
Most teens, given that framing, are okay with the supervision. The kids who fight back hardest about parental controls are often the ones who most need them.
Common bypass attempts — what works, what doesn't
Ranked by frequency, in our observation:
1. "I'll create a second OpenAI account on a throwaway email."
- Works only if you didn't lock down email-account creation on the device.
- Counter: require parental approval for new app installs (Screen Time / Family Link) AND block creation of new email accounts on the device.
2. "I'll just use Claude / Gemini / Character.ai instead."
- Works completely, since none of those have OpenAI's parental controls.
- Counter: DNS blocking at the network level (Layer 1 above). Combine with Screen Time / Family Link app-install blocks.
3. "I'll use the OpenAI API directly through some random app."
- Works for technical teens. There are several "alternate ChatGPT clients" that hit the OpenAI API directly with no parental control hooks.
- Counter: harder. The API itself has no concept of parental controls. Best fix is to require your approval for any new app install AND to monitor browser history for API-key-based clients.
4. "I'll use a friend's account."
- Works completely. Can't be beaten technically.
- Counter: this is a conversation. Your kid will use friends' accounts at sleepovers and in school. The goal isn't perfect prevention — it's that they have an internal compass for when they're using AI in a way that worries them, and they have you in their corner if they need a conversation.
5. "I'll factory-reset and re-pair without linking my parent account."
- Works if the teen has the device passcode and there's no Screen Time / Family Link lock preventing factory reset.
- Counter: Apple Family Sharing organizer can require approval for factory resets. Same with Family Link on Android. Set this up.
What OpenAI's parental controls don't cover
Be honest with yourself about what's outside the fence:
- Other AI companies. Claude, Gemini, Copilot, Perplexity, Mistral, Cohere, etc. None of these have linked-account parental controls today.
- AI companion apps. Character.ai, Replika, Janitor AI, Kindroid, SpicyChat — all have far worse safety profiles than ChatGPT and zero parental controls.
- API-based clients. Random apps that hit the OpenAI API directly bypass your linked-account restrictions completely. The API doesn't know anything about parental controls.
- Group AI use. Friends' devices, shared accounts, AI-in-school-Chromebook tools — none of which respect your home-account settings.
- In-game AI. AI-powered NPCs in games, AI features in Roblox, AI tutors — different fences.
Operational rhythm
Once set up, what to do over time:
- First week: confirm the link is active by checking Settings → Parental Controls → Family members. You should see your teen listed with their current settings shown. There is no usage dashboard here — just confirmation that the controls are active.
- First month: have one short check-in conversation with your teen ("how's ChatGPT been? anything weird?"). The product itself doesn't surface usage patterns to you; the teen does, if asked.
- After a safety notification: read the section above on how to respond. Don't escalate immediately; calibrate over 24–48 hours.
- If the link breaks: OpenAI notifies you if your teen unlinks. If you get that notification and it wasn't planned, it's a conversation, not a panic — they're allowed to unlink. The question is why, and what tool they're using instead.
- After OpenAI updates: check Settings → Parental Controls at least monthly to confirm settings haven't been reset by an app update. They shouldn't be, but app updates have flipped settings back to defaults in the past for other apps.
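The monthly settings check can be paired with a quick spot-check that the Layer 1 DNS blocks still hold. A sketch, run from the teen's network — the domain list is an example (match it to your own blocklist), and it assumes your filter sinkholes blocked domains to 0.0.0.0 or returns a lookup failure; adjust `is_blocked` if yours behaves differently:

```python
import socket

# Example list — use the domains you actually blocked in Layer 1.
BLOCKED_DOMAINS = ["character.ai", "replika.com", "janitorai.com"]

def is_blocked(result):
    # Treat a sinkhole address or a failed lookup as "blocked".
    return result is None or result in ("0.0.0.0", "::")

def resolve(domain):
    # Return the first A record, or None if the lookup fails (NXDOMAIN etc.).
    try:
        return socket.gethostbyname(domain)
    except socket.gaierror:
        return None

if __name__ == "__main__":
    for d in BLOCKED_DOMAINS:
        status = "blocked" if is_blocked(resolve(d)) else "RESOLVING — check your filter"
        print(f"{d}: {status}")
```

Run it on the home Wi-Fi and (if you deployed device-level NextDNS) again on cellular; a domain that resolves to a real address means that layer has a gap.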
What to actually talk to your teen about
The parental controls are a backstop. The conversation is the actual work. A few prompts worth using:
- "What do you talk to ChatGPT about?" Open question, not gotcha. Some kids will say "school stuff." Some will say "I don't really use it." Some will tell you they ask it for advice. All of those answers tell you something useful about where they are.
- "Have you ever asked ChatGPT something you were too embarrassed to ask me or your friends?" This is the actual usage pattern that matters. It's also genuinely harmless for many use cases (medical questions, awkward social situations, math you're embarrassed not to know). The point of asking isn't to catch them — it's to communicate that you understand why they'd use AI for that, and you're not going to make it weird.
- "What would make you stop using ChatGPT?" This one is useful for opening a conversation about model failure modes. Most teens have at some point gotten a ChatGPT response that was clearly wrong, biased, or weird. Discuss those moments. They're calibrating their trust in the tool, and you can help.
What NOT to lead with:
- "Have you ever told ChatGPT you wanted to hurt yourself?" — kills the conversation.
- "I'm going to start reading your ChatGPT history." — burns trust, doesn't actually solve the problem (they'll switch tools), and isn't even possible under OpenAI's parental controls anyway.
- "AI is dangerous and you shouldn't use it." — they will, you can't stop them, and saying this once costs you future credibility on AI conversations.
Bottom line
OpenAI's parental controls are a meaningful step. They're far from sufficient. The realistic stack:
- OpenAI parental controls for ChatGPT specifically (this guide)
- NextDNS or router-level DNS filtering for the AI companion apps OpenAI doesn't cover (NextDNS for Families guide)
- Apple Screen Time or Google Family Link to block new AI-app installs at the OS level
- The conversation — about why you care, what you're worried about, and what you want them to do if they ever feel like an AI conversation is going somewhere unhealthy
Each layer covers a different gap. None covers all of them.
If you do nothing else after reading this guide, do these three things tonight:
- Link your parent ChatGPT account to your teen's
- Set the Quiet Hours blackout for 10 pm – 7 am
- Have a 5-minute conversation about how they currently use ChatGPT
The rest can wait until next weekend.
For the broader AI-companion concern (Character.ai, Replika, Kindroid, etc. — apps OpenAI's controls don't touch), see our Character.ai app profile. For network-level filtering of the AI tools OpenAI doesn't cover, see NextDNS for Families.
No affiliate relationship with OpenAI. We pay for our own ChatGPT Plus subscription.
Updated April 2026