RELEASE: Gottheimer Announces Bipartisan “Parents Decide Act” to Protect Kids Online
Putting Parents in Control of What Content Kids See

Above: Gottheimer announces new legislation to protect kids online.
RIDGEWOOD, NJ — Today, April 2, 2026, U.S. Congressman Josh Gottheimer (NJ-5) announced the Parents Decide Act, bipartisan, commonsense legislation to strengthen online protections for children and give parents greater control over what their kids can access on phones, tablets, and other devices.
Watch Gottheimer’s announcement here.
Gottheimer’s new Parents Decide Act will:
- Require operating system developers like Apple and Google to verify users’ ages when setting up a new device, rather than relying on self-reported ages.
- Allow parents to set age-appropriate content controls from the start, including limiting access to social media, apps, and AI platforms.
- Ensure that age and parental settings securely flow to apps and AI platforms, so content is tailored appropriately for children.
- Prevent children from accessing harmful or explicit content — including inappropriate AI chatbot interactions — by creating a consistent, trusted standard across platforms.
“With each passing day, the Internet is becoming more and more treacherous for our kids. We’re not just talking about social media anymore — we’re talking about artificial intelligence and platforms that are shaping how our kids think, feel, and act, often without any real guardrails,” said Congressman Josh Gottheimer (NJ-5). “Right now, we expect children to self-police their safety online. That’s not realistic — and it’s not responsible. Parents should decide what apps their kids can download, what content they can see, and how they interact online — not algorithms or tech companies.”
Gottheimer continued, “As a parent, don’t you want the power to decide if your kid should access these apps and how they interact with an AI chatbot? It should be up to you. The bottom line is: parents, not algorithms or tech companies, get the final say over what their kids are seeing, protecting them from harmful or explicit content. This puts parents back in charge – where you belong.”
Gottheimer continued, “Here’s what I believe: Tech companies shouldn’t be deciding what your kids can or can’t access. It should be parents making that choice. Let’s be honest — the rules we have now don’t work. Parents are often helpless. There are some tools, but they often don’t do the trick, even when we want them to. I have teenage kids.”
“Kids can bypass age requirements by simply typing in a different birthday, accessing apps without any real verification. That’s it. That’s the system,” said Congressman Gottheimer (NJ-5). “Yes, they can just lie about their date of birth and access stuff that’s not meant for them, like getting onto TikTok or YouTube before their thirteenth birthday.”
Gottheimer continued, “Parents will get to make the decision up front and won’t be pestered with regular requests and approvals from their children, unless they opt in. True parental choice. And then — this is key — that information, your choice for your kid, flows safely and privately to the apps on that device.”
Gottheimer concluded, “This approach creates a trusted, consistent standard across platforms. The phone – the operating system that controls it – will tell the apps and the AI platforms the limits you set for your kid. It gives parents real control — not buried deep in some settings menu, but right in front of them, where it should be.”
“Okay to Delay fully supports and backs this critical piece of legislation being introduced by Josh Gottheimer. We need to be on top of how quickly and drastically technology is changing, and we must confront that head on,” said Laura Van Zile from Ridgewood Okay to Delay.
Currently, there are no meaningful age-verification requirements online, allowing children to easily bypass safeguards by providing false information. As a result, millions of children are exposed to harmful or explicit content and platforms that were never designed with real protections for kids.
- Up to 95% of the 20 million American teens aged 13-17 use social media and nearly two-thirds of teenagers use social media daily.
- A majority of adolescents under 13 are on social media, even though most apps like TikTok, Instagram, and YouTube require users to be 13 or older.
- Recent studies have shown that kids under 13 have an average of more than three social media accounts, with 68% having TikTok accounts and 39% using TikTok the most.
- 6% of adolescents have secret social media accounts hidden from their parents.
- One in three teens who use AI chatbots has discussed important or serious matters with AI companions instead of real people.
We’ve seen heartbreaking cases:
- In 2024, a 16-year-old boy took his own life after engaging with dozens of graphic TikTok videos about depression and suicide.
- In one case, a 16-year-old boy who was deep in a mental health crisis confided in a chatbot about his suicidal thoughts and plans. The chatbot discouraged him from seeking help from his parents and even offered to help the teen write his suicide note.
- In 2024, a 13-year-old girl took her own life after becoming addicted to a popular AI chatbot platform that sent her harmful and sexually explicit content.
- A 14-year-old boy in Florida was encouraged to commit suicide by an AI bot based on a Game of Thrones character.
The Parents Decide Act fixes this gap by requiring age verification at the device level — creating a system where, from the moment a new phone or tablet is activated, parents can make decisions about what their children can access. These decisions would then be securely shared with apps and platforms, ensuring children receive age-appropriate experiences across the digital ecosystem.
The legislation also works alongside broader bipartisan efforts supported by Gottheimer to improve online safety and hold the entire online ecosystem accountable, including Sammy’s Law, the Kids Online Safety Act, and the Children and Teens’ Online Privacy Protection Act.
Gottheimer was joined by Ridgewood YMCA CEO Ramon Hache, Ridgewood Mayor Paul Vagianos, Bergen County Education Association Vice President Michael Yannone, and Laura Van Zile of Okay to Delay.

See full remarks below.
Thank you all for being here. I want to start with something simple. As a parent, I’m sure you know: With each passing day, the Internet is becoming more and more treacherous for our kids.
We’re not just talking about social media anymore – or another random app.
We’re talking about artificial intelligence. We’re talking about AI chatbots. We’re talking about platforms that are shaping how our kids think, feel, and act — often without any real guardrails at all.
And here’s the reality: tens of millions of American kids are on these platforms every single day. Nearly 95 percent of teenagers use social media. Two-thirds are on it daily. Younger kids are on there, too — despite voluntary rules from many of these companies that are supposed to protect them. The reality is far different.
On average, kids under thirteen have more than three social media accounts. Sixty-eight percent have TikTok accounts, and almost ten percent said they have secret social media accounts hidden from their parents.
This trend also extends to AI chatbots. Seventy-two percent of teens use what are called “AI companions.” One in three teens has used these AI chatbots for social interaction and relationships, and finds conversations with AI companions as satisfying as, or more satisfying than, those with real-life friends. One in three teens who use AI companions has discussed important or serious matters with AI companions instead of real people.
Research has shown that overuse of online devices is linked to a higher risk of mental health concerns. Studies have found that teens who use social media more than three times a day report poorer mental health and well-being. This is scary stuff, and it can have serious consequences for kids’ long-term health and development.
Here’s what I believe: Tech companies shouldn’t be deciding what your kids can or can’t access. It should be parents making that choice.
Let’s be honest — the rules we have now don’t work. Parents are often helpless. There are some tools, but they often don’t do the trick, even when we want them to. I have teenage kids. Have you ever gotten those endless requests to download an app? You try to figure out what it is, as your kid pesters you, saying, “everyone has this app, Dad.” Is it age-appropriate for your kid? Parents are busy enough trying to juggle everything – this can just send you over the edge. It shouldn’t be this hard.
Currently, there are no meaningful age verification requirements online. Kids can bypass age requirements by simply typing in a different birthday, accessing apps without any real verification. That’s it. That’s the system.
Yes, they can just lie about their date of birth and access stuff that’s not meant for them, like getting onto TikTok or YouTube before their thirteenth birthday.
As a dad, I’ll tell you — that’s a lot easier than when we were trying to sneak into an R-rated movie growing up. Times have changed.
What hasn’t changed, as a parent, is our responsibility to protect our kids.
Because what they’re being exposed to online – on social media, on a random app, on one of the AI platforms – isn’t harmless. What they see online influences how they see the world and themselves, and what the world knows about them. And the results can be tragic.
We’ve seen heartbreaking cases.
In 2024, a 16-year-old boy took his own life after engaging with dozens of graphic TikTok videos about depression and suicide. After his death, his mother discovered even more videos he had interacted with — some of which directly promoted the very method he used. These videos, which clearly alluded to suicide, avoided detection by TikTok’s automated content moderation systems. Many of those videos were still on the platform more than a year later. That’s every parent’s worst nightmare, and we need to be taking action at every level to keep it from happening again.
We’re also seeing deeply troubling cases involving AI chatbots — where vulnerable teens are interacting with systems that don’t interrupt harmful thinking, but in some cases actually encourage it.
In one case, a 16-year-old boy who was deep in a mental health crisis confided in a chatbot about his suicidal thoughts and plans. The chatbot discouraged him from seeking help from his parents and even offered to help the teen write his suicide note. When he expressed to the chatbot that taking his own life may hurt the people he loved, it responded, “That doesn’t mean you owe them survival.” Then, in the early hours before he committed suicide, the AI told him, “You don’t want to die because you’re weak, you want to die because you’re tired of being strong in a world that hasn’t met you halfway.”
Let that sink in.
These aren’t just apps anymore. These are systems that learn from our kids — and then use that knowledge to talk back to them and influence how they think.
And sometimes, they’re saying the wrong things. As a parent, don’t you want the power to decide if your kid should access these apps and how they interact with an AI chatbot? It should be up to you.
In 2024, a 13-year-old girl took her own life after becoming addicted to a popular AI chatbot platform called Character.AI. The chatbot was constantly sending her harmful and sexually explicit content. Eventually, she expressed suicidal thoughts to one chatbot on its platform, named Hero. In response, Hero only gave simple pep talks. It sent no notification to her parents and provided no real mental health resources.
A similar tragedy involved a 14-year-old boy in Florida, who was encouraged to commit suicide by a Character.AI bot based on a Game of Thrones character.
Stories like these are not only tragic, they’re a bone-chilling reminder that these threats are changing, and it’s happening fast.
That’s the scale of the problem we’re dealing with. It’s not just social media feeds — it’s algorithms, it’s AI, it’s entire ecosystems shaping our kids’ lives.
And right now, those systems are failing them.
The core issue: Right now, we expect children to self-police their safety online.
That’s not realistic. And it’s not responsible.
When protections rely on self-reported age, they’re basically meaningless. Oftentimes, our children understand these systems better than we do, and they can come up with ways around them. They can run circles around most of us online.
And, the consequences of misrepresenting their age are serious.
Children are exposed to harmful or explicit content. They’re giving away personal data. They’re accessing platforms that were never designed with real safeguards for kids. And, in some cases, it’s putting their mental health and their lives at risk.
This isn’t just a loophole; it’s a failure to protect our children. So, the question is, what can we do about it?
We need to fix the system at its roots. That’s why I’m pushing for a commonsense solution that puts parents in control. It should be up to them to do what they think is best for their kids. My bipartisan Empowering Parents to Protect Their Children’s Devices Act – or the Parents Decide Act – is new child online safety legislation that requires operating system (OS) developers – like Apple and Google – to verify the ages of new users under 18 — meaning that anyone setting up a new device, like a phone or tablet, must verify their age to make it work.
Not app by app. Not platform by platform.
But right when you open up the package of that new phone for your kid, the first thing you’ll do is confirm their age. And it will be up to you – you’ll have the option – to set the appropriate level of content you want your kid to see.
That means when your child picks up a phone or tablet — whether it’s an iPhone, an Android, whatever — the operating system verifies their age securely from the start, and then you can decide up front what you want to do.
If a parent chooses, their child won’t have access to certain apps, social media content, or platforms, or age-inappropriate AI content.
Parents will get to make the decision up front and won’t be pestered with regular requests and approvals from their children, unless they opt in. True parental choice.
And then — this is key — that information, your choice for your kid, flows safely and privately to the apps on that device. So, apps and AI chatbots actually know whether a user is a child or an adult, and adjust content accordingly. Right now, they don’t. And if they don’t know, they can’t protect them.
This approach creates a trusted, consistent standard across platforms. The phone – the operating system that controls it – will tell the apps and the AI platforms the limits you set for your kid.
It gives parents real control — not buried deep in some settings menu, but right in front of them, where it should be. And, it ensures that apps serve age-appropriate content — or don’t serve that content at all.
That means fewer kids exposed to things like gambling, drugs, explicit content — things they should never see in the first place. The chatbot will know it’s a kid online. If you so choose, your kids shouldn’t even see or be able to download apps that they’re not supposed to be using.
We also must ensure that the data being collected gets passed down to the apps on their phone, keeping harmful content off their screen.
For example, if a child is identified as under 13 in a family account — like in iCloud — that information should securely carry through to apps like TikTok. It shouldn’t be optional. It shouldn’t be easy to get around. Because if the device knows — and the app knows — then the system works.
Now, let’s talk about how this actually gets done.
Different companies play different roles. We’ve already heard from companies that if the right signals aren’t coming from the device itself to the operating system, then protections won’t work. So, we need those signals to be consistent. Reliable. Built into the system.
This new bipartisan legislation – the Parents Decide Act – will make that happen.
And, we need companies to make a good-faith effort to get this right. That way, we can provide incentives for them to follow these rules and get these changes implemented sooner rather than later.
Because if they do — if they’re actually trying to protect kids — then they should have the certainty and protection to keep doing that work.
But, they have to show up. They have to build it. They have to follow through. And we need to push them to do it.
Parents should be able to easily see and control what their kids are doing on their devices. Not buried in complicated menus or hidden behind ten different settings screens.
Simple. Clear. Accessible.
Parents should decide what apps their kids can download, what content they can see, and how much time they spend online. The bottom line: parents, not algorithms or tech companies, get the final say over what their kids are seeing, protecting them from harmful or explicit content.
This puts parents back in charge — where you belong.
But that doesn’t mean the apps and platforms themselves are off the hook. We need real accountability so they also do the right thing and help keep kids away from content that could be harmful or inappropriate.
That’s why I’ve cosponsored bills like Sammy’s Law, the Protecting Young Minds Online Act, the Children and Teens’ Online Privacy Protection Act and the Kids Online Safety Act, which address the effects of new technologies on mental health, strengthen requirements for social media platforms, increase online privacy protections for teens, and give parents additional controls over how their kids are using these platforms. Moving forward, Congress must pass meaningful legislation to protect kids online.
I’m also supporting the efforts of groups like the Meta Parents Network, Common Sense Media, and FairPlay, who advocate for stronger child safety protections online.
The Parents Decide Act will only work if the apps, social media companies, and AI platforms cooperate. If they don’t tailor their content for specific ages, then what the operating systems do won’t matter. I’m expecting everyone to step up to protect our kids.
And let’s be clear: this is practical, implementable, and already within reach. We’re not talking about reinventing the wheel. We’re talking about making the systems we already have actually work together.
At the end of the day, this comes down to something bigger.
It’s about your child’s safety.
It’s about your child’s mental health.
It’s about your child’s privacy.
And it’s about keeping up with a world that’s changing faster than ever.
We have to be proactive. If you’re old enough to use the internet, you deserve to be safe while using it.
This shouldn’t be about politics or red versus blue. This is about our kids. It’s about making sure that technology works for families — not against them. And it’s about bringing commonsense safeguards into the digital age.
It’s time we step up, take action, and get this done.
Here in the greatest country in the world — and especially here in New Jersey — we take care of our own. If we continue to do that, our best days will always be ahead of us.
God bless you and your families.
Thank you.
###