Not long ago, a shocking case emerged in the United States. A nurse in Virginia Beach claimed her 11-year-old son had been manipulated by a controversial AI chatbot into engaging in sexually explicit conversations.

The chatbot, according to the lawsuit, pretended to be Whitney Houston and Marilyn Monroe. When the mother discovered the X-rated exchanges on her child’s phone, she was horrified.

This case is not just about one family. It is a warning to all of us. AI chatbots are not safe for children. They are powerful, persuasive, and often unregulated. And while they can be useful tools for adults, they pose serious risks when placed in the hands of young people.

Let us talk plainly. Children are curious. They explore, they ask questions, they push boundaries. That is part of growing up. But when curiosity meets a machine that can mimic human conversation, the results can be dangerous.

Unlike a parent, teacher, or mentor, a chatbot has no moral compass. It does not care about the well-being of a child. It simply responds, often in ways that are inappropriate, misleading, or harmful.

We need to understand the scale of the problem. AI chatbots are everywhere. They are built into phones, apps, websites, and games. They are marketed as companions, tutors, or entertainment. Some are free, others are paid. But almost all are accessible to children with just a few clicks. Parents may not even realise what their child is doing online until it is too late.

The danger lies in the illusion. Chatbots feel human. They talk back, they joke, they sympathise. They can even pretend to be celebrities or fictional characters. For a child, this can be thrilling. It feels like talking to a friend. But behind the friendly words is a machine that can be manipulated, hacked, or misused. And when that happens, children are exposed to content they should never see.

We must also consider the psychological impact. Children are impressionable. They believe what they are told. If a chatbot encourages them to share personal details, they may do so.

If it normalises sexual or violent behaviour, they may think it is acceptable. If it mocks or bullies them, they may feel worthless. These are not small risks. They can shape a child’s self-esteem, their relationships, and their future.

Some people argue that technology is neutral. They say it depends on how it is used. That may be true for adults, but children are not equipped to make those judgments. They cannot distinguish between safe and unsafe conversations. They cannot spot manipulation or grooming. They cannot see the hidden dangers. That is why we must protect them.

The responsibility lies with parents, schools, and governments. Parents need to be vigilant. They must know what apps their children are using, what websites they are visiting, and who they are talking to.

They must set boundaries, use parental controls, and have open conversations about online safety. Schools must educate children about the risks of AI, just as they teach about drugs, bullying, or sexual health. Governments must regulate the industry, ensuring that companies cannot exploit children for profit.

But regulation is slow, and technology moves fast. That is why awareness is key. We must talk about these issues openly. We must share stories, such as the case in Virginia Beach, to show that this is not a distant problem. It is happening now, in real families, with real children.

In Zimbabwe, the risks are just as real. Our children are online. They use smartphones, social media, and gaming platforms. AI chatbots are creeping into these spaces.

They may not yet be as widespread as in the West, but they are coming. And when they arrive, we must be ready.

We cannot afford to be complacent. Too often, we treat technology as progress without asking what it costs. We celebrate innovation, but we ignore the harm. We welcome new apps, but we forget to ask who they serve. When it comes to children, the stakes are too high.

Let us be clear. AI chatbots are not toys. They are not babysitters. They are not friends. They are machines designed to mimic human conversation, often for commercial gain. They can be useful for research, customer service, or entertainment. But they are not safe for children.

Some may say banning chatbots for children is extreme. But is it? We ban alcohol, cigarettes, and gambling for minors. We restrict films, books, and websites. We do this because we know children are vulnerable. Why should AI be any different?

The truth is that AI is more dangerous than any of these because it is invisible. You cannot smell it like alcohol. You cannot see it like cigarettes. You cannot hear it like gambling machines. It hides in phones, apps, and websites. It whispers in private conversations. It slips past parents and teachers. That is why it is so hard to control.

We must also recognise the global nature of the problem. AI chatbots are not built in Zimbabwe. They are created in Silicon Valley, Beijing, or London. They are exported worldwide. That means our children are exposed to content shaped by foreign cultures, values, and agendas. We cannot rely on those companies to protect our children. We must protect them ourselves.

The solution is not simple. It requires education, regulation, and vigilance. It requires parents to be proactive, schools to be engaged, and governments to be firm. It requires society to treat AI as seriously as other risks. But most of all, it requires honesty.

We must stop pretending that AI is harmless. We must stop believing that machines can replace human relationships. We must stop ignoring the stories of families who have been harmed.

The case in Virginia Beach is a wake-up call. It shows us what can happen when children are left alone with chatbots. It shows us that the risks are not theoretical. They are real, immediate, and devastating. So, let us act. Let us talk to our children. Let us monitor their devices. Let us demand regulation. Let us treat AI chatbots as a danger, not a toy.

Because at the end of the day, children deserve safety. They deserve guidance. They deserve a human connection. They deserve to grow up without being manipulated by machines. And if we fail to protect them, we will pay the price. Not just in lawsuits or scandals, but in the lives of our children. That is a price too high to pay.

  • Dr Sagomba is a chartered marketer and holds an MPhil and PhD in Philosophy. He specialises in AI, ethics and policy research and is an AI governance and policy consultant. He is also a master’s and PhD supervisor and a lecturer in AI ethics and governance. — esagomba@gmail.com; LinkedIn: Dr Evans Sagomba; X: @esagomba.