Meta Platforms Inc., the parent company of Instagram, is once again facing intense legal and public criticism. The issue is not that Meta failed to notice the danger, but that it allegedly identified the risk early, documented it in internal reports, and responded too slowly. New wrongful-death lawsuits filed by the families of two teenage boys who died by suicide after being targeted in online sextortion schemes claim that Instagram’s design made such abuse easier, raising serious questions about Meta’s duty to protect children.
The cases involve Levi Maciejewski, a 13-year-old from Pennsylvania who died in 2024, and Murray Dowey, a 16-year-old from Scotland who died in 2023. The lawsuits say that Instagram’s design allowed predators to quickly find, contact, and exploit minors, and that Meta had known for years that these design choices increased the risk.
The key issue is not whether sextortion happens on the platform, but whether Meta’s delay in deploying safety measures it already knew about turned a foreseeable risk into preventable tragedies.
How Sextortion Exploits Platform Design
Sextortion scams targeting teenagers usually follow the same pattern:
- A stranger contacts a child through direct messages, often pretending to be a friend or a romantic partner.
- The scammer gains trust quickly and asks for private photos or videos.
- After getting the images, the scammer threatens to share them with friends, family, or schoolmates unless the child sends more images or money.
These crimes do not require advanced technology. They exploit ordinary platform features such as open messaging, public profiles, and recommendation systems that make it easy for strangers to reach minors.
According to U.S. law enforcement, thousands of children have been targeted in these scams, and at least 20 teen suicides in the United States have been linked to sextortion. In some cases, children were targeted just days after opening an account, giving parents and authorities very little time to step in.
The Core Allegation: Known Risks, Delayed Action
The lawsuits against Meta do not claim the company was unaware of the danger. They allege that Meta knew about the risks and acted too late.
According to court filings and publicly reported internal documents:
- By 2019, Meta’s own safety teams had suggested that teen accounts should be private by default to limit contact from strangers.
- Internal studies showed that this change would greatly reduce unwanted messages, but it could also lower user activity and growth.
- A 2022 internal audit found that Instagram’s recommendation system suggested about 1.4 million potentially inappropriate or predatory accounts to teenagers in just one day.
- Even with these warnings, strong Teen Account protections—such as default privacy, limited messaging, and stricter recommendations—were not fully introduced until late 2024, after the deaths mentioned in the lawsuits.
The families argue that Meta chose small, gradual fixes instead of major design changes, putting user engagement ahead of child safety even though it knew the platform made it easier for predators to reach minors.
Meta’s Position: Efforts, Safeguards, and Disagreement
Meta has repeatedly called sextortion a “horrific crime” and says it works closely with law enforcement around the world. The company points to several safety measures it has introduced over time, such as:
- Blurring suspected explicit images in direct messages
- Showing warning messages and safety tips to teenagers
- Automatically finding and removing suspicious accounts
- Limiting messages between teens and unknown adults (a rule sketched in the example after this list)
- Launching new Teen Account features in 2024–2025, with stronger filters suited to a child’s age
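To make the design debate concrete, the sketch below shows how two of the measures described above, default-private teen accounts and restricted adult-to-teen messaging, could be expressed as simple rules. It is a minimal, hypothetical illustration: the names, the age threshold, and the gating logic are assumptions made for clarity, not Meta’s actual implementation.

```python
from dataclasses import dataclass

# Assumed cutoff for "teen account" protections; purely illustrative.
TEEN_MAX_AGE = 17


@dataclass
class Account:
    user_id: str
    age: int
    is_private: bool = False


def apply_teen_defaults(account: Account) -> Account:
    """Hypothetical default: accounts for users under 18 start out private."""
    if account.age <= TEEN_MAX_AGE:
        account.is_private = True
    return account


def can_send_dm(sender: Account, recipient: Account, are_connected: bool) -> bool:
    """Hypothetical gating rule: an adult can message a teen only if the teen
    is already connected to (e.g. follows) the sender."""
    if recipient.age <= TEEN_MAX_AGE and sender.age > TEEN_MAX_AGE:
        return are_connected
    return True


# Example: an adult stranger trying to message a 14-year-old is blocked.
teen = apply_teen_defaults(Account("teen_01", age=14))
stranger = Account("adult_99", age=35)
assert teen.is_private
assert not can_send_dm(stranger, teen, are_connected=False)
```

The point of the sketch is that such defaults are technically straightforward; the dispute in the lawsuits is about when protections like these were switched on by default, not whether they were feasible.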
Meta disputes parts of the lawsuits, particularly the claims about timing, saying that its safety improvements were rolled out continuously rather than delayed.
Critics, however, argue that most of these steps respond only after harm has already started. They say Meta failed to make stronger design changes that would prevent strangers from contacting children in the first place.
A Familiar Pattern in Big Tech Accountability
These cases reflect a pattern seen across many large technology companies: internal research identifies serious risks to children long before strong safety measures are put in place, especially when those measures could reduce growth, user activity, or profit.
When a company’s internal studies clearly warn about likely harm, and safer options are available, the issue is no longer about accidents or mistakes. It becomes a question of whether the company knowingly allowed the risk to continue.
As lawyers for the families argue, the claim is not that Instagram directly caused the suicides. Rather, they say Meta designed and operated systems that connected vulnerable children to predators at scale, even though it knew safer design choices were available.
The Question Before the Courts
As internal reports, audits, and research become public through litigation, it becomes harder for Meta to argue that it did not know about the risks or that it responded as quickly as it could. The central question before the courts is not whether Meta took some action, but whether it acted in time.
These cases raise a much bigger issue than Instagram alone: who decides when protecting children is more important than increasing user engagement, and what responsibility follows if that decision is delayed?
For the affected families, the lawsuits seek justice for losses that can never be undone. For the tech industry as a whole, these cases could reshape the legal rules on platform design, known risks, and corporate responsibility in a digital world where harm can spread faster than traditional safeguards can stop it.
Reference: LinkedIn post by Chiara Gallese, Ph.D.