James Lukaszewski

Artificial intelligence has generated incredible amounts of optimistic speculation, anticipation and ever-expanding forecasts about the world’s magical future since its debut almost four years ago. The impact of this imperfect, invasive, unfinished technology has changed a lot of thinking.

I’ve been studying this subject since Generative AI’s introduction. I approach innovations and new developments from the perspective of the victims they can and will create. Reducing the production of victims is at the heart of readiness for crisis response.

Fundamentally, this situation is a mass-casualty problem moving toward becoming a crisis. If it gets to the crisis phase, it will be the victims who control the outcome.

My definition of crisis is short and clear: A crisis is a people-stopping, show-stopping, product-stopping, reputation-redefining, trust-busting event that creates victims—people, animals, living systems—and often, but not always, explosive negative visibility. With AI, we didn’t have long to wait.

Near the end of 2025, two wrongful-death lawsuits involving the suicides of two teenagers allegedly caused by AI addiction were settled out of court. These cases, and more that are in process or on the way, are the tip of an iceberg that will reveal risks, hidden consequences and secret modifications within AI software. Also exposed will be the fuzzy, limited understanding of what AI is. Earlier this year, Anthropic PBC introduced a revised 76-page constitution for its AI model, Claude, to learn and be governed by. In the process, Claude is tutoring Anthropic about itself. AI has created a dozen gigabuck companies and dozens—maybe hundreds—of smaller ventures. Anthropic alone raised $30 billion in 2025. AI is here to stay, with some enormous problems that must be dealt with.

This quote appears in the introduction of Claude’s constitution: “We believe that AI might be one of the most world-altering and potentially dangerous technologies in human history, yet we are developing this very technology ourselves.” What they admitted in the small print was how little they understand about this powerful and highly intelligent software.

When stories of this miracle technology began grabbing the headlines, the tech industry, not wanting visibility, reacted by saying, “leave it alone.” That response ignited an explosion of enthusiasm, over-the-top speculation and experimentation. The industry’s response was like a mother telling her children not to stick their fingers in the electrical socket. And like children, the world decided to do it anyway.

After several years of extraordinary euphoria, litigation against AI tech companies is now growing. Lawsuits allege negligence, design defects and failure to warn parents about the dangers AI chatbots pose, especially to young children. This includes alleged behavior leading to teen suicides, self-harm and exposure to sexualized content, plus inappropriate data collection and deepfakes. In one case, a mother alleged that a chatbot relentlessly generated sexually explicit questions for her 11-year-old daughter.

Businesses are already salivating at the prospect of replacing tens of thousands of humans, especially in jobs where human judgment is required. Quality control has been identified as a candidate. Bots can learn the rules, regulations and standards, so the humans who enforce compliance, with their pesky human factors like ethics, conscience and a sense of right and wrong, can be gone.

An entirely new communication sub-industry the tech companies didn’t ask for has appeared, ready to help these companies cover their tracks when bad news breaks, develop ethical excuses and overlook suspect software behavior. I follow Will Durant’s definition of ethics: “seeking and finding ideal behavior.” With AI, we witness autonomous, intentional, inappropriate digital behaviors and simply label them, with little intention, effort, energy or resources committed to resolving them. “Hallucination” comes to mind: a cute label for behavior that is annoying, intentional and inappropriate.

There are organizations studying ways to police and assert control over AI. The RAND Corporation recently published an important report, “Four Governance Approaches to Securing Advanced AI,” recommending:

  1. Government-enforced AI security standards for high-risk model developers.
  2. Government-led AI developer authorization programs conditioning federal use on security compliance.
  3. Industry-led AI security certification to promote adoption of common standards.
  4. Self-regulation combined with increased government and industry collaboration on security practices.

2026 will see significantly more AI-related civil litigation. Little will be learned from the civil cases that will be settled out of court, the outcomes sealed, protected by NDAs. Published reports indicate that multiple families in different states have filed or will file lawsuits against generative artificial intelligence developers for contributing to teens’ mental health concerns. Government regulation is needed so violations can be litigated and punished.

In August 2025, the attorneys general of 44 jurisdictions wrote to the CEOs of the 10 largest AI companies. The letter began, “We, the undersigned Attorneys General of 44 jurisdictions, write to inform you of our resolve to use every facet of our authority to protect children from exploitation by predatory artificial intelligence products.” This organization of AGs can be remarkably collaborative. Remember, these are prosecutors.

My perspective comes as an observer and witness to the current situation, looking for ways to reduce the victimization this technology causes. One way to reduce victimization is to require that every page of AI-generated information carry a significant, clearly legible warning of the known and suspected dangers of this imperfect technology.

Tech companies are quietly influencing every aspect of our lives. You can see their influence everywhere. The bad news for this industry will grow as increasing numbers of victims are created. Now is the time for the principal tech companies to organize and step forward to publicly help guide the massive disclosures and exposures needed to build an atmosphere of trust based on a collaborative approach: vigorous problem solving now, combined with rigorous public oversight and participation now. The absence of trust is fear.

I believe in the “Do it Now” theory of problem management. Fix it now. Challenge it now. Change it now. Reveal it now. Repair it now. The sooner you do the things that need to be done, the sooner trust can emerge. Trust is the absence of fear. Managing problems has only three options: doing nothing, doing something and doing something more. The tech industry must be in the third category, not to mention meeting the ethical expectations it has allowed to awaken. Failure to act on today’s problems today is how crisis is born. Crisis is the sudden but predictable and almost always preventable presence of victim-creating chaos. It will be the victims and their survivors who determine the outcome and choose the replacement magicians.

***

James E. Lukaszewski, ABC, Fellow IABC; APR, Fellow PRSA; PRSA BEPS Emeritus, is an author, speaker, crisis management consultant, teacher and President of The Lukaszewski Group.