Over the past week, anti-AI sentiment in America crossed a line that cannot be walked back. What had been angry social media posts, protest signs outside tech offices, and heated op-eds escalated into attempted murder, arson, and gunfire directed at specific individuals — AI executives, their families, and public officials who have supported AI-related projects. The events are deeply disturbing on their own terms, and they carry implications that everyone working in artificial intelligence needs to take seriously.
This is not an article about the politics of AI regulation or the merits of the anti-AI movement. It is a sober account of what happened, why it happened, and what it means for professionals who work in this space every day.
What You Need to Know
- Daniel Moreno-Gama, 20, from Spring, Texas, traveled to San Francisco and threw a Molotov cocktail at Sam Altman’s $27M home, then attempted to break into OpenAI offices threatening to “kill anyone” inside.
- He was arrested with additional incendiary materials and an anti-AI manifesto containing a kill list of AI executives and investors with their home addresses.
- Two separate shooters fired at Altman’s residence just two days later.
- In Indianapolis, a councilmember who backed a data center project found 13 bullet holes in his home with a “No Data Centers” note — his 8-year-old son was inside.
- Gallup data shows half of Gen Z say AI makes them afraid; Stanford’s AI Index found only 10% of Americans are more excited than concerned.
What Happened in San Francisco
Daniel Moreno-Gama, a 20-year-old from Spring, Texas, traveled to San Francisco with what authorities describe as a pre-planned intent to carry out violence against AI industry targets. He threw a Molotov cocktail at the home of OpenAI CEO Sam Altman — a $27 million residence in one of the city’s wealthiest neighborhoods. The device ignited on the exterior of the property.
Moreno-Gama then made his way to OpenAI’s offices, where he attempted to force entry while verbally threatening to “kill anyone” inside the building. He was apprehended by law enforcement with additional incendiary materials in his possession. During the investigation, authorities discovered an anti-AI manifesto that included a kill list — names and home addresses of AI executives and investors across multiple companies.
He has been charged with attempted murder and attempted arson. He is being held without bail.
Two days after the Molotov attack, two separate individuals — unconnected to Moreno-Gama, according to initial reports — fired shots at Altman’s residence. The fact that three distinct actors targeted the same person within a 72-hour window suggests something more systemic than isolated acts of rage.
It Is Not Just Silicon Valley
Over a thousand miles from San Francisco, in Indianapolis, a city councilmember who had voted in favor of a local data center project discovered 13 bullet holes in his home. A note was left at the scene reading “No Data Centers.”
His 8-year-old son was home at the time.
This is not a tech industry story. This is a story about a child sleeping in a house that someone decided to shoot at because his father voted to approve a building project. The target was not an AI billionaire. It was a local public servant whose family was put at risk for a zoning decision.
Separate Attackers in 72 Hours
Three distinct individuals targeted Sam Altman’s home within a three-day span — a Molotov cocktail followed by two separate shootings. Initial reports describe the attackers as unconnected, which suggests widespread, parallel radicalization rather than isolated acts or a single plot.
Age of the Primary Suspect
Moreno-Gama is 20 years old. He traveled across the country with pre-made incendiary devices, a manifesto, and a kill list. This is a young person who was radicalized to the point of attempted murder.
Bullet Holes in a Family Home
The Indianapolis councilmember’s home was struck by 13 rounds. His 8-year-old son was inside. The “No Data Centers” note makes the motive explicit. This is political violence against local government.
Names and Home Addresses
The manifesto recovered from Moreno-Gama contained the names and home addresses of AI executives and investors. This is not protest. It is a targeting document designed to enable future attacks.
The Numbers Behind the Rage
These acts of violence did not emerge from a vacuum. They emerged from a measurable, well-documented crisis of public trust in artificial intelligence that the industry has been slow to acknowledge and slower to address.
Stanford’s 2026 AI Index — published just days before the Altman attack — reported that only 10% of Americans say they are more excited than concerned about AI in their daily lives. That is one in ten. Among the experts actually building AI, 56% say it will have a positive impact over the next 20 years. Among the people living with it, the prevailing mood is fear.
Gallup’s data is even more pointed for the generation entering the workforce. Among Gen Z — the demographic most likely to be directly affected by AI-driven changes in the labor market — half say AI makes them afraid. Fewer than a fifth feel hopeful about it. These are the people who watched AI tools begin replacing entry-level knowledge work during their first years out of school. Their fear is not abstract. It is rooted in a lived experience of economic uncertainty that the AI industry has largely dismissed as “transition costs.”
None of this justifies violence. Nothing justifies violence. But a 46-point gap between expert optimism and public fear is not a gap that can be closed with better marketing. It requires a fundamentally different relationship between the people building AI and the people living with its consequences.
The Trust Deficit Is Now a Safety Problem
For years, the AI industry treated public skepticism as a PR problem — something to manage with better messaging, glossy safety reports, and carefully worded blog posts about “responsible AI.” The events of the past week make clear that the trust deficit is no longer a reputation issue. It is a physical safety issue for the people who work in this space.
When a 20-year-old drives across the country with Molotov cocktails and a kill list, the industry cannot respond with another transparency report. When someone shoots 13 rounds into a house because a councilmember approved a data center, the response cannot be another round of thought-leadership blog posts.
The honest assessment is this: the AI industry has moved at extraordinary speed to build and deploy systems that are transforming the economy, and it has moved at a fraction of that speed to bring the public along. The result is a population that feels acted upon, not included. For a growing number of people, AI is not a tool — it is a threat imposed on them by a class of people who do not seem to care about the consequences.
That perception is not entirely fair. Many people in AI care deeply about safety, fairness, and the displacement effects of automation. But perception drives behavior, and the perception right now is one of an industry that builds first and asks questions later — if it asks them at all.
What This Means for AI Professionals
If you work in AI — whether you are building models, deploying enterprise tools, teaching courses, or advising organizations on adoption — these events change the operating environment. Not theoretically. Practically.
First, physical security. AI companies have already been increasing security measures, but the discovery of a manifesto with home addresses means the threat model now includes targeted attacks on individuals, not just corporate facilities. If you are a public-facing figure in the AI space, your personal security posture matters in a way it did not six months ago.
Second, the way we talk about AI matters more than ever. The industry’s default mode of breathless optimism — every launch is revolutionary, every benchmark is historic, every quarter brings the future closer — reads very differently to a 25-year-old who just watched their entry-level job disappear. Honesty about trade-offs, displacement, and the real limitations of AI systems is not a concession to critics. It is a prerequisite for the kind of public trust that keeps people from deciding that violence is the only option left.
Third, building ethically and transparently is no longer optional. It is a safety imperative. The Stanford AI Index showed that model transparency is collapsing — down from 58 to 40 on a 100-point scale. That trend is exactly wrong at exactly the wrong time. The people who feel threatened by AI are not going to feel less threatened by companies that refuse to explain how their systems work.
We say this as people who believe deeply in the positive potential of AI. We teach it. We build with it. We believe it can create enormous value for individuals and organizations — including the people who are currently most afraid of it. But that value only materializes if the industry earns the trust to deliver it. Right now, by every measurable indicator, that trust is not being earned.
For those of us who teach and practice AI, the response to this moment is not to retreat. It is to double down on the kind of AI education and deployment that treats people as participants, not obstacles. The best antidote to fear is competence. The best antidote to rage is inclusion. And the best way to honor the seriousness of this moment is to build better — more transparently, more ethically, and with more care for the communities we serve.
AI Education Built on Trust, Not Hype
The 2-day in-person Precision AI Academy bootcamp. 5 cities. $1,490. 40 seats max. Thursday-Friday cohorts, June-October 2026.
Reserve Your Seat