Nicole Perlroth — former NYT cybersecurity journalist and advisor to CISA under the Biden administration — gives one of the more sobering takes on where we are right now. She spent years covering nation-state hacking for the Times and wrote This Is How They Tell Me the World Ends, a book about the zero-day vulnerability market. When she talks about cyber threats, she is not speculating.
The offense has the advantage
Zero-day vulnerabilities have always been valuable. Nation-states and criminal groups pay enormous sums for undiscovered exploits — a working iOS zero-day can fetch millions on the open market. But AI changes the math entirely. What used to require rare human expertise and months of work can increasingly be automated. Her argument is simple: in the age of AI, there is no more room for human error. Every vulnerability will eventually be found and exploited. This is not alarmism. It is arithmetic.
We are playing catch-up, and the gap is growing. The 2013 Target breach — where attackers used a third-party vendor's stolen credentials to push malware onto point-of-sale terminals across the chain — was an early warning that critical infrastructure is vulnerable at every layer. That was over a decade ago. The lesson was not fully learned.
Three macro threats converging
Disinformation. AI has collapsed the barrier to entry. Large-scale influence campaigns used to require state-level resources. Not anymore. The Rio Tinto case in Serbia is a sharp example: what looked like organic local opposition to a mining project turned out to have Russian bot amplification behind it. A conspiracy theory, seeded and spread algorithmically, shaped real political outcomes. Ordinary people and institutions have almost no tools to fight this at scale — and the people who could build those tools are often the same ones benefiting from the chaos.
Resource wars. Cyberattacks increasingly follow contested natural resources and critical infrastructure. This is not a separate story about hackers — it is geopolitics by other means.
Cybersecurity. The classic threat surface, now turbocharged. AI writes code faster than humans can audit it. The attack surface expands automatically. The defenders are still mostly human.
The provenance problem
On secure code, Perlroth recommends Semgrep for static analysis and references the work of Veracode alongside Ken Thompson's foundational Turing Award lecture, Reflections on Trusting Trust (1984). Thompson's core insight — that you cannot fully trust code you did not write yourself, down to the compiler — has never been more relevant. When AI generates thousands of lines per minute, the question of provenance becomes urgent: do you actually know where your code came from, and what is inside it?
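To make the Semgrep recommendation concrete: a rule is a small YAML file that matches code patterns structurally rather than as plain strings. The rule below is a minimal illustrative sketch, not one from the talk; it flags Python calls to subprocess with shell=True, a common command-injection foothold.

```yaml
rules:
  - id: subprocess-shell-true
    # Semgrep matches against the parsed code, so this catches the call
    # regardless of argument order, spacing, or line breaks.
    pattern: subprocess.run(..., shell=True)
    message: shell=True routes input through a shell; prefer an argument list.
    languages: [python]
    severity: WARNING
```

Running `semgrep --config rule.yaml src/` scans an entire tree in seconds, which is the point: when AI emits code faster than humans can read it, automated pattern checks like this are among the few audits that scale.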
The hypocrisy worth naming
Perlroth makes a point that is easy to overlook because it is stated so plainly. Several of the most prominent public voices for free speech — Sam Altman, Elon Musk, Bill Ackman — have simultaneously sent private legal requests to take down content they personally dislike. She notes this not with outrage but as a structural observation. The people best positioned to fight disinformation have their own reasons not to.
Where this leaves us
Voice deepfakes are already here. The tools are cheap and getting cheaper. The safeguards are not keeping pace. Perlroth is not someone who reaches for easy optimism, and after watching this talk, it is hard to argue with her caution.
The offense has the advantage. We are not ready. And the window to close the gap is not getting wider.