Sam Altman observes that the communities most involved in AI safety are often the least calm — “an extremely high-strung community with some peculiarities.” He argues this is dangerous because fear-based decision-making around something as consequential as AGI is likely to produce the worst outcomes. Fear could justify arms-race dynamics, reactive policies, and choices that don’t reflect “the best of humanity.”

This reflects a broader principle Joe Hudson sees repeatedly: the thing we’re scared of is the thing we attract. People working to protect humanity from AI risk out of fear may actually increase that risk through their own reactivity. The alternative, as Altman frames it with a science fiction reference, is “prudence without fear” — the ability to be cautious and thoughtful without being driven by panic.

This applies well beyond AI. Any domain where fear drives the “safety” response — parenting, organizational management, national security — risks creating the very outcomes it seeks to prevent. The antidote is genuine calm, born not from suppression but from having worked through enough fear to act from clarity rather than reactivity.
