Public uncertainty and dread around AI aren’t random, but follow patterns that risk communication research has mapped for decades. Because of this, organisations can’t just manage implementation risk; they must also manage perception, and that’s an opening for communication professionals.
The gap in risk perception
While watching a conversation between Hank Green and Cal Newport on AI, I began to think about the gap between hype and reality in our perception of AI risk. The video starts with Green listing his real fears, driven by a mix of evidence-based outrage and moral panic. Newport then joins the conversation, drawing on his computer science background to ground the discourse, placing each fear in context and mapping it to its likely reality.
This gap, as explained by risk communication research, is predictable.
Peter Sandman, one of the world’s leading risk communication consultants, defines risk as hazard plus outrage. Perceived risk is therefore a combination of actual danger and emotional or social response — both are part of the same equation.
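In Sandman's shorthand, this is often written as a simple equation:

Risk = Hazard + Outrage

Here, hazard is the technical measure of harm (roughly, probability times magnitude) and outrage is everything that makes people upset about it. Underestimate either term and you misjudge the risk.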
Psychologist Paul Slovic’s research — developed in the context of nuclear energy and chemical hazards — also points to two dimensions that explain how severely people respond to emerging threats. The first is dread: how catastrophic, uncontrollable, and irreversible a hazard feels. The second is the unknown: how invisible, novel, and scientifically uncertain it seems. Slovic’s original focus was physical hazards, but the underlying insight about the role of emotions (or affect) in escalating risk perception seems worth considering in the AI context.
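A rough formalisation of this two-factor model (my sketch, not Slovic's notation) might read:

Perceived risk of hazard h ≈ f(Dread(h), Unknown(h))

In Slovic's data, the dread factor was the stronger predictor: the more catastrophic and uncontrollable a hazard felt, the riskier lay people judged it, largely regardless of expert estimates.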
Add Starr’s (1969) finding that people tolerate risks they’ve chosen far more readily than those imposed on them, and another pattern emerges. This phenomenon helps explain the rise of outright refusal to use AI tools — not only among those critical of current issues with AI and big tech, but also among those who fear what this technology might bring. Organisational rollouts of AI are therefore not just about the tech itself but also about the broader climate of uncertainty and fear around technological change.
Because AI stirs up dread (the future of work, automation, existential risk) rooted in a future that has not yet fully arrived, both dread and unknown factors will drive overall AI risk perception. The unknown dimension is especially significant here: unlike nuclear energy, where hazards were at least physically definable, AI risks are still genuinely emerging.
Yet risks are real — and unevenly distributed.
Despite these perceptual patterns, it’s important to emphasise that AI risks remain real and varied, ranging from well-documented environmental costs and AI use in military and surveillance contexts, to its broader geopolitical implications and still-emerging questions about AI’s impact on creative labour and mental health.
As the sociologist Ulrich Beck notes, wealth accumulates at the top while risk accumulates at the bottom — the people who benefit most from AI are rarely the ones bearing its costs. This asymmetry is also why resistance tends to be loudest among those with the least agency in the adoption decision.
How risk gets amplified
The speed and intensity of risk responses are further shaped by what Kasperson et al. (1988) call the 'social amplification of risk': the same information produces vastly different reactions depending on the channels through which it travels, with the media playing a central role in stoking hype.
As Newport notes in his discussion with Green, media channels driven by engagement metrics and big tech's commercial interest in amplifying both promise and fear converge to sustain investment. The result is a public discourse fixated on scenarios likely to generate strong emotion, while balanced or nuanced views get buried or travel slowly.
What does this mean for communication professionals?
No matter the source — whether current events, future fears, expert opinion, or social amplification — we cannot afford to ignore risk.
Since trust is the currency of our profession, AI risk perception should influence not just the rollout of AI tools within organisations, but also how organisations communicate with — and remain accountable to — their audiences and stakeholders.
Therefore, AI risk assessments should not only address implementation risks (data leaks, errors, system failures) but also weigh the tension between AI rollout and public and employee sentiment, as well as the impact AI adoption might have on organisational trust.
Risk and crisis experts Covello and Allen (1988) make clear that how we talk about risk, involve stakeholders, and respond to concerns matters as much as the actual likelihood or magnitude of the risk itself.
Where there is affect (public outrage), there is also risk, and communication must follow. Communication, ultimately, isn't just a response to risk; it's part of how risk is managed.
The Guardian recently adopted a public, transparent approach to its use of AI. How do you think organisations should consider public risk perception and trust as part of their AI strategy?
AI tools were used in the creation of this article. Claude: thinking partner, structural editor, and condensing tool. Grammarly: final editing tool.
Thanks to Tony Jaques for the introduction to these ideas and resources. Many of these references were sourced directly from Jaques (2014) and served as the basis for my original essay on risk perception.
Beck, U. (1992). Risk Society: Towards a New Modernity. Sage.
Covello, V., & Allen, F. (1988). Seven cardinal rules of risk communication. U.S. Environmental Protection Agency.
Green, H., & Newport, C. (2026). This is Going to be Very Messy. YouTube. https://youtu.be/8MLbOulrLA0
Jaques, T. (2014). Risk management: Perception, hazard and outrage. In Issue and Crisis Management: Exploring Issues, Crises, Risk and Reputation (pp. 234–250). Oxford University Press.
Kasperson, R. E., Renn, O., Slovic, P., Brown, H. S., Emel, J., Goble, R., Kasperson, J. X., & Ratick, S. (1988). The social amplification of risk: A conceptual framework. Risk Analysis, 8(2), 177–187.
Sandman, P. M. (n.d.). Risk communication resources. psandman.com
Slovic, P. (1987). Perception of risk. Science, 236(4799), 280–285.
Starr, C. (1969). Social benefit versus technological risk. Science, 165(3899), 1232–1238.
The Guardian. (2026, March 4). How the Guardian is using GenAI. https://www.theguardian.com/help/insideguardian/2026/mar/04/all