Thinking in metaphors is perhaps one of the most powerful cognitive features Homo sapiens possesses. Quite when our species developed the capability to pattern-match real situations onto fictional abstractions is unclear. The word itself comes from ancient Greek, but the presence of metaphor stretches back well before any written language.
Despite this inherent ability to simplify, as a society we seem to be increasingly struggling to communicate important complex topics. The categories vary, but notable communication failures in recent memory include:
- Political: the Brexit Remain campaign
- Mathematical: exponential growth of Covid
- Medical: Covid vaccines
- Long-term, delayed-gratification risks: climate ~~change~~ crisis
- Low-frequency, high-impact risks: asteroid defence
I’ve heard arguments that our average ability to tell stories and construct metaphors began to decline with the shift from the oral transmission of history to written form. Regardless, there has never been a greater need for investment in effective communication, both for science and for complex topics in general.
I’m not suggesting that a single metaphor can suddenly unlock clarity in billions of minds, but an effective one may help. The potential misuse and civilizational risk of AI is a great example of a complex concept that needs an effective communication model. Ian Hogarth’s opinion piece in the Financial Times this weekend was excellent foundational reading but struggles with its metaphors.

A God-like intelligence is by definition impossible to imagine and, worse from a communications point of view, invoking it triggers people’s cognitive dissonance. That compounding effect is a recipe for defensive thinking and rejection when it then comes to making the case for AI alignment.

Unfortunately, history shows that successful communication of complex threats is the exception and that change happens only when a major disaster literally illustrates the risk. For those using the metaphor of nuclear weapons as a rhetorical device for how AI should be regulated, remember that the breakthrough in popular understanding of nuclear fission came from two real-world examples: Hiroshima and Nagasaki.
Metaphors have had a controversial role in science. I’m doing a gross injustice to the vast body of writing on the topic by simplifying it as the balance between making science accessible and restricting understanding. Simon Fisher’s thread on ‘DNA as a blueprint’ is a great example and I’ve found Brigitte Nerlich’s blog a useful resource on the general topic.
The Mother Test remains a useful bar for metaphor construction. Having tried and failed to explain the risks of AGI to my Mom (‘I really don’t think Siri has the brains to kill me, dear’), I’ve found that emphasising AI as an invasive concept (think untested vaccine being injected into your house, car, plane, health system) has resonated slightly: she has now started writing letters to local politicians.
As an expert in neither AI nor metaphors, I’m sure this post can be vastly improved by others (feedback welcome!). But it seems quite clear that how we communicate about LLMs and AGI should be a material part of any R&D budgets being deployed in this area.