To begin with, there are two major arguments against artificial intelligence in its current form. One should note in passing that AI is not a new concept, but one that has only recently begun to impinge seriously on popular consciousness. The first problem is that it consumes vast, and increasing, amounts of energy, which hinders efforts to bring climate change under control and to steward the planet's resources carefully. Perhaps technological innovation will one day resolve this issue, but there is no sign of that at present. Secondly, AI infringes copyright by misusing people's intellectual property in the training of its algorithms. This may be more of a problem for the arts than for the sciences, in which the fruits of research are supposed to benefit all of us, but at present it is hard to tell.
At present, perhaps the greatest potential of AI in disaster management lies in its presumed ability to use its algorithms and data banks to provide synthesised information more quickly than traditional methods can. A report by the Joint Research Centre of the European Commission (Galliano et al. 2024) suggests that, in this respect, AI stands close to disaster management but is not quite part of it. Hence, its utility may lie in supporting decision making rather than in making the decisions themselves.
A look at the research on AI tends to be more depressing than heartening. First, AI is used inductively rather than deductively, which is inefficient, and often grossly so. Secondly, it should not be used as a substitute for thinking, creativity and human interaction. Certain aspects of disaster management are greatly undervalued, and thus poorly researched, by the academic DRR community. One of these is emergency planning: the process of anticipating the needs caused by disaster impacts and making arrangements to satisfy them as well as possible with the available resources. At the moment it is unclear whether decision making that relies on AI generates risks of incorrect assumptions, distortions, mistaken views of situations or other errors that the technique might magnify. Hence, the safety of AI as a means of emergency management cannot be guaranteed.
Large language models can help chart the progress of public perception of disaster threats and impacts, as manifested in the mass media and social media. However, we now live in a world in which 'manufactured reality' looms as large as objective reality, a result of the need to deal with beliefs, opinions and expectations that differ from what science and objectivity would inform and prescribe.
The advent of social media came with a wave of optimism about their utility in reducing disaster risks and impacts (Alexander 2014). Subsequently, the dark side of the media revealed itself: conspiracy theories, subversion, personal attacks, aggression, attempts to destroy reputations, so-called "alternative facts", and so on. Could we be about to experience yet more of this with AI? The challenge with social media is to find an efficient, effective, robust and reliable way of counteracting the effects of misinformation (or disinformation, if you prefer). One of the keys to this is the issue of trust in authority, or its absence. One wonders whether displacing the human element with a computer-generated one will increase or reduce trust in the output that results. Scepticism induces me to expect the latter.
Legend has it that, when he was foreign minister of China, Zhou Enlai was asked by a journalist what he thought of the French Revolution and replied, "It's too early to tell." A fantasy, perhaps, but an enjoyable story just the same. With AI, it is more genuinely "too early to tell". What we need is more research on its impact: research that is detached from the process of generating applications for AI and that looks objectively at how well it is working and what problems it either encounters or produces.
Come what may, emergency management is a human activity that requires human input and human reasoning. It is unlikely that this need will ever be satisfied by artificial intelligence. The human mind is too versatile and flexible to be displaced.
More than a quarter of a century ago, Professor Henry Quarantelli, father of the sociology of disaster, published a very perceptive article on the information technology revolution, which was then in its infancy in comparison with what came later. His conclusions are still perfectly valid:
"...close inspection of technological development reveals that technology leads a double life, one which conforms to the intentions of designers and interests of power and another which contradicts them—proceeding behind the backs of their architects to yield unintended consequences and unanticipated possibilities." (Quarantelli 1997)
We would do well to heed this observation and not embrace artificial intelligence uncritically.
References
Alexander, D.E. 2014. Social media in disaster risk reduction and crisis management. Science and Engineering Ethics 20(3): 717-733.
Galliano, D.A., A. Bitussi, I. Caravaggi, L. De Girolamo, D. Destro, A-M. Duta, L. Giustolisi, A. Lentini, M. Mastronunzio, S. Paris, C. Proietti, V. Salvitti, M. Santini and L. Spagnolo 2024. Artificial Intelligence Applied to Disasters and Crises Management: Exploring the Application of Large Language Models and Other AI Techniques to the European Crisis Management Laboratory Analyses. European Crisis Management Laboratory, Disaster Management Unit JRC E.1, European Commission Joint Research Centre, Ispra, Italy, 46 pp.
Quarantelli, E.L. 1997. Problematical aspects of the information/communication revolution for disaster planning and research: ten non-technical issues and questions. Disaster Prevention and Management 6(2): 94-106.