Analysing human error means understanding what led to an accident. This is particularly difficult in modern socio-technical systems, where human operators interact with complex systems: accidents can occur even when the system is fully operational and its users are rational. In such situations, identifying the cause of an error becomes very hard, which is why investigations following aircraft accidents, for example, can take a very long time.
In this presentation, I will show how artificial intelligence, and more specifically reasoning models based on formal logic, can help investigators explore the different scenarios that could explain an accident. I will show that human cognitive biases can be modelled and used to extract the most plausible explanations. This work, at the frontier between artificial intelligence and cognitive science, illustrates how AI can be used not to replace humans but to provide tools that improve operators' performance.
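To give a flavour of the kind of reasoning involved, here is a minimal, purely illustrative sketch in Python: a toy propositional model of an incident, an abductive search for hypothesis sets that entail the observed outcome, and a plausibility score that could stand in for an investigator's prior bias. The rule names, weights, and scoring scheme are invented for this example and are not taken from the work presented in the talk.

```python
# Illustrative sketch only: abductive explanation over a toy rule base,
# with candidate explanations ranked by an assumed plausibility weight.
from itertools import combinations

# Horn-style rules: (body, head) means "if all facts in body hold, head holds".
RULES = [
    ({"alarm_missed", "autopilot_disengaged"}, "altitude_deviation"),
    ({"sensor_fault"}, "autopilot_disengaged"),
    ({"high_workload"}, "alarm_missed"),
    ({"altitude_deviation"}, "incident"),
]

# Candidate hypotheses with a priori plausibility weights (lower = judged less
# plausible, e.g. because an investigator over-trusts the automation).
HYPOTHESES = {"sensor_fault": 0.3, "high_workload": 0.8,
              "autopilot_disengaged": 0.5, "alarm_missed": 0.6}

def closure(facts):
    """Forward-chain the rules to a fixed point."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in RULES:
            if body <= facts and head not in facts:
                facts.add(head)
                changed = True
    return facts

def explanations(observation, max_size=2):
    """Enumerate hypothesis sets that entail the observation, ranked by plausibility."""
    found = []
    for k in range(1, max_size + 1):
        for combo in combinations(HYPOTHESES, k):
            if observation in closure(combo):
                score = min(HYPOTHESES[h] for h in combo)  # weakest-link plausibility
                found.append((score, sorted(combo)))
    return sorted(found, key=lambda pair: -pair[0])

if __name__ == "__main__":
    for score, hyps in explanations("incident"):
        print(f"plausibility {score:.1f}: {hyps}")
```

Under these invented weights, the explanation "high workload plus a silent autopilot disengagement" ranks above one involving a sensor fault, simply because the bias encoded in the weights makes the sensor fault hypothesis look less plausible a priori.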