

The EXPLAIN Project
As law enforcement agencies adopt AI technologies to enhance their capabilities and to counter those who use such technologies maliciously, one major challenge remains: the legal status of AI and data-driven techniques as part of evidence is still unclear.
For example, the use of AI for tasks such as automated image and video classification, social network analysis, voice identification in audio recordings, and conversation analysis through natural language processing (NLP) has not been fully tested in a legal context. This is similar to when DNA evidence was first introduced and had to be legally validated.
That’s why ‘Explainable’ is a core principle at AiLECS Lab. Project EXPLAIN researches how the explainability of AI can be pursued from an evidential perspective, and how this can be embedded in law enforcement and juridical workflows for use in criminal trials. This is being done in collaboration with the Faculty of Law at Monash University.
This is vital if law enforcement is to move from using AI as an investigative support tool towards pursuing criminal justice to the highest standard of proof. To achieve this, we will need to be able to explain and prove:
How data is gathered and used to train AI algorithms.
How AI algorithms determine the outcomes they produce.
How reliable and credible the outcomes are.
This work will be aligned with the processes and frameworks of the legal system, where AI evidence can come under scrutiny at numerous stages and have its validity questioned, challenged and evaluated.
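To make the second of these points concrete, the sketch below shows one way an AI outcome can be accompanied by an inspectable record of how it was reached. It is a minimal, hypothetical illustration only: the interpretable model, the toy data and the feature names are assumptions for demonstration, not the methods or data used by Project EXPLAIN.

from sklearn.tree import DecisionTreeClassifier, export_text
import numpy as np

# Toy training data: two invented features per item of digital evidence.
X = np.array([[0.1, 3], [0.9, 12], [0.2, 4], [0.8, 15], [0.3, 2], [0.7, 10]])
y = np.array([0, 1, 0, 1, 0, 1])  # 0 = not relevant, 1 = relevant
feature_names = ["similarity_score", "keyword_hits"]

model = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# 1. The learned decision rules can be exported in full, so the basis of every
#    outcome is inspectable rather than hidden inside a black box.
print(export_text(model, feature_names=feature_names))

# 2. For a single new item, record which rules fired to reach the prediction,
#    giving a per-item audit trail that could accompany the reported result.
sample = np.array([[0.85, 11]])
node_indicator = model.decision_path(sample)
print("prediction:", int(model.predict(sample)[0]))
print("decision-tree nodes visited:", node_indicator.indices.tolist())

An audit trail of this kind is only one ingredient: documenting how the training data was gathered, and measuring how reliable the outcomes are, would still be needed before such output could withstand scrutiny as evidence.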