Concerns over criminal manipulation of digital text, images and video are not new, but the proliferation in recent months of generative AI tools that enable anyone, anywhere, to quickly, easily, and cheaply create deepfake images has significantly changed the game.
This sheer scale, combined with greater sophistication and convincingness, means finding ways to detect and mitigate AI-generated deepfakes quickly is an increasingly urgent priority.
In its role as an innovative enabler connecting frontline government and law enforcement with cutting-edge technology from industry, the Accelerated Capability Environment (ACE) is at the heart of this ramp-up in activity designed to find practical solutions to combat deepfakes.
2024 was a year in which the marriage of cutting-edge technology, collaboration and fresh thinking enabled significant strides forward.
Circular collaboration to combat AI-generated deepfakes
A series of focused commissions carried out by ACE has delivered clear results, accelerating the crucial detection of AI-generated deepfakes across a range of domains.
Just as importantly, learnings and practical experiences developed in one commission have been shared with others to pass on deeper knowledge and skills.
The biggest event in this space was the Deepfake Detection Challenge. Initiated by the Home Office, the Department for Science, Innovation and Technology, ACE and the renowned Alan Turing Institute, this visionary idea brought together academic, industry and government experts to develop innovative and practical solutions focused on detecting deepfakes.
More than 150 people attended the initial briefing, during which five challenge statements pushing the boundaries of current capabilities were launched.
Major tech companies developing concepts to detect fake images
The critical importance of collaboration and sharing of skills and knowledge was a recurring theme, and major tech companies, including Microsoft and Amazon Web Services (AWS), provided practical support.
Eight weeks were spent developing innovative ideas and solutions on a specially created platform, which hosted approximately two million assets made up of both real and synthetic data for training and testing.
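To give a flavour of the kind of training-and-testing setup such a platform supports, here is a minimal sketch of the underlying task: a standard image classifier learning to separate real from synthetic imagery. This is illustrative only, not the challenge platform's actual pipeline; the folder layout, backbone model and every parameter below are assumptions.

```python
# Illustrative sketch only -- not the challenge platform's pipeline.
# Assumes images are sorted into data/train/real and data/train/synthetic.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# ImageFolder labels each image by its subdirectory name
# (alphabetically: real -> 0, synthetic -> 1).
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a generic ImageNet backbone and replace the head
# with a two-class (real vs synthetic) output layer.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```

In a real evaluation, performance would be measured on a held-out test split drawn from the same curated mix of real and synthetic assets.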
At the end of the eight-week development phase, 17 submissions had been received, and six teams were selected to demonstrate their ideas for detecting AI-generated deepfakes in front of more than 200 stakeholders.
Solutions from Frazer-Nash, Oxford Wave, the University of Southampton and Naimuri are now going through benchmark testing and user trials. These range from existing products identified as showing potential operational value to early-stage proofs of concept being developed against specific use cases, including CSEA, disinformation and audio.
Alongside its clear success in accelerating state-of-the-art deepfake detection, the initial challenge work yielded key insights: curated data was critical to making as much progress as possible in the time and conditions available, and a dataset more representative of real-world operational scenarios would have been helpful.
Tackling deepfakes in policing
When another significant commission to further deepfake detection was brought to ACE by the government’s Defence Science and Technology Laboratory (DSTL) and the Office of the Chief Scientific Adviser (OCSA), data development was a top priority.
In maturing the EVITA (Evaluating Video, Text and Audio) AI content detection tool, the focus has shifted away from sheer data volume.
The biggest challenge is in digital forensics, where the ACE team heard officers can be faced with up to a million child abuse images on a single seized phone.
This commission, working with community members Blueprint, Camera Forensics and TRMG, seeks to understand where deepfake detection tooling fits into the investigative process in order to add the most value.
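As a rough illustration of where such tooling might sit in an investigation, the sketch below triages a large seized image set by scoring every file with a detector and surfacing the highest-confidence synthetic candidates first. The model, file paths and "top 100" cut-off are hypothetical placeholders, not a description of any of the tools named above.

```python
# Hypothetical triage sketch -- 'model' stands in for whichever
# detection model is eventually commissioned; nothing here reflects
# the actual tools under trial.
from pathlib import Path

import torch
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def triage(model: torch.nn.Module, image_dir: str, top_n: int = 100):
    """Rank seized images by likelihood of being synthetic, highest
    first, so investigators can review the most suspect material early."""
    model.eval()
    scored = []
    with torch.no_grad():
        for path in Path(image_dir).rglob("*"):
            if path.suffix.lower() not in {".jpg", ".jpeg", ".png"}:
                continue
            batch = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            # Index 1 is assumed to be the 'synthetic' class, matching
            # the training sketch earlier in this piece.
            prob = torch.softmax(model(batch), dim=1)[0, 1].item()
            scored.append((prob, path))
    scored.sort(key=lambda item: item[0], reverse=True)
    return scored[:top_n]
```

Ranking rather than hard-filtering keeps the investigator in the loop, which matters when a missed detection could mean missed evidence; at the scale of a million images on a single device, even a rough ordering of review priority adds value.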
The next step in this particular project is ‘making this real’ – working towards commissioning a proof of concept or trial of an existing capability.
In this way, the learning is becoming circular once more as the next stage of the Deepfake Detection Challenge progresses.
This will push further than any work in this field so far, focusing on making the initial solutions presented more user-centric and deeply relevant to practitioners in the field.