Mindgard Raises £3M Seed to Put Automated AI Red Teaming on the Enterprise Security Agenda
December 20, 2024
Mindgard, a Lancaster University spinout specialising in security for artificial intelligence, has raised £3 million in seed funding from venture capital firms IQ Capital and Lakestar, with additional participation from Osney Capital. The company will use the investment to accelerate growth and expand operations, bringing its enterprise platform for AI security from its initial deployments in the intelligence community to a broader commercial market. Mindgard is led by CEO and CTO Dr Peter Garraghan, a Professor of Computer Science at Lancaster University, alongside co-founders Dr Neeraj Suri and Steve Street, who serves as COO and CRO.
Mindgard traces its origins to research that began at Lancaster University in 2018, when Garraghan began studying adversarial machine learning — the field concerned with how AI and machine learning systems can be attacked, manipulated, or deceived by malicious inputs. Over the following years, his research group, backed by the Engineering and Physical Sciences Research Council and additional institutional funding, produced more than sixty scientific papers and developed a deep technical understanding of the specific ways in which AI models fail under attack. The company was formally founded in 2022, when it became clear that no commercial product existed to address the problem the team had spent years studying. Mindgard holds exclusive rights to intellectual property generated by eighteen doctoral candidates at Lancaster University through a unique agreement with the institution.
The core problem Mindgard addresses is this: AI systems introduce a new and distinct category of security vulnerability that conventional cybersecurity tools are not equipped to handle. Traditional application security tests whether software behaves correctly against known attack signatures and logic flaws. AI models — particularly neural networks — can also be attacked through adversarial inputs that cause them to misclassify images, leak training data, or produce dangerous outputs when prompted in specific ways. These vulnerabilities only manifest at runtime, in response to how the model is being used, and cannot be detected by static analysis. Mindgard's platform, branded as Dynamic Application Security Testing for AI (DAST-AI), uses continuous automated red teaming to simulate adversarial attacks against deployed models across a wide library of known attack types, reporting on the security posture of the model and enabling teams to remediate identified weaknesses.
The platform integrates into existing MLOps and SecOps workflows, making it possible to run security testing as part of standard development and deployment pipelines without disrupting established processes. At the time of the seed round, Mindgard was already being deployed by key intelligence organisations around the world, giving it a uniquely credible early customer base and a validation of the technology in the most demanding security environments. The company is headquartered in London, with research and development anchored at Lancaster University, and has been recognised as the UK's Most Innovative Cyber SME at Infosecurity Europe 2024.
The seed funding positions Mindgard to make AI security a standard requirement for any enterprise deploying AI systems, at a moment when the adoption of large language models and other AI technologies in business-critical applications is accelerating rapidly.
Sources





