TECHnicalBeep

Trent AI Raises $13M to Rebuild Security for Agentic Systems

Trent AI Team

Most security tools tell you what’s wrong after the code is already written. Trent AI is built on a different idea: security should start at design, not after deployment. The London-based startup just emerged from stealth in April 2026 with a $13M seed round and a multi-agent security solution aimed directly at the growing complexity of agentic systems.

The round was led by LocalGlobe and Cambridge Innovation Capital, with angel investors from OpenAI, Spotify, Databricks, and AWS backing the raise. Trent AI was founded by Eno Thereska, Neil Lawrence, and Zhenwen Dai, a team with backgrounds spanning AWS, Confluent, Alcion (acquired by Veeam), DeepMind, and the University of Cambridge.

The Real Security Gap:

The problem Trent AI is solving is not a new one, but it’s getting harder to ignore. Security has traditionally been disconnected from the development process. Reviews happen late, findings pile up in dashboards, and engineers deal with hundreds of flagged issues, most of which turn out to be false positives or low priority noise.

According to Deloitte’s 2026 State of AI report, 74% of companies plan to deploy agentic AI within two years, but only 21% report having a mature governance model for autonomous agents. That gap is exactly where risk accumulates. In agentic systems, the stakes are higher because agents make decisions and take actions in real time, with fewer humans in the review loop.

How Trent AI Works:

Trent AI’s security solution operates across four layers: scan, judge, mitigate, and evaluate. Scanning agents continuously observe code, infrastructure, dependencies, and runtime behavior. Analysis agents then separate signal from noise and prioritize based on real business impact, not just static rule sets. Remediation agents patch vulnerabilities, open pull requests, and validate fixes. Posture agents track risk trends over time and benchmark against security standards.

Each cycle feeds back into the system, making Trent AI’s judgement more accurate as it learns the specific context of each deployment. This is different from tools that apply the same rule set across every project, regardless of architecture or data flow. The same piece of code can be safe in one system and a critical risk in another; context determines which, and Trent AI is built to reason about it.
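The four-layer cycle described above can be sketched in miniature. This is an illustrative model only, not Trent AI’s implementation: the `Finding` record, the context weights, and the impact threshold are all hypothetical names invented for the example. The point it shows is the one the article makes: the judge step ranks identical findings differently depending on deployment context, and the evaluate step closes the loop by measuring residual risk.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    # Hypothetical finding record; field names are illustrative only.
    source: str          # e.g. "runtime", "dependency", "code"
    description: str
    impact: int = 0      # business-impact score assigned by the judge step
    mitigated: bool = False

def scan(targets):
    # Scan layer: observe code, infrastructure, dependencies, runtime behavior.
    return [Finding(source=t["source"], description=t["issue"]) for t in targets]

def judge(findings, context_weight):
    # Judge layer: prioritize by deployment-specific context, not a static rule set.
    for f in findings:
        f.impact = context_weight.get(f.source, 1)
    return sorted(findings, key=lambda f: f.impact, reverse=True)

def mitigate(findings, threshold):
    # Mitigate layer: remediate anything whose contextual impact clears the bar.
    for f in findings:
        if f.impact >= threshold:
            f.mitigated = True
    return findings

def evaluate(findings):
    # Evaluate layer: residual risk left after this cycle, tracked over time.
    return sum(f.impact for f in findings if not f.mitigated)

# One pass of the loop: the same two findings, ranked by this deployment's context.
targets = [
    {"source": "dependency", "issue": "outdated TLS library"},
    {"source": "runtime", "issue": "agent calling an unvetted external API"},
]
context = {"runtime": 5, "dependency": 2}   # hypothetical weights for this deployment
findings = mitigate(judge(scan(targets), context), threshold=3)
residual = evaluate(findings)               # feeds back into the next scan cycle
```

In a different deployment, the context weights would differ, and the same findings would be prioritized and mitigated differently, which is the core claim of the context-driven approach.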

The Team Behind It:

Eno Thereska brings over 20 years of experience building secure and scalable systems. He was a Distinguished Engineer at Alcion and a Principal Engineer at AWS. Co-founder Neil Lawrence is the DeepMind Professor of Machine Learning at the University of Cambridge and a former Director of ML at Amazon. Zhenwen Dai previously worked as a Machine Learning Scientist at AWS and a Senior Research Manager at Spotify.

This is a founding team that has built production systems at scale, and investors cited that background directly. Tony Jebara, former Spotify VP of Engineering and Head of AI/ML, pointed to the exponential growth in AI-generated code as precisely the environment where specialized security models become necessary, and noted that Trent AI addresses that need across all stages, from initial design to large repositories.

Early Partners and Community Work:

Trent AI has already been working with design partners including Canopy, Commscentre, ML@Cam, Qbeast, and Weblogic. These companies have reported immediate visibility into their security posture, fast identification of vulnerabilities, and a clear remediation scope from the tool.

The startup is also active in the broader security community. It is a Partner Startup member of OWASP (the Open Worldwide Application Security Project), a Startup Partner with Carnegie Mellon University’s CyLab Venture Network, and has built a security agent for the open source platform OpenClaw. Developers building with tools like Lovable can also access Trent AI’s continuous security advisory layer through its dedicated integration. This kind of work matters for a company in the agentic security space, where credibility is built over time.

Security Built for Agentic Systems:

The $13M gives Trent AI the runway to go deeper on its core thesis: that agentic security requires a purpose built solution, not an adaptation of existing tools. For startup founders and engineering teams deploying AI agents into production, the question of who is handling the security reasoning behind the scenes is becoming harder to avoid.

Trent AI’s answer is a system that continuously learns, improves, and integrates across the development lifecycle. As agentic AI becomes a standard part of the software stack, tools built specifically for that environment are worth paying attention to. You can explore more about how AI security posture management works in practice, and how it compares to traditional approaches, if you’re evaluating options for your own stack.
