Darktrace unveils AI models that help protect data privacy and intellectual property

In response to the growing use of generative AI tools, Darktrace launches new risk and compliance models to help its 8,400 customers around the world address the increasing risk of IP loss and data leakage.


These new risk and compliance models for Darktrace DETECT and RESPOND make it easier for customers to put guardrails in place to monitor and, when necessary, respond to activity and connections involving generative AI and large language model (LLM) tools.

This comes as Darktrace’s AI observed that 74% of active customer deployments have employees using generative AI tools in the workplace. In one instance, in May 2023, Darktrace detected and prevented the upload of over 1GB of data to a generative AI tool at one of its customers.
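As a purely illustrative sketch (not Darktrace’s implementation), the kind of guardrail described here can be thought of as a filter over outbound traffic logs that flags unusually large transfers to known generative AI services. The log schema (timestamp, user, dest_host, bytes_out), the domain list, and the 1GB threshold below are all hypothetical assumptions, with the threshold chosen only to mirror the incident size cited above.

```python
# Illustrative sketch only -- not Darktrace's product or method.
# Assumes a proxy log exported as CSV with columns:
# timestamp, user, dest_host, bytes_out (hypothetical schema).
import csv
from collections import defaultdict

# Hypothetical watchlist of generative AI / LLM service domains.
GENAI_DOMAINS = {"chat.openai.com", "api.openai.com", "bard.google.com", "claude.ai"}

# Flag any user whose total upload to watched domains exceeds this threshold.
UPLOAD_THRESHOLD_BYTES = 1 * 1024**3  # 1 GB, matching the incident size cited above


def flag_large_uploads(log_path: str) -> dict[str, int]:
    """Return users whose total bytes sent to generative AI domains exceed the threshold."""
    totals: dict[str, int] = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["dest_host"] in GENAI_DOMAINS:
                totals[row["user"]] += int(row["bytes_out"])
    return {user: sent for user, sent in totals.items() if sent > UPLOAD_THRESHOLD_BYTES}


if __name__ == "__main__":
    for user, sent in flag_large_uploads("proxy_log.csv").items():
        print(f"ALERT: {user} sent {sent / 1024**3:.2f} GB to generative AI services")
```

In practice, a policy-enforcement product would work from richer telemetry and behavioural baselines rather than a fixed threshold; the sketch only shows the shape of the guardrail.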

New generative AI tools promise increases in productivity and new ways of augmenting human creativity. CISOs must balance the desire to embrace these innovations against the need to manage risk. Government agencies including the UK’s National Cyber Security Centre have already issued guidance on managing risk when using generative AI tools and other LLMs in the workplace.

In addition, regulators in a variety of jurisdictions (including the UK, EU, and US) and in various sectors are expected to lay out guidance to companies on how to make the most of AI without exacerbating its potential dangers.

“Since generative AI tools like ChatGPT have gone mainstream, our company is increasingly aware of how companies are being impacted. First and foremost, we are focused on the attack vector and how well prepared we are to respond to potential threats. Equally important is data privacy, and we are hearing stories in the news about potential data protection issues and data loss,” said Allan Jacobson, VP and Head of Information Technology, Orion Office REIT. “Businesses need a combination of technology and clear guardrails to take advantage of the benefits while managing the potential risks.”

“CISOs across the world are trying to understand how they should manage the risks and opportunities presented by publicly available AI tools in a world where public sentiment flits from euphoria to terror. Sentiment aside, the AI genie is not going back in the bottle, and AI tools are rapidly becoming part of our day-to-day lives, much in the same way as the internet or social media. Each enterprise will determine its own appetite for the opportunities versus the risk. Darktrace is in the business of providing security personalized to an organization, and it is no surprise we are already seeing the early signs of CISOs leveraging our technology to enforce their specific compliance policies,” said Poppy Gustafsson, CEO at Darktrace.

“At Darktrace, we have long believed that AI is one of the most exciting technological opportunities of our time. With today’s announcement, we are providing our customers with the ability to quickly understand and control the use of these AI tools within their organizations. But it is not just the good guys watching these innovations with interest – AI is also a powerful tool to create even more nuanced and effective cyber-attacks. Society should be able to take advantage of these incredible new tools for good, but also be equipped to stay one step ahead of attackers in the emerging age of defensive AI tools versus offensive AI attacks,” concluded Gustafsson.

To complement its core Self-Learning AI for attack prevention, threat detection, autonomous response, and policy enforcement, the Darktrace Cyber AI Research Center continually develops new AI models, including its own proprietary large language models, to help customers prepare for and fight back against increasingly sophisticated threats. These models are used across the products in Darktrace’s Cyber AI Loop.

“Recent advances in generative AI and LLMs are an important addition to the growing arsenal of AI techniques that will transform cyber security. But they are not one-size-fits-all and must be applied with guardrails to the right use cases and challenges,” said Jack Stockdale, CTO, Darktrace.

“Over the last decade, the Darktrace Cyber AI Research Center has championed the responsible development and deployment of a variety of different AI techniques, including our unique Self-Learning AI and proprietary large language models. We’re excited to continue putting the latest innovations in the hands of our customers globally so that they can protect themselves against the cyber disruptions that continue to create chaos around the world,” added Stockdale.


