Cyber Teams

7 min read

5 steps for proactive AI risk management

A guide to reducing AI risk exposure by laying down the “rules of the road” with a well-defined AI risk management policy.


Howard Poston, May 21, 2024

Any use of AI comes with potential risks, especially as providers are still working out the kinks. 

As companies and their employees begin embracing AI, both directly and via AI-enabled products, they will also need processes in place to mitigate risks and gauge the implications for security and regulatory compliance.

Understanding AI risks


Managing AI risk means understanding the many things that can go wrong. Some of the top AI risks to consider include:

  • Privacy leaks: Many AI providers use customer prompts to train future models, meaning that data entered into a chatbot can end up baked into the model itself. Multiple research efforts have demonstrated that tools like ChatGPT, Gemini, and others can be tricked into handing over sensitive data.

  • AI hallucinations: Generative AI (GenAI) frequently gets things wrong and makes things up, which can be a major problem in the wrong context. For example, when asked coding questions, GenAI chatbots frequently recommend non-existent libraries. This gives attackers a prime opportunity to publish those libraries with malicious code that AI users will then import into their projects (see the dependency check sketched after this list).

  • Missed detections: Stepping back from GenAI, many security products now incorporate AI functionality to identify potential incidents in mountains of security data. These tools aren’t perfect, and a mistake could mean missing a real attack or blocking legitimate access.

  • Poor training data: An AI model is only as good as its training data, and high-quality, untouched training data is increasingly scarce. If AI models are trained on poor-quality data, they will produce bad outputs. For example, if an attacker injects training data that mislabels an attack as benign traffic, models trained on that data will let the attack pass through.

  • Bias: An AI model inherits the biases of its training data. This is why some facial recognition systems have been shown to perform far better on white male faces than on other demographics. Using a biased AI-based system will produce biased results.

  • Data poisoning: The deliberate compromise of a training dataset used by an AI or machine learning (ML) model, which can trigger unintended consequences.

  • Ethics and compliance: AI enables companies to use customer data in ways that customers never anticipated or consented to. Beyond the ethical issues, this is also restricted by regulations such as the GDPR when it qualifies as “automated decision-making.” The EU’s Artificial Intelligence Act expands on this and is expected to take effect by the end of 2024.
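One cheap guardrail against hallucinated dependencies is to verify that every AI-suggested package actually exists in the official registry before anyone installs it. Below is a minimal sketch for Python projects; the script name and the fallback package names are illustrative, not real recommendations.

```python
# check_deps.py -- verify that AI-suggested Python packages actually exist
# on PyPI before anyone installs them. The script name and the fallback
# package names below are illustrative.
import sys
import urllib.error
import urllib.request

def package_exists(name: str) -> bool:
    """Return True if PyPI has a project registered under this name."""
    url = f"https://pypi.org/pypi/{name}/json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # PyPI returns 404 for unknown project names.
        return False

if __name__ == "__main__":
    # Pass the packages an AI assistant suggested as CLI arguments.
    suggested = sys.argv[1:] or ["requests", "totally-made-up-http-lib"]
    for pkg in suggested:
        verdict = "found on PyPI" if package_exists(pkg) else "NOT FOUND -- do not install"
        print(f"{pkg}: {verdict}")
```

Keep in mind that existence alone proves very little: attackers deliberately register packages under commonly hallucinated names, so a “found” result should feed into the security scanning and SBOM review covered in step 3, not replace it.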

5 steps to develop an AI risk management framework or policy

A comprehensive, well-defined AI risk mitigation policy lays out the “rules of the road” for AI usage within an organization.

A proactive approach enables organizations to better handle their AI risk exposure and simplifies enforcement and compliance as AI becomes ubiquitous. But where should you start? 

If you’re creating your first risk management policy related to AI, here’s a step-by-step approach to follow.


Step 1: Discover your organization’s AI usage

Understanding your organization’s AI usage is critical to risk identification and management.

There are a few ways to do this, including with tools you already have. One option is to look at DNS requests from within the organization for AI-related domains (openai.com, gemini.google.com, etc.).

Also look for SaaS applications that may have built-in AI features, such as Salesforce. Even if you miss some, this will give you an idea of which tools are in use and to what extent.
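For the DNS angle, a script along these lines can tally AI-related lookups from a resolver log export. This is a rough sketch: the domain list, the log file name, and the one-queried-domain-per-line format are all assumptions, so adapt them to your environment.

```python
# ai_dns_scan.py -- tally DNS queries for well-known AI domains from a
# resolver log export. The domain list, the log file name, and the
# one-queried-domain-per-line format are all assumptions.
from collections import Counter

AI_DOMAINS = (
    "openai.com",
    "chatgpt.com",
    "gemini.google.com",
    "claude.ai",
    "perplexity.ai",
)

def tally(log_path: str) -> Counter:
    hits = Counter()
    with open(log_path) as log:
        for line in log:
            domain = line.strip().lower().rstrip(".")
            for ai_domain in AI_DOMAINS:
                # Count the domain itself and any subdomain of it.
                if domain == ai_domain or domain.endswith("." + ai_domain):
                    hits[ai_domain] += 1
    return hits

if __name__ == "__main__":
    for domain, count in tally("dns_queries.log").most_common():
        print(f"{domain}: {count} queries")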

“The greatest risk in my mind is data breaches and unauthorized access to sensitive information due to the unrestricted use of AI tools by employees.

Before beginning to leverage AI within your company, it’s important to first prioritize security and compliance, starting with a risk analysis and an in-depth AI acceptable use policy.

Regular security audits and in-depth analysis of any new AI tools before they are brought into the environment are also paramount.

Lastly, companies that invest in AI tools should also invest in employee security awareness training around their usage and how to best safeguard employee privacy and company data.”

Ben Rollin (mrb3n), Head of Information Security, Hack The Box

Step 2: Identify and prioritize AI risks

AI creates a variety of risks, and your AI threat model may not cover all of them yet. For example, if you’re not using AI for software development, the risk of AI hallucinations producing bad code isn’t a concern.

On the other hand, the potential for sensitive business data being entered into AI tools might be a major concern for the business.

After identifying these risks, prioritize them. In the example above, we would prioritize the risk of privacy leaks above hallucinations. This helps to ensure that resources are focused on the biggest threats to the business.
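A simple way to operationalize that prioritization is a scored risk register that ranks each risk by likelihood times impact. The entries and scores below are invented for illustration; yours should come from your own risk analysis.

```python
# risk_register.py -- rank AI risks by likelihood x impact (each scored
# 1-5). The entries and scores are made up for illustration.
risks = [
    {"risk": "Privacy leak via chatbot prompts", "likelihood": 4, "impact": 5},
    {"risk": "Hallucinated dependencies in AI-generated code", "likelihood": 3, "impact": 4},
    {"risk": "Missed detection by AI security tooling", "likelihood": 2, "impact": 5},
    {"risk": "Biased output in customer-facing features", "likelihood": 2, "impact": 3},
]

# Highest score = highest priority for mitigation resources.
for entry in sorted(risks, key=lambda r: r["likelihood"] * r["impact"], reverse=True):
    score = entry["likelihood"] * entry["impact"]
    print(f"{score:>2}  {entry['risk']}")
```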

Step 3: Define risk mitigation strategies

The simplest AI risk mitigation strategy would be to block all requests to AI apps like ChatGPT. While this might be a logical choice in some scenarios, it may be overkill in others.

Risk mitigation strategies should be geared toward bringing AI usage under the control of the business and reducing associated risks to an acceptable degree. 
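One middle-ground control along those lines could redact obviously sensitive patterns before a prompt ever leaves the organization. The sketch below is a minimal illustration, not a production data loss prevention engine; the patterns are deliberately simple.

```python
# prompt_redactor.py -- strip obvious sensitive patterns from text before
# it is sent to an external AI tool. A minimal illustration, not a real
# DLP engine: the regexes below catch only a few easy patterns.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "API_KEY": re.compile(r"\b(?:sk|pk)[-_][A-Za-z0-9]{16,}\b"),
}

def redact(text: str) -> str:
    # Replace each match with a labeled placeholder.
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label}]", text)
    return text

if __name__ == "__main__":
    prompt = "Summarize this: contact jane.doe@example.com, key sk-abc123def456ghi789"
    print(redact(prompt))
```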

For example, the company may be fine with using AI to write code. However, it would also mandate that all code undergo security scanning and have a software bill of materials (SBOM) generated to weed out bad code and non-existent or even malicious dependencies.
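To make that concrete, a build pipeline could gate merges on a check like the one below, which flags SBOM components that are not on a vetted allowlist. It assumes a CycloneDX JSON SBOM (a format tools such as Syft can emit); the file name and allowlist are illustrative.

```python
# sbom_gate.py -- flag SBOM components that are not on an approved list.
# A minimal sketch assuming a CycloneDX JSON SBOM; the file name and
# allowlist are illustrative.
import json

APPROVED = {"requests", "flask", "numpy"}  # your vetted dependency list

def unapproved_components(sbom_path: str) -> list[str]:
    with open(sbom_path) as f:
        sbom = json.load(f)
    # CycloneDX JSON keeps dependencies in a top-level "components" array.
    names = [c.get("name", "") for c in sbom.get("components", [])]
    return [name for name in names if name not in APPROVED]

if __name__ == "__main__":
    flagged = unapproved_components("sbom.cdx.json")
    if flagged:
        print("Components requiring review:")
        for name in flagged:
            print(f"  - {name}")
    else:
        print("All components are on the approved list.")
```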

Step 4: Establish oversight and governance

Defining an oversight and governance framework will help validate that the plan works and simplify compliance with the EU’s Artificial Intelligence Act and similar regulations. Luckily, there are many resources available for designing an AI governance strategy, including the NIST AI Risk Management Framework (AI RMF) and ISO/IEC 42001.

Step 5: Keep your policies current

Less than two years ago, ChatGPT didn’t exist; now GenAI is a household name. AI tech changes rapidly, and your AI risk management strategy will need to evolve with it. Keep your policies up to date by integrating:

  • Feedback loops: Some risk management efforts won’t work, either because they’re too loose or too restrictive. Building in feedback loops enables these problems to be fixed before they become bigger issues.

  • Continuous improvement: No first draft is perfect. Define metrics for program success (a toy example follows this list) and plans for improving over time.

  • Process updates: Things change, and the AI risks of tomorrow may be very different from today. Keeping tabs on the tech landscape and performing regular reviews helps ensure that the program is still effective.
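Even tracking a handful of numbers per review cycle makes “continuous improvement” measurable. The metric names and values below are invented purely for illustration.

```python
# policy_metrics.py -- toy tracker for AI-policy health across quarterly
# reviews. Metric names and numbers are invented for illustration.
reviews = [
    {"quarter": "Q1", "tools_inventoried": 12, "tools_risk_reviewed": 5, "policy_exceptions": 9},
    {"quarter": "Q2", "tools_inventoried": 18, "tools_risk_reviewed": 14, "policy_exceptions": 4},
]

for r in reviews:
    # Coverage: share of known AI tools that have completed a risk review.
    coverage = r["tools_risk_reviewed"] / r["tools_inventoried"]
    print(f'{r["quarter"]}: review coverage {coverage:.0%}, open exceptions {r["policy_exceptions"]}')
```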

Getting started with your AI risk management framework

AI is making its way into most software, and regulators are working to define the rules of the road. Getting ahead of the curve and defining your AI risk management strategy today will pay off as AI usage and tools explode in the future.

The best place to get started with this is figuring out what you need to protect. Dig into those DNS records. Ask employees how they’re using AI. Ask them how they want to use AI. See if your company has a five-year plan that includes AI.

Once you have a better idea of your current AI footprint and how it might expand, consider all of the ways it can go wrong. Sensitive data is put into the system. The AI breaks. It makes the wrong decision. 

Brainstorm how you could avoid your worst-case scenario and manage the impact of the most likely issues. That’s the core of your risk management strategy.

Boost your team's AI threat detection & remediation skills

The Artificial Intelligence (AI) and Machine Learning (ML) labs on HTB Dedicated Labs provide insights into common attacks on AI/ML systems, emphasizing the underlying principles and demonstrating how insecure implementations may compromise sensitive information or enable unauthorized access. Test your team’s ability to:

  • Identify and exploit insecure ML implementations.

  • Exploit classic vulnerabilities through AI systems.

  • Bypass face verification systems.

  • Write machine learning programs in Python.

  • Train classification models for membership inference attacks.
