25.11.2024
Reading time: 4-5 minutes

AI security: What CISOs need to know

Software Improvement Group
Yellow dots
Twitter icon on a yellow background
Youtube icon on a yellow background
Instagram icon on a yellow background
Icon of a mail symbol in a yellow background

In this article​

Artificial intelligence (AI) is much more than just a buzzword—it’s the fastest-adopted business technology in history. With 77% of companies already using or exploring AI, and 83% making it a top priority in their business strategies, its impact is undeniable.

From automating manual tasks to boosting employee productivity by up to 66%, AI is opening doors to innovation and growth. But with nearly 75% of AI systems facing serious quality issues, its risks can’t be ignored.

During SCOPE 2024, our IT leadership event, Rob van der Veer explained the quality issues of AI systems and what this means.

As AI becomes a core part of business operations, managing its security is crucial. For Chief Information Security Officers (CISOs), this means protecting your organization’s systems from emerging AI threats while ensuring you comply with security and compliance standards.

This may sound daunting; But it doesn’t have to be.

At the end of the day, AI systems are software systems but with some new, unique risks to address. So, instead of building an entirely new security framework for AI, we strongly recommend building upon your existing security management processes.

This approach not only prevents AI from being treated in isolation or duplicating efforts unnecessarily but also broadens the scope of risk analysis, awareness, training, policies, tools, and verification methods. It benefits both end users and developers, including AI teams.

In this blog post, we’ll break down the key steps CISOs should take to become AI ready. These insights are taken from our free AI readiness guide for organizations available for download here.

So, let’s dive in.

Step 1: Integrating AI-specific assets into security frameworks

AI systems introduce a range of unique technical assets that must be integrated into your overall security framework. For a CISO, understanding these assets is essential to ensure full protection.

What are AI-specific assets?

AI systems come with specific elements that need safeguarding, including:

  • Training data: The foundational data used to train models.
  • Test data: Data used to evaluate the model’s accuracy.
  • Externally sourced data: Any external data used for training or testing purposes.
  • The model itself (model parameters): The adjustable values that evolve during training.
  • Documentation: Records of model development, including experimental results.
  • Model input: The data fed into the model during operation.
  • Model output: The results generated by the model, which should be treated as untrusted if the training data or model isn’t trusted.
  • Sufficiently correct model behavior: Ensuring the integrity of model performance.
  • Externally sourced models: Pre-trained models or frameworks acquired externally.

Adding AI assets to your Information Security Management System

To manage these assets effectively, it’s important to integrate them into your organization’s Information Security Management System (ISMS). This ensures AI components receive the same level of oversight and protection as other critical IT assets.

In case you need a refresher, an ISMS is a systematic approach to safeguarding sensitive company data. It typically encompasses policies, procedures, and controls designed to protect and manage an organization’s information effectively.

Chances are, you already have an ISMS in place. Rather than building a new framework from scratch, incorporating AI-specific assets into your existing ISMS expands its scope to handle the unique challenges AI presents. This enhances your security posture and ensures that AI governance aligns seamlessly with your overall security strategy.

Step 2: Addressing AI-specific security threats

Unfortunately, the rapid adoption of AI has also amplified cybersecurity challenges.

Common AI security threats

AI presents a mix of traditional and new security threats that CISOs need to identify and address. Let’s take a look at some common examples:

Data poisoning

This occurs when malicious actors manipulate AI training data to alter the model’s behavior, causing it to make flawed or harmful decisions.

One notable example was when attackers fed Microsoft’s Twitter chatbot, Tay, harmful content, corrupting its training data and leading to inappropriate outputs. This permanently altered the chatbot’s output and led Microsoft to take Tay offline.

Close-up of a laptop screen with green lines of code, and a hand on the keyboard in a dark setting representing a hacker.

At our recent SCOPE 2024 conference, Dr. Lars Ruddigkeit from Microsoft shared his concerns about the deliberate introduction of flawed code into public repositories, posing significant security risks.

As he explained: “…there are people outside that on purpose want to make unsecure code in GitHub that a developer fetches with unsecure code that later there’s a backdoor in your software.”

Dr. Ruddigkeit went on to mention the major security breach earlier this year where hackers managed to sneak harmful code into XZ Utils, a tool used by many Linux systems. This meant they could secretly control affected computers and potentially steal data or cause damage without being detected.

Fortunately, the issue was caught by a Microsoft engineer before it could spread too widely, but it was a serious reminder of the dangers of malicious code in popular software.

Model theft

Proprietary AI models are highly attractive targets for attackers seeking to steal or reverse-engineer them for financial gain or a competitive edge.

For example, in the field of Natural Language Processing (NLP), companies that invest heavily in developing chatbots have discovered nearly identical chatbots on competitor platforms, mimicking their responses and conversation flows. This undermines years of research and development and gives malicious actors a fast-track to market success.

Input manipulation

In adversarial attacks, inputs are subtly modified to deceive an AI system.

This was seen when researchers demonstrated that self-driving cars can be misled by subtle alterations to road signs. In one study, they applied small stickers to a stop sign, causing the vehicle’s recognition system to misinterpret it as a speed limit sign.

Three views of a red stop sign held by a person indoors with tape covering parts of the letters.
Source: Paper 'Robust Physical-World Attacks on Machine Learning Models', by researchers at University of Washington, University of Michigan, Stony Brook University, and UC Berkeley

Such input manipulation can trick AI into making wrong or dangerous decisions, which is especially risky in situations where safety is crucial.

Mitigating AI-specific threats

AI’s dependence on vast datasets and autonomous decision-making processes makes it especially susceptible to exploitation by malicious actors. Securing AI goes beyond traditional risk controls like encryption and necessitates specialized measures, such as input segregation, to defend against attacks like prompt injection.

By integrating these AI-specific security controls into existing protocols, CISOs can build a more comprehensive defense strategy to protect their organization’s AI assets.

For a deeper dive into potential threats, their impact, and attack surfaces, refer to OWASP AI Exchange threat frameworks or any of the threat frameworks in the OWASP AI Exchange references.

Step 3: Securing the AI supply chain

AI systems often rely on external elements like third-party data and pre-trained models, which introduce potential vulnerabilities. Managing these risks effectively is essential to maintaining security.

Key supply chain controls for AI

  • Vendor management: Verify that all AI vendors adhere to strict security standards. Conduct due diligence to ensure the safety and reliability of the AI components they supply.
  • Model documentation: Keep comprehensive records of AI models, detailing their data sources and development processes. This enhances transparency and accountability, making it easier to trace any issues.

Lifecycle management

For organizations that build their own AI models, regular reviews and updates are crucial to maintain performance and security over time. This involves retraining models as needed and carefully tracking the entire AI lifecycle to address any potential issues.

Just like traditional software security, it’s essential to “shift left” and integrate security measures from the start of AI development. By building in security and privacy considerations from the outset, AI models can be optimized for performance while maintaining strong protective boundaries.

This prevents the costly and complex process of adding security measures after development, ensuring a smoother, safer implementation.

Step 4: Aligning AI security with broader organizational security

Rather than creating separate, standalone security protocols for AI, CISOs should integrate AI security measures into the organization’s overall security framework. This can be done through:

Security awareness training

Educate employees on AI-specific threats, such as AI-driven phishing scams or deepfake technology. Include these potential risks in security training to ensure staff can recognize and respond to them. For instance, training might cover how deepfake technology can mimic a CEO’s voice to deceive employees over the phone.

Updating your organization’s risk repository to include AI-powered threats such as deepfake scams and automated attacks. Regularly evaluate these risks to determine if current security measures need adjustments or enhancements.

Development-time security

Incorporate secure practices during the development of AI systems. This means that AI engineers and security teams should collaborate from the start to proactively address potential vulnerabilities. By embedding security considerations early in the development cycle, you can prevent costly fixes later on and ensure robust protection.

AI security audits and monitoring

Conduct regular audits of AI systems to confirm they adhere to established security standards. Implement continuous monitoring to detect any anomalies or potential breaches in real-time, enabling a swift response to any emerging threats. This proactive approach ensures that AI security remains aligned with the broader organizational strategy and is consistently maintained.

Step 5: Mitigating AI risks during operation

While developing secure AI is essential, maintaining its security during operation is just as important. Ensuring that AI systems run safely and effectively requires a proactive approach to monitoring and oversight.

Human oversight and safeguards

A significant risk with AI is the tendency to rely too heavily on autonomous systems. To mitigate this, it’s essential to maintain human oversight over critical AI operations. This ensures that if AI makes an incorrect or potentially harmful decision, it can be identified and addressed swiftly.

Continuous validation of AI models

Ongoing validation is key to ensuring AI systems remain secure and effective. Regularly test and monitor models to detect any signs of malfunction or external manipulation, especially after updates or changes. This helps verify that the models are functioning as intended and minimizes the potential for errors or vulnerabilities to go unnoticed.

By establishing these practices, CISOs can keep their AI systems reliable and secure throughout their operational lifecycle.

CISO’s role in ensuring AI security

As we’ve learnt above, AI brings new and complex security issues that CISOs must handle to keep data safe, maintain model integrity, and prevent attacks.

Securing AI takes teamwork across different departments. CISOs need to make sure AI security is part of the overall company strategy and must stay updated as new threats appear.

Work closely with your Governance, Risk, and Compliance (GRC) teams to create strong AI governance frameworks and make sure everyone understands the basics of AI security. This teamwork helps spot risks, set up proper controls, and build a sense of accountability across the company.

Collaborate with the CTO’s office to train and guide AI teams on safe and secure software development practices. Work with all teams to manage AI-specific assets and risks. It’s also helpful to have AI engineers and traditional software developers work together to spot and fix any security issues as they arise.

For more advice on how to use AI safely and effectively, our AI readiness guide gives clear, practical steps for board members, executives, and IT leaders to be ready for AI’s opportunities while managing its risks. Download your copy for free today.

Experience Sigrid live

Request your demo of the Sigrid® | Software Assurance Platform:
  • This field is for validation purposes and should be left unchanged.