How GRC teams can govern AI with an AI Management System (AIMS)
In this article
Summary
AI presents some impressive opportunities for improved efficiency and turnover. But its risks are too serious to ignore. To maximize AI’s potential while minimizing its risks, GRC teams must first establish an AI Management System (AIMS). This article will take you through how to build an AIMS for safe, secure, and transparent AI use.
Since OpenAI’s launch of ChatGPT in November 2022, artificial intelligence (AI) has woven its way into almost every industry. In 2023, less than a third of organizations globally reported using AI in their operations. Fast forward one year, and 72% of businesses are leveraging AI for at least one business function.
Like any breakthrough technology, it’s no surprise that business leaders are eager to jump on board the AI train—not wanting to miss out on AI’s potential to revolutionize productivity.
Indeed, AI carries the potential to automate many time-consuming, repetitive tasks that come with running a business. In fact, a recent McKinsey study estimated that current generative AI and similar technologies could automate the work activities that consume 60-70% of employees’ time today.
Yet, while AI marks an impressive step forward for technology and society, it doesn’t come without risks. If left unaddressed, these risks can result in serious consequences for businesses, as well as the environment and public around them.
GRC is an organizational approach designed to manage governance, risks, and compliance with industry and government regulations. Simply put, it helps businesses efficiently handle IT and security risks, cut costs, minimize uncertainty, and ensure compliance. In today’s world, AI plays an integral role in this process.
But how and where to get started?
Mitigating AI risks through governance
The good news is that managing AI risks doesn’t have to be overwhelming. By treating AI for what it is—software, albeit with some unique challenges—organizations can govern it by building upon the same frameworks used for traditional software.
This is where governance, risk, and compliance (GRC) come into play. GRC ensures that the AI systems integrated into your business are safe, legal, and trustworthy. Depending on the size of your business, you may already have a dedicated GRC team, or it may fall under another function.

The key to effective AI governance is the development of a robust AI Management System (AIMS). Similar to how an Information Security Management System (ISMS) governs traditional software, an AIMS offers the structured oversight essential for managing AI safely and responsibly.
In this article, we’ll walk GRC teams through the five steps to take to implement AI in a way that’s safe, secure, and compliant for years to come.
Step 1: Establish an AI Management System (AIMS)
An AI Management System (AIMS) serves as a strategic framework that guides businesses through their AI integration journey, focusing on minimizing risks and maximizing AI’s benefits. The process of building an AIMS is tailored to each organization, shaped by its specific AI applications, industry requirements, and applicable regulations.
A good starting point is ISO/IEC 42001, the new international standard that “specifies requirements for establishing, implementing, maintaining, and continually improving an AIMS within organizations.” This, alongside other AI-specific ISOs, can help businesses build out a strong AIMS framework centered around three core principles:
- AI governance: Preparing for AI failures, security breaches, and compliance issues, and having processes in place to address them. For example: AI models making mistakes and sending sensitive information outside security-cleared individuals, or gaps in security allowing AI systems and applications to be hacked.
- AI risk management: Establishing clear guidelines to manage AI risks.
- AI compliance: Ensuring that AI systems comply with relevant regulations, such as ISO/IEC standards and the EU AI Act.
An AIMS can also be built upon your existing management system frameworks, such as your ISMS. If you go down this path, ISO/IEC 42001 can help you restructure your existing frameworks to meet your AI management needs.

Why building an AIMS matters
There’s no denying AI’s potential is pretty impressive–from self-driving cars to robots fulfilling this year’s Black Friday orders. But its risks, such as spreading misinformation or copyright infringement, can’t be overlooked.
We could think of AI like the rise of nuclear technology in the 20th century. On the one hand, this new technology introduced devastating weaponry. Yet, it also enabled some of the world’s cleanest energy, supplying around 9% of the world’s electricity in 2023.
In this context, an AIMS is the framework needed to ensure AI solutions drive greater value and efficiency, without detonating. It protects businesses from AI risks such as reputational damage or regulatory penalties while enabling them to tap into AI’s opportunities, from improved job satisfaction and creativity to improved efficiency and productivity.
Step 2: Take stock of your AI applications
After developing your AIMS, it’s time to create an inventory of your organization’s current AI usage. This inventory provides a clear overview of all AI-related initiatives, helping you to better assess AI’s opportunities and risks.
What to include in the inventory of AI applications
There are three types of AI systems to account for in your inventory:
- Your current AI applications: Identify AI systems currently in use and their purpose.
- AI innovations in development: Track AI projects currently in R&D or in the planning phase.
- External AI tools: Include a list of third-party AI solutions you use, such as those used for marketing automation (like ChatGPT) or fraud detection.
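As an illustration, the three inventory categories above can be captured as simple structured records. This is a minimal sketch; the field names, categories, and example entries are hypothetical, not a prescribed schema:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class AIApplication:
    """One entry in the AI inventory (illustrative fields only)."""
    name: str
    category: str            # "in_use", "in_development", or "third_party"
    purpose: str
    owner: str               # team accountable for the system
    vendor: Optional[str] = None  # set only for third-party tools


# Example inventory covering the three system types above
inventory = [
    AIApplication("fraud-detector", "in_use", "Flag anomalous transactions", "Risk"),
    AIApplication("support-summarizer", "in_development", "Summarize support tickets", "IT"),
    AIApplication("ChatGPT", "third_party", "Marketing copy drafts", "Marketing", vendor="OpenAI"),
]

# Slice the inventory by category, e.g. to review all third-party tools
third_party = [a.name for a in inventory if a.category == "third_party"]
```

Even a lightweight record like this makes the later risk evaluation easier, because each system already has a named owner and a stated purpose.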
Maintaining the AI Inventory
It’s crucial to recognize that creating an AI inventory is not a one-time, static task. As AI continues to evolve within businesses, your AI inventory must evolve as well.
IT leaders and GRC teams must consistently update and maintain their inventories to account for new AI developments and the decommissioning of outdated systems. Ongoing collaboration with IT and risk management teams is essential to ensure the inventory remains accurate and up to date.
Step 3: Evaluate AI applications for risks and opportunities
With your AI systems inventoried, you can now move on to evaluating the data for risks and opportunities. This will help you identify where AI’s risks may lie, while also revealing where AI could benefit your business.
Early detection of AI risks is crucial for avoiding costly mistakes. Similarly, early detection of opportunities gives your company the best chance at benefiting from AI integration.
The regularity of your evaluation is equally key here. AI models and algorithms are built to learn, adapt, and grow autonomously, meaning that their scope will expand over time. Regular evaluation keeps AI systems safe, secure, and compliant throughout their lifecycles.
Key AI risk areas to look out for
Here are some of the key risk areas to assess during your AI inventory evaluation:
- Compliance risks: Assess whether your AI applications meet current (and future) regulatory requirements.
- Ethical considerations: Evaluate whether your AI systems could conflict with company ethics—for example, by replacing human tasks, with knock-on effects on job retention, skill retention, employee satisfaction, and motivation.
- Security vulnerabilities: Identify risks in your AI systems related to data privacy, model manipulation, and other such serious security issues.
- Impact on the organization: Assess the general impact of AI use on your business—for example, the risk of AI breaching confidentiality, producing inaccurate information or analysis, or informing poor decision-making.
- Impact on individuals and society: Lastly, assess the broader societal implications of your AI use, such as regarding:
- The lawfulness and ethics of your AI use
- The environmental impact of your AI use
- The potential for discriminatory bias in your AI algorithms and apps
- The transparency of your AI use
- The end-user’s right to object to AI use or communication
- The effects of AI on data protection (e.g. GDPR in the EU)
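One lightweight way to make such an evaluation actionable is a risk register that scores each identified risk by likelihood and impact and sorts by priority. A minimal sketch—the risk names and scores below are purely illustrative, not an assessment methodology the article prescribes:

```python
# Score each identified risk by likelihood x impact (1-5 each).
risks = [
    {"risk": "GDPR non-compliance", "likelihood": 3, "impact": 5},
    {"risk": "Discriminatory bias in model output", "likelihood": 2, "impact": 4},
    {"risk": "Model manipulation (prompt injection)", "likelihood": 4, "impact": 3},
]

for r in risks:
    r["score"] = r["likelihood"] * r["impact"]

# Highest score first: these risks get mitigated first
prioritized = sorted(risks, key=lambda r: r["score"], reverse=True)
top_risk = prioritized[0]["risk"]
```

Re-running this scoring at a regular cadence, as recommended above, keeps the register current as AI systems evolve.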

Mitigating AI risks in business
Once you’ve identified the risks in your AI inventory, it’s time to address them. This could mean:
- Making changes to risky AI models or discontinuing them altogether.
- Implementing additional security controls.
- Limiting an AI application’s scope.
- Updating your customer-facing disclaimers and terms and conditions to make your use of AI transparent and understood by users.
- Upskilling and retraining your staff to ensure job retention, instill a greater understanding of AI, and enable better management of your AI systems.
Step 4: Create and implement AI policies
Part of preparing your business for AI adoption involves creating specific company policies concerning your approach to AI and AI management. AI policy creation is a cornerstone of any successful AIMS.
Developing AI governance policies
To manage AI effectively, organizations must create clear policies that outline the approved uses of AI within the company and specify which AI activities are to be prohibited.
These policies should align both with the company’s internal goals and external regulations such as the EU AI Act or the Colorado AI Act. They can also be informed by insights gained from your risk assessment evaluation and relevant ISO standards.
AI law is currently a fragmented regulatory landscape that is constantly growing and changing. As such, the policies within your AIMS must be regularly reassessed and updated to ensure ongoing AI compliance.
3 key policy considerations
- Data management: Policy should define the rules around using AI to deal with sensitive data, and define what types of data the organization will train/use their AI models with.
- Third-party AI tools: Policy should establish clear, set guidelines for evaluating and using external AI solutions. For example, are LLMs to be deemed too sophisticated and corruptible to be used in public-facing customer service, or is this considered an acceptable risk?
- Ethical AI use: Policy should set ethical standards for developing and deploying AI systems, defining the limits of AI use within the organization in order to protect and preserve the company’s ethics.
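Policy considerations like these can also be encoded as data, so that decisions about proposed AI uses are consistent and auditable rather than ad hoc. A minimal sketch, with hypothetical use-case names and verdicts:

```python
# Hypothetical policy table mapping AI use cases to verdicts.
# The use-case names and verdicts are illustrative assumptions.
POLICY = {
    "train_on_sensitive_data": "prohibited",       # data management rule
    "llm_customer_service": "review_required",     # third-party tool rule
    "internal_code_assistant": "approved",         # approved internal use
}


def check_use(use_case: str) -> str:
    """Return the policy verdict for a proposed AI use case.

    Anything not explicitly listed defaults to manual review,
    which errs on the side of caution for novel uses.
    """
    return POLICY.get(use_case, "review_required")
```

Defaulting unknown use cases to review, rather than approval, mirrors the article's point that the regulatory landscape is still shifting.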
Integrating AI policies into existing frameworks
Rather than creating entirely new policies for AI, organizations should build on their existing governance structures, such as data protection policies and risk management frameworks.
There’s no need to treat AI in isolation. It should be managed the same way you would deal with a new type of software. This means adapting your existing policies, frameworks, and management systems instead of building new ones from the ground up.
Step 5: Bake AI-readiness into your organization
The final step in developing an AIMS is to ensure that, once deployed in your organization, it is allowed to mature alongside the growth of future AI integration. In other words, it is necessary to bake AI-readiness into your company so that, as the technology and its use develop, your AIMS adapts too.
To do so, we recommend creating a roadmap that aligns your AI objectives with your newly drafted AI policies. This allows you to establish clear goals and track your progress toward them. You can also use AI-readiness assessments, such as the one co-developed by SIG and EXIN, to track your AI-readiness maturity.
It’s time to get AI-ready
We hope the above insights have shed some light on why businesses need to get AI-ready as soon as possible. So far, AI technologies have evolved in a largely unregulated landscape, making AI a risky tool for businesses.
Understandably, business leaders don’t want to lag behind competitors or miss out on the vast opportunities AI offers. To ensure you’re prepared to adopt AI tools and applications—and potentially even develop your own—it’s essential to first establish a robust AIMS by following the steps outlined in this article.
To learn more about AI governance, risk management, and compliance, download our free AI readiness guide today.