What the draft European Union AI regulations mean for business

As artificial intelligence (AI) becomes increasingly embedded in the fabric of business and our everyday lives, both corporations and consumer-advocacy groups have lobbied for clearer rules to ensure that it is used fairly. In May, the European Union became the first governmental body in the world to issue a comprehensive response in the form of draft regulations aimed specifically at the development and use of AI. The proposed regulations would apply to any AI system used or providing outputs within the European Union, with implications for organizations around the world.

Our research shows that many organizations still have a lot of work to do to prepare themselves for this regulation and address the risks associated with AI more broadly. In 2020, only 48 percent of organizations reported that they recognized regulatory-compliance risks, and even fewer (38 percent) reported actively working to address them. Far smaller proportions of the companies surveyed recognized other glaring risks, such as those around reputation, privacy, and fairness.

These statistics are alarming given the well-publicized incidents in which AI has gone awry and because AI-related regulations, such as Europe’s General Data Protection Regulation (GDPR), already exist in parts of the world.

AI presents tremendous opportunity for advancements in technology and society. It’s revolutionizing how organizations create value in industries from healthcare to mining to financial services. For companies to continue to innovate with AI at the pace required to remain competitive and reap the greatest return on their AI investments, they need to address the technology’s risks. Our research supports this: companies seeing the highest returns from AI are far more likely to report that they’re engaged in active risk mitigation than are those whose results are less impressive.

Although the EU regulation is not yet in force, it provides clear insight into the future of AI regulation as a whole. Now is the time to begin understanding its implications and take steps to prepare for them as well as for the regulations that are sure to follow. Such steps will also support compliance with current regulations and mitigate other, nonregulatory AI risks.

This article provides an overview of the proposed EU AI regulation and three key actions organizations can take to develop a comprehensive AI risk-management program that best enables them to meet regulatory requirements; minimize the legal, reputational, and commercial risks of using AI; and ensure that their AI systems are used fairly and ethically.

What types of AI systems fall under the proposed EU regulation?

The regulation divides AI systems into three categories: unacceptable-risk AI systems, high-risk AI systems, and limited- and minimal-risk AI systems (Exhibit 1). Organizations can use this framework as a starting point in developing their own internal risk-based taxonomies. However, they should understand that the regulation’s risk framework focuses exclusively on the risks AI poses for the public, not the broader set of AI risks to organizations themselves—for example, the risk of losses due to misclassified inventory.

[Exhibit 1]

Unacceptable-risk AI systems include (1) subliminal, manipulative, or exploitative systems that cause harm, (2) real-time, remote biometric identification systems used in public spaces for law enforcement, and (3) all forms of social scoring, such as AI or technology that evaluates an individual’s trustworthiness based on social behavior or predicted personality traits.

High-risk AI systems include those that evaluate consumer creditworthiness, assist with recruiting or managing employees, or use biometric identification, as well as others that are less relevant to business organizations. Under the proposed regulation, the European Union would review and potentially update the list of systems included in this category on an annual basis.

Limited- and minimal-risk AI systems include many of the AI applications currently used throughout the business world, such as AI chatbots and AI-powered inventory management.
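
To make the taxonomy starting point concrete, the sketch below shows one way an organization might encode the EU tiers internally. It is a minimal illustration in Python; the class names, fields, and example entries are our own assumptions, not constructs defined by the regulation, and the record is extended with the organizational risks the regulation does not cover.

```python
from dataclasses import dataclass, field
from enum import Enum

class EURiskTier(Enum):
    UNACCEPTABLE = "unacceptable"        # prohibited outright, e.g., social scoring
    HIGH = "high"                        # e.g., credit scoring, recruiting, biometric ID
    LIMITED_MINIMAL = "limited/minimal"  # e.g., chatbots, inventory management

@dataclass
class AISystemRecord:
    name: str
    purpose: str
    eu_tier: EURiskTier
    # Internal taxonomies should also capture risks the regulation ignores,
    # such as commercial losses from model error (hypothetical field)
    internal_risks: list = field(default_factory=list)

record = AISystemRecord("credit-model", "evaluate consumer creditworthiness",
                        EURiskTier.HIGH, ["losses from mispriced credit"])
```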

If an AI system uses EU data but otherwise falls outside the scope of the regulation, it would not be subject to the draft AI regulation. It would, however, still be subject to GDPR.

What requirements does the regulation place on organizations using or providing AI systems?

The draft regulation proposes different requirements for AI systems depending on their level of risk.

Systems in the unacceptable-risk category would no longer be permitted in the European Union.

As currently proposed, high-risk systems would be subject to the largest set of requirements, including human oversight, transparency, cybersecurity, risk management, data quality, monitoring, and reporting obligations (Exhibit 2).

[Exhibit 2]

The draft regulation would impose oversight obligations on those building, selling, or using high-risk systems, including:

  1. “conformity assessments,” which are algorithmic impact assessments that analyze data sets, biases, how users interact with the system, and the overall design and monitoring of system outputs (a minimal bias-metric sketch follows this list)
  2. ensuring that these systems are explainable, subject to human oversight, and able to perform consistently throughout their lifetime, even on edge cases
  3. establishment of an organization-wide cyberrisk-management practice that includes AI-specific risks, such as adversarial attacks on AI systems
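
To illustrate the first of these obligations, the bias analysis within a conformity assessment might compute simple fairness metrics over a system's decisions. The following is a minimal sketch under assumed data and group labels, not a complete assessment methodology:

```python
def disparate_impact_ratio(outcomes, groups, protected, reference):
    """Ratio of favorable-outcome rates: protected group vs. reference group.

    outcomes: list of 0/1 model decisions (1 = favorable)
    groups:   list of group labels aligned with outcomes
    A ratio well below 1.0 flags potential disparate impact worth documenting.
    """
    def rate(g):
        decisions = [o for o, grp in zip(outcomes, groups) if grp == g]
        return sum(decisions) / len(decisions) if decisions else float("nan")
    return rate(protected) / rate(reference)

# Hypothetical decisions for applicants in groups "A" and "B"
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
labels    = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(f"Disparate-impact ratio (B vs. A): "
      f"{disparate_impact_ratio(decisions, labels, 'B', 'A'):.2f}")
```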

Systems seen as posing limited or minimal risk would have significantly fewer requirements, primarily in the form of specific transparency obligations, such as:

  • making users aware that they are interacting with a machine so that they can make an informed decision about continuing
  • making it clear when a system uses emotion recognition or biometric classification
  • notifying users when image, audio, or video content has been generated or manipulated by AI to falsely represent its content, such as an AI-generated video showing a public figure making a statement that was never, in fact, made

The requirement to make users aware that they are interacting with a machine will apply to systems in all risk categories.
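
As a minimal illustration of these transparency obligations, a chatbot response could carry both a machine disclosure and a label for synthetic media. The notice wording, function, and response format below are our own assumptions, not text prescribed by the regulation:

```python
AI_DISCLOSURE = "You are interacting with an automated AI assistant, not a human."

def wrap_response(text: str, ai_generated_media: bool = False) -> dict:
    """Attach the transparency notices the draft regulation describes (illustrative)."""
    response = {"disclosure": AI_DISCLOSURE, "text": text}
    if ai_generated_media:
        # Label synthetic image, audio, or video so users are not misled
        response["content_label"] = "AI-generated or AI-manipulated content"
    return response

print(wrap_response("Your order ships tomorrow."))
```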

Does the regulation affect only companies in the European Union?

No. The regulation would have extraterritorial reach, meaning that any AI system providing output within the European Union would be subject to it, regardless of where the provider or user is located. Individuals or companies located within the European Union, placing an AI system on the market in the European Union, or using an AI system within the European Union would also be subject to the regulation.

What penalties could the regulation impose?

Enforcement could include fines of up to €30 million or 6 percent of global revenue, whichever is higher, making penalties even heftier than those for violating GDPR. The use of prohibited systems and the violation of the data-governance provisions when using high-risk systems would incur the largest potential fines. All other violations would be subject to a lower maximum of €20 million or 4 percent of global revenue, and providing incorrect or misleading information to authorities would carry a maximum penalty of €10 million or 2 percent of global revenue.
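
The exposure arithmetic is worth making concrete: for each tier, the cap is the greater of the fixed amount and the revenue percentage, the same "whichever is higher" construction GDPR uses. The sketch below is our own illustration of that calculation, not official guidance:

```python
# Maximum-fine tiers in the draft regulation: (fixed cap in EUR, share of global revenue)
FINE_TIERS = {
    "prohibited_use_or_data_governance": (30_000_000, 0.06),
    "other_violations": (20_000_000, 0.04),
    "incorrect_information_to_authorities": (10_000_000, 0.02),
}

def max_fine(tier: str, global_revenue_eur: float) -> float:
    """Return the maximum fine: the higher of the fixed cap and the revenue share."""
    fixed_cap, revenue_share = FINE_TIERS[tier]
    return max(fixed_cap, revenue_share * global_revenue_eur)

# A company with €2 billion in global revenue faces up to €120 million for using
# a prohibited system, since 6 percent of revenue exceeds the €30 million floor.
print(f"€{max_fine('prohibited_use_or_data_governance', 2_000_000_000):,.0f}")
```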

Although enforcement rests with member states, as is the case for GDPR, we can expect that the penalties will be phased in, with the initial enforcement efforts concentrating on those who are not attempting to comply with the regulation. We also expect to see ample material on how to comply with the regulation as well as interpretive notes.

When might the regulation go into effect?

Although there’s no way to know for sure, the timeline for the adoption of GDPR, which was proposed in 2012, adopted in 2016, and went into effect in 2018, could provide some guidance.

Regardless of the timeline, however, there are many existing laws and regulations that already apply to AI usage, and organizations need to understand and comply with them, particularly since some of them are sector specific and do not explicitly reference AI. In the European Union, for example, GDPR already requires explicit consent from individuals before they are subject to decisions based solely on automated processing. In April 2020, the Canadian federal government began requiring algorithmic impact assessments for all automated decision-making systems delivered to the federal government. In March 2021, some of the largest financial regulators in the United States requested information from financial institutions on their use of AI and noted that existing agency guidance and laws already apply. Additionally, in April 2021, the United States Federal Trade Commission published a blog post clarifying its authority under existing law to pursue enforcement actions against organizations that fail to mitigate AI bias or engage in other unfair or harmful actions through the use of AI.

How can organizations prepare for the EU and future regulations?

The draft EU regulation is simply one step in what will become a global effort to manage the risks associated with AI. The sooner organizations adapt to this regulatory reality, the greater will be their long-term success with this critical technology.

At the beginning of its AI journey, every organization should establish a comprehensive AI risk-management program integrated within its business operations. The program should include an inventory of all AI systems used by the organization, a risk-classification system, risk-mitigation measures, independent audits, data-risk-management processes, and an AI governance structure.

As a foundation, an organization will need a few critical components: a holistic strategy for prioritizing the role AI will play within the organization; clear reporting structures that allow for multiple checks of the AI system before it goes live; and finally—because many AI systems process sensitive personal data—robust data-privacy and cybersecurity risk-management protocols.

With these foundations in place, organizations can take three actions to begin building out their full AI risk-management program systematically: create an inventory of AI systems and risk-mitigation measures based on a standard taxonomy, conduct conformity assessments, and establish an AI governance system (Exhibit 3). These steps can form a continuous feedback loop in which issues highlighted during a conformity assessment can be built into monitoring systems and eventually updated in the inventory and taxonomy.

[Exhibit 3]

Build an inventory of AI systems and risk-mitigation measures based on a standard taxonomy

An obvious key to any organization’s AI regulatory-compliance program is understanding exactly where and how it is deploying AI. Organizations should create and maintain comprehensive inventories containing descriptions of all AI systems associated with both current and planned use cases, along with risk classifications for each system. They can then use these inventories to map their AI systems against existing and potential future regulations to identify and address any gaps in compliance.

Legal personnel can develop a standardized taxonomy for risk classifications that is in line with current and potential regulations. This will enable business, technical, and legal personnel to quickly identify unacceptable and high-risk AI systems across the organization and take risk-mitigation measures.
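
A hedged sketch of such an inventory follows; the field names, risk classes, and gap-checking logic are illustrative assumptions rather than regulatory requirements:

```python
from dataclasses import dataclass, field

# Illustrative mitigations demanded per risk class in the internal taxonomy
REQUIRED_MITIGATIONS = {
    "high": {"conformity assessment", "human oversight", "ongoing monitoring"},
    "limited/minimal": {"machine disclosure"},
}

@dataclass
class InventoryEntry:
    system_name: str
    use_case: str
    risk_class: str          # classification under the standard taxonomy
    status: str              # "planned" or "in production"
    mitigations: list = field(default_factory=list)

def compliance_gaps(inventory):
    """Flag systems whose risk class requires mitigations they do not yet have."""
    gaps = {}
    for entry in inventory:
        missing = REQUIRED_MITIGATIONS.get(entry.risk_class, set()) - set(entry.mitigations)
        if missing:
            gaps[entry.system_name] = sorted(missing)
    return gaps

inventory = [
    InventoryEntry("resume-screener", "recruiting support", "high",
                   "in production", ["human oversight"]),
]
print(compliance_gaps(inventory))
# {'resume-screener': ['conformity assessment', 'ongoing monitoring']}
```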

As a further step to help streamline compliance, organizations should evaluate and budget for the best available tools on the market that can help them address AI-related risk.

Start implementing conformity assessments now

Under the proposed EU regulation, organizations would be required to conduct conformity assessments for all high-risk AI systems. Analogous to the privacy-impact assessments currently required in various regions, the conformity assessment is a review of each AI system to see whether it meets applicable regulations and other relevant standards.

These assessments should include all information required under the proposed EU regulation, such as the following:

  • documentation of the various choices made when developing the AI system, including its limitations and level of accuracy
  • risks the system poses, including foreseeable sources of unintended consequences, such as potential discrimination or violations of fundamental rights
  • any risk-mitigation measures built into the system or applied to it, such as human oversight

Rather than thinking about conformity assessments as a box to be checked for EU-type regulations, organizations should see them as enablers for effectively managing and mitigating the various risks associated with AI. The documentation the assessments produce will also significantly ease the burden when regulators check for compliance with the EU regulation and other requirements, such as anti-discrimination laws and industry best practices. Having standardized approaches and documentation will also help AI developers ensure that they are using best practices and allow governance teams to apply consistent standards to the evaluation of risk.

In the end, the conformity assessment will constitute a report on the organization’s process through the various stages of risk evaluation and mitigation, including:

  • data audits and data cleaning or augmentation measures, such as creating synthetic data to address biases or other issues with input data
  • risk evaluation
  • consultation with experts, such as ethics, legal, and subject-matter specialists, as well as data- and change-management teams
  • testing and validation
  • review of similar known adverse AI incidents
  • compliance-by-design steps
  • mitigation measures considered and applied
  • checks for compliance with relevant technical standards
  • ongoing monitoring processes
  • potential misuses of the system
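
One lightweight way to standardize this report is a structured template that captures each stage listed above. The schema below is our own illustrative sketch, not a format prescribed by the regulation; serializing it produces the kind of documentation auditors and regulators could review:

```python
from dataclasses import dataclass, field, asdict
import json

@dataclass
class ConformityAssessment:
    """Illustrative record of the risk-evaluation stages listed above."""
    system_name: str
    data_audits: list = field(default_factory=list)         # audits, cleaning, synthetic data
    risk_evaluation: str = ""
    expert_consultations: list = field(default_factory=list)
    testing_and_validation: list = field(default_factory=list)
    adverse_incidents_reviewed: list = field(default_factory=list)
    compliance_by_design_steps: list = field(default_factory=list)
    mitigations: list = field(default_factory=list)          # considered and applied
    technical_standards_checked: list = field(default_factory=list)
    monitoring_processes: list = field(default_factory=list)
    potential_misuses: list = field(default_factory=list)

record = ConformityAssessment(
    system_name="credit-model",
    data_audits=["label-balance audit", "synthetic data to address input bias"],
    mitigations=["human review of declined applications"],
)
print(json.dumps(asdict(record), indent=2))  # audit-ready documentation
```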

Establish an AI governance system

Two key components of a successful governance system are a dedicated cross-functional committee responsible for ensuring AI risk compliance and independent audits of AI systems.

Organizations should convene a committee that ensures AI is responsibly developed and deployed throughout the organization. The committee should be made up of professionals from a variety of functions, including cybersecurity, legal, and technology, to properly address the full range of AI risk. This governance body sets risk standards that AI teams must adhere to, ensures and audits AI systems and development processes for compliance, and advises business and development teams on specific trade-offs or decisions needed to comply with regulatory and organizational standards.

Several organizations, such as the International Organization for Standardization (ISO) and the US National Institute of Standards and Technology (NIST), are already publishing responsible AI development and deployment standards and pushing for international or national adherence; AI governance committees can use these as helpful resources for setting organizational standards and benchmarking.

In addition to the audits carried out by this internal governance team, organizations will want to conduct periodic independent or external audits of AI systems to ensure compliance. These audits are in addition to the internal model testing and validation that AI teams must regularly conduct as part of the AI development process. Government organizations’ and public companies’ requests for proposals (RFPs) for AI-related services increasingly include conditions related to AI ethics, bias monitoring, and data risks. In the future, these requirements may evolve in the same way as cybersecurity requirements have, with RFPs requiring external audit reports. That is all the more reason for organizations to begin building the necessary capabilities sooner rather than later.


The draft EU AI regulations should serve as an urgent reminder that now is the time for organizations to ensure they have robust processes in place to manage AI risks and comply with existing and future regulations. Rather than scaling back on AI development, successful organizations will instead create a framework for risk management and compliance that will enable their business to continue to innovate and deploy AI—safely—at a rapid pace. Building an inventory of AI systems and risk-mitigation measures that adhere to a standardized taxonomy, performing conformity assessments, and establishing an AI governance system can help put organizations in a position to reach this goal.
