On December 8, 2023, the European Union reached a provisional agreement on an AI Act that could shape how the world views and uses artificial intelligence. Although this long-maturing piece of legislation won’t take effect until around 2025, the world is already buzzing about a law that could provide a comprehensive route toward responsible AI use.
Before we dive into the new regulations, let’s take a look back.
Artificial intelligence is not, in itself, a new phenomenon. AI development began in the 1950s and saw a boom in the 1980s.
However, it wasn’t until the past few years, with the rise of Large Language Models (LLMs) such as the one behind OpenAI’s ChatGPT, that talk of artificial intelligence regulation grew much louder. With generative AI in the spotlight, lawmakers and industry leaders have been asking questions like:
- When should governments step in?
- Should regulations be up to the companies?
- Does AI keep users’ data safe?
Despite the recent uncertainty and apprehension, the European Union has taken a giant leap toward AI regulation. As the first mover, the EU will be the reference point for other countries as they draft their own AI laws.
What does the EU’s AI Act mean for artificial intelligence regulation around the world?
What We Know About the EU’s AI Act
The EU’s AI Act sets specific criteria that seek to protect its citizens from AI abuse while still welcoming the opportunities offered by this transformative technology.
The act will apply a risk-based approach to operations that use artificial intelligence within the European Union. Even companies based outside Europe must comply if their AI systems are deployed or used in the EU.
The risk-based approach ranges from “unacceptable risk” (AI systems that are outlawed entirely under the new act) through “high risk” down to “limited risk”.
So, the riskier the AI application, the stricter the rules (the tiering is sketched in code after the examples below). Low-risk AI systems, such as content recommendation systems or spam filters, would face only light regulation. According to the European Parliament, these low-risk systems need only abide by
“transparency requirements… [including] drawing up technical documentation, complying with EU copyright law and disseminating detailed summaries about the content used for training.”
High-risk systems, such as medical devices and instruments that could influence the outcomes of elections, require mandatory fundamental rights impact assessments. The public “will have a right to launch complaints about AI systems and receive explanations about decisions based on high-risk AI systems that impact their rights.”
Examples of the three risk types
- Limited risk: AI applications and tools that must meet specific transparency requirements, such as chatbots and deepfakes.
- High risk: AI used in sensitive systems, such as employment, elections and voting, education, medical devices, and transportation.
- Unacceptable risk: AI systems that manipulate human behavior to circumvent users’ free will; biometric categorization systems that use sensitive characteristics such as sexual orientation and race; and untargeted scraping of facial images from the internet or CCTV footage to create facial recognition databases.
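To make the tiering concrete, here is a minimal sketch, in Python, of how a company might tag its AI systems by tier. The tier names come from the act, but the `RiskTier` enum, the `INVENTORY` table, and the `classify` helper are illustrative assumptions, not anything the legislation defines.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers named in the EU AI Act (labels only; mapping is illustrative)."""
    UNACCEPTABLE = "unacceptable"  # banned outright under the act
    HIGH = "high"                  # strict obligations, impact assessments
    LIMITED = "limited"            # transparency requirements

# Hypothetical inventory based on the examples above; a real
# assessment would follow the act's legal criteria, not a lookup table.
INVENTORY = {
    "chatbot": RiskTier.LIMITED,
    "deepfake generator": RiskTier.LIMITED,
    "hiring screener": RiskTier.HIGH,
    "medical device": RiskTier.HIGH,
    "behavioral manipulation tool": RiskTier.UNACCEPTABLE,
}

def classify(system_type: str) -> RiskTier:
    """Look up a system's tier; unknown systems need a real legal assessment."""
    try:
        return INVENTORY[system_type]
    except KeyError:
        raise ValueError(f"No tier on file for {system_type!r}")

for name in INVENTORY:
    print(f"{name}: {classify(name).value} risk")
```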
Companies that don’t comply with the rules will face heavy fines. Depending on the violation, fines range from €7.5 million or 1.5% of turnover up to €35 million or 7% of global turnover, whichever is higher.
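As a back-of-the-envelope illustration of that range, here is a sketch of the fine arithmetic, assuming the fixed amount and the turnover percentage are compared and the higher one applies. The `max_fine_eur` function and its two bands are illustrative simplifications of the act’s full penalty tiers.

```python
def max_fine_eur(turnover_eur: float, severe: bool) -> float:
    """Upper bound on a fine: the higher of a fixed amount or a share of
    global turnover ("whichever is higher"). The two bands are illustrative:
    'severe' uses the top band (EUR 35M / 7%), otherwise the bottom band
    (EUR 7.5M / 1.5%)."""
    fixed, share = (35_000_000, 0.07) if severe else (7_500_000, 0.015)
    return max(fixed, share * turnover_eur)

# Example: a company with EUR 2 billion in global turnover
print(max_fine_eur(2_000_000_000, severe=True))   # 140,000,000.0
print(max_fine_eur(2_000_000_000, severe=False))  # 30,000,000.0
```

Note that for a large company the percentage, not the fixed amount, sets the ceiling, which is why the act’s drafters tied fines to turnover in the first place.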
Existing Regulations on AI and Privacy
AI Laws in the U.S.
As of now, federal laws governing AI in the U.S. are still in development. However, in 2023 legislative sessions, lawmakers introduced generative AI bills addressing the issue of privacy.
Around 25 states, Puerto Rico, and the District of Columbia have introduced artificial intelligence bills. Currently, 15 states and Puerto Rico have enacted some form of legislation.
These laws include Montana’s SB 384, the California Consumer Privacy Act (CCPA), the Connecticut Data Privacy Act (CTDPA), and the Colorado Privacy Act (CPA).
AI Laws in China
Unlike the EU, the Chinese government sees AI as a tool for state control as well as a convenient engine of economic dynamism. In June 2023, China issued its first-ever regulations on generative AI technology, which include requirements for extensive content monitoring and labeling and for data sourcing.
Generative AI providers must operate within, and uphold the integrity of, state power and the existing social order, which effectively forces providers and users to comply with these rules whether they agree or not.
What does the EU’s AI Act mean for the US, specifically?
In October 2023, the Biden administration announced a landmark executive order with actions similar to those outlined in the EU’s AI Act. Though many of its provisions are still being implemented, the order aims to establish new standards for the federal government’s approach to AI security and privacy.
With the European Union’s new AI Act in place, all eyes are on the Biden administration to see what the United States’ next steps will be. Will Congress and the executive branch work together swiftly to agree on the terms of Biden’s proposal? Will they rework the rules and framework of existing proposals?
The current administration will need to focus on the challenges AI presents for national and economic security and come up with a balanced approach that keeps citizens and the environment safe while still promoting investment and growth in the technology sector.
Read more: AI Regulations Are Coming. Should IT Leaders Be Worried?
Conclusion
The EU’s AI Act has ushered in a new era of AI regulation. It is the first law of its kind to subject AI technologies to tiered, risk-based requirements backed by monetary fines for those who violate them.
Europe’s AI Act stands in clear opposition to China’s existing law: the surveillance and social scoring that China’s rules embrace would be banned under the EU framework. And while US federal AI policy is still in its infancy, American policymakers are sure to study the EU’s act as they further develop their own executive order and work toward concrete results.