The saying goes that every man and his dog now owns an AI startup. But amid the excitement around the birth of new startups that leverage AI, more people are asking what rules are being put in place to protect consumers and prevent harm.
Last year, the EU passed new legislation known as the AI Act in an attempt to codify some of these rules. Many of its general rules, as well as its prohibitions, took effect in February 2025, so it helps to brush up on the Act’s key facts. This article briefly covers the main ideas behind the AI Act and its implications for US startups.
What is the AI Act?
The AI Act is one of the world’s first major legal frameworks designed to regulate the use of artificial intelligence (AI) by businesses. The Act seeks to “promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union”.
The Act’s primary goal is to ensure that AI is used in ways that are safe, transparent, non-discriminatory, and environmentally friendly while fostering innovation and competitiveness in AI development. The fundamental balancing act it tries to achieve is ensuring major risks are mitigated while still encouraging companies to innovate.
“Risk” is the operative word here, as businesses must understand the level of risk their activities carry.
The AI Act divides AI applications into four risk categories:
- Unacceptable Risk: AI systems that pose a clear threat to fundamental rights and safety are banned. Examples include social scoring systems (similar to China’s surveillance programs) and biometric identification in public spaces.
- High Risk: AI systems that significantly impact fundamental rights or safety are subject to strict regulations. This category includes AI used in healthcare, law enforcement, recruitment, and financial services.
- Limited Risk: AI applications such as chatbots and recommendation systems require transparency measures to ensure users are aware they are interacting with AI.
- Minimal Risk: AI systems that pose little or no risk, such as spam filters or video game AI, are subject to minimal regulation.
Does the AI Act apply to US startups?
Although the AI Act is a law of the EU, US startups might still be under its purview, especially if these companies are developing, selling or deploying the technology within Europe. The AI Act particularly focuses on “developers, providers or deployers” of AI in the EU. But what exactly is a developer, provider or deployer?
Under the Act, a deployer is anyone “using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity”. Such use may also affect others, and it is these consequences the Act attempts to mitigate. A provider is anyone that develops an AI system or a general-purpose AI model and places it on the market in the EU. For the purposes of the Act, it doesn’t matter whether the system or model is sold for a fee or made available free of charge, as many models currently are.
Here are several reasons why the AI Act is particularly relevant to US startups.
1. Compliance Costs and Legal Obligations
For many new startups, the AI Act represents a new horizon of compliance headaches. It may also spawn new businesses and consultancies aiming to help US startups navigate the multiple regulations and risk profiles present in the Act. For U.S. companies that develop AI solutions in Europe, compliance with the AI Act means additional legal and operational costs. Companies that fail to meet these requirements could face heavy penalties, with fines reaching up to €35 million or 7% of global annual revenue, whichever is higher. This is similar to the General Data Protection Regulation (GDPR), which imposed significant compliance costs on global firms when it was introduced.
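To make the penalty ceiling concrete, here is a minimal sketch of the “€35 million or 7% of global annual revenue, whichever is higher” formula described above. The function name and the revenue figures are illustrative assumptions, not anything defined by the Act itself.

```python
# Hedged sketch: estimating the theoretical AI Act fine ceiling,
# assuming the "EUR 35 million or 7% of global annual revenue,
# whichever is higher" formula. All revenue figures are illustrative.

def max_ai_act_fine(global_annual_revenue_eur: float) -> float:
    """Return the theoretical maximum fine in euros."""
    return max(35_000_000, 0.07 * global_annual_revenue_eur)

# A small startup with EUR 10M revenue is bounded by the EUR 35M floor:
print(max_ai_act_fine(10_000_000))      # 35000000
# A large firm with EUR 1B revenue faces the 7% figure instead:
print(max_ai_act_fine(1_000_000_000))   # 70000000.0
```

The takeaway is that the flat €35 million floor dominates for smaller companies, while the 7% term takes over once global revenue exceeds €500 million.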
2. Supply Chain and Vendor Compliance
Even if a U.S. company does not directly operate in Europe, its vendors, partners, or clients may be subject to the AI Act, requiring the entire supply chain to adhere to compliance measures. Businesses that provide AI-powered tools or software to European companies will need to ensure their products are AI Act-compliant, affecting software vendors, cloud service providers, and data analytics firms.
3. Competitive Advantages
Compliance with the AI Act may become a way for competitors to differentiate themselves in a global market. Companies that proactively align with the AI Act’s standards may gain easier access to European markets and build trust with consumers and regulators. On the other side of the coin, businesses that delay compliance risk being locked out of lucrative markets or being left behind as they contend with legal challenges.
In a nutshell…
The AI Act represents a significant shift in how AI is regulated, with profound implications for U.S. companies operating in Europe or engaging with European customers. While compliance may introduce additional costs and operational challenges, it also presents an opportunity for businesses to lead in responsible AI development and build consumer trust. As AI regulations continue to emerge worldwide, startups that adapt early and understand their risk profiles will be best positioned to innovate responsibly using AI.