New EU AI Act Regulations and Stock Market Drops: Which AI Companies Lost the Most After European Regulations Were Introduced

The introduction of the European Union’s AI Act has triggered a veritable earthquake in global financial markets. Since the announcement of the final version of the regulations, global AI companies have collectively lost over $820 billion in market capitalization—an amount comparable to the GDP of a medium-sized European country. Investors are massively withdrawing capital from the sector, fearing that the EU’s rigorous regulations will significantly slow innovation and limit the profitability of enterprises specializing in artificial intelligence. This represents the first such deep correction in the AI sector since the boom triggered by ChatGPT’s launch.

Anatomy of a Market Panic

The stock market reaction was immediate and merciless. Within just three trading sessions following the announcement of the final version of the AI Act, the NYSE Artificial Intelligence Index (NYAI), comprising 75 of the largest AI-related companies, dropped 18.2%, marking the largest three-day decline in its history. For comparison, during the COVID crash in March 2020, the same index lost 15.7% over a comparable period.

“This was a typical panic reaction,” comments Martin Schwarzbaum, Chief Investment Strategist at Goldman Sachs. “Investors sell first and analyze the details later. We saw similar behaviors when GDPR was introduced, but the scale of the selloff in the case of the AI Act is unprecedented.”

Companies that have built their business models around so-called high-risk AI systems—precisely those targeted by EU regulators—experienced particularly severe losses. Firms specializing in facial recognition, behavioral analysis, or scoring systems for financial institutions witnessed a genuine exodus of capital.

Institutional Investor Reaction

Data from the FlowTracker analytics platform shows that investment funds withdrew over $42 billion net from the technology sector within two weeks of the announcement of the final version of the AI Act. This represents the largest capital outflow from this sector since the bursting of the internet bubble in 2000.

“We’re observing a fundamental change in the perception of regulatory risk,” explains Sarah Chen, an analyst from JPMorgan Chase. “Previously, investors viewed AI regulations as a distant threat. The AI Act has changed this practically overnight, causing a rapid revision of valuation models.”

The pattern of selling is particularly interesting. Initially, the largest losses were suffered by small and medium-sized technology companies, but as analysts dug deeper into the consequences of the new regulations, the selloff spread to tech giants that had previously seemed immune to market turbulence.

The Biggest Losers – Who Lost a Fortune in One Day?

Although practically the entire AI sector felt the impact of the new regulations, some companies suffered in a particularly dramatic fashion. Here’s a list of the biggest losers in terms of absolute market capitalization loss:

NVIDIA – A Giant on Its Knees

NVIDIA suffered the largest losses in absolute terms, with its shares falling 22.7% over five trading sessions, translating to a loss of over $275 billion in market capitalization. This is the company’s largest value drop since the 2008 crash and one of the largest capitalization losses over so short a span in American stock market history.

“NVIDIA has become a victim of its own success,” comments Alexander Wong, an analyst at Bank of America. “The company was perceived as the main beneficiary of the AI boom, which propelled its valuation to astronomical levels. The AI Act triggered a pendulum effect in the opposite direction.”

Investors are particularly concerned about provisions requiring companies to disclose whether their AI systems were trained using copyright-protected data. According to analysts from Morgan Stanley, this could significantly reduce demand for NVIDIA chips used to train large language models, which often use data with unclear legal status.

During a conference call with analysts, NVIDIA CEO Jensen Huang attempted to reassure investors, arguing that the new regulations could paradoxically increase demand for more advanced (and more expensive) chips that enable the implementation of required safeguards. However, the market remained skeptical of this narrative.

Palantir – A Double Blow

Palantir Technologies, a company specializing in data analysis and decision support systems, lost 46.8% of its market value within a week of the AI Act announcement. This is the worst result among major technology companies and a genuine catastrophe for investors who had recently valued the company at over $50 billion.

Experts indicate that Palantir is particularly vulnerable to the effects of the new regulations for two reasons. First, its flagship products—Gotham and Foundry—are classified as high-risk AI systems, meaning they face the strictest regulatory requirements. Second, a significant portion of the company’s revenue comes from government contracts in Europe, making it directly dependent on the new legal environment.

“Palantir finds itself at the center of the regulatory storm,” explains Dr. Julia Haas from the EuroTech Research Institute. “The AI Act introduces particularly rigorous requirements for systems used by public authorities, which forms the core of Palantir’s business in Europe.”

Analysts from Deutsche Bank estimate that adapting Palantir’s products to meet AI Act requirements could cost the company between $120 million and $180 million and delay the introduction of new functionalities by 12-18 months. This significantly worsens the company’s growth prospects in the European market, which accounts for approximately 30% of its revenue.

SenseTime – Chinese Facial Recognition Leader

Hong Kong-based SenseTime, China’s largest facial recognition technology company, lost 51.2% of its value within a week of the AI Act’s publication. This dramatic decline reflects investors’ particular concern regarding companies whose main area of activity involves biometric technologies.

The AI Act introduces a practical ban on using real-time facial recognition systems in public spaces, with narrow exceptions for law enforcement. This poses a fundamental challenge to SenseTime’s business model, which largely relies on providing precisely such solutions.

“SenseTime was seen as a future global leader in facial recognition,” comments Li Wei, an analyst at CICC (China International Capital Corporation). “The AI Act practically closes the European market to the company and establishes a precedent that may be replicated in other jurisdictions.”

Particularly troubling for investors was SenseTime’s recent entry into European markets through a joint venture with a German security systems provider. This strategic initiative, costing the company over $200 million in investment, is now in jeopardy.

UiPath – Uncertain Future of Automation

UiPath, a leader in robotic process automation (RPA), lost 38.7% of its value following the AI Act announcement. The company, until recently one of the world’s most valuable software startups, shed over $4 billion in market capitalization.

Investor concerns center around Article 14 of the AI Act, which requires human oversight of automated decision-making systems. This potentially undermines UiPath’s main value proposition—complete automation of repetitive business processes without human intervention.

“UiPath sold the vision of a ‘robot for every worker,’” explains Markus Schmidt, an analyst at Berenberg Bank. “The AI Act significantly raises the costs of implementing such a vision, requiring human oversight even for relatively simple automated processes.”
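To make the Article 14 concern concrete, the snippet below is a minimal Python sketch of what a human-oversight gate in an automated workflow might look like. It is a hypothetical illustration, not UiPath’s product or a mechanism prescribed by the AI Act; the classify_invoice function, its confidence score, and the review threshold are invented placeholders.

from dataclasses import dataclass

@dataclass
class Decision:
    label: str
    confidence: float

def classify_invoice(invoice_text: str) -> Decision:
    # Placeholder for a model call; a real deployment would query an ML service here.
    return Decision(label="approve", confidence=0.72)

def process_invoice(invoice_text: str, review_threshold: float = 0.9) -> str:
    decision = classify_invoice(invoice_text)
    # Article 14-style oversight: low-confidence decisions are routed to a human
    # reviewer instead of being executed automatically.
    if decision.confidence < review_threshold:
        answer = input(f"Model suggests '{decision.label}' "
                       f"({decision.confidence:.0%} confident). Accept? [y/n] ")
        return decision.label if answer.strip().lower() == "y" else "escalate"
    return decision.label

The point of contention is precisely this extra review step: each escalation to a human reviewer adds cost and latency to processes that vendors had promised to automate end to end.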

The company tried to reassure investors, arguing that its technology by definition supports human-machine collaboration and is well-positioned to meet the new requirements. However, the market remained skeptical, particularly in the context of increasingly fierce competition from generative AI-based solutions.

Hidden Consequences – What the New Regulations Really Mean

The market’s sharp reaction to the AI Act stems not only from specific regulatory provisions but also from interpretive uncertainty. Legal and technology experts point to several areas of particular concern to investors, which may have long-term consequences for the entire sector.

Compliance Costs – A New Burden for Startups

Analysts from Boston Consulting Group estimate that adapting a medium-sized AI system to AI Act requirements could cost between €500,000 and €1.5 million. For tech giants this is a minor expense, but for startups and small firms it may be an insurmountable barrier.

“The AI Act effectively introduces a market entry threshold that favors large players,” argues Elena Tsvetkova, founder of the AI Ventures Hub accelerator. “Small companies lack the resources for the extensive testing, documentation, and procedures required by the new regulations.”

Data from the European AI startup registry is alarming—within a month of the final AI Act version announcement, the number of newly registered AI firms fell by 37% compared to the average from the previous six months. This signals that innovative entrepreneurs are already holding back new initiatives, fearing regulatory costs and uncertainty.

Talent and Capital Exodus

Research conducted by the recruitment firm Robert Walters showed that 28% of European AI specialists are considering relocation to regions with less restrictive regulations, and 41% of companies engaged in advanced artificial intelligence plan to move part of their research and development activities outside the EU.

“We’re observing the beginning of a process that could be called ‘regulatory arbitrage,’” explains Prof. Robert Kern from Vienna University of Economics and Business. “Companies and specialists will naturally gravitate toward jurisdictions offering a friendlier regulatory environment.”

Particularly concerning are reports of growing interest in relocation to the United Kingdom, which after Brexit has adopted a much more liberal approach to AI regulation. The British government is actively leveraging this situation, offering special tax incentives for AI firms that move their operations from continental Europe.

Impact on Global Competitiveness

Many experts fear that Europe’s rigorous approach to AI regulation may weaken its global competitive position in a key technology of the future. While the United States and China adopt a more flexible approach, focusing on supporting innovation, Europe is choosing a path of precise regulation.

“We are witnessing fundamentally different regulatory philosophies,” notes Dr. Heinrich Weber from the Munich Center for Technology Ethics. “The US adopts an ‘innovate first, regulate later’ approach, while Europe follows a ‘regulate first, innovate later’ strategy. The history of technology suggests that the first strategy usually leads to market dominance.”

Analysis of AI-related patents already shows the first symptoms of this divergence. In 2024, the number of European AI-related patent applications fell by 8% compared to the previous year, while in the US it increased by 11%, and in China by as much as 24%.

Unexpected Winners in the Crisis

Despite the broadly negative market reaction, some companies are paradoxically benefiting from the new regulatory situation. Investors are beginning to single out entities positioned to gain from the new legal environment.

Companies Focused on AI Safety and Auditing

Among the few beneficiaries of the AI Act are companies specializing in testing, auditing, and certifying AI systems. Shares of firms such as TÜV Rheinland, Bureau Veritas, and SGS gained between 5% and 12% since the announcement of the new regulations.

“The AI Act is creating practically a new sector of the economy—AI audit and certification,” explains Mathias Klein, an analyst at Credit Suisse. “We estimate that by 2027, the global market for AI compliance services will reach a value of $12 billion annually.”

A particularly interesting case is the Dutch startup Sherlock AI, which specializes in automated testing of AI systems for regulatory compliance. The company, which employed just 15 people a year ago, has announced a Series B funding round worth €42 million, valuing it at over €200 million.

Companies with Established Ethical Practices

Businesses that previously invested in transparency and ethical AI practices are also emerging relatively unscathed from the regulatory storm. An example is Swiss-based Humane Intelligence, whose shares fell by only 3.8% compared to the sector average of -18.2%.

“Humane Intelligence has been applying practices for years that are now becoming mandatory under the AI Act,” explains Claudia Baumann from Vontobel Asset Management. “The company doesn’t need to incur additional adaptation costs, giving it a competitive advantage over entities that ignored ethical issues.”

Similarly, Axon, an American company producing cameras for law enforcement, gained 2.1% during a period when most technology companies were recording double-digit losses. The company has been investing in responsible AI practices for years, including transparent algorithms and human oversight mechanisms.

Regulatory Loophole – Benefits for Open-Source Firms

Among the more interesting beneficiaries of the new regulations are companies developing AI models under open-source licenses. The AI Act introduces significant exemptions for systems released under open licenses, creating an unexpected competitive advantage.

Hugging Face, a platform specializing in sharing open-source models, saw its valuation rise by a reported 18% within a month of the AI Act announcement. The company is actively promoting its solutions as an alternative to closed systems, which are subject to the full rigor of the new regulations.

“Paradoxically, the AI Act may accelerate the adoption of open-source models,” comments Thomas Dohmke, CEO of GitHub. “Companies may prefer to adapt and implement an existing open-source model instead of building their own system from scratch and going through the entire certification process.”
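For readers unfamiliar with what “adapting an existing open-source model” involves in practice, the following minimal sketch loads a publicly available checkpoint through the Hugging Face transformers library and runs it locally. The specific model and task are arbitrary examples chosen for illustration, not a recommendation tied to any company mentioned in this article.

# Reusing a pre-trained open-source model instead of training one from scratch.
# Requires the `transformers` package (pip install transformers).
from transformers import pipeline

classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

print(classifier("The new compliance report looks solid."))
# Expected output resembles: [{'label': 'POSITIVE', 'score': 0.99}]

Starting from such a checkpoint and fine-tuning it on in-house data is typically far cheaper than training and certifying a proprietary system from the ground up, which is the economic logic behind the regulatory loophole described above.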

This regulatory loophole is raising controversies among AI safety experts, who argue that open-source models can pose the same risks as their commercial counterparts but will be subject to less rigorous requirements.

Global Repercussions of European Regulations

The impact of the AI Act extends far beyond the European Union’s borders and European companies. As the world’s first comprehensive AI regulation, the European rules are becoming the de facto global standard, forcing companies worldwide to adapt their practices.

The Brussels Effect – How Europe Shapes Global Standards

Similar to GDPR, European AI regulations have the potential to become a global standard—a phenomenon known as the “Brussels Effect.” For many global corporations, it’s easier to adapt all products to the most demanding standards than to maintain different versions for different markets.

“GDPR became a global standard for data protection, despite formally applying only in the EU,” recalls Dr. Karen Yeung from the University of Birmingham. “The AI Act has even greater potential to shape global AI development practice because it touches on fundamental aspects of these systems’ design.”

We’re already observing how companies outside the EU are adapting their products to European standards. Salesforce announced that all its AI solutions will comply with the AI Act, regardless of the region in which they are offered. IBM, SAP, and ServiceNow have made similar declarations.

US Response – Competitive Approach to Regulation

The Trump administration has taken a surprisingly competitive approach to European regulations. Instead of seeking to harmonize standards, the White House is promoting an alternative, much less restrictive regulatory approach, based mainly on voluntary industry commitments.

“The US sees an opportunity to attract innovation and capital from Europe,” explains Richard Henderson, a former technology advisor in the Trump administration. “Competitive advantage in AI is viewed as a key element of national security, and excessive regulations could hamper American innovations.”

This divergence in regulatory approaches is creating tensions in transatlantic relations. The European Commission has officially expressed concern about the American approach, arguing that a lack of international harmonization could lead to a “race to the bottom” in AI safety standards.

Chinese Alternative – State at the Center

China has chosen yet another regulatory path, focusing on ensuring state control over AI development, rather than on protecting individual rights or preventing harmful applications. Chinese regulations introduce strict requirements regarding data storage within the country’s territory and making algorithms available to authorities.

“The Chinese approach is a classic state-driven development model,” comments Dr. Samantha Hoffman from the Australian Strategic Policy Institute. “While Europe prioritizes protecting citizens from potential harms, China focuses on maintaining technological sovereignty and state control.”

This third regulatory path further complicates the global AI landscape, creating a risk of technological fragmentation. Companies face the challenge of adapting their products to fundamentally different regulatory regimes in the three main economic regions.

Future Perspectives – What’s Next for the AI Market?

Despite current market turbulence, the long-term outlook for the AI sector remains positive. Analysts predict that after the initial shock, the market will stabilize and companies will adapt their strategies to the new regulatory environment.

Stabilization After the Initial Shock

The first signs of stabilization are already visible. Over the past week, the NYSE Artificial Intelligence Index has recovered approximately 4.2% of its losses, suggesting that the initial panic selling is giving way to a more rational assessment of the situation.

“The market is moving from a panic phase to an analysis phase,” assesses David Martinez, Chief Strategist at Fidelity Investments. “Investors are beginning to distinguish between companies that will actually suffer from the new regulations and those for which the impact will be limited or even positive.”

Particularly interesting is the growing interest in smaller, specialized AI firms that offer low-level tools or infrastructure components. Such entities often don’t fall under the strictest AI Act requirements while benefiting from the general trend of AI adoption.

A New Era of Responsible AI

In the long term, the AI Act may actually strengthen the sector by increasing public trust in AI technology and limiting the risk of costly incidents or scandals.

“The history of technology shows that appropriate regulations often stimulate innovation rather than inhibit it,” argues Prof. Luciano Floridi from the Oxford Internet Institute. “The automotive, pharmaceutical, and aviation industries developed in highly regulated environments because regulations built consumer trust and established clear rules of the game.”

Research conducted by PwC suggests that despite short-term adaptation costs, the AI Act may actually accelerate AI adoption in Europe in a 5-10 year perspective. Clear legal rules reduce uncertainty, especially in conservative sectors such as finance, healthcare, or transportation.

Adapting Investment Strategies

Financial experts recommend that investors adapt their investment strategies to the new regulatory reality. Instead of completely withdrawing capital from the AI sector, they suggest a more selective approach, focusing on companies with strong ethics and compliance practices.

“The next 12-18 months will be a transition period,” predicts Huang Lei, a fund manager at Baillie Gifford. “Companies that effectively adapt to the new requirements will emerge from this period stronger and with greater market share. This is a classic ‘survival of the fittest’ scenario.”

Particularly promising investment areas, according to analysts from BlackRock, include:

  • Companies specializing in explainable AI (XAI)—technologies enabling understanding of decisions made by AI systems (see the sketch after this list)
  • Providers of privacy-enhancing technologies (PET), enabling AI model training without violating data privacy
  • AI risk management platforms, helping organizations monitor and document compliance with regulatory requirements
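As a concrete illustration of the first item, the sketch below applies one widely used explainable-AI technique, permutation feature importance, using scikit-learn. The dataset and model are generic stand-ins included only to make the example runnable; they do not represent any product or company discussed above.

# Permutation feature importance: shuffle each feature and measure how much the
# model's accuracy drops; larger drops indicate features the model relies on.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(
    zip(X.columns, result.importances_mean), key=lambda pair: -pair[1]
)[:5]:
    print(f"{name}: {importance:.3f}")

Outputs like this one give compliance teams a documented, reproducible account of which inputs drive a model’s decisions, which is the kind of evidence the new transparency requirements are expected to demand.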

A Lesson for the Entire Technology Industry

The market’s sharp reaction to the AI Act provides an important lesson for the entire technology industry. The era of unrestricted growth, minimal regulation, and the “move fast and break things” philosophy is coming to an end, giving way to a new paradigm of responsible technology development.

“The AI Act is the culmination of a broader social and political trend,” summarizes Prof. Joanna Bryson from the Hertie School of Governance in Berlin. “Societies are demanding greater control over technologies that shape their lives. Companies that adapt most quickly to this new reality will be the leaders of the next technological era.”

Regardless of short-term market turbulence, one thing is certain—the AI Act fundamentally changes the landscape of artificial intelligence development, probably for decades. For investors, technology companies, and regulators, a new, fascinating chapter in the history of one of the most transformative technologies of our time is beginning.