No verified ruling from March 6, 2026 exists in official EU records requiring Meta to pause its AI video generation tools. The claim circulating in industry briefings appears to be either premature or misattributed. But here’s the thing: the regulatory machinery to make exactly that happen is already fully operational, and the EU doesn’t need a new law to act.
The Digital Services Act entered full force on February 17, 2024. Meta’s Facebook and Instagram had already been designated Very Large Online Platforms, VLOPs in EU parlance, back in April 2023, which means they face the strictest tier of obligations under the framework. Those obligations include algorithmic transparency, systemic risk mitigation, and content origin disclosure. Applied to generative AI video tools, those three requirements alone create a compliance challenge that most platforms haven’t fully solved.
So while the specific March 2026 enforcement action can’t be confirmed, the underlying tension is very real.
The DSA doesn’t mention generative AI by name (the law predates the current wave of AI video tools), but its provisions map onto synthetic media with uncomfortable precision. As part of their risk-mitigation duties, the largest platforms are expected to ensure that artificially generated or manipulated content which could be mistaken for authentic material carries prominent markings. They must assess the systemic risks posed by their services, including the spread of AI-generated disinformation. And as VLOPs, they must submit those risk assessments to the European Commission and open themselves to independent audits.
Content origin labeling for AI video is the specific sticking point. When Runway Gen-4.5 produces a photorealistic clip, or when Meta’s video tools generate scenes from text prompts, the question of whether users — and the platforms hosting that content — are disclosing its synthetic origin is exactly the kind of thing DSA auditors are trained to look for. The European Commission has already flagged AI-generated content as a priority risk area for VLOP oversight cycles running through 2025 and 2026.
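For readers wondering what “content origin disclosure” looks like in machine-readable terms, here is a minimal sketch, not any platform’s actual implementation: a JSON provenance record attached alongside a generated clip, loosely modeled on the IPTC digital source type vocabulary and C2PA-style provenance assertions. The field names, the label_generated_clip helper, and the model identifier are illustrative assumptions, not anything Meta or Runway is known to ship.

```python
# Illustrative sketch only: a minimal machine-readable "synthetic origin" record
# for an AI-generated video clip. Field names are assumptions loosely modeled on
# IPTC digitalSourceType values and C2PA-style provenance assertions; no real
# platform's schema is reproduced here.
import hashlib
import json
from datetime import datetime, timezone


def label_generated_clip(video_bytes: bytes, model_id: str, prompt: str) -> dict:
    """Build a disclosure record binding one output file to its synthetic origin."""
    return {
        # Hash ties the label to one specific rendered clip.
        "content_sha256": hashlib.sha256(video_bytes).hexdigest(),
        # IPTC defines "trainedAlgorithmicMedia" for content generated entirely by a model.
        "digital_source_type": "trainedAlgorithmicMedia",
        "generator": {"model_id": model_id},  # e.g. the hypothetical "video-gen-v1"
        "prompt_supplied_by_user": bool(prompt),
        "generated_at": datetime.now(timezone.utc).isoformat(),
        # The user-facing flag the DSA's "prominent markings" language points toward.
        "display_label": "AI-generated video",
    }


if __name__ == "__main__":
    fake_clip = b"\x00\x01\x02"  # stand-in for real encoded video bytes
    record = label_generated_clip(fake_clip, "video-gen-v1", "a cat surfing at sunset")
    print(json.dumps(record, indent=2))
```

The point of a record like this is that it can travel with the file and be checked by an auditor, whereas a caption added in a player UI cannot.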

Beyond labeling, there’s a thornier compliance issue: training data transparency. The DSA intersects here with the EU AI Act, which entered into force in August 2024 and imposes specific obligations on general-purpose AI models, among them publication of a sufficiently detailed summary of the content used for training. For video generation models trained on copyrighted footage scraped from the web, that’s a disclosure that could open parallel legal exposure under both frameworks simultaneously.
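To make that exposure concrete, here is a rough, purely illustrative sketch of the kind of structured training-data summary the AI Act’s transparency duty gestures toward. The schema, field names, and every figure in it are invented for illustration; the EU AI Office’s actual template is more detailed and is not reproduced here.

```python
# Purely illustrative sketch of a structured training-data summary of the sort the
# AI Act's transparency duty for general-purpose models points toward. Schema,
# field names, and all figures are invented; this is not the AI Office's template.
import json

training_data_summary = {
    "model_id": "video-gen-v1",  # hypothetical model name
    "data_modalities": ["video", "image", "text"],
    "sources": [
        {
            "category": "publicly accessible web video",
            "collection_method": "web crawling",
            "approx_share_of_corpus": "60%",  # invented figure
            "copyright_status": "mixed; text-and-data-mining opt-outs honoured (claimed)",
        },
        {
            "category": "licensed stock footage",
            "collection_method": "commercial licence",
            "approx_share_of_corpus": "40%",  # invented figure
            "copyright_status": "licensed",
        },
    ],
    "personal_data_handling": "faces and licence plates blurred before training (claimed)",
}

if __name__ == "__main__":
    print(json.dumps(training_data_summary, indent=2))
```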
Meta, which was hit with a record €1.2 billion GDPR fine in May 2023 over Facebook’s transfers of European user data to the United States, knows better than most what EU enforcement feels like when regulators decide to move. Its Facebook and Instagram services sit on the VLOP list alongside roughly two dozen other designated services, all of which are now subject to annual independent audits of their DSA compliance. Generative AI tools launched after the DSA’s entry into force fall squarely within scope; there’s no grandfather clause for products that didn’t exist when the law was written.

Runway, Pika, Kling, and Sora all operate or serve users within the EU. None of them have published anything resembling a full DSA compliance statement for their video generation products. That’s not unusual — the enforcement calendar has prioritized the largest social platforms first — but the EU’s Digital Services Coordinators in each member state have the authority to escalate complaints against any in-scope service, not just the designated VLOPs.
The European Commission’s position has been consistent: the DSA is the digital rulebook for Europe, and it applies to all digital services operating in the EU. Platforms that generate and distribute AI video are digital services. The math isn’t complicated.
The story here isn’t a single regulatory order that may or may not have happened. It’s that the DSA framework is structurally positioned to catch AI video platforms whether or not they’ve been paying attention. Labeling requirements, training data disclosure, and user consent for synthetic media aren’t features that most AI video tools were designed with — they’re compliance retrofits waiting to happen. The EU has shown, repeatedly, that it’s willing to fine first and negotiate later. Any AI video platform with EU users that hasn’t audited its DSA exposure is running a risk that the current enforcement calendar is masking, not eliminating.
