Google Backs EU AI Code as Meta Balks at “Stifling” Rules
Google will adopt the European Union’s voluntary AI Code of Practice despite innovation concerns, positioning itself as a regulatory collaborator. Meta, citing legal risks, refuses to participate. The divergent strategies reveal a growing split in how tech giants approach AI governance as binding rules loom.
The EU’s AI Rulebook Takes Shape
Effective August 2, the Code establishes guidelines for general-purpose AI systems like large language models, requiring:
- Transparent training data disclosure
- Copyright compliance mechanisms
- Robust cybersecurity protections
Designed as a bridge to the EU AI Act (slated for full implementation in 2026), the framework offers early adopters regulatory certainty. Providers of existing AI models have until 2027 to bring them fully into compliance.

Corporate Chess Match
Kent Walker, Alphabet’s president of global affairs, confirmed Google’s participation but warned: “Overly broad requirements could inadvertently hamper Europe’s AI competitiveness.” The company plans to push for adjustments to the copyright rules during the Code’s trial phase.
Meta’s refusal stems from concerns about liability for AI outputs and exposure of trade secrets. Analysts suggest the holdout could invite stricter oversight, as regulators tend to favor compliant firms such as Microsoft, which is expected to sign.
Shaping the Final Rules
Early adopters gain an advantage in shaping the final AI Act requirements, while holdouts risk being sidelined in policy discussions. As Brussels becomes the de facto AI rulemaker, this voluntary pact tests whether “ethics-first” regulation can coexist with rapid generative AI development. The tech world is watching whether Europe’s balance of innovation and oversight becomes a global template.
