The EU Won't Receive Meta’s Next Major AI Models. The Reason: Local Regulations

  • Meta is developing a new multimodal artificial intelligence model that won’t reach European territory.

  • The company says the reason is “the unpredictable nature of the European regulatory environment.”

The European Union is getting left behind when it comes to artificial intelligence. Except for France’s Mistral and a few local startups, the significant advances in AI models come from the U.S. and, to a lesser extent, China. Companies such as OpenAI, Anthropic, Google, and Meta set the pace for this technology and are the creators of leading models such as ChatGPT, Gemini, Llama, and Claude, which either don’t reach the EU or arrive late due to local regulations. This was the case with Gemini and Apple Intelligence, and it will be the case with Meta’s upcoming multimodal AI model.

Multimodal AI. First, let’s explain this concept. A multimodal AI model is an artificial intelligence algorithm inspired by our senses. Humans see, read, hear, and understand, all simultaneously and in real time. Multimodal AI tries to do the same: It can process and integrate data in different formats, including text, images, audio, and video. We saw an obvious example of this in the presentation of GPT-4o and at the recent Google I/O.
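
To make the idea more concrete, here is a minimal sketch of what a multimodal request looks like in practice, using the OpenAI Python SDK and GPT-4o, the model mentioned above. The prompt and image URL are placeholders for illustration, and an OPENAI_API_KEY environment variable is assumed; this is not Meta’s API or its upcoming model.

    # Minimal sketch: one request that combines text and an image.
    # Assumes the OpenAI Python SDK is installed (pip install openai) and
    # that OPENAI_API_KEY is set; the image URL below is a placeholder.
    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",  # a multimodal model: it accepts text and images in the same prompt
        messages=[
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Describe what is happening in this photo."},
                    {"type": "image_url",
                     "image_url": {"url": "https://example.com/street-scene.jpg"}},
                ],
            }
        ],
    )

    print(response.choices[0].message.content)

The point is simply that a single model handles several kinds of input at once, which is what makes integrations like the smart-glasses use case mentioned below possible.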

What happened? Meta, the developer of the “open source” Llama models, has confirmed that it will launch a new multimodal artificial intelligence model in the coming months, but not in the EU. According to a Meta post reported by Axios, the reason is “the unpredictable nature of the European regulatory environment.” However, Meta will also launch another text-only model that will be available in the EU.

It’s not just about users. Meta plans to integrate this AI into various products, from smartphones to its Ray-Ban smart glasses, where this kind of technology makes a lot of sense. However, European customers won’t be able to use these models, even though they’re “open source.” And it’s not only users who won’t have access to this AI model: Companies that want to develop products and services based on it won’t get access, either.

The underlying problem. The EU has been fighting Meta for years over how it uses the information of its users, particularly regarding targeted advertising on Facebook and Instagram. Meta’s solution was to put up a paywall: You can pay if you don’t want to see ads or have your information used for advertising purposes. If you don’t, tough luck.

Meta CEO Mark Zuckerberg. Image | Anthony Quintano

When Meta talks about the “unpredictable nature of the European regulatory environment,” it’s referring to the General Data Protection Regulation (GDPR), the Digital Markets Act (DMA), and everything related to data collection. In other words, it’s referring to whether it can use European user data to train its artificial intelligence models without running afoul of the GDPR, not to mention the AI Act.

The underlying problem is user privacy on one side of the scale and the advancement of AI on the other. Finding the middle ground takes work, and the situation is tense right now. The EU protects privacy to the hilt, and tech companies aren’t going to launch a product that violates the GDPR and potentially leads to a multimillion-dollar fine. Hence, the easiest solution is to leave EU users out of the game. In that sense, it’s true that the EU is losing the artificial intelligence race.

Meta is fighting back. As the company revealed last May, Meta wanted to use public Facebook and Instagram posts to train its generative AI models, so it sent more than 2 billion in-app notifications and emails to European users to allow them to withdraw their consent. Why does Meta need users’ information? According to Meta:

“If we don’t train our models on the public content that Europeans share on our services and others, such as public posts or comments, then models and the AI features they power won’t accurately understand important regional languages, cultures or trending topics on social media. We believe that Europeans will be ill-served by AI models that are not informed by Europe’s rich cultural, social and historical contributions.”

Meta claims that it informed European regulators of this initiative and incorporated all the feedback it received. However, after Meta announced its plans in June, the EU forced the company to halt training with Europeans’ data. According to a Meta representative quoted in Axios, European regulators “take much longer to interpret existing laws than their counterparts in other regions.” That’s likely a reference to the UK, which has a law similar to the GDPR. Unlike in the EU, Meta will launch its upcoming multimodal AI models in the UK.

Meta AI. Image | Xataka

A two-speed AI. The problem with the European regulations is that we’re moving toward a two-speed AI: one for the EU and a much better, more capable one for the rest of the world. This is already happening.

Copilot for Windows 11, announced in September 2023, isn’t officially available in Europe yet. Neither is Meta AI, WhatsApp’s AI-powered chatbot. Google Bard, now Gemini, took two months to reach the EU. In any case, the solution is complex: It involves balancing citizens’ data protection with competitiveness and innovation.

This article was written by Jose García and originally published in Spanish on Xataka.

Images | Christian Lue | Flickr (Anthony Quintano)

Related | ‘Have I Been Trained?’ Is a Site That Helps You Find Out if Your Data and Work Have Been Used to Train AI
