Mark Zuckerberg Keeps Saying That His AI Model Is Open Source, But He's Misusing the Term

Coverage of the new Llama 3.1 models has focused on their performance and on the fact that they’re theoretically open source. Except they aren’t.

Meta CEO Mark Zuckerberg can say what he wants, but the Llama models aren’t open source. At least not in the strict sense of the definition. As the Open Source Initiative explained some time ago, “Meta is confusing ‘open source’ with ‘resources available to some users under certain conditions,’ two very different things.”

These comments, made by the Open Source Initiative after the release of Llama 2, also apply to Meta’s new generative AI models. In both cases, as in the rest of the industry, we see these models taking all they can from the public Internet—and probably some of the private—and, as in this case, misusing the term open source. Let’s see why.

Infinite Voracity

Known as Llama 3.1, these models are promising in performance and may even surpass GPT-4o or Claude 3.5. In addition to highlighting their power and versatility, Zuckerberg emphasized in an open letter that “open source AI is the way forward.”


Zuckerberg repeated the same message in an interview with Bloomberg after the models’ release. On that occasion, however, he admitted that Meta is keeping the datasets used to train Llama 3.1 secret. “Even though it’s open, we’re also building this ourselves,” he emphasized, saying that his company used only Facebook and Instagram posts in addition to proprietary datasets licensed from others, without specifying more.

This lack of transparency is common in the industry: We don’t know exactly how developers trained other models, such as GPT-4 or Claude 3.5, which are entirely closed and proprietary. Companies have likely collected surprising data in this case and in others. One of these models, for example, was trained on 5,000 tokens from my personal blog.

The appetite of the models seems endless, leading to controversies and lawsuits but also agreements for content companies to license their texts, images, and videos for training. Sometimes, they don’t even ask for permission. For example, OpenAI ran out of data to train its AI, so it transcribed a million hours of YouTube to train GPT-4.

“Open Weights” Isn’t the Same As “Open Source”

The model is freely available on GitHub, and this is certainly noteworthy. As with Llama 2, companies and independent developers can use these models to create AI models derived from Llama 3.1.

This distribution model is similar to that of GNU/Linux distributions: Developers start with the Linux kernel and a set of base components, then add elements of their own on top.

The Llama 3.1 license allows you to work this way. Still, it also imposes a critical barrier: Models derived from Llama 3.1 are free unless they’re too successful. If a product built on the model exceeds 700 million monthly active users, its developers must request a license from Meta.

However, as in other cases, Meta shares the so-called “weights,” the numerical parameters that determine how the model performs its calculations. This allows anyone to download the already trained neural network files and then use them directly or fine-tune them for their own use cases. For this reason, these models are considered “open weights” rather than open source.

As Ars Technica states, this contrasts with what happens with proprietary models such as those from OpenAI, which don’t share these weights and monetize the models through subscriptions to ChatGPT Plus or through an API.

The term “open” in many AI projects, including Llama 3.1, has attracted increased scrutiny. (Someone should probably tell OpenAI, which uses it as part of its company name.)

In this regard, research by a team at Radboud University in Nijmegen, the Netherlands, highlighted this type of use. The project analyzed various AI models and evaluated several parameters that help judge how open each model really is.

Source: Radboud University in Nijmegen.

The result is a revealing chart that shows two things at a glance. First, no model is perfect in this sense. Second, Meta’s models rank very low, making it difficult to consider them open source.

Simon Willison, co-creator of the Django web framework and an expert in the field, commented that Zuckerberg’s open letter was a “fascinating” and “influential document” but also noted that “it does, however, look like we have lost the battle in terms of getting them to stop misusing the term open source.”

Indeed, Zuckerberg’s influence makes it difficult for the general public not to accept that Meta’s models are open source when they aren’t. As Willison told Ars Technica:

“I see Zuck’s prominent misuse of ‘open source’ as a small-scale act of cultural vandalism. Open source should have an agreed meaning. Abusing the term weakens that meaning, which makes the term less generally useful because if someone says, ‘it’s open source,’ that no longer tells me anything useful. I have to then dig in and figure out what they’re actually talking about.”

That’s true. The widespread misuse of the term—and not just by Zuckerberg—has weakened the concept, partly because there’s no universally accepted definition of open source in general, or of an open source AI model in particular.

This article was written by Javier Pastor and originally published in Spanish on Xataka.

Image | Black011 with Midjourney

Related | Bill Gates Predicts a Near-Term Decline for Current AI. His Bet on the Future? ‘Metacognitive’ AI
