Nvidia and cloud partners assist Meta in introducing its latest Llama AI model.

  • On Tuesday, Meta unveiled the latest iteration of its Llama AI model, Llama 3.1.
  • The latest Llama technology comes in three versions, including Meta's most powerful AI model yet. As with previous versions, the newest model is open source and free to access.
  • The announcement also highlights the close and growing partnership between Meta and Nvidia.

On Tuesday, Meta unveiled the latest version of its Llama artificial intelligence model, named Llama 3.1. The new Llama technology comes in three versions, the largest and most powerful of which is Meta's most capable AI model to date. Like its predecessors, Llama 3.1 is open source and free to access.

The social network is investing heavily in AI development to remain competitive with other leaders in the field, including startups and tech giants.

The announcement also underscores Meta's deepening partnership with Nvidia, a key partner that supplies the social networking giant with the GPUs it uses to train its AI models, including the latest version of Llama.

Meta has no plans to launch its own enterprise tech business, unlike companies such as OpenAI, which aim to generate revenue by selling access to their large language models or providing services to clients using the technology.

Meta is partnering with several tech companies to offer Llama 3.1 via their cloud computing platforms and sell security and management tools that work with the new software. Some of Meta's 25 Llama-related corporate partners include Amazon Web Services, Google Cloud, Microsoft Azure, Databricks, and Dell.

While Meta CEO Mark Zuckerberg has stated during past earnings calls that Meta generates some revenue from its corporate Llama partnerships, a Meta spokesperson revealed that any financial gain is only incremental. However, Meta believes that by investing in Llama and related AI technologies and making them available for free through open source, it can attract top talent in a competitive market and reduce its overall computing infrastructure costs, among other advantages.

Meta launched Llama 3.1 ahead of a joint appearance by Zuckerberg and Nvidia CEO Jensen Huang at a conference on advanced computer graphics. As one of Nvidia's top end-customers, Meta relies on the chipmaker's latest processors to train its AI models, which it uses internally for targeting and other products. For instance, Meta said the Llama model launched on Tuesday was trained on 16,000 of Nvidia's H100 graphics processors.

But the relationship is also important to both companies for what it represents.

Nvidia's chips could remain in high demand if Meta continues to train open-source models that other companies can use and adapt for their businesses without paying a licensing fee or seeking permission.

Creating models of this scale can cost hundreds of millions or even billions of dollars, putting their development and release out of reach for most companies. Google and OpenAI, both Nvidia customers, keep their most advanced models private, and few other companies have the financial resources to build comparable ones.

Meta, for its part, needs a steady supply of the latest GPUs from Nvidia to develop more powerful models. Unlike Nvidia, Meta aims to cultivate a community of developers who build AI applications on the company's open-source software, even if that means giving away code and AI model weights that are costly to develop.

The open-source approach benefits Meta by exposing developers to its internal tools and inviting them to build on top of them, according to Ash Jhaveri, Meta's VP of AI partnerships. And because Meta uses its AI models internally, the company can reap the improvements made by the open-source community.

In a blog post on Tuesday, Zuckerberg wrote that Meta is taking a "different approach" with this week's Llama release, saying the company is actively building partnerships so that other companies in the ecosystem can offer unique functionality to their customers.

Because Meta is a social networking giant and not an enterprise vendor, it can direct companies that inquire about Llama to one of its enterprise partners, such as Nvidia, Jhaveri said.

The largest version of the Llama 3.1 family of models is called Llama 3.1 405B, which has 405 billion parameters. This giant large language model (LLM) can perform more complex tasks than smaller LLMs, such as understanding context in long streams of text, solving complex math equations, and generating synthetic data that can potentially improve smaller AI models.

Meta is also launching smaller versions of Llama 3.1, known as the Llama 3.1 8B and Llama 3.1 70B models. These smaller models are enhanced versions of their predecessors and can be used to power chatbots and coding assistants, the company said.
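
For developers curious what using one of the smaller models looks like in practice, a minimal sketch with the Hugging Face transformers library might resemble the following. The repository name, chat prompt, and generation settings here are illustrative assumptions rather than details from Meta's announcement, and downloading Llama weights generally requires accepting Meta's license terms.

```python
# Minimal sketch (not from Meta's announcement): answering a chat-style prompt with a
# smaller Llama 3.1 model via the Hugging Face transformers library. The repository
# ID and generation settings below are assumptions for illustration only.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Meta-Llama-3.1-8B-Instruct"  # assumed repo name for the 8B model

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Format a simple chatbot/coding-assistant exchange with the tokenizer's chat template.
messages = [
    {"role": "system", "content": "You are a helpful coding assistant."},
    {"role": "user", "content": "Write a Python function that reverses a string."},
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Generate a short reply; max_new_tokens is an arbitrary cap for the example.
output_ids = model.generate(input_ids, max_new_tokens=200)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```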

Meta's U.S.-based WhatsApp users and Meta.AI website visitors will be able to experience Llama 3.1's capabilities through the company's digital assistant. The digital assistant, powered by the latest version of Llama, will be able to solve complex math problems and software coding issues. Users can switch between the new, large Llama 3.1 LLM and a less capable but faster and smaller version for their queries.

by Kif Leswing
