Be Tech Ready!!

Meta to make in-house custom chips to power AI drive

Meta Platforms, the owner of Facebook, is gearing up to deploy a new version of its custom chip in its data centers this year. The move is part of its strategy to bolster its artificial intelligence (AI) efforts, as revealed in an internal company document seen by Reuters on Thursday.

The chip, the second generation of the in-house silicon line Meta announced last year, could help reduce the company's reliance on the Nvidia chips that currently dominate the market, and rein in the soaring costs of running AI workloads as Meta races to roll out AI products.

The world's largest social media company has been scrambling to expand its computing capacity to support the energy-intensive generative AI products it is integrating into platforms like Facebook, Instagram, and WhatsApp, as well as hardware devices like its Ray-Ban smart glasses. That effort has involved spending billions of dollars to amass specialized chips and reconfigure data centers accordingly.

Given Meta’s massive scale, effectively implementing its own chip has the potential to cut annual energy costs by hundreds of millions of dollars and save billions in chip procurement expenses, as pointed out by Dylan Patel, the founder of the silicon research group SemiAnalysis.

The investment in chips, infrastructure, and energy needed to sustain AI applications has become a significant financial drain for tech companies, partly offsetting the gains made during the initial wave of enthusiasm for the technology.

A Meta spokesperson confirmed the plan to put the upgraded chip into production in 2024, saying it would work alongside the hundreds of thousands of off-the-shelf graphics processing units (GPUs), the preferred chips for AI, that the company is purchasing.

Last month, Meta CEO Mark Zuckerberg said the company aims to acquire around 350,000 flagship "H100" processors from Nvidia, the top producer of the highly sought-after GPUs used for AI, by the end of the year. Combined with supplies from other chipmakers, Meta plans to amass total compute capacity equivalent to 600,000 H100s, Zuckerberg said.

Integrating its own chip into this strategy marks a positive turn for Meta's in-house AI silicon project, after executives decided in 2022 to abandon the chip's first version. The company instead chose to invest billions of dollars in Nvidia's GPUs, which dominate the market for the AI process called training, in which vast datasets are fed into models to teach them how to perform tasks.

The new chip, internally dubbed "Artemis," is, like its predecessor, limited to a process known as inference, in which trained models apply their algorithms to make ranking decisions and generate responses to user queries.
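The training/inference split the article describes can be made concrete with a toy sketch. This is purely illustrative and has nothing to do with Meta's actual systems: training repeatedly feeds data through a model and updates its parameters, while inference only applies the frozen parameters to new inputs, which is the workload chips like Artemis are reportedly specialized for.

```python
# Illustrative only: a tiny linear model, not anything resembling
# Meta's production stack.

def train(samples, epochs=200, lr=0.05):
    """Training: feed data through the model repeatedly, adjusting weights."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y   # prediction error on this sample
            w -= lr * err * x       # gradient step for the weight
            b -= lr * err           # gradient step for the bias
    return w, b

def infer(params, x):
    """Inference: apply the frozen model to a new input; no weight updates."""
    w, b = params
    return w * x + b

# Train on points from the line y = 2x + 1, then run inference on unseen input.
params = train([(0, 1), (1, 3), (2, 5), (3, 7)])
print(round(infer(params, 4), 1))  # close to 9.0
```

Training is the expensive phase (many passes over large datasets, hence Meta's GPU purchases), while inference is what runs every time a user triggers a ranking decision or a generated response.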