
Meta’s Testing its Own AI Chips to Expand its Processing Capacity

Mark Zuckerberg is high on AI, so high, in fact, that he's invested billions in Meta's own chip development so that his company can build better AI data processing systems without having to rely on external providers.

As reported by Reuters:

“Facebook owner Meta is testing its first in-house chip for training artificial intelligence systems, a key milestone as it moves to design more of its own custom silicon and reduce reliance on external suppliers like Nvidia, two sources told Reuters. The world’s biggest social media company has begun a small deployment of the chip and plans to ramp up production for wide-scale use if the test goes well, the sources said.”

That's a significant development, considering that Meta currently has around 350,000 Nvidia H100 chips powering its AI projects, each of which costs around $25k to buy off the shelf.

That's not what Meta would have paid, as it's ordering them in massive volumes. But even so, the company recently announced that it's boosting its AI infrastructure spend to the tune of around $65 billion in 2025, which will include various expansions of its data centers to accommodate new AI chip stacks.
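For rough context, here's a back-of-the-envelope sketch of what those chip numbers imply, using the article's list-price and budget figures (Meta's actual negotiated bulk pricing would be lower, so treat this as an upper bound, not a real accounting):

```python
# Back-of-the-envelope estimate of Meta's H100 outlay at list price.
# Assumes ~350,000 chips at ~$25k each (figures cited in the article);
# Meta's real bulk pricing would be meaningfully lower.
chips = 350_000
list_price_usd = 25_000

total_list_cost = chips * list_price_usd
print(f"List-price total: ${total_list_cost / 1e9:.2f}B")

# Compare against the reported ~$65 billion 2025 AI infrastructure budget.
budget_2025 = 65_000_000_000
print(f"Share of 2025 budget: {total_list_cost / budget_2025:.0%}")
```

Even at full list price, the chips themselves come to under $9 billion, which underlines how much of that $65 billion figure goes to data centers, power, and networking rather than the GPUs alone.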

And the company may have also indicated how much its own chips will increase its capacity, in a recent overview of how its AI development is evolving:

“By the end of 2024, we’re aiming to continue to grow our infrastructure build-out that will include 350,000 NVIDIA H100s as part of a portfolio that will feature compute power equivalent to nearly 600,000 H100s.”

So, presumably, while Meta will have 350k H100 units, it’s actually hoping to replicate the compute power of almost double that.

Could that additional capacity be coming from its own chips?

The development of its own AI hardware could also open up external opportunities for the company, with H100s in massive demand and limited supply amid the broader AI gold rush.

More recently, Nvidia has been able to reduce the wait times for H100 delivery, which suggests that the market is cooling off a little. But even without that external opportunity, the fact that Meta may be able to build out its own AI capacity with internally built chips could be a huge advantage for Zuck and Co. moving forward.

Because processing capacity has become a key differentiator, and may end up being the element that defines an ultimate winner in the AI race.

For comparison, while Meta has 350k H100s, OpenAI reportedly has around 200k, while xAI’s “Colossus” supercomputer is currently running on 200k H100 chips as well.

Other tech giants, meanwhile, are developing their own alternatives: Google has its “Tensor Processing Unit” (TPU), while Microsoft, Amazon, and OpenAI are all working on their own AI chip projects.

The next battleground, then, could be the tariff wars, with the U.S. government implementing huge taxes on various imports in order to penalize foreign providers and (theoretically) benefit local businesses.

If Meta’s doing more of its production in the U.S., that could be another point of advantage, which may give it another boost over the competition.

But then again, as more recent models like DeepSeek have shown, it may not be raw processing power that wins out, but the way that power is used that truly defines the market.

That’s also speculative, as DeepSeek has benefited more from other AI projects than it initially seemed. Even so, if compute power does end up being the critical factor, it’s hard to see Meta losing out, depending on how well its chip project fares.

