AMD is trying hard to remind people that #Nvidia is not the only company selling artificial intelligence chips. The company has announced the availability of new accelerators and processors designed to run large language models (LLMs).
The new products introduced by the chipmaker include the Instinct MI300X accelerator and the Instinct MI300A accelerated processing unit (APU), designed for LLM training and inference. The company claims that the MI300X has 1.5 times more memory than the previous MI250X. Both new products have more memory and are more power efficient than their predecessors, according to #AMD.
The company's CEO, Lisa Su, said: "LLMs continue to grow in size and complexity, requiring enormous amounts of memory and compute. And we know that the availability of GPUs is the single most important driver of AI adoption."
Su also called the MI300X the most powerful accelerator in the world. According to her, the MI300X is comparable to Nvidia's H100 chips in LLM training but performs better on inference, outperforming the H100 by 1.4 times when running #Meta's Llama 2, an LLM with 70 billion parameters.
AMD has partnered with several major companies, including Meta, Microsoft and Oracle, which plan to use the new accelerators and processors in their artificial intelligence projects.
The announcement is part of AMD's strategy to expand its presence in artificial intelligence and compete with Nvidia, the market leader in GPUs and AI chips, as AMD looks to carve out its own share of the fast-growing industry.
The new chips also build on AMD's own architectures: the MI300A combines Zen 4 CPU cores with the CDNA 3 GPU architecture that likewise powers the MI300X, and both architectures are expected to feature in future products.
According to IDC, the artificial intelligence market is expected to grow to $110 billion by 2024, making it one of the most promising industries for tech companies. The rapid development of artificial intelligence requires powerful chips and processors, and AMD is trying to take advantage of this opportunity to increase its market share.