Disclosure: My company, Tirias Research, has consulted for AMD and other companies mentioned in this article.
The latest additions to the Instinct family are the MI350X and MI355X. Matching the annual release cadence of its main competitor in the AI space, AMD continues to deliver new server AI accelerators every year. Both parts are based on the new CDNA 4 architecture. The MI350X is air-cooled with heat sinks and system fans, while the MI355X employs direct-to-chip liquid cooling. The shift to liquid cooling brings two major advantages: it raises the Total Board Power (TBP) from 1,000W to 1,400W and doubles rack density from 64 GPUs per rack to as many as 128.
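As a rough sanity check on those figures, the rack-level power implications follow directly from the numbers above. This is a back-of-the-envelope sketch using only the TBP and GPU counts cited in this article; real deployments add CPU, networking, and cooling overhead on top of GPU board power.

```python
# Back-of-the-envelope rack power estimate from the article's figures.
# Counts GPU board power only; CPUs, NICs, switches, and cooling
# overhead are excluded.

def rack_gpu_power_kw(gpus_per_rack: int, tbp_watts: int) -> float:
    """Total GPU board power per rack, in kilowatts."""
    return gpus_per_rack * tbp_watts / 1000

air_cooled = rack_gpu_power_kw(64, 1000)      # air-cooled MI350X-class rack
liquid_cooled = rack_gpu_power_kw(128, 1400)  # liquid-cooled MI355X-class rack

print(f"Air-cooled:    {air_cooled:.1f} kW of GPU power per rack")
print(f"Liquid-cooled: {liquid_cooled:.1f} kW of GPU power per rack")
```

The arithmetic makes the cooling shift concrete: a liquid-cooled rack carries nearly three times the GPU power of an air-cooled one, which is well beyond what conventional data center air handling can remove.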
AMD claims that the MI350 series offers roughly a 3x performance boost in both AI training and inference over the prior MI300 generation, with results that match or surpass rivals on certain AI models and workloads. (Tirias Research does not publish comparative benchmark data unless it has been independently validated.)
In terms of design, the MI350 series retains a structure similar to the MI300 generation, using 3D hybrid bonding to stack an Infinity Fabric die, two I/O dies, and eight compute dies onto a silicon interposer. Key upgrades include the adoption of the CDNA 4 compute architecture, integration of the latest HBM3E memory, and an improved I/O architecture that reduces the number of I/O dies from four to two. These components are built on TSMC's N3 and N6 manufacturing processes, improving performance and power efficiency across the chip while keeping the physical footprint compact.
The second key set of announcements centers on ROCm, AMD's open-source GPU software development platform. With the launch of ROCm 7, the platform has matured significantly. One of the most notable updates is native PyTorch support on Windows for AMD-based PCs, a major benefit for developers that makes ROCm portable across all AMD platforms. ROCm now supports all major AI frameworks and models, including the 1.8 million models available on Hugging Face. Compared to ROCm 6, AMD says ROCm 7 delivers on average 3x faster training and 3.5x faster inference on leading industry models. Alongside these software enhancements, AMD is expanding its engagement with developers through initiatives such as a dedicated developer track at the Advancing AI event and access to the new AMD Developer Cloud via GitHub.
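For developers, the practical upshot of ROCm's PyTorch support is that AMD GPUs are exposed through PyTorch's familiar `torch.cuda` API, with `torch.version.hip` set on ROCm builds instead of `torch.version.cuda`. A minimal check, assuming a ROCm build of PyTorch is installed, might look like this:

```python
# Minimal backend check for a ROCm build of PyTorch (a sketch, not an
# official AMD example). On ROCm builds, AMD GPUs are driven through
# the torch.cuda API, and torch.version.hip identifies the HIP runtime.
try:
    import torch
except ImportError:
    torch = None

def describe_backend() -> str:
    if torch is None:
        return "PyTorch not installed"
    if getattr(torch.version, "hip", None) and torch.cuda.is_available():
        return f"ROCm {torch.version.hip}: {torch.cuda.get_device_name(0)}"
    return "No ROCm GPU visible"

print(describe_backend())
```

Because ROCm reuses the CUDA-facing API surface, most existing PyTorch code runs unmodified; the check above simply confirms which backend is actually in use.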
The third major announcement was the upcoming rack-level system architecture, Helios, scheduled for release in 2026. Like other industry leaders, AMD is moving toward treating the entire rack, rather than individual server trays, as the primary computing platform. Helios will be powered by the latest AMD technologies for processing, AI, and networking: the Zen 6-based Epyc CPU, the Instinct MI400 GPU accelerator built on the next-generation CDNA Next architecture, and the Pensando Vulcano AI NIC for large-scale networking. For GPU connectivity within the rack, Helios will implement UALink. The UALink 1.0 specification was released in April 2025, with IP available from Marvell and Synopsys and switch chips expected from UALink partners such as Astera Labs and Cisco.
AMD was joined at Advancing AI by a strong lineup of partners and customers, including Astera Labs, Cohere, Humain, Meta, Marvell, Microsoft, OpenAI, Oracle, Red Hat, and xAI. Of particular interest was Humain, due to its joint venture with AMD and other semiconductor companies to develop AI infrastructure in Saudi Arabia. Humain has already started building eleven data centers and plans to deploy 50MW modules each quarter. A core part of Humain’s strategy is making use of Saudi Arabia’s abundant energy resources and young workforce.
There is much more to unpack in these announcements and the many partnerships detailed, but these three highlight AMD's ongoing commitment to competing in the data center AI market, showcasing solid execution and reinforcing its position as a credible alternative provider of data center GPU accelerators and AI platforms. As the tech sector races to meet growing AI demand, AMD continues to refine its server solutions for AI developers and applications. While it does not overtake the competition outright, AMD is closing the gap in several areas, positioning itself as the strongest alternative to Nvidia.
The above is the detailed content of AMD Accelerates AI Data Centers With Instinct And Helios.