News
First introduced in late 2023, AMD’s Radeon Instinct MI300X accelerator has made an impressive debut on Geekbench's OpenCL ...
At Tuesday’s event, the chip designer revealed the second chip to come from its Instinct MI300 series: the MI300X, which AMD said provides better efficiency and cost savings for running large ...
Despite AMD's MI300X boasting higher advertised teraFLOPS (TFLOPS) and memory bandwidth than Nvidia's H200, Jefferies' proprietary benchmarking suggests that the H200 "retains a significant ...
The MI300 chips—which are also getting support from Lenovo, Supermicro and Oracle—represent AMD's biggest challenge yet to Nvidia's AI computing dominance. AMD claims that the MI300X GPUs ...
The analyst pointed to AMD's MI300X chip lagging behind Nvidia's (NVDA) H200 in key AI benchmarks. He also flagged renewed competitive pressure from Intel (NASDAQ:INTC), driven by management ...
“Our proprietary benchmarking report suggests real-world throughput of NVDA’s H200 across a range of open-source models is substantially higher than AMD’s MI300x, despite MI300x’s higher ...
By integrating Rapt’s intelligent workload automation platform with AMD Instinct MI300X, MI325X and upcoming MI350 series GPUs, this collaboration delivers a scalable, high-performance ...
Whereas the MI300X sports 192 GB of HBM3 high-bandwidth memory and a memory bandwidth of 5.3 TBps, the MI325X features up to 288 GB of HBM3e and 6 TBps of bandwidth, according to AMD.
In 2024, we reported on a number of big wins for AMD, which included shipping thousands of its MI300X AI accelerators to Vultr, a leading privately-held cloud computing platform, and to Oracle.