News

AMD tested the MI300X with both Genoa and Turin host CPUs, showing good performance scaling. The MI300X has a peak FP16 throughput of 1,307.4 teraflops and delivered 2,530.7 tokens per second in the MLPerf server scenario.
AMD's data center Instinct MI300X GPU can compete against Nvidia's H100 in AI workloads, and the company has finally posted an official MLPerf 4.1 result, at least for the Llama 2 70B LLM.
AMD launched its new flagship AI GPU and accelerator, the Instinct MI300X, earlier this month. During the launch event, AMD provided charts and data indicating that the MI300X outperformed NVIDIA ...
Meta’s embrace of AMD’s MI300X makes AMD the prime second source to Nvidia as AI compute demand accelerates. Read why I rate ...
AMD has finally launched its Instinct MI300X accelerators, ... (HPC) applications. The MI300X is faster than the H100, AMD said earlier this month, ...
When Compared To The NVIDIA H100, The MI300X Has Some Issues. Let’s do a sanity check on AMD’s ambitions. First, I must say that the MI300 is an amazing chip, a tour de force of applying ...
On Wednesday, AMD released benchmarks comparing the performance of its MI300X with Nvidia's H100 GPU to showcase its Gen AI inference capabilities. For the Llama 2 70B model, a ...
TensorWave started racking up AI systems powered by AMD's just-released Instinct MI300X AI accelerator, which it plans to lease out at a fraction of the cost of NVIDIA's Hopper H100 ...
AMD said its newly launched Instinct MI300X data center GPU exceeds Nvidia’s flagship H100 chip in memory capabilities and surpasses it in key AI performance metrics.
AMD's CEO Introduces the Instinct MI300X. ... The competition for the MI300X is clearly going to be Nvidia’s H100 GPU. The Nvidia H100 SXM module offers 80GB of memory today, ...
AMD launched its Instinct MI300X AI accelerator and the Instinct MI300A data center APU at its Advancing AI event in San Jose, California, claiming a lead over Nvidia in certain AI workloads.