18-05-2024

ASUS, the leading IT company in server systems, server motherboards, workstations and workstation motherboards, today announced support for the latest NVIDIA AI solutions with NVIDIA® Tesla® V100 Tensor Core 32GB GPUs and Tesla P4 on its accelerated computing servers.
Artificial intelligence (AI) is translating data into meaningful insights, services and scientific breakthroughs. The neural networks powering this AI revolution have grown tremendously in size. For instance, today’s state-of-the-art neural network for language translation, Google’s MoE model, has 8 billion parameters, compared with the 100 million parameters of models from just two years ago.
To handle these massive models, NVIDIA Tesla V100 offers a 32GB memory configuration, double that of the previous generation. Doubling the memory improves deep learning training performance for next-generation AI models by up to 50 percent and boosts developer productivity, allowing researchers to deliver more AI breakthroughs in less time. The increased memory also lets HPC applications run larger simulations more efficiently than ever before.
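As a rough illustration (not part of the original announcement), the Python sketch below shows one way a training script might detect a 32GB card and choose a larger batch size; PyTorch is assumed to be available, and the memory threshold and batch sizes are placeholder values rather than figures from ASUS or NVIDIA.

    import torch

    def pick_batch_size(device_index: int = 0) -> int:
        # Illustrative only: the 30 GiB threshold and the 32/64 batch sizes
        # are placeholder values, not recommendations from ASUS or NVIDIA.
        props = torch.cuda.get_device_properties(device_index)
        total_gib = props.total_memory / (1024 ** 3)
        # A 32GB Tesla V100 can hold roughly twice the working set of a 16GB
        # card, so train with a larger batch when the extra memory is present.
        return 64 if total_gib >= 30 else 32

    if torch.cuda.is_available():
        print(f"Batch size {pick_batch_size()} on {torch.cuda.get_device_name(0)}")
    else:
        print("No CUDA device found; this sketch targets GPU training.")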
Tesla P4 is the world’s fastest deep learning inference GPU for scale-out servers, enabling smart, responsive AI-based applications. It slashes inference latency by up to 10X in any hyperscale infrastructure and provides 40X better energy efficiency than CPUs, unlocking a new wave of AI services that were previously restricted by latency.
ASUS servers are powered by the latest NVIDIA Tesla V100 32GB and P4 GPUs, providing customers with the latest technology to elevate AI performance for diverse use cases.

  • ASUS ESC8000 G4 is optimized for HPC and AI training, supports up to 8 Tesla V100 32GB GPUs, and is a member of the HGX-T1 class of NVIDIA GPU-Accelerated Server Platforms.
  • ASUS ESC4000 G4 is designed for HPC and inference workloads and is powered by 4 Tesla V100 32GB or 8 Tesla P4 GPUs, depending on the application. It is a member of the HGX-I2 class of NVIDIA GPU-Accelerated Server Platforms and delivers a responsive, real-time experience to unlock new use cases by slashing deep learning inference latency by 10X. With 20 teraflops of inference performance using INT8 operations (illustrated in the sketch after this list) and a hardware-accelerated transcode engine, ESC4000 G4 unlocks new AI-based video services. The Tesla P4’s small form-factor, 75-watt design fits any scale-out server and provides 40X better energy efficiency than CPUs.
  • ASUS RS720-E9 is excellent for AI inference and is powered by NVIDIA Tesla P4.
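The INT8 operations mentioned above refer to 8-bit integer arithmetic used during inference. The self-contained Python sketch below is an illustration of the basic idea, not NVIDIA’s implementation; production INT8 inference on Tesla P4 runs through NVIDIA’s software stack (for example TensorRT). It quantizes float32 weights and activations to int8, accumulates in integers, and rescales the result.

    import numpy as np

    def quantize(x, num_bits=8):
        # Symmetric per-tensor quantization of a float array to int8.
        qmax = 2 ** (num_bits - 1) - 1            # 127 for int8
        scale = float(np.abs(x).max()) / qmax
        if scale == 0.0:
            scale = 1.0                           # avoid division by zero
        q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
        return q, scale

    # Float32 reference: a single fully connected layer (matrix-vector product)
    weights = np.random.randn(4, 8).astype(np.float32)
    activations = np.random.randn(8).astype(np.float32)
    reference = weights @ activations

    # INT8 path: accumulate in int32, then rescale back to float
    qw, sw = quantize(weights)
    qa, sa = quantize(activations)
    approx = (qw.astype(np.int32) @ qa.astype(np.int32)).astype(np.float32) * (sw * sa)

    print("max abs error vs. float32:", np.abs(reference - approx).max())

The integer matrix product is cheap on hardware with dedicated INT8 throughput, which is where the Tesla P4's inference efficiency comes from.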
Building on the advantages offered by NVIDIA Tesla GPUs and industry-leading ASUS hardware design expertise, ASUS is committed to delivering even more choice and value for customers worldwide.

###

AVAILABILITY & PRICING

The ASUS ESC8000 G4, ASUS ESC4000 G4 and ASUS RS720-E9 are available now in North America. Please visit ASUS or contact your local ASUS representative for further information.