New GPU-accelerated servers supporting PCIe or baseboard configurations.

Giga Computing, a subsidiary of GIGABYTE and an industry leader in high-performance servers, server motherboards, and workstations, today announced a lineup of powerful GPU-centric servers with the latest AMD and Intel CPUs, including NVIDIA HGX H100 servers with both 4-GPU and 8-GPU modules. With growing interest in HPC and AI applications, particularly generative AI (GAI), this class of server relies heavily on GPU resources to tackle compute-heavy workloads that process large amounts of data.

With the advent of OpenAI’s ChatGPT and other AI chatbots, large GPU clusters are being deployed with system-level optimization to train large language models (LLMs). GIGABYTE’s new, design-optimized systems are built to handle these models while offering a high degree of customization based on users’ workloads and requirements.
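
To give a concrete sense of the workload, the following is a minimal data-parallel training sketch, assuming PyTorch with the NCCL backend; the tiny placeholder model, random data, and the torchrun launch command are illustrative assumptions, not details from GIGABYTE's announcement.

    # Minimal sketch of a multi-GPU training job of the kind these servers target,
    # assuming PyTorch with the NCCL backend (an assumption, not part of the
    # announcement). Launch with, e.g.: torchrun --nproc_per_node=8 train_sketch.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        dist.init_process_group(backend="nccl")          # one process per GPU
        local_rank = int(os.environ["LOCAL_RANK"])       # set by torchrun
        torch.cuda.set_device(local_rank)

        # A single linear layer stands in for a real LLM; the loop shape is the same.
        model = DDP(torch.nn.Linear(4096, 4096).cuda(local_rank), device_ids=[local_rank])
        optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

        for _ in range(10):                              # placeholder training loop
            x = torch.randn(32, 4096, device=f"cuda:{local_rank}")
            loss = model(x).pow(2).mean()
            loss.backward()                              # gradients all-reduced across GPUs
            optimizer.step()
            optimizer.zero_grad()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()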

The GIGABYTE G-series servers are built first and foremost for dense GPU compute and the latest PCIe technology. Starting with the 2U servers, the new G293 servers support up to 8 dual-slot GPUs or 16 single-slot GPUs, depending on the model. For the ultimate in combined CPU and GPU performance, the 4U G493 servers offer plenty of networking options and storage configurations alongside support for eight (Gen5 x16) GPUs. And for the highest level of GPU compute for HPC and AI, the G363 and G593 series support NVIDIA H100 Tensor Core GPUs.

All of these new dual-socket servers are designed for either 4th Gen AMD EPYC™ processors or 4th Gen Intel® Xeon® Scalable processors.

  • G293 series:
    • Dual socket Intel Xeon servers (up to 225W TDP) that support either eight dual-slot GPUs or sixteen single-slot GPUs, as well as eight 2.5” storage bays
  • G363 series:
    • Dual socket Intel Xeon server with NVIDIA HGX H100 4-GPU and six low-profile slots
  • G493 series:
    • Dual socket servers that support up to eight Gen5 x16 GPUs alongside a wide choice of networking and storage configurations
  • G593 series:
    • Dual socket AMD EPYC server for NVIDIA HGX H100 8-GPU
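
As a practical aside, the difference between the PCIe-attached GPUs in the G293/G493 and the NVLink/NVSwitch-connected HGX baseboards in the G363/G593 can be seen on any such system with NVIDIA's standard nvidia-smi topo -m interconnect matrix. The short sketch below simply wraps that command; it assumes the NVIDIA driver is installed and is not specific to GIGABYTE hardware.

    # Print the GPU interconnect matrix reported by the NVIDIA driver.
    # PCIe-based systems show PIX/PXB/PHB (PCIe) paths between GPUs, while
    # HGX baseboards show NV# (NVLink) links.
    import subprocess

    def print_gpu_topology() -> None:
        result = subprocess.run(
            ["nvidia-smi", "topo", "-m"],
            capture_output=True,
            text=True,
            check=True,
        )
        print(result.stdout)

    if __name__ == "__main__":
        print_gpu_topology()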

NVIDIA HGX H100 servers

NVIDIA HGX H100 is the world’s most powerful end-to-end AI supercomputing platform that brings together the full power of NVIDIA H100 GPUs and fully optimized NVIDIA AI Enterprise and NVIDIA HPC software to provide the highest simulation, data analytics, and AI performance. As the software layer of the NVIDIA AI platform, NVIDIA AI Enterprise accelerates data science pipelines and streamlines development and deployment of production AI including generative AI, computer vision, speech AI and more. It includes over 50 frameworks, pretrained models and development tools.

HGX H100 is available as a server building block in the form of integrated baseboards in four or eight H100 GPU configurations. Four H100 GPUs offer fully interconnected point-to-point NVLink connections between GPUs, while the eight-GPU configuration offers full GPU-to-GPU bandwidth through NVIDIA NVSwitch technology. Leveraging the power of H100 multi-precision Tensor Cores, an 8-way HGX H100 server provides up to 32 petaFLOPS of FP8 deep learning compute performance.
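
For context, that headline figure lines up with simple arithmetic, assuming the commonly cited peak of roughly 3,958 TFLOPS of FP8 Tensor Core throughput (with structured sparsity) per H100 SXM GPU; the per-GPU number is an outside assumption, not a figure from this announcement.

    # Back-of-envelope check of the "32 petaFLOPS FP8" claim for an 8-way HGX H100.
    per_gpu_fp8_tflops = 3_958          # assumed H100 SXM FP8 peak, sparsity enabled
    gpus_per_baseboard = 8              # HGX H100 8-GPU configuration

    total_pflops = per_gpu_fp8_tflops * gpus_per_baseboard / 1_000
    print(f"Aggregate FP8 throughput: ~{total_pflops:.0f} petaFLOPS")   # ~32 petaFLOPS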

Relationships to Deliver Value

Partnering with leading technology companies allows Giga Computing to bring products quickly to the market for new enterprise applications. Giga Computing is committed to certifying servers and motherboards for various operating systems, software, and IT technologies.
