
HPE ProLiant Compute DL384 Gen12
Designed for memory-intensive AI inferencing with retrieval-augmented generation (RAG) and for large language model (LLM) fine-tuning, while also supporting non-AI workloads. The HPE ProLiant Compute DL384 Gen12 is a high-performance 2U rack server engineered for demanding AI workloads, including generative AI, LLMs, and RAG. It features the NVIDIA GH200 Grace Hopper™ Superchip, which combines an Arm-based Grace CPU with a Hopper GPU for exceptional compute and memory capabilities.
- Processor: Up to 2 x NVIDIA GH200 Grace Hopper Superchips
- Memory: Up to 1.2 TB of coherent memory (480 GB LPDDR5X + 144 GB HBM3e per superchip)
- Storage: Up to 8 x EDSFF NVMe Gen5 drives
- Expansion: Up to 4 x PCIe Gen5 x16 slots; OCP 3.0 options
- Power supply: Up to 4 x 1800 W–2200 W Flex Slot Titanium hot-plug power supplies
- Management: HPE iLO 6 with Intelligent Provisioning and Advanced options
- Operating systems: RHEL 9.2+, with SLES and Ubuntu support forthcoming
- Use cases: AI inferencing, LLM fine-tuning, and hybrid cloud deployments
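The headline "up to 1.2 TB of coherent memory" follows directly from the per-superchip breakdown in the spec list; a minimal arithmetic sketch (values taken from the list above, not from any HPE tooling):

```python
# Sanity check of the coherent-memory figure, assuming the per-superchip
# split of 480 GB LPDDR5X (CPU-attached) + 144 GB HBM3e (GPU-attached)
# and the maximum two-superchip configuration.
LPDDR5X_GB = 480   # Grace CPU memory per GH200 superchip
HBM3E_GB = 144     # Hopper GPU memory per GH200 superchip
SUPERCHIPS = 2     # maximum configuration in the DL384 Gen12

total_gb = (LPDDR5X_GB + HBM3E_GB) * SUPERCHIPS
print(total_gb)  # 1248 GB, rounded down in marketing copy to "1.2 TB"
```

Note that the GPU can address the full pool coherently over NVLink-C2C, which is why the CPU and GPU memory are quoted as a single figure.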