A landmark supercomputing solution for generative artificial intelligence (AI) has been launched in Australia through a collaboration between Hewlett Packard Enterprise (HPE) and NVIDIA. The solution is designed to accelerate AI training and simulation for major enterprises and research institutions.
The solution targets the immediate need for robust, efficient AI model training. It combines AI/ML software, supercomputing technology, and NVIDIA's Grace Hopper GH200 Superchips into a first-of-its-kind package, marking a significant stride in global AI advancement.
Standout features of the solution include a quad NVIDIA Grace Hopper GH200 Superchip configuration and HPE Cray supercomputing technology, the same architecture used in the world's fastest supercomputer. The solution's liquid-cooling capabilities also deliver enhanced energy efficiency, aligning with Australia's push for greener technology.
The design is tailored to the rigorous demands of large-scale AI workloads, promising strong performance and scalability whilst honouring a commitment to sustainable technology operations.
Justin Hotard, executive vice president and general manager of HPC, AI & Labs at HPE, highlights the solution's potential: "To support generative AI, organisations need to leverage solutions that are sustainable and deliver the dedicated performance and scale of a supercomputer to support AI model training."
It's not just the hardware and efficiency that set this solution apart. Integrated software tools for building AI applications, customising models, and developing and modifying code cement it as cutting-edge and essential to generative AI. Paired with HPE Cray supercomputing technology, the NVIDIA Grace Hopper GH200 Superchips open up new possibilities for handling large-scale AI workloads such as large language model and deep learning recommendation model training.
Unveiled as a comprehensive, AI-native package for generative AI, the offering integrates AI/ML acceleration software, the HPE Cray Programming Environment suite, and the NVIDIA GH200 Grace Hopper Superchips, with a design built for scalability. HPE Complete Care Services rounds out the offering, providing turnkey support to streamline AI adoption.
Looking ahead, supercomputing and AI are on course towards a more sustainable future. With estimates that AI workloads could demand around 20 gigawatts of power within data centres by 2028, customers will need solutions that prioritise energy efficiency. HPE addresses this with liquid-cooled systems that consume less power while driving performance improvements.
The groundbreaking supercomputing solution for generative AI is set to be generally available in December through HPE in more than 30 countries.