DataCenterNews Asia Pacific

Microsoft introduces custom silicon chips for enhanced AI services

Fri, 17th Nov 2023

To meet the growing demand for Artificial Intelligence (AI) services, Microsoft has taken a systems approach, aiming to optimise everything from silicon to services. A significant step towards that goal was unveiled at Microsoft Ignite: two custom-designed silicon chips and integrated systems. The chips are the final piece in Microsoft's redesign of its infrastructure systems, enabling top-to-bottom optimisation of a stack that serves both internal and customer workloads.

The two chips, the Microsoft Azure Maia AI Accelerator and the Microsoft Azure Cobalt CPU, are set to roll out early next year. The Maia accelerator is optimised for AI tasks and generative AI; the Cobalt CPU is designed to run general-purpose compute workloads on the Microsoft Cloud. Together they sit at the centre of Microsoft's plan to revamp its infrastructure systems, powering services such as Microsoft Copilot and the Azure OpenAI Service in Microsoft datacentres, alongside a broad range of products from industry partners.
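For customers, the new silicon should be invisible at the API layer: a request to a service like Azure OpenAI looks the same whatever hardware serves it. As a minimal sketch, assuming an existing model deployment and the openai Python library (v1.x), a chat completion call looks like this; the endpoint, key and deployment name are placeholders:

```python
# Minimal sketch: a chat completion request to the Azure OpenAI Service.
# Endpoint, API key and deployment name are placeholders; the silicon
# serving the request is not visible at this layer.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2023-05-15",  # a published Azure OpenAI API version
)

response = client.chat.completions.create(
    model="my-gpt-deployment",  # your Azure deployment name (hypothetical)
    messages=[{"role": "user", "content": "Summarise Microsoft Ignite 2023."}],
)
print(response.choices[0].message.content)
```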

Scott Guthrie, executive vice president of Microsoft's Cloud + AI Group, stated, "Microsoft is building the infrastructure to support AI innovation, and we are reimagining every aspect of our datacentres to meet the needs of our customers. At the scale we operate, it's important for us to optimise and integrate every layer of the infrastructure stack to maximise performance, diversify our supply chain and give customers infrastructure choice."

Rani Borkar, corporate vice president for Azure Hardware Systems and Infrastructure (AHSI), noted that while Microsoft's core strength lies in software, the company sees itself as a systems company, co-designing and optimising hardware and software together for gains in efficiency and performance that neither could deliver alone. Partnering with industry to broaden infrastructure options, Microsoft also launched a preview of the new NC H100 v5 Virtual Machine Series, built around NVIDIA H100 Tensor Core GPUs and aimed at greater performance, reliability and efficiency.
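As a rough illustration of how a customer might discover the new sizes once the preview reaches their subscription, the sketch below uses the azure-identity and azure-mgmt-compute Python packages to list the VM sizes available in a region and filter for H100-based ones; the subscription ID and region are placeholders, and exact size names should be confirmed against Azure's documentation:

```python
# Rough sketch: enumerate the VM sizes available in a region and keep
# those whose names mention H100 (the NC H100 v5 family). Requires the
# azure-identity and azure-mgmt-compute packages; the subscription ID
# and region below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="YOUR-SUBSCRIPTION-ID",  # placeholder
)

for size in client.virtual_machine_sizes.list(location="eastus"):
    if "H100" in size.name:
        print(f"{size.name}: {size.number_of_cores} vCPUs, "
              f"{size.memory_in_mb} MB RAM")
```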

The Maia 100 AI Accelerator will power some of the largest internal AI workloads running on Microsoft Azure, with OpenAI providing feedback and helping to shape future designs. Pairing the chips with infrastructure optimised around them is intended to deliver a more efficient experience for users while supporting Microsoft's sustainability goals.

Built specifically for Azure's hardware stack, the Maia 100 AI Accelerator is designed to squeeze the most out of that hardware. The Cobalt 100 CPU, for its part, emphasises performance per watt across Microsoft's datacentres: more computing power for each unit of energy consumed, and therefore better energy efficiency.
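Performance per watt is simply useful throughput divided by power draw. A minimal worked example with entirely hypothetical numbers (Microsoft has not published figures for Cobalt 100) shows why it matters:

```python
# Performance per watt = work done per second / electrical power consumed.
# All numbers are hypothetical, for illustration only; Microsoft has not
# published throughput or power figures for Cobalt 100.

def perf_per_watt(ops_per_second: float, watts: float) -> float:
    """Operations per second delivered for each watt consumed."""
    return ops_per_second / watts

baseline = perf_per_watt(ops_per_second=1.0e12, watts=250.0)  # 4e9 ops/s/W
improved = perf_per_watt(ops_per_second=1.2e12, watts=200.0)  # 6e9 ops/s/W

# 20% more work on 20% less power -> 50% more compute per unit of energy.
print(f"relative gain: {improved / baseline - 1:.0%}")  # prints: 50%
```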

Before 2016, the norm was to buy the various layers of the 'cloud' from off-the-shelf vendors; since then, Microsoft has preferred to custom-build its own servers and racks. Extending that integration down to chips suited to its most important workloads has cut costs and given customers a more streamlined experience. The company can now optimise its existing datacentre assets and maximise server capacity within a given footprint.

In the words of Pat Stemen, partner program manager on the AHSI team, "Microsoft innovation is going further down in the stack with this silicon work to ensure the future of our customers' workloads on Azure, prioritising performance, power efficiency and cost. We chose this innovation intentionally so that our customers are going to get the best experience they can have with Azure today and in the future."
