
The Data Center architecture for Graphcore computing

Designed for large-scale parallel workloads


This research examines the Graphcore data center architecture that enables highly scalable parallel processing for Artificial Intelligence (AI) and High-Performance Computing (HPC).

The architecture provides efficient, low-latency communication between Intelligence Processing Units (IPUs) within a node, within a rack, and across a data center spanning hundreds, and in the future thousands, of accelerators, keeping pace with rapidly growing AI model complexity.

IPU-Fabric dynamically connects IPU accelerators to disaggregated servers and storage. Critically, this agile platform for parallel applications is backed by a comprehensive software stack for developing and optimizing workloads with open-source frameworks alongside Graphcore's own libraries and development tools.
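As an illustration of how an open-source framework can target IPUs through this stack, the sketch below wraps an ordinary PyTorch model with Graphcore's PopTorch library. The model, layer sizes, and option values are hypothetical, and it assumes a system with the Poplar SDK and IPU access; it is a minimal sketch of the development flow, not a prescription from the report.

```python
# Minimal sketch (hypothetical model and option values): running a standard
# PyTorch model on IPUs via Graphcore's PopTorch wrapper.
import torch
import poptorch


class TinyClassifier(torch.nn.Module):
    """Small, hypothetical model used only to illustrate the flow."""

    def __init__(self) -> None:
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(128, 64),
            torch.nn.ReLU(),
            torch.nn.Linear(64, 10),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


model = TinyClassifier()

# PopTorch options control how the workload maps onto IPUs, e.g. how many
# batches each IPU processes per step and how many replicas run in parallel.
opts = poptorch.Options()
opts.deviceIterations(4)   # hypothetical value
opts.replicationFactor(1)  # hypothetical value

# Compile the unmodified PyTorch model for IPU execution.
ipu_model = poptorch.inferenceModel(model, options=opts)

# Outer batch dimension covers all device iterations (4 x micro-batch of 1).
logits = ipu_model(torch.randn(4, 128))
print(logits.shape)
```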