Jayhawk by d-Matrix Tackles Ever-Expanding Compute and Memory Requirements for Generative AI


d-Matrix, a provider of high-efficiency AI-compute and inference processors, is launching Jayhawk, a chiplet platform based on the Open Domain-Specific Architecture (ODSA) Bunch of Wires (BoW) interface. Designed for energy-efficient die-to-die connectivity over organic substrates, Jayhawk is the company's second-generation silicon platform for generative AI and large language model transformer applications, with drastically improved performance and efficiency.

Ever more innovative and complex AI models demand greater memory and energy. d-Matrix's Jayhawk is designed to offer an in-memory compute-based IC architecture, tool integrations with ANN models, and block-grid chiplets to withstand highly demanding ML workloads and enhance overall efficiency. d-Matrix is spearheading this approach to compute platforms, according to the company.

“With the announcement of our second-generation chiplet platform, Jayhawk, and a track record of execution, we are establishing our leadership in the chiplet ecosystem,” said Sid Sheth, CEO of d-Matrix. “The d-Matrix team has made great progress toward building the world’s first in-memory computing platform with a chiplet-based architecture targeted for power hungry and latency sensitive demands of generative AI.”

Jayhawk is fundamentally designed to support the colossal data and power requirements of AI inference, easing the productivity and performance challenges posed by complex transformers. Its modular chiplet architecture, built from pre-validated chiplets, enables a faster refresh cadence and ultimately a compute platform with robust performance capabilities.

"d-Matrix has moved quickly to seize the chiplet opportunity, which should give them a first-mover advantage,” said Karl Freund, founder and principal analyst at Cambrian-AI Research. “Anyone looking to add an AI accelerator to their SoC design would do well to investigate this new approach for efficient AI.”

Chiplets will be built on both BoW- and UCIe-based interconnects, allowing Jayhawk to be as comprehensive and heterogeneous as it is highly performant. The platform features 3 mm, 15 mm, and 25 mm trace lengths on organic substrates; 16 Gbps/wire bandwidth; TSMC 6 nm process technology; and <0.5 pJ/bit energy efficiency.
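As a rough illustration of what those last two figures imply, the published rate and energy-per-bit numbers pin down a worst-case per-wire link power; the sketch below uses only the figures above, and the wire counts are hypothetical examples, not d-Matrix specifications:

```python
# Back-of-envelope die-to-die link power from the published Jayhawk figures.
# Assumes the link runs at full rate and at the 0.5 pJ/bit upper bound;
# the wire counts passed in are illustrative, not from d-Matrix.

GBPS_PER_WIRE = 16        # 16 Gbps/wire (published)
ENERGY_PJ_PER_BIT = 0.5   # <0.5 pJ/bit (published upper bound)

def link_power_mw(num_wires: int) -> float:
    """Worst-case link power in milliwatts for a given wire count."""
    bits_per_second = num_wires * GBPS_PER_WIRE * 1e9
    watts = bits_per_second * ENERGY_PJ_PER_BIT * 1e-12
    return watts * 1e3  # W -> mW

print(link_power_mw(1))   # a single wire burns at most ~8 mW
print(link_power_mw(64))  # a hypothetical 64-wire link, ~512 mW
```

At these levels, interconnect power stays a small fraction of a typical accelerator's budget, which is the point of low-pJ/bit chiplet links.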

Demos and evaluations of the Jayhawk compute platform are now available. For more information, please visit https://www.d-matrix.ai/.
