AI Accelerator Card

  • 12 Programmable SHAVE Cores

    Deliver the hundreds of GFLOPS of matrix-multiplication compute required by deep learning networks across a range of topologies (see the back-of-envelope sketch after this list).

  • Ultra-Low Power Architecture
  • High-Performance VPU with On-Board RAM

    Deep neural networks generate large volumes of intermediate data. Keeping it all on chip lets customers avoid the external memory bandwidth that would otherwise become a performance bottleneck.

  • Small Footprint for Embedded Applications

  • Native Support for Mixed Precision and Hardware Flexibility

    Myriad's flexible mixed-precision support is what enables deep learning networks to run with industry-leading performance at best-in-class power efficiency (illustrated in the second sketch below).
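
To put the GFLOPS figure in perspective, here is a minimal back-of-envelope sketch in Python. The layer dimensions (a 56x56 feature map, 128 input and 128 output channels, 3x3 kernel, 30 fps) are hypothetical illustration values, not Myriad benchmark data.

```python
# Back-of-envelope sketch (hypothetical layer sizes, not Myriad benchmark data):
# estimate the matrix-multiply FLOPs behind one convolution layer and the
# sustained GFLOPS it demands at a given frame rate.

def matmul_gflops(h, w, c_in, c_out, k, fps):
    """GFLOPS needed to run one conv layer, expressed as a matrix multiply, at `fps`."""
    # (h*w) output positions x (k*k*c_in) inputs x c_out filters,
    # 2 FLOPs (multiply + accumulate) per MAC.
    flops_per_frame = 2 * h * w * (k * k * c_in) * c_out
    return flops_per_frame * fps / 1e9

if __name__ == "__main__":
    # Hypothetical mid-network layer: 56x56 feature map, 128 -> 128 channels, 3x3 kernel.
    per_layer = matmul_gflops(56, 56, 128, 128, 3, fps=30)
    print(f"~{per_layer:.0f} GFLOPS for one layer at 30 fps")
    # A network with a few dozen such layers quickly reaches hundreds of GFLOPS.
```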
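
In the same spirit, the second sketch below (again using hypothetical, illustrative sizes rather than device specifications) compares the intermediate-activation storage for one feature map in FP32 versus FP16, which is the memory and bandwidth that on-chip RAM and mixed precision are saving.

```python
# Minimal sketch (hypothetical sizes, not device specifications): compare the
# intermediate-activation storage for one feature map in FP32 vs FP16.

def activation_mb(h, w, channels, bytes_per_elem):
    """Megabytes needed to hold one layer's output activations."""
    return h * w * channels * bytes_per_elem / 1e6

if __name__ == "__main__":
    # Hypothetical 56x56x128 feature map.
    for name, bpe in (("FP32", 4), ("FP16", 2)):
        print(f"{name}: ~{activation_mb(56, 56, 128, bpe):.1f} MB per layer output")
    # Halving precision halves both on-chip storage and the external bandwidth
    # that would otherwise be spent spilling activations off chip.
```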