The first-ever transformer supercomputer is here, etched into silicon. Built by Etched AI, the system takes the radical approach of burning the transformer architecture directly into the core of its chips.
Etched pits its 8xSohu system against NVIDIA’s 8xA100 and 8xH100 servers, where it claims a decisive lead in throughput measured in tokens per second (TPS).
This throughput enables real-time voice agents that can ingest thousands of words in milliseconds, setting a new standard for speed and efficiency.
Beyond what traditional GPUs can offer, the supercomputer lets developers build products previously deemed impossible. Tree search, for instance, improves coding assistants by letting the system compare hundreds of candidate responses in parallel, which both boosts efficiency and opens the door to more sophisticated applications.
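To make the idea concrete, here is a minimal best-of-n sketch of comparing many candidate responses in parallel and keeping the highest-scoring one. `generate_candidate` and `score_candidate` are hypothetical stand-ins for a model call and a scoring pass (such as running tests), not Etched’s actual stack:

```python
from concurrent.futures import ThreadPoolExecutor

def generate_candidate(prompt, seed):
    """Hypothetical stand-in for sampling one response from a model."""
    return f"response-{seed} to {prompt!r}"

def score_candidate(candidate):
    """Hypothetical stand-in for scoring a response (tests, reward model)."""
    return len(candidate) % 7  # placeholder score

def best_of_n(prompt, n=100):
    """Generate n candidate responses in parallel and keep the best one."""
    with ThreadPoolExecutor() as pool:
        candidates = list(pool.map(lambda s: generate_candidate(prompt, s), range(n)))
    return max(candidates, key=score_candidate)

print(best_of_n("write a sorting function", n=8))
```

Full tree search extends this idea by branching and scoring at every step rather than only at the end, but the parallel compare-and-select loop is the same.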
One standout feature of the transformer supercomputer is multicast speculative decoding, which lets the system generate new content in real time. This capability paves the way for dynamic, responsive applications that can adapt and produce fresh content instantaneously.
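Speculative decoding itself is a general technique: a small draft model proposes several tokens cheaply, and the large target model verifies them in parallel, keeping the accepted ones. The sketch below illustrates the accept/reject loop with placeholder functions (`draft_propose` and `target_accept` are invented for illustration); it is not Etched’s implementation:

```python
import random

def draft_propose(prefix, k=4):
    """Draft model proposes k candidate tokens (placeholder logic)."""
    return [f"tok{i}" for i in range(k)]

def target_accept(prefix, token):
    """Target model verifies a proposed token (placeholder logic)."""
    return random.random() < 0.8  # assume ~80% of draft tokens are accepted

def speculative_decode(prefix, steps=3):
    output = list(prefix)
    for _ in range(steps):
        for tok in draft_propose(output):
            if target_accept(output, tok):
                output.append(tok)          # accepted: keep the cheap draft token
            else:
                output.append("corrected")  # rejected: use the target model's token
                break                       # resume drafting from the correction
    return output

print(speculative_decode(["<s>"]))
```

Because the expensive model only verifies rather than generates token by token, several tokens can land per verification pass, which is what makes real-time generation feasible.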
Etched’s blog says this architecture will run trillion-parameter models with unparalleled efficiency. Each chip has a single core, ships with a fully open-source software stack, and is expandable to models of up to 100 trillion parameters.
The supercomputer supports beam search and MCTS (Monte Carlo tree search) decoding, carries 144 GB of HBM3E per chip, and accommodates both Mixture of Experts (MoE) models and transformer variants.
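Beam search, one of the decoding strategies named above, keeps only the top-scoring partial sequences at each step instead of committing to a single greedy path. A minimal, self-contained Python sketch, with a toy step function standing in for a real model:

```python
import heapq
import math

def beam_search(step_fn, start, beam_width=3, steps=4):
    """Generic beam search: keep the beam_width highest-scoring
    partial sequences at every step.
    step_fn(seq) -> list of (token, log_prob) continuations."""
    beams = [(0.0, [start])]  # (cumulative log-probability, sequence)
    for _ in range(steps):
        candidates = []
        for score, seq in beams:
            for tok, logp in step_fn(seq):
                candidates.append((score + logp, seq + [tok]))
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
    return beams

# Toy step function: three continuations with fixed probabilities.
def toy_step(seq):
    return [("a", math.log(0.5)), ("b", math.log(0.3)), ("c", math.log(0.2))]

for score, seq in beam_search(toy_step, "<s>"):
    print(round(score, 3), seq)
```

MCTS decoding generalizes this further by exploring and backing up scores through a tree of continuations, the kind of workload the large per-chip memory is meant to feed.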
Meanwhile, state space models such as Mamba have emerged as challengers that aim to replace transformers. It will be interesting to see how the two approaches pan out side by side.