Efficient Computer, a chip and compiler design startup, has showcased its general-purpose edge AI processor architecture. The company's approach centers on energy efficiency as much as raw performance in AI processing.
The architecture is based on a fabric design optimized for energy efficiency, allowing the processor to achieve 1.3TOPS/W on edge AI workloads. The fabric consists of 256 tiles, each a processing element with an ALU and the logic needed to execute instructions.
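As a rough mental model of that structure (not Efficient Computer's actual design), the sketch below represents the fabric as a 16x16 grid of 256 tiles in Python, each tile holding one instruction and a small ALU; the grid shape, operation set and instruction format are assumptions made purely for illustration.

```python
from dataclasses import dataclass, field

GRID = 16  # 16 x 16 = 256 tiles, matching the published tile count

@dataclass
class Tile:
    """One processing element: a single placed instruction plus a tiny ALU."""
    op: str = ""                                  # e.g. "add", "mul"; "" = unused tile
    inputs: list = field(default_factory=list)    # operand values received so far

    def alu(self, a, b):
        # Illustrative two-input ALU; the real operation set is not public.
        return {"add": a + b, "sub": a - b, "mul": a * b}[self.op]

# The fabric itself is just the grid of tiles.
fabric = [[Tile() for _ in range(GRID)] for _ in range(GRID)]
print(sum(len(row) for row in fabric), "tiles")    # -> 256 tiles
```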
Brandon Lucia, the CEO and founder of Efficient Computer, highlighted the significance of developing the architecture in tandem with the compiler and software stack. This collaborative effort, stemming from research at Carnegie Mellon, ensures a high level of generality in the design, setting it apart from traditional approaches.
The compiler generates a dataflow representation of the program and places its instructions onto the fabric's network on chip. A RISC-V core configures the fabric, enabling it to operate as a general-purpose processor that can run code written in C, C++ and Rust as well as models from edge AI frameworks.
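A minimal sketch of what that flow might look like, assuming a toy expression and a naive row-major placement (the actual graph format and placement algorithm have not been disclosed): each instruction becomes a node in a dataflow graph and is then assigned to a tile coordinate.

```python
# Dataflow graph for y = (a + b) * (a - b): each node is one instruction,
# and edges carry values directly between instructions rather than through
# a shared register file.
graph = {
    "n0": ("add", ["a", "b"]),
    "n1": ("sub", ["a", "b"]),
    "n2": ("mul", ["n0", "n1"]),   # consumes the results of n0 and n1
}

def place(graph, grid=16):
    """Toy placement: assign instructions to tiles in row-major order.
    A production compiler would instead optimize placement to shorten
    routes through the network on chip."""
    return {node: divmod(i, grid) for i, node in enumerate(graph)}

print(place(graph))   # {'n0': (0, 0), 'n1': (0, 1), 'n2': (0, 2)}
```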
The company also recently secured $16 million in funding to support the next phase of development.
Lucia emphasized the inefficiency of today's computer systems, particularly the prevalent von Neumann processor design, which he claims wastes much of its energy on fetching instructions and moving data rather than on computation. In contrast, Efficient Computer's Fabric processor architecture expresses a program as a circuit of instructions that can execute in parallel, giving a more energy-efficient solution.
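The contrast can be made concrete with a firing rule: instead of a program counter stepping through instructions one by one, every instruction whose operands have arrived executes at once. The scheduler below is a conceptual illustration under that assumption, not a description of the hardware.

```python
# Conceptual dataflow execution of y = (a + b) * (a - b): instructions "fire"
# when their operands are ready, so the add and the sub run in the same step.
ops = {"add": lambda a, b: a + b,
       "sub": lambda a, b: a - b,
       "mul": lambda a, b: a * b}
graph = {"n0": ("add", ["a", "b"]),
         "n1": ("sub", ["a", "b"]),
         "n2": ("mul", ["n0", "n1"])}

values = {"a": 6, "b": 2}          # inputs flowing into the circuit
step = 0
while len(values) < len(graph) + 2:
    ready = [n for n, (op, srcs) in graph.items()
             if n not in values and all(s in values for s in srcs)]
    for n in ready:                # everything that is ready fires together
        op, srcs = graph[n]
        values[n] = ops[op](*(values[s] for s in srcs))
    step += 1
    print(f"step {step}: fired {ready}")
print("result:", values["n2"])     # (6 + 2) * (6 - 2) = 32
```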
The first chip based on the architecture delivers 1.3 to 1.5TOPS/W while consuming 500mW to 600mW. Through the compiler, users can choose how many processing elements a program occupies, trading the chip's power consumption against performance.
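Taking those figures at face value, efficiency multiplied by power gives the absolute throughput on offer; pairing the low ends and the high ends of each range below is an assumption made only to bound the result.

```python
# Back-of-envelope: absolute throughput implied by the quoted figures.
# (TOPS/W) * W = TOPS, i.e. tera-operations per second.
for tops_per_w, watts in [(1.3, 0.5), (1.5, 0.6)]:
    print(f"{tops_per_w} TOPS/W at {watts * 1000:.0f} mW "
          f"-> {tops_per_w * watts:.2f} TOPS")
# 1.3 TOPS/W at 500 mW -> 0.65 TOPS
# 1.5 TOPS/W at 600 mW -> 0.90 TOPS
```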
Efficient Computer's compiler currently supports TensorFlow Lite for machine learning, with support for the ONNX AI framework format planned. It leverages the Multi-Level Intermediate Representation (MLIR) from the LLVM project, which gives it a flexible basis for framework support and optimization.
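For context, the machine-learning input the compiler accepts today is a standard TensorFlow Lite flatbuffer. The sketch below produces one with the stock TensorFlow converter; the step of actually feeding it to Efficient's toolchain is omitted, since that tooling has not been published.

```python
import tensorflow as tf

# A deliberately small model, of the kind suited to edge deployment.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(64,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# Convert to the TensorFlow Lite flatbuffer format the compiler ingests.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"wrote {len(tflite_model)} bytes to model.tflite")
```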
Looking ahead, the company plans to scale the architecture to higher performance levels. Through design-space exploration, Efficient Computer aims to reach 100GOPS at 200MHz by early 2025, with the potential to scale performance a further 10 to 100 times while maintaining efficiency.
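Those targets also imply a great deal of work per clock. A quick sanity check, again taking the public figures at face value and assuming the work is spread across the 256 tiles mentioned earlier:

```python
# 100 GOPS at 200 MHz means the fabric retires many operations per cycle;
# in a spatial architecture the parallelism comes from the tiles, not from
# a high clock frequency.
gops, mhz, tiles = 100, 200, 256
ops_per_cycle = (gops * 1e9) / (mhz * 1e6)
print(f"{ops_per_cycle:.0f} operations per cycle")                     # 500
print(f"~{ops_per_cycle / tiles:.1f} operations per tile per cycle")   # ~2.0
```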
Part of that design-space exploration involves investigating transformer frameworks for low-power edge AI applications; running transformers efficiently would open up new classes of AI workload at the edge.
For more information, visit www.efficient.computer and mlir.llvm.org.