European startup Vybium is making waves in the semiconductor industry by developing an AI accelerator chip built on the open RISC-V instruction set architecture, a move positioned as a direct challenge to the dominance of Nvidia's A100 GPU in the data center.
Vybium, a fabless semiconductor startup currently operating in stealth mode, is leveraging RISC-V to promote European Digital Strategic Autonomy for AI. Founded by VRULL in Austria and Software Ecosystem Solutions in Albania, the company aims to create RISC-V AI accelerators that serve a wide range of industries.
To expedite chip development, Vybium has licensed NPU IP from Stream Computing. By incorporating new data types, sparsity support, and high-bandwidth memory into its integrated design, Vybium intends to deliver competitive AI acceleration capabilities.
"Vybium is committed to offering European-developed products that reflect the unique European perspective. Our goal is to provide comprehensive solutions based on RISC-V that meet the specific requirements of European industrial firms. The increasing demand for AI/ML applications has prompted us to respond swiftly, leveraging the proven NPU IP from Stream Computing," stated Dr. Philipp Tomsich, chief technologist and founder of Vrull.
"By empowering Vybium to develop its own silicon and enhance our established NPU IP, we are accelerating the path to European AI/ML accelerators that can compete with Nvidia. This collaborative approach not only fosters innovation but also facilitates broader support for common, open-source software initiatives. It's a mutually beneficial arrangement," added Andy Mei, CEO of Stream Computing.
The initial lineup of Vybium products will focus on AI/ML accelerator cards designed to rival the Nvidia A100 family. Future plans include systems that combine RISC-V general-purpose computing with AI/ML capabilities for embedded, industrial, and edge applications. These later designs are expected to build on the company's first AI/ML acceleration products for data center, cloud, and enterprise settings.