
£66m for UK AI hardware projects

October 27, 2024


The UK’s ARIA innovation agency is funding 57 AI hardware projects with £66m (€70m).

The projects funded by the UK’s Advanced Research and Invention Agency (ARIA) include the first project for imec UK and £5m for analogue in-memory compute at Fractile. The agency aims to be the UK’s equivalent of DARPA in the US, funding longer-term hardware projects with commercial opportunities.

The project at Fractile is improving analogue in-memory matrix-vector multiplication. The aim is to apply the approaches already used for inference acceleration hardware to run frontier models orders of magnitude faster than the current state of the art. Whether sufficient precision can be achieved to apply the technique to large-scale training systems remains an open question.
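As a rough software illustration of the technique (not Fractile’s design; all names and parameters here are hypothetical), an in-memory compute array stores weights as conductances, so that applying input voltages yields output currents equal to the matrix-vector product by Ohm’s and Kirchhoff’s laws. The quantisation step below stands in for the limited read-out precision that makes training harder to support than inference:

```python
# Hypothetical sketch of analogue in-memory matrix-vector multiplication.
# Weights are stored as conductances G; applying input voltages V yields
# output currents I = G·V, which are then quantised by an ADC-like read-out.

def analogue_mvm(G, V, bits=8):
    """Multiply conductance matrix G by voltage vector V, quantising each
    output current to `bits` of precision to mimic the limited read-out
    precision of an analogue array."""
    full_scale = max(abs(sum(g * v for g, v in zip(row, V))) for row in G) or 1.0
    levels = 2 ** bits
    out = []
    for row in G:
        i = sum(g * v for g, v in zip(row, V))                    # Kirchhoff current sum
        q = round(i / full_scale * levels) / levels * full_scale  # ADC quantisation
        out.append(q)
    return out

G = [[0.5, 0.25], [0.125, 1.0]]   # conductances (weights)
V = [1.0, -2.0]                   # input voltages (activations)
print(analogue_mvm(G, V))
```

The multiply itself happens “for free” in the physics of the array; the precision question the article raises shows up here as the `bits` parameter, since training typically needs finer gradients than an 8-bit read-out provides.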

“This is a nascent technology and we think we can reach even more energy efficiency. There’s a whole frontier to be explored,” said Walter Goodwin, CEO of Fractile.

  • Fractile raises $15m for in memory compute
  • First project for imec UK
  • UK establishes ARIA agency

Researchers at King’s College London are also looking at neuromorphic matrix multiplication. Other projects include interconnect for scalable AI systems, led by Noa Zilberman at the University of Oxford, as well as connectivity technology at Alphawave Semi to allow tens of thousands of AI accelerator chips to be interconnected across distances of up to 150m with low cost and power consumption without limiting performance.

“This project will develop and demonstrate the next generation of connectivity technologies for sustainable AI scaling,” said Behzad Dehlaghi, Project Technical Lead at Alphawave.

Seven teams are developing new technologies with the potential to open up new areas of computing, with targeted relevance for modern AI algorithms. Signaloid in Cambridge is working on a CMOS digital AI hardware accelerator for linear algebra that draws on insights from analogue thermodynamic computing.

“There is currently a missed opportunity to exploit insights from analogue computing systems to enable more efficient analogue and digital hardware for speeding up linear algebraic kernels,” said Prof Phillip Stanley-Marbell, founder of Signaloid.

Patrick Coles at Normal Computing UK is leading a project to build physics-based computing chips to invert matrices and explore applications in training large-scale AI models, targeting a 1000x reduction in energy consumption compared with GPUs.

This will combine event-driven, backpropagation-free learning algorithms, stochastic computing and in-memory computing based on CMOS technology in a neuromorphic framework to reduce the cost of developing AI models.
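The idea behind physics-based matrix inversion can be sketched in software (this is an illustrative analogy, not Normal Computing’s actual design): let a physical state relax under dynamics whose fixed point is the solution of the linear system, so the hardware “computes” the inverse by settling rather than by explicit arithmetic:

```python
# Illustrative sketch of physics-style matrix inversion: the state x evolves
# under dx/dt = -Aᵀ(Ax - b), a gradient flow on ||Ax - b||², whose fixed
# point is the solution of A·x = b. A physical system with these dynamics
# settles to the answer instead of computing it step by step.

def relax_solve(A, b, dt=0.01, steps=20000):
    n = len(b)
    x = [0.0] * n
    for _ in range(steps):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(n)]  # residual Ax - b
        g = [sum(A[i][j] * r[i] for i in range(n)) for j in range(n)]         # gradient Aᵀr
        x = [x[j] - dt * g[j] for j in range(n)]                              # Euler step
    return x

A = [[2.0, 1.0], [1.0, 3.0]]
b = [3.0, 5.0]
print(relax_solve(A, b))   # approaches the solution [0.8, 1.4]
```

The energy argument is that a physical relaxation performs all of these update steps in parallel as the system settles, whereas a GPU must execute each multiply-accumulate digitally.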

The agency is also funding a project at KU Leuven in Belgium on massive parallelism for combinatorial optimisation problems, developing a new class of mixed-signal processors specifically conceived to solve combinatorial optimisation problems, as well as a project at Cornell University in the US on improved AI training algorithms.

At US-based Rain AI, Jack Kendall is leading two AI hardware projects, including a team developing an SRAM-based analogue AI accelerator for fast vector-matrix inverse multiplication using digitally-programmable transistor arrays with feedback control.
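Inverse multiplication by feedback can also be modelled in software (again an illustrative analogy, not Rain AI’s circuit): placing the programmable array in the feedback path of an amplifier drives the output until the array reproduces the input, at which point the output holds the inverse product:

```python
# Illustrative model of vector-matrix inverse multiplication by feedback:
# with conductance array G in a feedback loop, the output x is driven until
# G·x matches the input b, so the loop settles at x = G⁻¹·b.

def feedback_inverse(G, b, dt=0.01, steps=5000):
    n = len(b)
    x = [0.0] * n
    for _ in range(steps):
        # the feedback loop corrects x by the mismatch between b and G·x
        e = [b[i] - sum(G[i][j] * x[j] for j in range(n)) for i in range(n)]
        x = [x[i] + dt * e[i] for i in range(n)]
    return x

G = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
print(feedback_inverse(G, b))   # settles near G⁻¹·b = [1/11, 7/11]
```

In circuit form the loop settles in one continuous transient, which is why feedback control makes the inverse operation fast compared with iterating it digitally.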

“The Scaling Compute programme enables researchers across many disciplines, from machine learning to photonics, analogue computing, and materials science, to come together under one roof with the unified goal of creating new paradigms of energy-efficient AI systems,” said Kendall.

 
