STMicroelectronics and Leopard Imaging integrate multimodal vision module

March 30, 2026

STMicroelectronics and Leopard Imaging have collaborated to develop a multimodal vision module for advanced robotics, integrating multiple sensing capabilities into a single platform. The module connects directly with NVIDIA’s robotics ecosystem, improving robot perception for applications such as humanoid robotics.

Integrated sensing for robotics perception

The multimodal vision module is at the heart of this collaboration, combining 2D imaging, 3D depth sensing, and motion tracking in a compact unit. By integrating ST’s imaging sensors, inertial measurement capabilities, and time-of-flight LiDAR with NVIDIA’s Holoscan Sensor Bridge, the system can stream multi-gigabit sensor data to Jetson platforms in real time via Ethernet connectivity.
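To make the streaming arrangement concrete, the sketch below parses a stream of interleaved sensor packets into per-sensor records. This is purely illustrative: the packet layout, field sizes, and sensor IDs are assumptions for the example, not the actual Holoscan Sensor Bridge protocol, which the article does not describe.

```python
import struct

# ASSUMED wire format for illustration only -- not the real Holoscan
# Sensor Bridge protocol. Each packet: 1-byte sensor id, 8-byte
# timestamp in nanoseconds, 4-byte payload length, then the payload.
SENSOR_NAMES = {0: "rgb_ir", 1: "imu", 2: "tof_lidar"}
HEADER = ">BQI"  # big-endian: uint8 id, uint64 timestamp, uint32 length

def parse_packets(stream: bytes):
    """Split a raw byte stream into (sensor, timestamp_ns, payload) tuples."""
    packets = []
    offset = 0
    while offset < len(stream):
        sensor_id, ts_ns, length = struct.unpack_from(HEADER, stream, offset)
        offset += struct.calcsize(HEADER)
        payload = stream[offset:offset + length]
        offset += length
        packets.append((SENSOR_NAMES[sensor_id], ts_ns, payload))
    return packets

# Tiny demo stream: one IMU sample followed by a LiDAR payload.
demo = struct.pack(HEADER, 1, 1_000, 4) + b"\x01\x02\x03\x04"
demo += struct.pack(HEADER, 2, 2_000, 2) + b"\xaa\xbb"
print(parse_packets(demo))
```

The point of the sketch is that a single Ethernet link can carry all three modalities, with each record tagged by source and timestamp so downstream consumers can demultiplex them.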

This setup is specifically designed to meet the size, weight, and power constraints commonly found in humanoid robots, where sensor consolidation is crucial. By consolidating multiple sensing functions into a single module, developers can simplify calibration and synchronization across different sensing domains, reducing the reliance on discrete components.
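Synchronization across sensing domains typically comes down to matching samples by timestamp. The sketch below pairs each camera frame with the nearest LiDAR sample inside a tolerance window; the function name, sample rates, and tolerance are illustrative assumptions, not details from the module.

```python
import bisect

def match_nearest(ref_ts, other_ts, tolerance_ns):
    """For each reference timestamp, return the closest timestamp in
    other_ts within tolerance_ns, or None. Both lists sorted ascending."""
    matches = []
    for t in ref_ts:
        i = bisect.bisect_left(other_ts, t)
        # Only the immediate neighbours can be the nearest sample.
        candidates = other_ts[max(i - 1, 0):i + 1]
        best = min(candidates, key=lambda c: abs(c - t), default=None)
        if best is not None and abs(best - t) <= tolerance_ns:
            matches.append(best)
        else:
            matches.append(None)
    return matches

camera = [0, 33_000_000, 66_000_000]        # ~30 fps frame timestamps (ns)
lidar = [1_000_000, 40_000_000, 70_000_000]  # asynchronous depth samples
print(match_nearest(camera, lidar, tolerance_ns=10_000_000))
```

With a shared clock on a single module, this kind of nearest-neighbour association is far simpler than aligning free-running clocks across discrete components.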

Marco Angelici, Vice-President of Marketing and Application for Analog Power MEMS and Sensors at STMicroelectronics, highlighted the significance of this collaboration, stating, “Humanoid robotics is evolving beyond research projects to deliver powerful machines for various industries. Our partnership with Leopard Imaging integrates ST sensors seamlessly into the NVIDIA robotics ecosystem, facilitating the deployment of physical AI applications with human-like awareness.”

Alignment with the NVIDIA robotics ecosystem

The multimodal vision module, centered around NVIDIA’s Holoscan Sensor Bridge, is supported by the NVIDIA Isaac robotics framework, offering AI models, simulation tools, and development libraries. This integration aims to address the challenge of transitioning from simulation to real-world deployment in robotics development.

Leopard Imaging emphasized the benefits of standardized sensor data pipelines, which can streamline data collection and training workflows for developers. Bill Pu, CEO of Leopard Imaging, noted, “Robot builders can leverage our multi-sensing vision module with Isaac tools to accelerate learning and bridge the ‘sim-to-real’ gap efficiently.”

Sensor technologies underpinning the module

The multimodal vision module incorporates various sensing technologies from STMicroelectronics, including a 5.1-megapixel RGB-IR image sensor with rolling and global shutter modes, a 6-axis IMU with embedded machine-learning capabilities, and a direct time-of-flight LiDAR sensor capable of generating detailed depth maps.

The LiDAR component supports ranging up to nine meters, offering a wide field of view and high frame rates suited to dynamic environments that require real-time perception. Together, these sensors improve a robot’s ability to perceive and interact with its surroundings.
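A depth map from a multizone time-of-flight sensor can be converted into 3D points by treating each zone centre as a ray from the sensor. The sketch below assumes a small square zone grid and field of view for illustration; only the nine-meter maximum range comes from the article, and the zone count, FoV, and geometry are not specifications of this module.

```python
import math

MAX_RANGE_M = 9.0  # the article's stated maximum ranging distance

def zones_to_points(depth_grid, fov_deg=45.0):
    """Convert an NxN grid of zone depths (metres) into (x, y, z) points,
    treating each zone centre as a ray from the sensor origin.
    Zone count and field of view here are illustrative assumptions."""
    n = len(depth_grid)
    half_fov = math.radians(fov_deg) / 2
    points = []
    for row in range(n):
        for col in range(n):
            d = depth_grid[row][col]
            if d is None or d > MAX_RANGE_M:
                continue  # no return, or beyond the sensor's range
            # Angle of the zone centre relative to the optical axis.
            ax = ((col + 0.5) / n * 2 - 1) * half_fov
            ay = ((row + 0.5) / n * 2 - 1) * half_fov
            points.append((d * math.tan(ax), d * math.tan(ay), d))
    return points

# 2x2 demo grid: one missing zone and one reading beyond max range.
grid = [[2.0, 2.0], [None, 12.0]]
print(zones_to_points(grid))
```

Paired with the RGB-IR image and IMU orientation, depth points like these are what give a robot a metric picture of its surroundings.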
