
Developing NEO’s AI Using NVIDIA’s Robotics Platform

MAR 17 '26

Training humanoid robots requires a large amount of infrastructure behind the scenes — simulation environments, large-scale model training, and powerful onboard compute to handle real-time inference.

At 1X, we use several parts of NVIDIA’s robotics platform throughout our development pipeline, from generating simulated training data to running models directly onboard our robots. With GTC underway this week, it’s a good moment to share a bit more about how these tools fit into our stack.

NVIDIA Isaac Sim and Isaac Lab: Simulation and robot learning

Before a robot learns a new capability in the real world, it often practices thousands or even millions of times in simulation. This allows us to explore new behaviors and generate training data before deploying models on physical robots.

We use NVIDIA Isaac Sim and NVIDIA Isaac Lab, open-source frameworks for robot simulation and learning, to generate photorealistic environments where our robots can train and evaluate tasks. By training our world models in these physics-based simulation environments, we can run task executions, measure whether a model successfully completes them, and generate additional training data to improve performance.
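The evaluate-then-collect pattern described above can be sketched in miniature. This is an illustrative toy, not 1X's actual pipeline or the Isaac Sim/Isaac Lab API: `policy` and `env_step` are hypothetical stand-ins for a trained model and a simulator step, and the "physics" is a one-dimensional placeholder.

```python
def rollout(policy, env_step, horizon=50):
    """Run one simulated episode; return the trajectory and a success flag.

    `policy` and `env_step` are placeholders for a trained model and a
    physics-simulator step function (Isaac Sim/Isaac Lab in the real stack).
    """
    state, trajectory = 0.0, []
    for _ in range(horizon):
        action = policy(state)
        state, success = env_step(state, action)
        trajectory.append((state, action))
        if success:
            return trajectory, True
    return trajectory, False


def evaluate_and_collect(policy, env_step, episodes=100):
    """Measure task success rate and keep successful rollouts as extra data."""
    extra_data, successes = [], 0
    for _ in range(episodes):
        traj, ok = rollout(policy, env_step)
        if ok:
            successes += 1
            extra_data.extend(traj)  # successful trajectories become training data
    return successes / episodes, extra_data
```

The key idea is that simulation gives a free success signal: every rollout can be scored automatically, and the successful ones feed back into training.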

Isaac Lab also allows us to run large numbers of GPU simulations in parallel in the cloud. We use this to train reinforcement learning controllers for locomotion and other foundational robot capabilities. Running these simulations at scale exposes the models to far more scenarios than would be possible with physical robots alone.
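The parallelism described here can be illustrated with a toy lockstep loop. This is a sketch of the concept only: Isaac Lab performs this stepping on the GPU across thousands of physics environments, whereas the stand-in below advances a trivial one-dimensional dynamic in plain Python.

```python
def step_batch(states, actions):
    """Advance every simulated environment one step in lockstep.

    A stand-in for GPU-vectorized stepping; a toy 1-D dynamic replaces
    full rigid-body physics here.
    """
    return [s + 0.1 * a for s, a in zip(states, actions)]


def collect_parallel(policy, num_envs=1024, horizon=100):
    """Collect (state, action, next_state) transitions from many envs at once."""
    states = [0.0] * num_envs
    transitions = []
    for _ in range(horizon):
        actions = [policy(s) for s in states]
        next_states = step_batch(states, actions)
        transitions.extend(zip(states, actions, next_states))
        states = next_states
    return transitions
```

Because every environment advances together, one loop of `horizon` steps yields `num_envs * horizon` transitions, which is what makes simulation-based RL so much more sample-rich than physical data collection.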

NVIDIA Blackwell: Training models in the cloud

Training embodied AI models requires significant computation.

We use NVIDIA Blackwell HGX B200 GPUs to train our proprietary robot foundation models across both simulated and real-world robot data. These models learn to connect perception with action — interpreting visual scenes and translating them into physical behavior.
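The perception-to-action interface can be shown with a deliberately tiny policy head. This is a hypothetical toy, not 1X's model: real robot foundation models are large multimodal networks, but the contract is the same — visual features in, motor commands out.

```python
import math

def perception_to_action(features, weights, biases):
    """Map a perception feature vector to bounded joint commands.

    A toy linear policy head; `weights` has one row per actuated joint.
    tanh keeps each command in [-1, 1], a common convention for
    normalized motor targets.
    """
    commands = []
    for w_row, b in zip(weights, biases):
        pre = sum(w * f for w, f in zip(w_row, features)) + b
        commands.append(math.tanh(pre))
    return commands
```

In practice the `features` would come from a vision encoder and the mapping would be a deep network trained on simulated and real robot data, but the input/output shape of the problem is captured here.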

Access to large GPU clusters allows us to iterate quickly during both mid-training and post-training stages as we improve robustness and generalization across environments.

NVIDIA Jetson Thor: Running models onboard the robot

Once models are trained, they need to run directly on the robot with minimal latency. Real-world interaction requires fast perception and decision-making while operating within strict power and size constraints. NVIDIA Jetson Thor is the only product on the market built to support NEO's requirements for onboard compute.

Thor allows us to run large neural networks locally on the robot itself, supporting real-time perception, reasoning, and control with optimized inference performance.

Running multimodal AI models directly onboard NEO allows the system to respond quickly to its environment while minimizing reliance on external computers.
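A fixed-rate control loop makes the latency constraint concrete. The sketch below is illustrative only: the 50 Hz rate is an assumed figure (not NEO's actual control rate), and `perceive` and `act` are hypothetical stand-ins for onboard model inference and motor command dispatch.

```python
import time

CONTROL_PERIOD_S = 0.02  # assumed 50 Hz loop rate, for illustration only

def control_loop(perceive, act, num_ticks):
    """Run a fixed-rate perception-to-action loop, counting missed deadlines.

    `perceive` and `act` are placeholders for onboard inference and
    actuation; a tick "misses" if inference runs past its time budget.
    """
    missed = 0
    next_deadline = time.monotonic() + CONTROL_PERIOD_S
    for _ in range(num_ticks):
        observation = perceive()
        act(observation)
        now = time.monotonic()
        if now > next_deadline:
            missed += 1  # inference ran over budget this tick
            next_deadline = now + CONTROL_PERIOD_S
        else:
            time.sleep(next_deadline - now)
            next_deadline += CONTROL_PERIOD_S
    return missed
```

This is why onboard compute matters: every millisecond of inference eats into the per-tick budget, and a round trip to an external computer would routinely blow the deadline.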

NVIDIA Argus: Vision and perception

Vision is one of the primary ways humanoid robots understand the world around them.

As part of our perception stack, we use NVIDIA Argus to process camera data and support low-latency visual inference. This helps maintain a tight perception-to-action loop when the robot is navigating environments or interacting with objects.

From simulation to deployment

Together, these tools support the full lifecycle of robot learning — from simulation, to large-scale training, to real-world deployment.

Combining this infrastructure with our robotics and AI systems allows us to move efficiently from simulated learning to physical capability as we continue developing NEO and advancing our humanoid robotics platform.
