Earlier this year, Nvidia announced it was partnering with the “global robotics ecosystem” to bring physical AI to factory floors.

Physical AI, defined as software that gives machines greater awareness of their surroundings and the ability to interact with them, has long been tapped as the next wave of innovation in the industry, with anticipated applications across manufacturing, healthcare and construction.

AI hardware and software giant Nvidia is looking to forge ahead in this burgeoning industry, with current partners such as ABB, Agibot, Agility and Figure using Nvidia tech to power everything from robot brains to industrial humanoids.

AI Business spoke with Nvidia’s Akhil Docca about how the company is driving physical AI uptake across industries, and what is still needed before the tech can see full-scale rollout.

Regarding physical AI deployment, what is still needed to shift the needle from innovation and experimentation to real-world usability?


Akhil Docca: What is needed now is the assurance that physical AI can work reliably in the environments where it will actually operate. That means better real-world and synthetic data, physically accurate simulation, validated digital twins, safety testing and edge runtime infrastructure that can stress-test systems before they are put near people, equipment or production workflows.

The goal is to move from a demo that works once to a repeatable loop that transitions seamlessly from simulation to reality. That is why physically accurate simulation and synthetic data are so important. With technologies like the Nvidia Omniverse libraries and the Nvidia Isaac open robotics development platform, teams can build digital twins, generate training data and validate systems before deployment.
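
To make that simulate-train-validate loop concrete, here is a minimal Python sketch. Every function in it is a hypothetical stand-in for illustration only; a real pipeline would call into Isaac Sim or Isaac Lab rather than the toy dynamics used here.

```python
# Hypothetical sketch of a sim-to-real development loop. None of these
# helpers are real Nvidia APIs; they stand in for the simulation,
# training and validation stages described above.

from dataclasses import dataclass, field
import random

@dataclass
class Episode:
    observations: list = field(default_factory=list)
    actions: list = field(default_factory=list)
    success: bool = False

def simulate_episode(policy, scene_seed: int) -> Episode:
    """Run one episode in a (stand-in) physics simulator."""
    random.seed(scene_seed)
    ep = Episode()
    obs = random.random()            # placeholder sensor reading
    for _ in range(100):
        act = policy(obs)
        obs = max(0.0, obs - act)    # toy dynamics: drive obs toward 0
        ep.observations.append(obs)
        ep.actions.append(act)
    ep.success = obs < 0.05
    return ep

def train_policy(episodes):
    """Fit a trivial proportional policy from collected episodes."""
    gain = 0.1 + 0.01 * sum(e.success for e in episodes)
    return lambda obs: gain * obs

def validate(policy, n_trials: int = 50) -> float:
    """Estimate success rate across randomized scenes before deployment."""
    return sum(simulate_episode(policy, s).success for s in range(n_trials)) / n_trials

policy = lambda obs: 0.1 * obs       # initial guess
for iteration in range(5):           # simulate -> train -> validate loop
    episodes = [simulate_episode(policy, s) for s in range(20)]
    policy = train_policy(episodes)
    print(f"iter {iteration}: success rate {validate(policy):.2f}")
```

The point is the shape of the loop: each pass gathers simulated experience, retrains the policy and re-validates it across randomized scenes before anything touches hardware.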

Where does the partner ecosystem sit within these efforts? What gap is it looking to address?

Docca: Every deployment looks different. A factory, warehouse, hospital or vehicle fleet will have its own sensors, workflows, safety requirements and physical constraints, so the gap is between general robotics capability and systems that can actually work in a specific site.

That is where the ecosystem becomes essential. Robot makers, sensor and actuator providers, industrial software companies, cloud providers, systems integrators and domain experts help adapt Nvidia’s open models, frameworks and compute to different embodiments and operating environments. This is what lets physical AI scale beyond isolated pilots to general-purpose robots that are ready for production environments.



What are the biggest challenges in implementing physical AI?

Docca: Testing every edge case in the real world is too slow, expensive and dangerous, making simulation and synthetic data essential for training, testing, validation and deployment at scale. Physical AI systems require large volumes of real and synthetic data to handle unstructured environments and real-world ambiguity. Models must generalize across different settings, and systems must meet strict safety requirements when operating around people and other systems.

Developers can simulate these environments using physically accurate digital twins and synthetic data to test rare or dangerous scenarios before deployment, then connect that training and validation loop to accelerated edge systems in the field.
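
One common way to get that coverage is domain randomization: varying lighting, materials, object poses and sensor noise across renders so that rare conditions show up in training data. The sketch below illustrates the pattern generically; the Scene fields and parameter ranges are invented for illustration and are not an Omniverse Replicator API.

```python
# Generic domain-randomization sketch for synthetic data generation.
# The Scene class and its fields are hypothetical; a real pipeline
# would drive a renderer such as Omniverse Replicator instead.

import random
from dataclasses import dataclass

@dataclass
class Scene:
    light_intensity: float   # lux-like scalar
    object_x: float          # object position on a table (meters)
    object_y: float
    texture: str
    sensor_noise_std: float  # Gaussian pixel-noise level

def randomize_scene(rng: random.Random) -> Scene:
    """Sample one randomized scene configuration, including rare extremes."""
    return Scene(
        light_intensity=rng.uniform(50, 2000),   # dim warehouse to bright daylight
        object_x=rng.uniform(-0.5, 0.5),
        object_y=rng.uniform(-0.3, 0.3),
        texture=rng.choice(["matte", "glossy", "rusted", "wet"]),
        sensor_noise_std=rng.uniform(0.0, 0.05),
    )

def generate_dataset(n: int, seed: int = 0) -> list:
    """Produce n scene configs; each would be rendered to an image plus labels."""
    rng = random.Random(seed)
    return [randomize_scene(rng) for _ in range(n)]

dataset = generate_dataset(10_000)
print(dataset[0])
```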

How are the demands of the robotics industry changing? And how are these changes informing Nvidia’s own pipeline?

Docca: The industry is moving from fixed automation toward adaptable autonomy: robots and fleets that can perceive, reason, act and adapt to new tasks without expensive reprogramming. Customers are asking for systems that are software-defined, easier to program, safer to validate and flexible enough to work across many physical environments.


That shift puts models such as Nvidia Isaac GR00T N, an open, reasoning vision-language-action model, at the center of robotics development. Rather than building a narrow model for every task, developers need powerful generalist models that understand language, perception and action, and can then be post-trained for a specific robot, site or workflow. 
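
In practice, that post-training step usually means fine-tuning the generalist model on demonstrations collected from the target robot and site. The PyTorch sketch below shows the general pattern; VLAPolicy and the demonstration format are stand-ins invented for this example, not the GR00T N interface.

```python
# Hypothetical post-training loop for a generalist vision-language-action
# model. VLAPolicy is a toy stand-in; it is NOT the Isaac GR00T N API.

import torch
from torch import nn

class VLAPolicy(nn.Module):
    """Toy stand-in: maps (image features, instruction embedding) to actions."""
    def __init__(self, vis_dim=512, txt_dim=256, act_dim=7):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(vis_dim + txt_dim, 256), nn.ReLU(),
            nn.Linear(256, act_dim),          # e.g. a 7-DoF arm command
        )
    def forward(self, vis, txt):
        return self.head(torch.cat([vis, txt], dim=-1))

# Pretrained generalist weights would be loaded here; we start fresh
# only so the sketch runs standalone.
model = VLAPolicy()
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

# Fake site-specific demonstrations: (vision, instruction, expert action).
demos = [(torch.randn(8, 512), torch.randn(8, 256), torch.randn(8, 7))
         for _ in range(10)]

for epoch in range(3):                # post-train on the target robot's data
    for vis, txt, expert_act in demos:
        loss = nn.functional.mse_loss(model(vis, txt), expert_act)
        opt.zero_grad()
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")
```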

A recurring problem is closing the sim-to-real gap. How can organizations overcome this obstacle? 

Docca: The biggest challenge is fidelity: simulating physics, sensors, lighting, materials, contact, motion and human behavior accurately enough that learned behavior transfers to the real world. Coverage is just as important: simulation has to expose models to edge cases, not only idealized conditions.

Closing the gap also requires continuous calibration between real and synthetic data. Synthetic data can fill gaps and accelerate training, but it has to be grounded in real-world physics and validated against real performance. The most durable approach is a loop where simulation, deployment data and evaluation keep correcting each other.
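
A concrete form of that calibration is system identification: tuning simulator parameters until simulated rollouts match logged real trajectories. The toy Python sketch below fits a single friction parameter by grid search; the dynamics and the "real" log are invented, and production pipelines would use richer models and gradient-based or Bayesian optimization.

```python
# Toy real-to-sim calibration: fit a simulator's friction coefficient so
# its rollouts match a logged real trajectory. All dynamics are invented.

def rollout(friction: float, steps: int = 50, v0: float = 1.0) -> list:
    """Simulate a sliding object's velocity decaying under friction."""
    v, traj = v0, []
    for _ in range(steps):
        v = max(0.0, v - friction * v)   # toy exponential decay
        traj.append(v)
    return traj

# Pretend this came from a real robot log (generated with friction=0.07).
real_traj = rollout(0.07)

def sim_to_real_error(friction: float) -> float:
    """Mean squared error between simulated and real trajectories."""
    sim = rollout(friction)
    return sum((s - r) ** 2 for s, r in zip(sim, real_traj)) / len(sim)

# Grid-search calibration; the correcting loop between real data and
# simulation parameters is the idea, not the search method itself.
candidates = [i / 100 for i in range(1, 20)]
best = min(candidates, key=sim_to_real_error)
print(f"calibrated friction: {best:.2f}, error: {sim_to_real_error(best):.2e}")
```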

Editor’s note: This interview was edited for clarity and conciseness.