
Nvidia Launches the “Physical AI” Era with Robot Foundation Models — What Exhibitors Need to Know for MWC, RSA, and re:Invent 2026


For three years, the artificial intelligence conversation at trade shows revolved around large language models, generative text, and image synthesis. Chatbots dominated keynotes. Software demos filled exhibit halls. The physical world, with its gravity, friction, unpredictable humans, and unstructured environments, remained stubbornly beyond the reach of most AI systems. That changed at CES 2026, where Nvidia CEO Jensen Huang stood on stage and declared that the industry had entered the era of “Physical AI” — artificial intelligence that does not merely generate words and images on a screen but perceives, reasons about, and acts within the real, physical world.

This was not a theoretical proclamation. Nvidia backed it with a cascade of product announcements that, taken together, represent the most comprehensive platform play in the history of robotics AI. Cosmos, a family of foundation models that simulate physics-governed environments. Isaac GR00T N1.6, a vision-language-action model purpose-built for humanoid robots. Alpamayo, billed as the world’s first thinking, reasoning autonomous vehicle AI. And a roster of launch partners — Boston Dynamics, Caterpillar, Franka Robots, LG Electronics, NEURA Robotics — that reads like a who’s who of companies building machines that move through the physical world.

For exhibitors preparing for the 2026 show circuit — MWC 2026 in Barcelona, RSA Conference 2026, and AWS re:Invent 2026 — Nvidia’s Physical AI offensive is not just another product launch to monitor from a distance. It is a tectonic shift that will reshape exhibit hall conversations, partnership dynamics, and competitive positioning across every industry vertical that involves machines interacting with the physical environment. Whether you manufacture robots, develop enterprise software, sell cybersecurity solutions, or provide cloud infrastructure, the Physical AI era has implications for your booth, your messaging, and your pipeline.

5 major Physical AI products unveiled by Nvidia at CES 2026: Cosmos Transfer 2.5, Cosmos Predict 2.5, Cosmos Reason 2, Isaac GR00T N1.6, and Alpamayo.

What Nvidia Actually Announced: The Physical AI Stack

Understanding the strategic significance of Nvidia’s CES 2026 announcements requires looking beyond the individual product names to see the architecture they form. Nvidia is not releasing a single robot AI model. It is constructing an entire software stack that spans every layer of intelligence a physical machine needs — from perceiving its environment to reasoning about actions to executing movements in real time. Each announcement targets a specific layer of that stack, and together they form a platform that Nvidia clearly intends to become as foundational to robotics as Android became to smartphones.

Cosmos: Foundation Models for the Physical World

At the base of the stack sits Cosmos, which Nvidia describes as an AI foundation model family designed to simulate environments governed by the laws of physics. Traditional AI models are trained on text and images scraped from the internet. Cosmos models are trained on representations of physical reality — how objects fall, how liquids flow, how forces propagate through rigid and deformable bodies, how light interacts with surfaces in three-dimensional space. The result is a world model that understands not just what things look like but how they behave.

Nvidia released two variants at CES 2026. Cosmos Transfer 2.5 specializes in generating synthetic training data — photorealistic simulations of physical environments that can be used to train robot perception systems without the cost, time, and safety constraints of collecting data in the real world. A warehouse robot that needs to learn to identify and grasp ten thousand different product SKUs no longer needs to physically encounter each one. Cosmos Transfer can generate realistic training scenarios for every object, lighting condition, and shelf configuration imaginable.
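The domain-randomization idea behind synthetic training data can be illustrated with a short sketch. Nothing below is the actual Cosmos Transfer API; the parameter names and ranges are invented for illustration. The point is that each training sample is drawn from a distribution over scene parameters rather than collected physically:

```python
import random

def sample_scene(rng: random.Random) -> dict:
    """Draw one randomized warehouse scene configuration.

    All parameter names and ranges are illustrative assumptions,
    not the real Cosmos Transfer interface.
    """
    return {
        "sku_id": rng.randrange(10_000),          # which product to render
        "lighting_lux": rng.uniform(100, 1500),   # ambient light level
        "camera_height_m": rng.uniform(1.2, 2.5), # sensor placement
        "object_yaw_deg": rng.uniform(0, 360),    # item orientation on shelf
        "occlusion_frac": rng.betavariate(2, 5),  # how much the item is hidden
        "shelf_clutter": rng.randrange(0, 12),    # neighboring distractor items
    }

def generate_dataset(n: int, seed: int = 0) -> list[dict]:
    """Seeded, so a training run is reproducible end to end."""
    rng = random.Random(seed)
    return [sample_scene(rng) for _ in range(n)]
```

Because generation is parameterized rather than collected, covering a new lighting condition or shelf layout is a one-line change to the sampling ranges, not a new data-collection campaign.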

Cosmos Predict 2.5 takes the world model concept further by enabling forward simulation — predicting what will happen next in a physical environment given the current state and a proposed action. This is the capability that allows a robot to evaluate the consequences of its actions before executing them. Should it reach for the glass from the left or the right? What happens if it applies more force to the lid? Cosmos Predict answers these questions in milliseconds by simulating the physics forward in time.
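The evaluate-before-acting pattern can be sketched with a toy one-dimensional forward model. `predict` below is an invented stand-in for a learned world model like Cosmos Predict; the structure (roll each candidate action forward, score the predicted outcome, pick the best) is the general model-predictive idea, not Nvidia's implementation:

```python
def predict(state: float, action: float, dt: float = 0.1,
            friction: float = 0.5) -> float:
    """Toy forward model: next position of an object pushed with a given
    force. A learned world model would replace this hand-written physics."""
    velocity = max(action - friction, 0.0) * dt
    return state + velocity

def choose_action(state: float, goal: float, candidates: list,
                  horizon: int = 10) -> float:
    """Simulate each candidate action forward over the horizon and pick
    the one whose predicted end state lands closest to the goal."""
    def rollout(action):
        s = state
        for _ in range(horizon):
            s = predict(s, action)
        return s
    return min(candidates, key=lambda a: abs(rollout(a) - goal))
```

Here `choose_action(0.0, 1.0, [0.5, 1.5, 3.0])` rejects the force that is too weak to overcome friction and the one that overshoots, selecting 1.5; a real system runs the same loop over far richer state, in milliseconds.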

Key Takeaway: Cosmos is not a robotics product. It is the physics engine underneath all robotics products. By positioning Cosmos as a platform layer, Nvidia is inserting itself into the foundation of every robot that needs to understand physical reality — which is to say, every robot that will ever be commercially useful.

Cosmos Reason 2: Giving Machines the Ability to Think

Perception and physics simulation are necessary but not sufficient for useful robot behavior. A machine also needs to reason — to interpret what it sees, understand context, formulate goals, and plan sequences of actions that achieve those goals. Cosmos Reason 2 is Nvidia’s answer to this requirement. It is a reasoning vision-language model (VLM) designed specifically for machines rather than humans.

Where a consumer-facing VLM like GPT-4 or Claude is optimized to produce natural language responses that are helpful to human users, Cosmos Reason 2 is optimized to produce structured action plans that are executable by robotic systems. It takes multimodal inputs — camera feeds, lidar point clouds, sensor telemetry, natural language instructions — and outputs reasoned decisions about what the machine should do next. The emphasis on “reasoning” is critical: this is not a simple stimulus-response system. Cosmos Reason 2 can chain together multi-step plans, consider constraints, evaluate trade-offs, and adapt when circumstances change mid-execution.
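What a "structured action plan" might look like can be sketched with a hypothetical schema. The skill names, fields, and JSON shape below are assumptions for illustration, not Cosmos Reason 2's actual output format; the point is that the model's output is machine-executable structure rather than conversational prose:

```python
import json
from dataclasses import dataclass, field, asdict

@dataclass
class ActionStep:
    skill: str                          # e.g. "navigate", "grasp", "place"
    target: str                         # object or location identifier
    params: dict = field(default_factory=dict)

@dataclass
class ActionPlan:
    goal: str
    steps: list
    constraints: list

def plan_to_json(plan: ActionPlan) -> str:
    """Serialize a plan for the downstream controller to validate and run."""
    return json.dumps(asdict(plan), indent=2)

# An illustrative plan a reasoning model might emit for a restocking task.
plan = ActionPlan(
    goal="restock shelf B3 with SKU 4821",
    steps=[
        ActionStep("navigate", "aisle_B"),
        ActionStep("grasp", "sku_4821", {"force_limit_n": 15}),
        ActionStep("place", "shelf_B3", {"orientation": "label_out"}),
    ],
    constraints=["max_speed_mps=0.5", "keep_2m_from_humans"],
)
```

A schema like this is what makes downstream validation possible at all: a controller can check each step against limits before anything moves, which is much harder when the model's output is free text.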

For the robotics industry, Cosmos Reason 2 addresses what has been the single hardest problem in deploying robots outside of structured factory environments: the ability to handle novel situations. A robot on a factory assembly line performs the same task thousands of times in identical conditions. A robot in a hospital, warehouse, construction site, or home encounters variability constantly. Cosmos Reason 2 is designed to give machines the cognitive flexibility to operate in these unstructured environments without requiring explicit programming for every possible scenario.

Isaac GR00T N1.6: The Brain for Humanoid Robots

If Cosmos provides the physics understanding and Cosmos Reason provides the cognitive layer, Isaac GR00T N1.6 is where everything converges into a complete robot intelligence. GR00T N1.6 is a vision-language-action (VLA) model — a single neural network that takes in visual and language inputs and directly outputs motor control commands. It sees, it understands, and it moves, all within one integrated model.
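Conceptually, a VLA collapses the classic perceive-plan-act pipeline into a single function from observation to actuation. The sketch below is a toy interface, not GR00T's API; the heuristic body merely stands in for a neural network forward pass:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    rgb: list          # camera frame (toy stand-in for an image tensor)
    instruction: str   # natural language task, e.g. "pick up the red cup"

@dataclass
class MotorCommand:
    joint_targets: list   # one target angle (radians) per joint
    gripper_closed: bool

def vla_policy(obs: Observation) -> MotorCommand:
    """Illustrative stand-in for a VLA forward pass: one call maps
    perception plus language directly to actuation, with no hand-written
    pipeline in between. The keyword heuristic is obviously not a real model."""
    close = "pick" in obs.instruction or "grasp" in obs.instruction
    return MotorCommand(joint_targets=[0.0] * 7, gripper_closed=close)
```

The significant property is the signature, not the body: there is no separate object detector, planner, or inverse-kinematics module exposed at the interface, which is what lets one model generalize across tasks and body types.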

The “N1.6” designation indicates this is not the first iteration. Nvidia has been developing the GR00T platform for several generations, each time expanding the range of tasks the model can perform, the diversity of robot body types it can control, and the reliability of its actions in real-world conditions. Version N1.6 represents what Nvidia considers the first commercially viable release — robust enough for partners to build products on top of, flexible enough to generalize across different humanoid robot platforms, and efficient enough to run on Nvidia’s edge computing hardware without requiring a cloud data center connection for every movement.

The partner list tells the story of where this technology is headed. Boston Dynamics is integrating GR00T into its Atlas humanoid platform. NEURA Robotics, a German manufacturer of cognitive robots, is using it for industrial manipulation tasks. LG Electronics is exploring consumer and hospitality applications. Franka Robots is applying it to precision assembly. Caterpillar — perhaps the most telling partner — is evaluating it for construction and heavy equipment applications where autonomous operation in unstructured outdoor environments is the ultimate goal.

5+ major robotics partners building on Nvidia's Physical AI platform, including Boston Dynamics, Caterpillar, Franka Robots, LG Electronics, and NEURA Robotics.

Alpamayo: Autonomous Vehicles Get a Thinking Brain

Nvidia’s Physical AI ambitions extend beyond humanoid robots to the largest and most commercially advanced physical AI market: autonomous vehicles. Alpamayo, which Nvidia describes as “the world’s first thinking, reasoning autonomous vehicle AI,” applies the same Cosmos reasoning architecture to the driving domain. Unlike previous autonomous driving systems that rely primarily on perception and pre-programmed rules, Alpamayo reasons about driving situations in real time — anticipating what other drivers, pedestrians, and cyclists might do, evaluating multiple possible actions, and choosing the safest path forward based on contextual understanding rather than rigid rule sets.

The significance of Alpamayo for exhibitors is less about the autonomous vehicle market itself and more about what it reveals about Nvidia’s platform strategy. Nvidia is applying the same foundational AI models — physics simulation, world prediction, reasoning — across both robotics and autonomous vehicles. This means the development tools, training infrastructure, simulation environments, and deployment frameworks are shared across both domains. A company that builds expertise on Nvidia’s platform for a warehouse robotics application can transfer much of that knowledge to an autonomous vehicle application, and vice versa. This cross-domain leverage is a powerful incentive for companies to standardize on Nvidia’s stack.

The “Android of Robotics” Strategy

Nvidia’s ambition is now unmistakable: it wants to become the Android of generalist robotics — the default software platform upon which the entire robot hardware industry builds. Just as Android provided the operating system, app framework, and developer ecosystem that allowed hundreds of smartphone manufacturers to compete on hardware while sharing a common software layer, Nvidia’s Physical AI stack is designed to provide the perception, reasoning, and control layer that allows hundreds of robot manufacturers to compete on mechanical design, form factor, and application specialization while sharing a common intelligence layer.

This strategy has profound implications for the competitive landscape. If Nvidia succeeds, the value in the robotics industry will shift from proprietary software intelligence — which has traditionally been each robot company’s most closely guarded asset — to hardware differentiation, application expertise, and ecosystem integration. Robot companies will compete on the quality of their actuators, the precision of their sensors, the durability of their mechanical design, and the depth of their domain-specific training data. But the core intelligence layer — the ability to perceive, reason, and act — will come from Nvidia.

Not everyone in the robotics industry welcomes this vision. Several major players, including companies like Tesla with its Optimus humanoid robot program, are developing fully proprietary AI stacks precisely to avoid dependence on a platform vendor. The tension between adopting Nvidia’s platform for its speed-to-market advantages and building proprietary intelligence for long-term competitive differentiation will be one of the defining strategic debates at every technology trade show in 2026.

"Nvidia is not just building the tools for Physical AI. It is building the entire playing field and inviting everyone to come play on it. The companies that adopt the platform early will get to market faster. The question is whether getting to market faster on someone else's platform is better than getting to market slower on your own." — Robotics industry strategist on Nvidia’s platform play

The Broader CES 2026 Context: AI Goes Physical Everywhere

Nvidia’s Physical AI dominance at CES 2026 did not occur in a vacuum. The entire show reflected a broader industry shift from purely digital AI to AI that interacts with the physical world. Other major announcements reinforced this trend and created additional context that exhibitors at upcoming shows need to understand.

AMD: AI-Powered Computing at the Edge

AMD announced new Ryzen AI processors designed to bring powerful AI inference capabilities directly to personal computers, laptops, and edge devices. While AMD’s announcement is primarily about consumer computing, the underlying trend is the same one driving Nvidia’s Physical AI push: AI capabilities are moving from centralized cloud data centers to distributed edge devices that operate in the physical world. For exhibitors in the enterprise computing and edge infrastructure spaces, AMD’s Ryzen AI chips represent a parallel track to Nvidia’s robotics-focused hardware — one that optimizes for human-computer interaction at the device level rather than machine-world interaction at the robot level.

The competitive dynamic between AMD and Nvidia is particularly relevant for exhibitors at MWC 2026 and AWS re:Invent 2026, where the question of where AI inference runs — in the cloud, on the device, or at the network edge — will be a central topic. AMD’s edge computing play and Nvidia’s robotics platform play represent different answers to the same fundamental question: how do you bring AI intelligence closer to the physical point of use?

Samsung: The AI-Native Device Paradigm

Samsung’s unveiling of the Galaxy Z Trifold with onboard AI processing further illustrates the industry-wide movement toward physical AI. The trifold device integrates AI capabilities directly into the hardware — not as a cloud service that the device accesses but as a native capability that runs on the device processor itself. This approach to AI-native hardware design parallels Nvidia’s approach to robot-native AI: in both cases, the intelligence is co-located with the physical system rather than residing in a remote data center.

For exhibitors, Samsung’s device AI strategy is relevant because it validates the broader thesis that AI is transitioning from a cloud-delivered service to an embedded capability of physical products. This transition affects exhibit strategies across every industry vertical: medical device companies need to talk about on-device AI for diagnostics; manufacturing equipment vendors need to address built-in AI for quality control; consumer electronics companies need to demonstrate AI capabilities that work without an internet connection. The era of “AI inside” labeling — analogous to Intel’s famous “Intel Inside” campaign — has arrived.

Key Takeaway: CES 2026 was not just an Nvidia story. It was the show where the entire technology industry pivoted from digital-only AI to Physical AI across every product category. Nvidia’s robot foundation models are the most dramatic expression of this shift, but AMD’s edge processors and Samsung’s AI-native devices tell the same story: intelligence is moving into the physical world. Exhibitors who are still positioning AI as a purely cloud-based or software-only capability are already behind.

MWC 2026: Where Physical AI Meets Global Connectivity

Mobile World Congress 2026, held in late February and early March in Barcelona, is the first major international trade show after CES where Nvidia’s Physical AI announcements will reshape the exhibit hall conversation. MWC has evolved far beyond its origins as a mobile phone conference. Today it is the global stage for connectivity infrastructure, edge computing, IoT, and increasingly, the intersection of telecommunications and AI. Physical AI sits squarely at that intersection, and exhibitors need to be prepared.

The 5G-to-Physical-AI Pipeline

For three years, telecom operators and infrastructure vendors at MWC have been searching for the compelling use case that justifies the massive capital investment in 5G networks. Ultra-reliable, low-latency communication was always the technical promise of 5G, but the applications that actually require those capabilities — as opposed to merely benefiting from faster download speeds — have been slow to materialize. Physical AI changes that equation dramatically.

Autonomous robots operating in warehouses, construction sites, hospitals, and public spaces require exactly the kind of connectivity that 5G was designed to deliver: consistent low latency for real-time control, high bandwidth for streaming sensor data, and the ability to support massive numbers of connected devices in a defined area. Nvidia’s Physical AI stack, which enables robots to offload computationally intensive reasoning tasks to edge servers while maintaining real-time motor control on-device, is a perfect architectural match for 5G private network deployments.
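The split described here, heavy reasoning offloaded to an edge server while the robot keeps a local safe fallback, can be sketched as a control step with a hard deadline. The function names and the 50 ms budget are illustrative assumptions, not part of any Nvidia or 5G specification:

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def edge_reason(observation: dict, latency_s: float) -> dict:
    """Stand-in for a round trip to an edge inference server over 5G."""
    time.sleep(latency_s)
    return {"command": "advance", "speed": 0.5}

SAFE_STOP = {"command": "hold", "speed": 0.0}

def control_step(observation: dict, latency_s: float,
                 deadline_s: float = 0.05) -> dict:
    """Ask the edge server for a plan; if the reply misses the control
    deadline, fall back to a locally computed safe action instead of
    stalling the real-time loop."""
    with ThreadPoolExecutor(max_workers=1) as pool:
        future = pool.submit(edge_reason, observation, latency_s)
        try:
            return future.result(timeout=deadline_s)
        except FutureTimeout:
            return SAFE_STOP
```

This is why consistent low latency matters more than peak bandwidth for these deployments: every deadline miss degrades the robot to its safe fallback, so the network's tail latency directly bounds how often the machine can act intelligently.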

For telecom exhibitors at MWC 2026, this means the Physical AI narrative provides the tangible, revenue-generating use case that 5G has been waiting for. Exhibitors selling 5G infrastructure, private network solutions, edge computing platforms, and network management software should incorporate Physical AI scenarios into their booth demonstrations. A live demo showing a robot performing complex manipulation tasks over a 5G private network — with real-time telemetry, video feeds, and edge inference all running simultaneously — is the kind of concrete proof point that enterprise buyers have been asking for.

MWC Exhibitor Strategy for Physical AI

100K+ expected attendees at MWC 2026 in Barcelona, the first major global show to reckon with the Physical AI era post-CES.

RSA Conference 2026: The Security Implications of Physical AI

If MWC 2026 is where Physical AI meets connectivity, RSA Conference 2026 is where Physical AI meets its most formidable challenge: security. The cybersecurity implications of deploying AI systems that control physical machines in the real world are fundamentally different from — and significantly more serious than — the security implications of AI systems that generate text and images. When a chatbot produces a wrong answer, someone gets bad information. When a Physical AI system makes a wrong decision, a machine moves incorrectly in the real world. The consequences scale from annoying to catastrophic depending on the machine and the context.

New Attack Surfaces in the Physical AI Era

Nvidia’s Physical AI stack introduces several categories of security risk that the cybersecurity industry is only beginning to address. These will be central topics at RSA 2026, and exhibitors who can speak to them credibly will stand out.

Foundation model poisoning. Cosmos and GR00T N1.6 are trained on massive datasets of physical world data. If an adversary can inject corrupted training data into these pipelines, the resulting models may produce subtly wrong physics simulations or motor control commands that are difficult to detect but dangerous in practice. The supply chain security of AI training data is an emerging discipline that few cybersecurity vendors currently address.

Adversarial physical inputs. A well-documented weakness of vision AI systems is their vulnerability to adversarial inputs — carefully crafted visual patterns that cause the system to misclassify what it sees. In a chatbot context, this is a curiosity. In a Physical AI context, where a robot’s actions depend on its visual perception, adversarial attacks become a physical safety risk. A stop sign with a subtle adversarial patch might be ignored by an autonomous vehicle running Alpamayo. A product with a modified label might be misidentified and incorrectly handled by a warehouse robot running GR00T.
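The mechanics of an adversarial input can be shown on a deliberately tiny model. The sketch below uses a toy linear classifier; real attacks target deep networks, but the principle is the same: a perturbation aligned against the model's weights flips its decision. All weights and inputs here are invented:

```python
def dot(a: list, b: list) -> float:
    return sum(x * y for x, y in zip(a, b))

def classify(w: list, x: list) -> int:
    """Toy stand-in for a perception model: 1 means 'stop sign detected'."""
    return 1 if dot(w, x) > 0 else 0

def adversarial_patch(w: list, eps: float) -> list:
    """FGSM-style perturbation: step each input component against the
    sign of the corresponding model weight."""
    sign = lambda v: 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)
    return [-eps * sign(wi) for wi in w]

w = [0.4, -0.2, 0.7, 0.1]   # model weights (illustrative)
x = [1.0, 0.5, 1.0, 0.8]    # clean input: classified as a stop sign
patch = adversarial_patch(w, eps=1.0)
x_adv = [xi + pi for xi, pi in zip(x, patch)]
# classify(w, x) is 1; classify(w, x_adv) is 0 for the same model.
```

Against a real vision network the attacker does not know the weights exactly, but gradient estimation and transfer attacks recover enough of this structure to make printed patches effective, which is what elevates it from curiosity to safety risk.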

Command injection and action manipulation. Cosmos Reason 2 accepts natural language instructions and converts them into action plans for physical machines. This creates a new category of prompt injection attack where adversarial instructions could cause a robot to perform unintended physical actions. The guardrails required for physical action systems are qualitatively different from those required for text generation systems, because the consequence of a bypass is physical motion rather than inappropriate text.
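One common defense pattern is to validate the model's structured output against an explicit allowlist and hard physical limits before any actuator moves. The sketch below is a generic illustration of that pattern; the skill names and limits are invented, not any vendor's guardrail API:

```python
ALLOWED_SKILLS = {"navigate", "grasp", "place", "hold"}
LIMITS = {"speed_mps": 0.5, "force_n": 20.0}

def validate_step(step: dict) -> list:
    """Return a list of violations; an empty list means the step may run.

    The check runs *after* the language model, on the structured plan
    itself, so an injected instruction that slips past the model is
    still caught before it becomes physical motion.
    """
    errors = []
    if step.get("skill") not in ALLOWED_SKILLS:
        errors.append(f"skill not allowed: {step.get('skill')!r}")
    for key, limit in LIMITS.items():
        value = step.get("params", {}).get(key)
        if value is not None and value > limit:
            errors.append(f"{key}={value} exceeds limit {limit}")
    return errors
```

The design choice worth noting is that the guardrail is deterministic code, not another model: it cannot be talked out of its limits, which is exactly the property text-generation guardrails lack.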

Network-level attacks on real-time control. Physical AI systems that rely on edge computing for inference and 5G for connectivity introduce network-level attack surfaces. A denial-of-service attack on a robot’s edge inference server could cause a loss of reasoning capability at a critical moment. A man-in-the-middle attack on the communication channel between a robot and its edge server could inject false sensor data or modified action commands. These are not theoretical scenarios; they are engineering realities that security architects need to address before Physical AI deployments go into production.
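Authenticating every command on the robot-to-edge channel is one standard mitigation for the man-in-the-middle case. The sketch below uses Python's standard `hmac` module; the key handling is deliberately simplified (a real deployment would use rotated, hardware-backed keys plus replay protection such as nonces or timestamps):

```python
import hmac
import hashlib
import json

SHARED_KEY = b"provisioned-per-robot-key"  # simplified for illustration

def sign_command(command: dict, key: bytes = SHARED_KEY) -> dict:
    """Attach an HMAC-SHA256 tag so the receiver can detect tampering."""
    payload = json.dumps(command, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"payload": payload.decode(), "tag": tag}

def verify_command(message: dict, key: bytes = SHARED_KEY) -> dict:
    """Reject any message whose tag does not match its payload."""
    expected = hmac.new(key, message["payload"].encode(),
                        hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["tag"]):
        raise ValueError("command rejected: authentication failed")
    return json.loads(message["payload"])
```

`compare_digest` is used instead of `==` to avoid timing side channels; an attacker who modifies either the payload or the tag in transit causes verification to fail before the command reaches the controller.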

RSA Exhibitor Strategy for Physical AI Security

"We have spent two decades securing software systems. Physical AI forces us to secure physics. The attack surface is not just data and networks anymore — it is the physical world itself. The cybersecurity industry is not ready for this, and the companies that get ready first will define the next decade of the market." — Cybersecurity industry observer on Physical AI implications
Exhibitor Insight: RSA Conference 2026 will be the first major cybersecurity event where Physical AI security transitions from a niche academic topic to a mainstream industry concern. If your products touch AI security, model integrity, adversarial defense, OT/IT convergence, or edge infrastructure protection, the Physical AI narrative is your opportunity to reframe your value proposition for a new era of risk. Prepare threat models, build demos, and have deep technical answers ready. The CISO audience at RSA will not accept vague reassurances.

AWS re:Invent 2026: The Cloud Infrastructure Layer for Physical AI

AWS re:Invent 2026, typically held in late November or early December in Las Vegas, is where the cloud infrastructure implications of Physical AI will take center stage. Amazon Web Services has been systematically expanding its capabilities for AI workloads, from SageMaker for model training to Inferentia and Trainium chips for inference. Physical AI adds a new dimension to cloud infrastructure demand: the need to train, simulate, and deploy AI systems that interact with the physical world at scale.

The Training Data Problem at Physical AI Scale

Training a Physical AI model is fundamentally more expensive and complex than training a text-based AI model. Text models can be trained on publicly available internet data. Physical AI models require data that represents the physical world — 3D environments, physics simulations, sensor recordings from real-world deployments, and annotated video from millions of hours of robot operation. Generating, storing, processing, and managing this data at the scale required for foundation model training is a cloud infrastructure challenge that will drive significant compute and storage demand.

Nvidia’s Cosmos Transfer 2.5 addresses part of this problem by generating synthetic training data, but the synthetic data itself must be generated, validated, and stored somewhere. For AWS, Google Cloud, and Microsoft Azure, Physical AI training workloads represent a new revenue opportunity that combines GPU compute for simulation, massive storage for 3D datasets, and specialized networking for distributed training runs. Exhibitors at re:Invent who provide infrastructure software, data management tools, or MLOps platforms should be preparing their Physical AI narratives now.

Simulation-as-a-Service: A New Cloud Category

One of the most commercially significant implications of Nvidia’s Cosmos platform is the emergence of simulation-as-a-service as a cloud infrastructure category. Before deploying a Physical AI system in the real world, companies need to test it exhaustively in simulated environments. Cosmos Predict 2.5 enables these simulations, but running them at the scale required for production validation — millions of scenarios, each with multiple variables, across hours of simulated time — requires enormous compute resources that most companies do not own.
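Operationally, a validation campaign of this kind is an embarrassingly parallel sweep: fan scenario seeds out to workers, run each episode, aggregate pass rates. The sketch below fakes the episode itself with a seeded random draw; a real run would call into a physics simulator on cloud GPU nodes:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def run_scenario(seed: int) -> bool:
    """Stand-in for one simulated validation episode, returning pass/fail.
    The 95% toy success criterion is invented for illustration."""
    rng = random.Random(seed)
    return rng.random() < 0.95

def validation_sweep(n_scenarios: int, workers: int = 8) -> float:
    """Fan scenario seeds out across workers and report the pass rate.
    Seeding per scenario keeps the sweep reproducible despite parallelism."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(run_scenario, range(n_scenarios)))
    return sum(results) / n_scenarios
```

Because each scenario is independent and seeded, the sweep scales out linearly and reruns deterministically, which is the property regulators and safety teams need from simulation evidence.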

Cloud providers are the natural hosts for this workload, and exhibitors at re:Invent 2026 who can offer simulation infrastructure, physics engine optimization, or distributed simulation orchestration will find a receptive audience. This is particularly relevant for companies in the autonomous vehicle space, where simulation testing is already a regulatory requirement in many jurisdictions and Physical AI will increase both the complexity and the scale of required testing.

re:Invent Exhibitor Strategy for Physical AI

$1T+ projected cumulative market for Physical AI systems by 2035, spanning robotics, autonomous vehicles, industrial automation, and edge AI infrastructure.

Cross-Show Strategy: Themes That Connect MWC, RSA, and re:Invent

The most effective exhibitors in 2026 will not treat MWC, RSA, and re:Invent as isolated events. They will develop a unified Physical AI narrative that adapts to each show’s audience while maintaining consistent positioning. Here are the themes that connect all three shows and should form the backbone of your cross-show strategy.

Theme 1: From Digital AI to Physical AI

Every audience at every show in 2026 needs to understand the distinction between digital AI (generating text, images, and code) and Physical AI (perceiving, reasoning, and acting in the physical world). This is not just a technical distinction. It is a business model distinction. Digital AI is primarily a software-as-a-service business. Physical AI requires hardware, connectivity, edge infrastructure, security, and ongoing operational support — all of which create revenue opportunities for a much broader set of vendors. At MWC, this means 5G infrastructure vendors can position themselves as Physical AI enablers. At RSA, it means security vendors can position Physical AI as a new category of risk requiring new categories of protection. At re:Invent, it means cloud providers can position Physical AI as a new workload class driving compute and storage demand.

Theme 2: The Platform War

Nvidia’s bid to become the Android of robotics will be contested throughout 2026. Google, Amazon, Microsoft, and potentially Apple are all developing their own approaches to Physical AI platforms. Tesla is building a vertically integrated approach with Optimus. Open-source robotics AI projects are gaining momentum. For exhibitors, the platform question is strategically critical: do you build on Nvidia’s stack, hedge across multiple platforms, or build your own? Your answer to this question should be clear and consistent across MWC, RSA, and re:Invent, because partners and customers will be evaluating your platform alignment at each show.

Theme 3: Security as a First-Class Requirement

Physical AI security cannot be an afterthought bolted on after deployment. It must be architected into the system from the foundation layer up. This theme connects MWC (where network and edge security are primary concerns), RSA (where the full threat model is dissected), and re:Invent (where cloud infrastructure security for training and simulation must be addressed). Exhibitors who can tell a coherent security story across all three shows — covering network security at MWC, application security at RSA, and infrastructure security at re:Invent — will be perceived as the most credible and comprehensive partners in the Physical AI ecosystem.

Theme 4: The Data Flywheel

Physical AI creates a data flywheel that is fundamentally different from digital AI’s data dynamics. Every robot deployed in the field generates sensor data that feeds back into training pipelines, improving the foundation models that power the next generation of robots. This flywheel means that companies with deployed Physical AI systems accumulate a compounding data advantage over time. For exhibitors, this dynamic affects messaging at every show: at MWC, it means the connectivity layer must support continuous data streaming from deployed robots. At RSA, it means the data pipeline itself is a high-value attack surface. At re:Invent, it means the cloud infrastructure must support continuous retraining loops at scale.

Key Takeaway: The Physical AI era does not create separate conversations at separate shows. It creates one conversation that plays out differently at MWC (connectivity and edge), RSA (security and risk), and re:Invent (cloud infrastructure and MLOps). The exhibitors who develop a unified narrative and adapt it to each audience will build stronger positioning than those who treat each show as a standalone event.

Practical Booth Strategies for the Physical AI Era

Beyond strategic positioning, exhibitors at MWC 2026, RSA 2026, and re:Invent 2026 need practical booth execution strategies that reflect the Physical AI shift. Here are specific tactical recommendations.

Invest in Physical Demonstrations

The irony of the Physical AI era is that it rewards physical demonstrations at a time when many exhibitors have been moving toward screen-based, virtual, and video-driven booth experiences. When your value proposition involves AI that interacts with the physical world, showing it on a screen undermines the message. Invest in live robotic demonstrations, interactive sensor displays, and physical mock-ups that let visitors experience Physical AI capabilities firsthand. A small collaborative robot arm performing real tasks in your booth communicates more about Physical AI capability than the most polished video reel.

Train Your Booth Staff on Physical AI Fundamentals

Your sales engineers and booth staff will face questions in 2026 that they did not face in 2025. What is a world model? How does a VLA differ from a VLM? What is the latency requirement for real-time robot control? How does synthetic data training compare to real-world data collection? What are the security implications of foundation model deployment on edge devices? Invest in training that equips your team to engage in these conversations at a technical level. Visitors who ask these questions are serious evaluators, and losing their confidence because your booth staff cannot explain the basics is an avoidable error.

Develop Physical AI Use Case Libraries

Different industries will adopt Physical AI at different rates and for different applications. Prepare a library of use cases tailored to the vertical markets your company serves. For healthcare visitors, demonstrate how Physical AI enables surgical robots, pharmacy automation, and patient monitoring. For manufacturing, show autonomous quality inspection, flexible assembly, and predictive maintenance. For logistics, present warehouse automation, last-mile delivery robots, and autonomous fleet management. Having pre-built use case narratives for each vertical lets your booth staff quickly pivot to the scenario most relevant to each visitor’s industry.

Capture and Categorize Leads with Precision

The Physical AI conversation attracts an unusually diverse audience: robotics engineers, enterprise IT buyers, venture capital investors, academic researchers, government regulators, and industrial operations managers. Each visitor type represents a different pipeline opportunity and requires different follow-up. Use Scannly to scan badges, capture contact information instantly, and tag each lead with the conversation topic, industry vertical, and stage of evaluation. When you return from the show, your follow-up can be immediate, targeted, and informed by the specific Physical AI topics each contact was interested in, rather than a generic email blast that treats every lead identically.

The Competitive Landscape Beyond Nvidia

While Nvidia dominated the Physical AI narrative at CES 2026, it is not operating without competition. Understanding the broader competitive landscape is essential for exhibitors who need to evaluate platform choices and anticipate where the market is heading.

Google DeepMind continues to advance its robotics AI research, with published breakthroughs in robot learning from video demonstration and language-conditioned manipulation. Google’s advantage is its integration of robotics AI with its cloud platform and its massive research talent pool. However, Google has historically struggled to translate research breakthroughs into commercial products that hardware partners can build on, and Nvidia’s partnership-driven approach may prove more effective at driving ecosystem adoption.

Tesla is building the most vertically integrated Physical AI system in the industry with its Optimus humanoid robot. Tesla designs the hardware, trains the AI models on data from its vehicle fleet, and plans to deploy Optimus in its own factories before selling externally. This approach gives Tesla end-to-end control but limits ecosystem adoption. For exhibitors, Tesla’s approach represents the alternative to the Nvidia platform model: a closed ecosystem where one company controls the entire stack.

Amazon has significant Physical AI capabilities through its warehouse robotics division (formerly Kiva Systems), its autonomous delivery initiatives, and AWS’s infrastructure for AI workloads. Amazon’s unique advantage is that it is simultaneously a Physical AI developer (building and deploying robots in its own operations), a cloud infrastructure provider (offering compute and services for others to build Physical AI), and a potential customer for third-party Physical AI systems. This multi-role position makes Amazon an important company to watch at re:Invent 2026.

Open-source alternatives are emerging through projects like the Robot Operating System (ROS 2), Open X-Embodiment, and various academic foundation model initiatives. These open-source approaches appeal to companies that want to avoid platform lock-in with Nvidia while still benefiting from community-developed AI models. For exhibitors, understanding the open-source landscape helps you articulate your platform strategy to buyers who may be evaluating both Nvidia-based and open-source approaches.

"The Physical AI platform war will be the defining technology competition of the late 2020s. It is Android versus iOS all over again, but the stakes are higher because the machines are physical and the consequences of platform failure are measured in physical-world outcomes, not just app store revenue." — Technology analyst on the emerging Physical AI platform competition

Industry Verticals Most Affected by Physical AI in 2026

Not every industry will feel the impact of Physical AI equally in the near term. Exhibitors should prioritize their messaging based on which verticals are moving fastest and which will drive the most immediate demand.

Manufacturing and Industrial Automation

Manufacturing is the most immediately addressable market for Physical AI. Factories already have robots, already have structured environments, and already understand the ROI of automation. Nvidia’s GR00T and Cosmos platforms allow manufacturers to upgrade existing robotic systems with foundation model intelligence, enabling flexible manufacturing, rapid changeover between product lines, and autonomous quality inspection. For exhibitors selling into manufacturing, Physical AI should be front and center at every relevant show in 2026.

Logistics and Warehousing

The logistics industry is a close second. E-commerce demand continues to drive investment in warehouse automation, and Physical AI enables a new generation of autonomous mobile robots that can navigate unstructured warehouse environments, pick and place irregularly shaped objects, and collaborate with human workers safely. Exhibitors targeting logistics should emphasize the reduction in integration complexity that foundation model-based robots offer compared to traditional hard-coded systems.

Healthcare and Medical Devices

Healthcare is a high-value but slower-adoption vertical for Physical AI due to regulatory requirements. Surgical robots, rehabilitation devices, pharmacy automation systems, and hospital logistics robots all stand to benefit from foundation model intelligence, but each requires regulatory clearance that adds time and cost to deployment. For exhibitors, healthcare represents a long-sales-cycle opportunity where early positioning and regulatory expertise are competitive advantages.

Construction and Heavy Equipment

Caterpillar’s partnership with Nvidia signals that the construction industry is serious about Physical AI. Autonomous excavation, grading, and material handling in outdoor, unstructured environments represent some of the hardest technical challenges in robotics, and the economic payoff — construction labor shortages are acute globally — justifies the investment. Exhibitors targeting construction should note that this vertical values ruggedness, reliability, and safety certification above all else.

What Comes Next: The Physical AI Roadmap for 2026 and Beyond

Nvidia’s CES 2026 announcements are the opening salvo in what will be a multi-year transformation. Exhibitors should be thinking beyond the immediate show season to anticipate where the Physical AI market will be in 12 to 24 months.

Expect Nvidia to release updated versions of its Physical AI models at GTC (GPU Technology Conference) later in 2026, with each iteration expanding the range of tasks, robot types, and environments the platform can handle. Expect the partner ecosystem to grow rapidly, with dozens of new robot manufacturers announcing Nvidia-based products throughout the year. Expect cloud providers to launch dedicated Physical AI services — simulation environments, training pipelines, model registries — at re:Invent 2026 and similar events. And expect regulators in the EU, US, China, and Japan to begin drafting frameworks for Physical AI safety certification, creating a compliance landscape that will shape the industry for decades.

For exhibitors, the imperative is clear: the Physical AI era has begun, and the companies that move early to understand, adopt, and incorporate these capabilities into their products and messaging will establish positions that late movers will struggle to displace. CES 2026 was the starting gun. MWC, RSA, and re:Invent are the first turns on the track. The race is on.

Key Takeaway: Physical AI is not a single product category or a temporary trend. It is a platform-level shift comparable to the transition from mainframe to personal computing or from desktop to mobile. Nvidia’s CES 2026 announcements established the architecture. The rest of 2026 will determine which companies, across every layer of the technology stack, establish themselves as leaders in this new era. For exhibitors, the time to develop your Physical AI strategy is now — before MWC opens its doors in Barcelona.

The era of AI that only exists on screens is ending. The era of AI that moves through factories, drives on roads, operates in hospitals, and builds on construction sites is beginning. Nvidia has laid the foundation. The question for every exhibitor at every show in 2026 is the same one it has always been at inflection points: will you be the company that defined its position early, or the company that spent another year watching from the sidelines? The show floor is where that decision becomes visible. Make yours count.


