

What Is 'Physical AI'? Inside The Push To Make AI Understand The Real World

Jun 14, 2025, 11:23 AM


Add to this the fact that AI largely remains a black box, with engineers still struggling to explain why models behave unpredictably or how to fix them, and you start to grasp the major challenge facing the industry today.

But that’s where a growing number of researchers and startups see the next big opportunity: not just in faster model training or more impressive generative outputs, but in machines that truly understand the physical world — the way it moves, reacts and unfolds in real time. They refer to this as “physical AI.”

The phrase was first brought into the spotlight by Nvidia CEO Jensen Huang, who has described physical AI as the next frontier in artificial intelligence, defining it as “AI that understands the laws of physics,” moving beyond pixel labeling to bodily awareness — space, motion and interaction.

From Passive Cameras To Active Sensors

At its core, physical AI combines computer vision, physics simulation and machine learning to teach machines cause and effect. In essence, it enables AI systems to not only identify objects or people, but also to understand how they interact with their environment — such as how someone’s movement might cause a door to swing open or how a ball might bounce off a wall.
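The "cause and effect" idea can be made concrete with a toy example. The sketch below hand-codes the bouncing-ball dynamics mentioned above; real physical AI systems learn such dynamics from video rather than from hard-coded rules, so this is purely illustrative.

```python
# A minimal, hand-written model of cause and effect: predict where a
# 1-D ball will be after bouncing elastically off a wall. Physical AI
# aims to learn this kind of dynamics from raw video instead.

def step_ball(x: float, vx: float, dt: float = 0.1, wall: float = 10.0):
    """Advance the ball one time step, reflecting it off a wall at `wall`."""
    x = x + vx * dt
    if x >= wall:           # the cause: a collision with the wall...
        x = 2 * wall - x    # ...and its effect: an elastic bounce back
        vx = -vx
    return x, vx

# Rolling the model forward lets it "anticipate" the bounce before it happens.
x, vx = 9.5, 10.0
for _ in range(3):
    x, vx = step_ball(x, vx)
```

A learned world model plays the same role as `step_ball` here: given the current state, it predicts the next one, including the consequences of interactions.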

At Lumana, a startup funded by global venture capital and growth equity firm Norwest, that phrase isn’t just a marketing term; it represents a full transformation in product development. Known for AI video analytics, the company is now training its models not only to detect motion, but also to interpret human behavior, assess intent and generate real-time alerts automatically.

“We define physical AI as the next phase in video intelligence,” Lumana CEO Sagi Ben-Moshe said in an interview. “It’s no longer simply about identifying a red car or a person in a hallway — it’s about predicting what might happen next and taking meaningful action in real-world situations.”

In one practical example, Lumana’s system detected a potential assault after recognizing unusual body language and close proximity between two men and a pair of unattended drinks, triggering an alert that allowed staff to intervene before the situation escalated. In another instance, it identified food safety violations in real time, including workers failing to wash hands, handling food without gloves and leaving raw ingredients out too long. These weren’t issues found after the fact, but ones caught as they were happening. This level of layered interpretation, Ben-Moshe explained, turns cameras into “intelligent sensors.”
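One way to picture this "layered interpretation" is as higher-level rules composed over low-level detections. The sketch below is not Lumana's implementation; the event labels, fields and threshold are invented for illustration, with a hypothetical vision model assumed to supply the detections.

```python
# A hedged sketch of layered interpretation: per-frame detections from a
# hypothetical vision model are combined by a higher-level rule into a
# real-time alert. All names and thresholds here are illustrative only.

from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # e.g. "person", "unattended_drink"
    track_id: int
    x: float        # simplified 1-D position in the scene

def proximity_alert(detections, max_dist: float = 1.0):
    """Alert when two or more people cluster around an unattended drink."""
    people = [d for d in detections if d.label == "person"]
    drinks = [d for d in detections if d.label == "unattended_drink"]
    for drink in drinks:
        near = [p for p in people if abs(p.x - drink.x) <= max_dist]
        if len(near) >= 2:
            return f"ALERT: {len(near)} people near unattended drink {drink.track_id}"
    return None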

Real-World Impact

It’s no accident that Huang has used the term “physical AI” before, connecting it with embodied intelligence and real-world simulation. It reflects a larger shift in the industry toward building AI systems that better understand the laws of physics and can reason more effectively. Physics, in this context, refers to cause and effect — the ability to analyze motion, force and interaction, not just appearances.

That perspective aligned with investors at Norwest, which supported Lumana during its early stages. “You can’t build the future of video intelligence by just detecting objects,” said Dror Nahumi, a general partner at Norwest. “You need systems that understand what’s happening, in context, and can do it better than a human watching a dozen screens. Often, businesses also need this information instantly.”

Lumana isn’t the only player in this space. Others, like Hakimo and Vintra, are exploring similar applications — using AI to spot safety violations in manufacturing, detect loitering in retail, or prevent public disturbances before they worsen.

For example, Hakimo recently developed an autonomous surveillance agent that prevented assaults, identified vandalism and even assisted a collapsed individual using live video feeds and AI. At GTC in March, Nvidia showcased robotic agents learning to understand gravity and spatial relationships through environment-based training, echoing the same physical reasoning Lumana integrates into its surveillance tools.

And just yesterday, Meta announced the release of V-JEPA 2, “a self-supervised foundation world model to understand physical reality, anticipate outcomes and plan efficient strategies.” As Michel Meyer, group product manager for Core Learning and Reasoning in the company’s Fundamental AI Research division, wrote on LinkedIn, quoting Meta chief AI scientist Yann LeCun, “this marks a fundamental shift toward AI systems that can reason, plan, and act through physical world models. To reach advanced machine intelligence, AI must go beyond perception and understand how the physical world works — anticipating dynamics, causality, and consequences. V-JEPA 2 does exactly that.”

When asked what the practical impact of physical AI might be, Nahumi pointed out that it goes beyond buzzwords. “Anyone can detect motion, but if you want true AI in video surveillance, you have to move beyond that to understand context.” He sees Lumana’s full-stack, context-aware architecture as foundational rather than just a sales pitch.

“We believe there’s a significant business opportunity here and the technology is now mature enough to support and even surpass human performance in real time,” he told me.

Trust And Transparency

The truth is, the success of physical AI systems won’t depend solely on the underlying technology. As AI continues to evolve, it becomes increasingly clear that ethics, trust and accountability matter just as much. Put differently, trust is the currency of AI effectiveness, and the key question companies must keep answering is: Can we trust your AI system to operate safely?

In security settings, false positives can shut down locations or wrongly accuse innocent individuals. In industrial environments, misinterpreted actions could set off unnecessary alarms.

Privacy is another area of concern. While many physical AI systems function within private facilities — factories, campuses, hotels — critics warn that real-time behavior prediction, without oversight, could lead to mass surveillance. As Ben-Moshe himself admitted, this is powerful technology that needs safeguards, openness and explicit consent.

However, according to Nahumi, Lumana’s multi-tiered approach delivers useful alerts while protecting privacy and enabling smooth integration into current systems. “Lumana designs systems that layer physical AI onto existing infrastructure with minimal disruption,” he noted, “ensuring operators aren’t overwhelmed by false positives.”

A Market On The Brink

Despite these concerns, demand is rising quickly. Retailers want to monitor foot traffic anomalies. Municipalities aim to prevent crime without increasing personnel. Manufacturers seek real-time safety compliance instead of post-event reviews. In every scenario, the issue remains the same: too many cameras, too little insight.

And that’s the commercial justification behind physical AI. As Norwest’s Nahumi put it, “We’re seeing clear ROI indicators — not only in avoided losses, but in operational efficiency. This is no longer speculative deep tech. It’s a platform investment.”

That investment relies on systems that are scalable, flexible and cost-efficient. Lumana’s strategy, which adds physical AI on top of existing camera systems, avoids the “rip-and-replace” dilemma and keeps adoption barriers low. Nahumi highlighted growing enterprise interest across retail, manufacturing, hospitality and public safety — areas where video footage is abundant, but analysis remains manual and inefficient.

Even across corporate boardrooms and research labs, the desire for machines that “understand” rather than “observe” is growing. That’s why companies like Norwest, Nvidia, Hakimo and Lumana are doubling down on physical AI.

“In five years,” Ben-Moshe envisions, “physical AI will do more than observe — it will suggest actions, predict events and provide safety teams with unmatched visibility.” This, he emphasized, is about systems that don’t just see, but also respond.

The Takeaway

Ultimately, the goal of physical AI isn’t just to help machines see more clearly — it’s to help them understand what they’re seeing. It's to help them perceive, comprehend and reason in the complex physical world we live in.

Ben-Moshe imagines a future where physical AI recommends actions, stops incidents from escalating and even predicts events before they occur. “Every second of video should produce actionable insights,” he said. “We want machines to reason about the world as a system — like particles tracing possible paths in physics — and highlight the most probable, most valuable outcome.”

That’s a significant leap from today’s basic surveillance systems. From preventing crime and avoiding accidents to uncovering new operational insights and analyzing activity trends, reasoning engines over cameras offer tangible, measurable value.

But scaling them is where the real effort lies. It will require systems that are precise, ethical, auditable and trustworthy. If that balance can be achieved, we may soon enter a world where AI doesn’t just show us what happened, but helps us determine what matters most.
