Physical AI uses both sight and touch to manipulate objects like a human

By Simon Osuji
September 4, 2025
in Artificial Intelligence
Based on camera information, the arm grips both ends of the Velcro (A.1, B.1). Using tactile information, it senses the orientation of the tape and adjusts the posture and angle to align the hook surface with the loop surface (A.2, B.2). The Velcro is fixed, and the right arm presses it to ensure a firm connection (A.3, B.3). Different tape manipulation movements are automatically generated to adapt to the situation. Credit: Tohoku University
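The caption's grip, sense, adjust, press sequence can be sketched as a toy control loop. Everything here is illustrative: the function names, gain, and tolerance are assumptions for the sketch, not the authors' actual code.

```python
def alignment_adjustment(sensed_deg, target_deg=0.0):
    """Shortest signed rotation (degrees) from the tactilely sensed tape
    orientation to the target orientation (hook face meeting loop face)."""
    delta = (target_deg - sensed_deg) % 360.0
    return delta - 360.0 if delta > 180.0 else delta

def align_then_press(initial_tape_deg, tolerance_deg=2.0, max_steps=10):
    """Toy loop mirroring steps A.2-A.3: after the vision-based grasp (A.1),
    repeatedly correct the wrist angle from tactile feedback (A.2), then
    press to make a firm connection (A.3)."""
    tape = initial_tape_deg
    for _ in range(max_steps):
        err = alignment_adjustment(tape)
        if abs(err) <= tolerance_deg:
            return "press"        # aligned within tolerance: fix and press
        tape += 0.8 * err         # partial correction each control step
    return "abort"                # failed to converge
```

Starting from a badly misaligned grasp, e.g. `align_then_press(170.0)`, the loop converges to alignment in a few corrections and returns `"press"`.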

In everyday life, grabbing a cup of coffee from the table is a no-brainer: multiple sensory inputs, such as sight (judging how far away the cup is) and touch, are combined in real time. Recreating this ability in artificial intelligence (AI), however, is not nearly as easy.


An international group of researchers created a new approach that integrates visual and tactile information to manipulate robotic arms, while adaptively responding to the environment. Compared to conventional vision-based methods, this approach achieved higher task success rates. These promising results represent a significant advancement in the field of multimodal physical AI.

Details of their breakthrough were published in the journal IEEE Robotics and Automation Letters.

Machine learning enables AI systems to learn human movement patterns, so that robots can autonomously perform daily tasks such as cooking and cleaning. For example, ALOHA (A Low-cost Open-source Hardware System for Bimanual Teleoperation), developed at Stanford University, enables low-cost, versatile remote operation and learning for dual-arm robots. Both its hardware and software are open source, so the research team was able to build upon this base.

However, such systems rely mainly on visual information, so they lack the tactile judgments a human could make, such as distinguishing the texture of a material or the front and back sides of an object. For example, it can be easier to tell the front side of Velcro from the back by touching it rather than by looking at it. Relying on vision alone is therefore a significant weakness.






Video of the physical AI in action, successfully tying a zip tie. Credit: Tohoku University

“To overcome these limitations, we developed a system that also enables operational decisions based on the texture of target objects—which are difficult to judge from visual information alone,” explains Mitsuhiro Hayashibe, a professor at Tohoku University’s Graduate School of Engineering.

“This achievement represents an important step toward realizing a multimodal physical AI that integrates and processes multiple senses such as vision, hearing, and touch—just like we do.”

The new system was dubbed “TactileAloha.” The researchers found that the robot could perform appropriate bimanual operations even in tasks where front-back differences and adhesiveness are crucial, such as handling Velcro and zip ties. By applying vision-tactile transformer technology, their physical AI robot exhibited more flexible and adaptive control.
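The paper's vision-tactile transformer is not reproduced here, but its core idea, letting vision and tactile feature tokens attend to one another in a single sequence, can be sketched with one self-attention step. The token counts, embedding size, and random features are assumptions for the sketch, not values from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_tokens(vision_tokens, tactile_tokens):
    """Single-head self-attention over the concatenated token sequence,
    so every vision token can weigh tactile evidence and vice versa."""
    tokens = np.concatenate([vision_tokens, tactile_tokens], axis=0)  # (N, d)
    d = tokens.shape[1]
    scores = tokens @ tokens.T / np.sqrt(d)   # (N, N) pairwise similarities
    attn = softmax(scores, axis=-1)           # each row sums to 1
    return attn @ tokens                      # fused multimodal features

rng = np.random.default_rng(0)
vision = rng.normal(size=(8, 16))    # e.g. 8 image-patch embeddings
tactile = rng.normal(size=(4, 16))   # e.g. 4 tactile-sensor embeddings
fused = fuse_tokens(vision, tactile) # (12, 16) cross-modal representations
```

A real transformer stacks many such layers with learned query/key/value projections; this single unprojected step only illustrates how concatenation puts both modalities into one attention context.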

The improved physical AI method manipulated objects accurately by combining multiple sensory inputs into adaptive, responsive movements. Robots of this kind have a wide range of practical applications, and research contributions such as TactileAloha bring them one step closer to becoming a seamless part of everyday life.

The research group comprised members from Tohoku University’s Graduate School of Engineering, the Center for Transformative Garment Production (Hong Kong Science Park), and the University of Hong Kong.

More information:
Ningquan Gu et al, TactileAloha: Learning Bimanual Manipulation With Tactile Sensing, IEEE Robotics and Automation Letters (2025). DOI: 10.1109/LRA.2025.3585396

Provided by
Tohoku University

Citation:
Physical AI uses both sight and touch to manipulate objects like a human (2025, September 3)
retrieved 3 September 2025
from https://techxplore.com/news/2025-09-physical-ai-sight-human.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.





