From a young age we are taught that we have five senses: a universal ‘truth’ born from the works of the ancient Greek philosopher Aristotle. But it’s not quite that simple.
Precisely how many senses we have depends on how we define them, and it’s far from an exact science. For example, the number could be as small as three if we count physical categories of incoming information (mechanical, chemical and light), or it could run to hundreds or thousands if we count types of receptors in the human body.
Sensory expert John Henshaw of the University of Tulsa, Oklahoma, thinks we may have nine senses, adding balance, temperature, pain and proprioception (awareness of the position and action of the parts of our body) to Aristotle’s original list of sight, hearing, smell, taste and touch.
While quantifying the senses is interesting, scientists are more interested in recreating them. This could lead to robots with enhanced human-like sensory capabilities and to more sensitive prostheses, among other technological advances.
Proprioception is popularly known as our sixth sense. Simply put, it allows us to keep track of where our body parts are in space with our eyes shut. Without it, we could not determine each part’s position, speed and direction, or control our movements precisely.
In the human body, ‘muscle spindles’ act as receptors, which, through a complex process, essentially tell the brain about the length and stretch of muscles to make proprioception possible. As you can imagine, this process is challenging to replicate.
Engineers typically recreate proprioception using a combination of cameras, sensors and algorithms, enabling their robotic creations to walk, grip objects and perform delicate manipulations more seamlessly.
Soft robots are of particular interest because they offer advantages in adaptability, safety and dexterity compared to conventional bots, but it is difficult to equip soft robots with accurate proprioception and tactile sensing due to their high flexibility and elasticity.
To overcome this problem, researchers at the Massachusetts Institute of Technology (MIT) created a novel exoskeleton-covered soft robotic finger called GelFlex that uses embedded cameras and deep-learning methods to enable high-resolution tactile sensing and awareness of positions and movement. Yu She, who led the study, says: “Our soft finger can provide high accuracy on proprioception and accurately predict grasped objects.” When tested on metal objects of various shapes, the system had over 96 per cent recognition accuracy.
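To make the approach more concrete, the sketch below (in Python, using the PyTorch library) shows one way a camera-based tactile finger could be framed: a small neural network takes an image from the embedded camera and produces two outputs, an estimate of the finger’s bend angle for proprioception and a guess at the shape of the grasped object. The architecture, layer sizes and class count are invented for illustration; this is not the actual GelFlex model.

```python
# Illustrative only: a tiny two-headed network in the spirit of camera-based
# tactile sensing. An image of the deformed gel from the embedded camera is
# mapped to (a) the finger's bend angle (proprioception) and (b) a class for
# the grasped object. Architecture and sizes are assumptions, not GelFlex's.
import torch
import torch.nn as nn

class TactileNet(nn.Module):
    def __init__(self, num_object_classes: int = 4):
        super().__init__()
        # Shared convolutional encoder for the internal camera image
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.angle_head = nn.Linear(32, 1)                     # proprioception
        self.object_head = nn.Linear(32, num_object_classes)   # object recognition

    def forward(self, image: torch.Tensor):
        features = self.encoder(image)
        return self.angle_head(features), self.object_head(features)

# One 64x64 greyscale frame from the embedded camera (random stand-in data)
frame = torch.rand(1, 1, 64, 64)
bend_angle, object_logits = TactileNet()(frame)
print(bend_angle.shape, object_logits.shape)  # torch.Size([1, 1]) torch.Size([1, 4])
```

Sharing one encoder between the two output heads reflects the idea that a single internal view of the finger’s deformation carries enough information for both proprioception and object recognition.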
A team of engineers led by the University of California San Diego also used a mixture of cameras, sensors and algorithms to enable a four-legged robot to avoid obstacles and run on challenging terrain. Reproducing proprioception gives the quirky quadruped a sense of movement, direction, speed and location, as well as the feel of the ground beneath its feet. The team developed a special set of algorithms to fuse data from real-time images taken by a depth camera on the robot’s head with data from sensors on its legs.
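The team’s exact fusion algorithms are more sophisticated, but the underlying idea can be illustrated with a toy example: combine two noisy estimates of the same quantity, trusting whichever source is currently more reliable. The hypothetical Python sketch below does this with a simple variance-weighted average of a vision-based and a leg-based estimate of step height; the numbers and weighting scheme are invented, not the San Diego team’s method.

```python
# Hypothetical illustration of fusing two noisy estimates of the terrain ahead:
# one from the head-mounted depth camera, one inferred from the legs' joint
# sensors and foot contacts. Values and weighting are invented for illustration.
def fuse(vision_estimate, vision_var, leg_estimate, leg_var):
    """Variance-weighted average: trust whichever source is currently less noisy."""
    w_vision = leg_var / (vision_var + leg_var)
    fused = w_vision * vision_estimate + (1 - w_vision) * leg_estimate
    fused_var = (vision_var * leg_var) / (vision_var + leg_var)
    return fused, fused_var

# The camera sees a 0.12m step but is noisy in poor light; the legs report
# 0.10m from recent footfalls and are currently the more reliable source.
height, uncertainty = fuse(vision_estimate=0.12, vision_var=0.004,
                           leg_estimate=0.10, leg_var=0.001)
print(f"fused step height: {height:.3f} m (variance {uncertainty:.5f})")  # ~0.104 m
```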
The hope is that robots like this one could be used in rescue missions, but there are plenty more applications for bots with proprioception too. More dexterous ‘fingers’ capable of coordinated movements could lead to a new generation of skillful robotic surgeons, while the creation of a special interface has the potential to restore proprioception to amputees.
Closely related to proprioception is balance, which keeps us upright and allows us to move around without getting hurt, an ability that is also advantageous for robots. Boston Dynamics arguably makes the world’s most famous balancing bots, including Atlas.
Claimed to be the world’s most advanced humanoid robot, Atlas stands 5ft (1.5m) tall and has 28 hydraulic joints for mobility. Slightly resembling an astronaut, its cameras and depth sensors provide input to its control system, while all of the computation required for control, perception and estimation happens on board its three computers, allowing it to mimic our ability to balance while moving around.
You might have seen Atlas dancing or taking on a death-defying parkour course, but in the company’s latest video, ‘Atlas Gets a Grip’, the humanoid navigates a simulated building site, carrying and tossing objects, climbing stairs, jumping between levels and pushing a large wooden block out of its way, before dismounting with an inverted 540-degree flip.
“Parkour and dancing were interesting examples of pretty extreme locomotion, and now we’re trying to build upon that research to also do meaningful manipulation,” explains Ben Stephens, controls lead for Atlas. Performing manipulation tasks requires a more nuanced understanding of Atlas’s environment and better balance. One of Atlas’s most impressive stunts involves performing a 180-degree jump while holding a wooden plank, because the robot’s control system needs to account for the plank’s momentum to avoid toppling over.
Engineers recreated our ability to balance in Atlas using a combination of software and hardware, such as its hydraulic joints, that allows it to move in a human-like way. At the heart of Atlas’s controller is a technique called Model Predictive Control (MPC). The model is a description of how the robot’s actions will affect its state, and Boston Dynamics uses it to predict how the robot’s state will evolve over a short period of time. To control the robot, MPC searches over possible actions it can take immediately and in the near future to best achieve a set task. For example, Atlas uses MPC to walk by constantly updating its prediction of its future state and using that prediction to choose its actions in real time.
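The loop itself is simple to illustrate. The toy Python sketch below applies MPC to a deliberately crude model, a point mass standing in for the robot’s centre of mass that must reach a target position: at every timestep it simulates every short sequence of candidate actions, scores them, applies only the first action of the best sequence, then re-plans. The model, horizon, action set and cost are invented for illustration and are vastly simpler than Atlas’s controller.

```python
# Toy Model Predictive Control: a point mass (stand-in for the robot's centre
# of mass) must reach a target position. At each step we search every short
# sequence of candidate actions, keep the best, apply only its first action,
# then re-plan. All numbers are invented for illustration.
import itertools

DT, HORIZON = 0.1, 5                 # timestep (s) and lookahead steps
ACTIONS = [-1.0, 0.0, 1.0]           # candidate pushes (accelerations)

def predict(state, action):
    """Model: how one action changes (position, velocity) over one timestep."""
    pos, vel = state
    vel += action * DT
    pos += vel * DT
    return pos, vel

def cost(state, target):
    pos, vel = state
    return (pos - target) ** 2 + 0.1 * vel ** 2   # penalise error and speed

def mpc_step(state, target):
    """Search every short sequence of future actions; return only the first one."""
    best_first, best_cost = 0.0, float("inf")
    for seq in itertools.product(ACTIONS, repeat=HORIZON):
        s, total = state, 0.0
        for a in seq:
            s = predict(s, a)
            total += cost(s, target)
        if total < best_cost:
            best_cost, best_first = total, seq[0]
    return best_first

# Re-plan at every timestep, as MPC does in real time.
state, target = (0.0, 0.0), 1.0
for _ in range(30):
    state = predict(state, mpc_step(state, target))
print(f"final position: {state[0]:.2f} m (target {target} m)")
```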
Boston Dynamics’ dancing and parkour stunts used MPC with a simple model of the robot to consider its total centre of mass and inertia when deciding where to step and how hard to push on the ground. But the robot’s manipulation work required the model to consider the motion of every joint, the momentum of every link and the forces the robot applies on an object that it is carrying or throwing to enable it to maintain balance.
The simulated building site test was big news in robotics because it shows Atlas can layer multiple behaviour references on top of one another. It is a milestone for Boston Dynamics, which envisions a robot similar to Atlas performing a wide variety of manipulation tasks, picking up potentially heavy objects, quickly bringing them to where they’re needed, and accurately placing them in the real world.
While the first thermometer was invented in the 1600s, and temperature sensors are found in many consumer electronics, scientists are still working to mimic our ability to accurately sense temperature and constantly adapt to our environment, especially by sweating to cool down. This is because replicating our slightly gross form of thermal management could enable untethered, high-powered robots to operate for long periods without overheating, particularly in remote locations.
A team at Cornell University has taken the first step on this ambitious journey by creating a soft robot muscle that can regulate its temperature through sweating.
“The ability to perspire is one of the most remarkable features of humans,” says co-lead author T J Wallin, a research scientist at Facebook Reality Labs. “Sweating takes advantage of evaporated water loss to rapidly dissipate heat and can cool below the ambient environmental temperature… so as is often the case, biology provided an excellent guide for us as engineers.”
The team created nanopolymer materials for sweating using a 3D-printing technique called multi-material stereolithography, which uses light to cure resin into predesigned shapes. They fabricated finger-like actuators composed of two hydrogel materials that can retain water and respond to temperature. The base layer is made of a material that reacts to temperatures above 30°C by shrinking, which squeezes water up into a top layer that is perforated with micron-sized pores. These pores are sensitive to the same temperature range and automatically dilate to release the ‘sweat’, closing again when the temperature drops below 30°C. At 70°C the evaporation of this water reduces the actuator’s surface temperature by 21°C within just 30 seconds.
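The behaviour is essentially a thermostat run in reverse, and a crude simulation makes the thresholding clear. In the hypothetical Python sketch below, pores open above 30°C, evaporation then removes heat at a rate loosely scaled to the reported 21°C drop in 30 seconds, and cooling stops once the temperature falls back below the threshold; the rate constant and model are illustrative only, not the Cornell team’s data.

```python
# Crude, hypothetical simulation of the thresholded 'sweating' behaviour:
# pores open above 30 C and evaporative cooling pulls the surface temperature
# down until it falls below the threshold again. The cooling rate is loosely
# scaled to the reported ~21 C drop in 30 s; this is not the team's data.
PORE_THRESHOLD_C = 30.0
EVAPORATIVE_COOLING_C_PER_S = 0.7   # ~21 C shed over ~30 s, per the article

def step(temperature_c: float, dt: float = 1.0) -> float:
    pores_open = temperature_c > PORE_THRESHOLD_C
    cooling = EVAPORATIVE_COOLING_C_PER_S if pores_open else 0.0
    return temperature_c - cooling * dt

temp = 70.0
for second in range(1, 61):
    temp = step(temp)
    if second in (30, 60):
        print(f"after {second} s: {temp:.1f} C")
# after 30 s: 49.0 C  (the ~21 C drop)
# after 60 s: 29.4 C  (pores have closed below the 30 C threshold)
```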
While this is a huge achievement, autonomous robots roaming in remote locations may be some years off, and because they would need to take on water to sweat, there is a chance they may have to learn to drink like us, too.
Pain is notoriously hard to measure in humans and therefore pain perception is difficult to recreate. Scientists have been working for decades to build artificial skin with touch sensitivity. One widely explored method is spreading an array of contact or pressure sensors across an electronic skin’s surface to allow it to detect when it comes into contact with an object. Data from the sensors is then sent to a computer to be processed and interpreted. Because a large volume of data is generated, it is only recently, with advances in processing power, that it has become possible to create usefully responsive electronic skin.
In 2019, a team at the Technical University of Munich developed a system combining artificial skin with control algorithms, creating what they claimed was the first autonomous humanoid robot with full-body artificial skin.
The electronic skin consists of hexagonal cells, each equipped with a microprocessor and sensors to detect contact, acceleration, proximity and temperature. They work together to allow the robot to perceive its surroundings in detail and with sensitivity. To avoid data overload, the system is not designed to monitor the skin cells continuously. Instead, individual cells transmit information from their sensors only when their readings change.
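That event-driven idea is easy to sketch in code. In the hypothetical Python example below, each skin cell compares its latest readings with the last values it reported and sends a message only when something has changed by more than a small threshold; the fields, thresholds and data format are invented, not the Munich system’s.

```python
# Minimal sketch of event-driven skin cells: report readings only when they
# change meaningfully, rather than streaming continuously. All names, fields
# and thresholds are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class SkinCell:
    cell_id: int
    threshold: float = 0.05          # minimum change worth reporting
    last_reported: dict = field(default_factory=dict)

    def read(self, pressure: float, temperature: float) -> dict | None:
        """Return an event only if a reading has changed meaningfully."""
        current = {"pressure": pressure, "temperature": temperature}
        changed = {
            k: v for k, v in current.items()
            if abs(v - self.last_reported.get(k, 0.0)) > self.threshold
        }
        if not changed:
            return None                       # stay silent: nothing new to say
        self.last_reported.update(current)
        return {"cell": self.cell_id, **changed}

cell = SkinCell(cell_id=7)
for p in [0.0, 0.0, 0.3, 0.31, 0.0]:          # a touch arrives, then is released
    print(cell.read(pressure=p, temperature=22.0))
# Only the first baseline reading, the touch and the release produce events;
# the unchanged frames in between print None and send nothing.
```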
This technique was inspired by the human nervous system and is being used by other researchers, including a team at the University of Glasgow that recently built an electronic skin capable of a computationally efficient, synapse-like response.
The researchers printed a grid of 168 synaptic transistors made from zinc-oxide nanowires directly onto a flexible plastic surface. They then connected the synaptic transistors to the skin sensor covering the palm of a fully articulated, human-shaped robot hand. When the sensor is touched, it registers a change in its electrical resistance: a small change corresponds to a light touch, while a harder touch creates a larger change in resistance.
In earlier generations of electronic skin, the input data would be sent to a computer to be processed, but in this case a circuit built into the skin acts as an artificial synapse, reducing the input to a simple spike of voltage whose frequency varies according to the level of pressure applied to the skin, which speeds up the robot’s reaction.
The engineers used the varying output of the voltage spike to teach the skin appropriate responses to simulated pain, triggering the robot hand to react. By setting a threshold of input voltage to cause a reaction, the team could make the robot hand recoil from a sharp jab in the centre of its palm. In other words, it learned to move away from a source of simulated discomfort through a process of onboard information processing that mimics how the human nervous system works.
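A hypothetical Python sketch of that thresholded response is shown below: pressure is mapped to a resistance change, the resistance change to a spike frequency, and the hand recoils only once the spike rate crosses a set threshold. All of the constants and function names are invented for illustration; they are not the Glasgow team’s circuit values.

```python
# Hypothetical sketch of the thresholded 'pain' response: harder touch -> larger
# resistance change -> higher spike frequency -> recoil once a threshold is
# crossed. Constants are invented for illustration only.
def spike_frequency_hz(pressure_kpa: float) -> float:
    """Pretend mapping from touch pressure to the artificial synapse's spike rate."""
    resistance_change = 0.8 * pressure_kpa      # harder touch -> larger change
    return 5.0 * resistance_change              # larger change -> faster spikes

RECOIL_THRESHOLD_HZ = 200.0

def react(pressure_kpa: float) -> str:
    rate = spike_frequency_hz(pressure_kpa)
    if rate >= RECOIL_THRESHOLD_HZ:
        return f"{rate:.0f} Hz -> recoil from the stimulus"
    return f"{rate:.0f} Hz -> ignore (gentle touch)"

for pressure in (5.0, 20.0, 80.0):              # light stroke, firm press, sharp jab
    print(react(pressure))
```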
“The development of this new form of electronic skin didn’t involve inflicting pain as we know it – it’s simply a shorthand way to explain the process of learning from external stimulus,” says Professor Ravinder Dahiya of the University’s James Watt School of Engineering. This research could not only lead to more advanced electronic skin but could be used to build prosthetic limbs which are capable of near-human levels of touch sensitivity.
Our incredible sensory abilities are slowly being replicated by machines, but robots still have a long way to go before they catch up with the human body, which is arguably the ultimate machine. While it is unlikely that humans will benefit from enhanced super senses like Spider-Man and other comic book heroes, engineering artificial senses could pave the way for sensitive, bionic skin and more dexterous prostheses, while robots with a great sense of balance and temperature regulation could help us out with undesirable jobs.
While the future is uncertain, developing robots with similar sensory abilities to us is just good common sense.
Robotics: The battle to recreate a sense of humour
Programming a robot to have a good sense of humour is no laughing matter. Humans can’t agree on what makes something funny, which makes teaching robots how to laugh almost impossible.
Researchers at Kyoto University in Japan have designed an AI to pick up nuances of humour for an android named Erica. It takes its cues through a shared laughter system and is intended to improve natural conversations between humans and robots. In the shared-laughter model, a human initially laughs, and the AI system responds with laughter as an empathetic response. This is based on three subsystems – one to detect laughter, a second to decide whether to laugh and a third to choose the type of appropriate laughter.
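The three-stage structure is straightforward to sketch. The hypothetical Python example below strings together a laughter detector, a decision about whether shared laughter is appropriate, and a choice of laughter style; the rules are deliberately crude stand-ins and are not Erica’s actual models.

```python
# Hypothetical sketch of a three-stage shared-laughter pipeline: detect the
# user's laugh, decide whether to join in, then pick a laughter style.
# The rules below are invented placeholders, not the Kyoto system's models.
def detect_laughter(utterance: str) -> bool:
    """Stage 1: crude stand-in for a laughter detector."""
    return "haha" in utterance.lower()

def should_laugh(user_laughed: bool, context: str) -> bool:
    """Stage 2: decide whether shared laughter is an appropriate response."""
    return user_laughed and context != "serious"   # e.g. don't laugh at bad news

def choose_laugh_type(context: str) -> str:
    """Stage 3: pick a style of laugh for the speech synthesiser."""
    return "social chuckle" if context == "small talk" else "mirthful laugh"

def respond(utterance: str, context: str = "small talk") -> str:
    if should_laugh(detect_laughter(utterance), context):
        return choose_laugh_type(context)
    return "no laughter"

print(respond("...and then the robot fell over, haha!"))       # -> social chuckle
print(respond("The results were disappointing.", "serious"))   # -> no laughter
```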
“We think that one of the important functions of conversational AI is empathy,” explains Dr Koji Inoue, an assistant professor at Kyoto University. “Robots should have a distinct character, and we think that they can show this through their conversational behaviours, such as laughing, eye gaze, gestures and speaking style.”