Researchers To Incorporate Human Senses In Robots
Nausheen Shehnaz

Baseball players with years of experience develop accurate reflexes and an intuitive feel for the trajectories of different pitches. When vision, hearing, touch and, in this case, muscle memory work together seamlessly, the bat swings and makes an accurate hit. In a robot, by contrast, a linkage system must slowly coordinate sensory data with its motor capabilities before the machine can process the situation and decide on its next action.

Researchers at the University of Maryland have introduced a new way to combine perception and motor commands using hyperdimensional computing theory, a technique that could fundamentally alter and improve the basic artificial intelligence (AI) task of sensorimotor representation, enabling agents to translate what they sense into what they do. The work was published in Science Robotics.
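To give a sense of what a hyperdimensional sensorimotor representation can look like, the following is a minimal Python sketch of the general idea, not the authors' actual method: a perception and an action are each encoded as a random high-dimensional bipolar vector, and element-wise multiplication "binds" them into a single vector from which the action can later be recovered. The symbol names and the dimensionality are illustrative assumptions.

```python
import numpy as np

D = 10_000          # hypervector dimensionality (illustrative choice)
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

# Hypothetical symbols: one sensed event, one motor command.
perceive_ball = random_hv()
swing_bat     = random_hv()

# Binding (element-wise multiplication) fuses perception and action into
# one vector; for bipolar vectors, multiplication is its own inverse.
sensorimotor = perceive_ball * swing_bat

# Unbinding with the perception recovers the bound action exactly.
recovered = sensorimotor * perceive_ball

def similarity(a, b):
    return a @ b / D     # normalized dot product, near 0 for unrelated vectors

print(similarity(recovered, swing_bat))    # ~1.0: the associated action
print(similarity(recovered, random_hv()))  # ~0.0: an unrelated vector
```

Because the bound vector lives in the same space as its components, perception and action share one "language" instead of requiring a separate translation layer between them.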

Integration has been one of the most crucial challenges in the robotics field. The sensors and the actuators that move a robot are separate systems, linked together by a central learning mechanism that infers the right action from sensory data, or vice versa. This three-part AI system is cumbersome because each part speaks its own language. When a robot's perceptions and its motor capabilities are instead integrated, the fusion creates "active perception," which would make the robot more autonomous and give it a quicker, more efficient way to complete tasks.

The robot's memories of what it has done or sensed in the past could alter its future perception and also influence its actions. Yiannis Aloimonos, a computer science professor at the University of Maryland, said, “An active perceiver knows why it wishes to sense, then chooses what to perceive, and determines how, when and where to achieve the perception.” He added, “It selects and fixates on scenes, moments in time, and episodes. Then it aligns its mechanisms, sensors, and other components to act on what it wants to see, and selects viewpoints from which to best capture what it intends.”

The researchers’ ultimate goal extends far beyond robotics: to use AI in a fundamentally different way, moving from concepts to signals to language. Deep learning AI methods currently used in computing applications such as data mining, visual recognition and translating images to text could be made faster by hyperdimensional computing. Ph.D. student Anton Mitrokhin said, “Our hyperdimensional theory method can create memories, which will require a lot less computation, and should make such tasks much faster and more efficient.”
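The "memories" Mitrokhin mentions can be pictured as the superposition of many bound perception–action pairs in a single hypervector. The sketch below, again an illustrative assumption rather than the published implementation, stores several hypothetical pairs in one memory vector and recalls the action associated with a given perception using nothing more than element-wise products and dot products.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(1)
rand_hv = lambda: rng.choice([-1, 1], size=D)

# Hypothetical perception -> action associations.
pairs = {"see_fastball":  "swing_early",
         "see_curveball": "swing_late",
         "see_wild_pitch": "hold_swing"}

# One random hypervector per symbol.
symbols = {name: rand_hv() for name in set(pairs) | set(pairs.values())}

# A single "memory" hypervector: the sum (bundle) of all bound pairs.
memory = np.sum([symbols[p] * symbols[a] for p, a in pairs.items()], axis=0)

def recall(perception):
    """Unbind the perception and return the most similar stored action."""
    probe = memory * symbols[perception]
    return max(pairs.values(), key=lambda a: probe @ symbols[a])

print(recall("see_curveball"))   # -> 'swing_late'
```

The appeal is that storage and recall stay cheap: the whole memory is one fixed-size vector, and lookup is a handful of vector operations rather than a full learned model.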
