Robots can see better with eye imaging technology

Robotics researchers are continually working to improve how machines interact with the world, and eye imaging technology plays a significant role in that effort. Robots, after all, do not have eyes with retinas; instead, instruments based on optical coherence tomography (OCT), an imaging technique best known from ophthalmology, can help them see the world better.

LiDAR (light detection and ranging) is the most common such imaging technology. Because LiDAR acts much like radar, most autonomous car developers are investing heavily in it. Rather than sending out broad radio waves and observing their reflections, however, LiDAR emits short pulses of laser light and times their return. The traditional form of LiDAR can be challenging to use, though: it struggles to detect very weak reflected light signals, and it can be thrown off by interference from even mild sunlight.
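To make the contrast with what follows concrete, the arithmetic behind pulse-based (time-of-flight) ranging can be sketched in a few lines of Python. This is an illustrative toy under idealized assumptions, not code from the Duke study; the speed-of-light constant is real, but the example timing value is invented:

    # Toy sketch of pulsed (time-of-flight) LiDAR ranging; illustrative
    # assumptions only, not code from any real sensor or the Duke study.

    C = 299_792_458.0  # speed of light in a vacuum, m/s

    def pulsed_lidar_range(round_trip_time_s: float) -> float:
        """Distance to a target from the round-trip time of a laser pulse.

        The pulse travels out to the target and back, so the one-way
        distance is half the total path the light covers.
        """
        return C * round_trip_time_s / 2.0

    # Example: an echo arriving about 66.7 nanoseconds after the pulse left
    print(pulsed_lidar_range(66.7e-9))  # roughly 10 m

The faint echo this scheme depends on is exactly what weak reflections and bright sunlight corrupt, which motivates the FMCW approach below.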

FMCW: an advanced eye imaging technology

To address these challenges, researchers have turned to another form of LiDAR, one based on a frequency-modulated continuous wave (FMCW). FMCW LiDAR is an even more advanced variant: instead of emitting discrete pulses, it continuously sweeps the frequency of its laser, taking an approach similar to OCT's, which lets it effectively distinguish its own light, by frequency, from other light sources. Applying this approach in robotics as well as in autonomous cars could greatly increase their vision capabilities.
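The ranging principle behind FMCW can also be sketched. The laser's frequency is swept linearly in time; the echo is a delayed copy of that sweep, so mixing it with the outgoing light yields a beat tone whose frequency is proportional to range. The NumPy sketch below is a minimal illustration under idealized assumptions (a perfectly linear chirp, one stationary target, no noise); the bandwidth, sweep time, and sample rate are invented parameters, not those of the Duke system:

    # Minimal sketch of FMCW ranging with an assumed ideal linear chirp.
    # Illustrative only; parameters are not those of the Duke system.
    import numpy as np

    C = 299_792_458.0   # speed of light, m/s
    B = 1.0e9           # chirp bandwidth, Hz (assumed)
    T = 10.0e-6         # chirp duration, s (assumed)
    FS = 500.0e6        # detector sample rate, Hz (assumed)

    def simulate_beat_signal(target_range_m: float) -> np.ndarray:
        """Mix the returning chirp with the outgoing one.

        The echo is a time-delayed copy of the transmitted sweep, so the
        mixed output is a constant "beat" tone whose frequency grows
        linearly with the round-trip delay (and hence with range).
        """
        tau = 2.0 * target_range_m / C   # round-trip delay, s
        f_beat = (B / T) * tau           # chirp slope times delay, Hz
        t = np.arange(0.0, T, 1.0 / FS)
        return np.cos(2.0 * np.pi * f_beat * t)

    def estimate_range(beat_signal: np.ndarray) -> float:
        """Recover range from the strongest tone in the beat spectrum."""
        spectrum = np.abs(np.fft.rfft(beat_signal))
        freqs = np.fft.rfftfreq(len(beat_signal), d=1.0 / FS)
        f_beat = freqs[np.argmax(spectrum)]
        return C * f_beat * T / (2.0 * B)

    print(estimate_range(simulate_beat_signal(25.0)))  # close to 25.0 m

Because only light matching the sensor's own frequency sweep produces a clean beat tone, stray sunlight is largely rejected, which is the advantage over pulsed LiDAR noted above.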

“FMCW LiDAR shares the same working principle as OCT, which the biomedical engineering field has been developing since the early 1990s,” said Ruobing Qian, a Ph.D. student working in the laboratory of Joseph Izatt, the Michael J. Fitzpatrick Distinguished Professor of Biomedical Engineering at Duke. “But 30 years ago, nobody knew autonomous cars or robots would be a thing, so the technology focused on tissue imaging. Now, to make it useful for these other emerging fields, we need to trade in its extremely high-resolution capabilities for more distance and speed.”

In the journal Nature Communications, the Duke team demonstrates how a few techniques borrowed from their OCT research can improve the data throughput of previous FMCW LiDAR systems by 25 times while still achieving submillimeter depth accuracy.

“It has been very exciting to see how the biological cell-scale imaging technology we have been working on for decades is directly translatable for large-scale, real-time 3D vision,” Izatt said. “These are exactly the capabilities needed for robots to see and interact with humans safely, or even to replace avatars with live 3D video in augmented reality.”

“The world around us is 3D. So, if we want robots and other automated systems to interact with us naturally and safely, they need to be able to see us as well as we can see them,” Izatt said.

News Sources

https://www.sciencedaily.com

https://www.analyticsinsight.net
