Most of the UAVs currently in use rely on the transmission and reflection of electromagnetic waves to detect and avoid obstacles, but this consumes a lot of power. An alternative is to use optical lenses to capture and analyze images, but the volume of information to be processed is too large to handle quickly, and this approach also consumes considerable power.
Intrigued by the fruit fly’s uncanny ability to avoid obstacles, Tang figured that it might be possible to replicate the optical nerve of this tiny insect and adapt it to AI applications.
The first task was to solve the problem of information overload. According to Tang, the image sensors currently used in cameras and mobile phones have millions of pixels, whereas the eye of a fruit fly has only about 800. When the fruit fly's brain processes visual signals such as contour and contrast, it uses a detection mechanism that automatically filters out unimportant information and pays attention only to moving objects it is liable to collide with.
By imitating this detection mechanism, the research team has developed an AI chip that allows a drone to be operated with hand gestures and an image sensor.
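The release does not describe the chip's internals, but the basic idea of discarding the static background and passing on only moving pixels can be illustrated with a simple frame-differencing sketch in Python; the threshold, sensor size, and test pattern below are illustrative assumptions, not details of the NTHU design.

```python
import numpy as np

def moving_pixel_mask(prev_frame, curr_frame, threshold=15):
    """Keep only pixels whose brightness changed between two frames.

    A very loose stand-in for a fly-like front end that discards the
    static background and passes on only motion; the threshold is an
    illustrative value, not a parameter of the NTHU chip.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold  # True where something moved

# Example: a coarse 20 x 40 sensor (~800 pixels, comparable in count to
# the fruit fly's eye as described above).
prev = np.random.randint(0, 100, (20, 40)).astype(np.uint8)
curr = prev.copy()
curr[5:8, 10:14] += 50                     # a small object brightens
mask = moving_pixel_mask(prev, curr)
print(mask.sum(), "of", mask.size, "pixels kept for further processing")
```

Only the handful of changed pixels survives the filter, so whatever processing follows sees a tiny fraction of the raw image data.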
First the drone is taught to focus on what is most important, and then it is taught how to judge distance and the likelihood of a collision. To this end, Lo investigated in detail how the fruit fly detects optical flow, drawing extensively on the maps of the fruit fly's neural pathways produced by the Brain Research Center at NTHU. "Optical flow is the relative trajectory left in the field of vision by nearby moving objects, which the brain uses to determine their distance and avoid obstacles," Lo explained.
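Lo's neural-pathway analysis is not detailed in the release, but the geometry behind judging a collision from optical flow can be sketched as a time-to-contact calculation: as an object approaches, its image expands, and the ratio of its apparent size to its rate of expansion approximates the time remaining before impact. The Python below is a minimal illustration with made-up numbers, not the team's method.

```python
def time_to_contact(size_prev, size_curr, dt):
    """Estimate seconds until collision from how much an object's image
    expands between two frames taken dt seconds apart.

    tau ~= theta / (d_theta/dt), the classic "looming" cue associated
    with flying insects; all values here are illustrative only.
    """
    expansion_rate = (size_curr - size_prev) / dt
    if expansion_rate <= 0:
        return float("inf")       # not approaching
    return size_curr / expansion_rate

# An obstacle's image grows from 12 to 15 pixels across in 0.05 s:
print(round(time_to_contact(12, 15, 0.05), 2), "seconds to impact")
```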
Tang said that the AI chip developed by his research team represents a major breakthrough in in-memory computing. Computers and mobile phones first move data from memory to the central processing unit (CPU), and once it has been processed, move the data back to memory for storage; this shuttling of data consumes up to 90% of the energy and time of the AI deep-learning process. By contrast, the AI chip developed by the NTHU team mimics neuronal synapses, allowing it to perform computations within the memory itself, which greatly improves efficiency.
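The release gives no circuit details, but the contrast Tang describes can be sketched conceptually: in a conventional (von Neumann) flow, every weight is read out of memory before the multiply and the result is written back, whereas in an in-memory scheme such as a memristive crossbar, the weights stay put as conductances and the multiply-accumulate happens inside the array. The Python below is only a conceptual analogy of that difference, not the NTHU chip's design.

```python
import numpy as np

class CrossbarArray:
    """Toy stand-in for an analog in-memory compute array: the weight
    matrix is 'stored' inside the array and never leaves it."""

    def __init__(self, weights):
        self.conductances = np.asarray(weights, dtype=float)

    def multiply_accumulate(self, input_voltages):
        # In hardware, currents summing along each column perform the
        # multiply-accumulate in place; here we just model the result.
        return self.conductances @ input_voltages


def von_neumann_layer(memory, input_vector):
    """Conventional flow: copy weights out of 'memory', compute in the
    'CPU', write the result back -- the data movement Tang says accounts
    for up to 90% of deep-learning energy and time."""
    weights = memory["weights"].copy()          # memory -> CPU
    result = weights @ input_vector             # compute
    memory["activations"] = result              # CPU -> memory
    return result


weights = np.random.rand(4, 8)
x = np.random.rand(8)
print(von_neumann_layer({"weights": weights}, x))
print(CrossbarArray(weights).multiply_accumulate(x))
```

Both paths produce the same numbers; the difference the chip exploits is where the arithmetic physically happens and how much data has to move to make it happen.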
View source version on businesswire.com: https://www.businesswire.com/news/home/20200304005029/en/
Contacts
Holly Hsueh
(886)3-5162006
hoyu@mx.nthu.edu.tw
Source: National Tsing Hua University