New pilots are taking to the wide-open skies once ruled by birds. These aerial pioneers are drones: not living creatures, but products of deliberate innovation. And they aren’t your typical flying robots, buzzing around like mechanical bees. Rather, they are avian-inspired marvels that rely on liquid neural networks to navigate unseen, ever-changing environments with precision and ease. Taking a cue from the adaptability of the organic brain, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a method for training robust flight-navigation agents to perform vision-based navigation tasks in complex, unfamiliar environments. Liquid neural networks, which can continuously adapt to new data inputs, demonstrated robust decision-making in previously unseen domains such as forests and urban landscapes, and under added noise, rotation, and occlusion. By outperforming more complex models on these navigation tasks, the adaptable networks could enable real-world drone applications such as search and rescue, delivery, and wildlife monitoring.
A study by the researchers, published today in the journal Science Robotics, details how this new class of agents can adapt to significant distribution shifts, a long-standing challenge in the field. The team’s machine learning algorithm captures the causal structure of the task from high-dimensional, unstructured data, such as the pixel inputs of a drone-mounted camera. The networks then extract the crucial aspects of the task (i.e., they understand the task at hand) and ignore irrelevant features, allowing navigation skills acquired in one setting to transfer seamlessly to a new target environment.
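For readers wondering what makes these networks “liquid”: the name comes from the liquid time-constant formulation introduced in the group’s earlier work (Hasani et al.). A simplified rendering of those dynamics, not necessarily the exact variant used in this paper, is

    \frac{dx(t)}{dt} = -\left[\frac{1}{\tau} + f\big(x(t), I(t), \theta\big)\right] \odot x(t) + f\big(x(t), I(t), \theta\big) \odot A

where x(t) is the hidden state of the neurons, I(t) is the sensory input (for example, features extracted from camera pixels), τ and A are learned parameters, and f is a bounded nonlinearity. Because f depends on the current input, each neuron’s effective time constant shifts with the incoming data stream, which is what lets the network keep adapting after training.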
“We are thrilled by the enormous potential of our learning-based control approach for robots, as it lays the groundwork for solving problems that arise when training in one environment and deploying in an entirely different environment without additional training,” said Daniela Rus, CSAIL director and the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT.
“Our experiments show that we can effectively teach a drone to locate an object in a forest during the summer, and then deploy the model in the winter, in a vastly different setting, or even in urban environments, with varied tasks such as seeking and following,” Rus added. “This adaptability is made possible by the causal underpinnings of our solutions. Such flexible algorithms could one day aid decision-making based on data streams that change over time, such as medical diagnosis and self-driving applications.”
One big question loomed up front: Can machine learning systems understand the task they are given from data when flying a drone toward an unlabeled object? And can they transfer the learned skill and task to new environments with drastic changes in scenery, such as flying from a forest to an urban landscape?
Furthermore, unlike the remarkably adaptable biological brain, deep learning systems struggle to infer causal relationships, often overfit their training data, and fail to adapt to new environments or changing conditions. This is a particular concern for resource-constrained embedded systems, such as aerial drones, that must navigate a wide range of environments and respond to obstacles instantaneously.
Liquid networks, in contrast, offer promising preliminary evidence that deep learning systems can overcome this critical weakness. The team’s system was first trained on data collected by human pilots, to see whether it could transfer the learned navigation skills to new environments under drastic changes in scenery and conditions. Unlike traditional neural networks, which learn only during the training phase, the parameters of liquid neural networks can change over time, making them interpretable as well as resilient to unexpected or noisy data. In a series of closed-loop quadrotor control experiments, the drones underwent range tests, stress tests, target rotation and occlusion, hiking with adversaries, triangular loops between objects, and dynamic target tracking.
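To make the “parameters can change over time” point concrete, here is a minimal, illustrative sketch of a single liquid time-constant cell update in Python. This is not the authors’ implementation; the weight names, sigmoid gate, and explicit-Euler integration are assumptions made for clarity, following the dynamics sketched earlier.

    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def ltc_step(x, inputs, W_in, W_rec, b, tau, A, dt=0.02):
        """One explicit-Euler step of a liquid time-constant (LTC) cell.

        x: hidden state (n,); inputs: sensory input (m,)
        W_in (n, m), W_rec (n, n), b (n,): learned gate parameters
        tau (n,): base time constants; A (n,): learned target states
        """
        # Input-dependent gate: because f depends on the data the cell is
        # seeing right now, the effective decay rate (1/tau + f) varies
        # over time -- the "liquid" property described above.
        f = sigmoid(W_in @ inputs + W_rec @ x + b)
        dxdt = -(1.0 / tau + f) * x + f * A
        return x + dt * dxdt

    # Toy usage: drive a 4-neuron cell with random 3-dimensional inputs.
    rng = np.random.default_rng(0)
    n, m = 4, 3
    x = np.zeros(n)
    W_in, W_rec, b = rng.normal(size=(n, m)), rng.normal(size=(n, n)), np.zeros(n)
    tau, A = np.ones(n), rng.normal(size=n)
    for _ in range(100):
        x = ltc_step(x, rng.normal(size=m), W_in, W_rec, b, tau, A)
    print(x)  # final hidden state after 100 steps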
The liquid networks outperformed state-of-the-art counterparts, tracking moving targets and executing multi-step loops between objects in never-before-seen environments. The team believes that the ability to learn from limited expert data, and to understand a given task while generalizing to new environments, could make autonomous drone deployment more efficient, cost-effective, and reliable. They note that liquid neural networks could enable autonomous air mobility drones for environmental monitoring, package delivery, autonomous vehicles, and robotic assistants.
MIT CSAIL research affiliate Ramin Hasani said: “In our work, the approach is demonstrated under controlled conditions, and we validate its behavior across a variety of scenarios in a simple setup. Many hard problems concerning the safety of AI systems in perception and autonomous navigation still have to be resolved before they can be used safely in our society.”
“Robust learning and performance in out-of-distribution tasks and scenarios are among the key problems that machine learning and autonomous robotic systems must conquer to make further inroads into society-critical applications,” commented a professor of AI safety. “In this context, the performance of liquid neural networks, a novel brain-inspired paradigm developed by the MIT authors and described in this study, is remarkable. If these results are confirmed in other experiments, the paradigm developed here will help make AI and robotic systems more reliable, robust, and efficient.”
Clearly, the sky is no longer the limit, but a vast playground for the boundless possibilities of these airborne wonders.
More information: Makram Chahine et al, Robust flight navigation out of distribution with liquid neural networks, Science Robotics (2023). DOI: 10.1126/scirobotics.adc8892