Autopilot is a familiar tool for piloting commercial aircraft, but what happens if a wing gets damaged or an engine malfunctions? Is it feasible to develop a software system with a feedback loop: one that quickly evaluates how the controls function on the damaged craft and makes adjustments on the fly to give it the best chance of landing safely?
A research team from Princeton, the University of Texas, and Northeastern University is laying the groundwork for such a system. According to Amir Ali Ahmadi, a Princeton professor of operations research and financial engineering and a member of the team, the fundamental research could one day be applied not only to aircraft controls but to a wide range of other problems, such as managing disease epidemics or producing more precise predictions about climate change or species survival.
What the team is trying to control is a “dynamical system,” one that changes as it moves. Dynamical systems are notoriously challenging to predict and control. Ahmadi and his colleagues Charles Fefferman, the Herbert E. Jones, Jr. ’43 University Professor of Mathematics, and Clarence Rowley, the Sin-I Cheng Professor of Engineering Science, are attempting to create algorithms that can learn the behavior of dynamical systems from data.
“A dynamical system is any entity that evolves over time in some space,” Ahmadi said. “As a result, an airplane is a dynamical system; a robot is a dynamical system; even viral dissemination is a dynamical system.”
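To make the definition concrete, here is a minimal sketch (not from the team’s work) that simulates one of Ahmadi’s examples, viral spread, as a dynamical system, using a standard SIR model with arbitrary parameter values:

```python
# Illustrative only: a standard SIR epidemic model, simulated as a
# dynamical system. The values of beta and gamma are made up.
from scipy.integrate import solve_ivp

def sir(t, y, beta=0.3, gamma=0.1):
    # State y = (S, I, R): susceptible, infected, recovered fractions.
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

sol = solve_ivp(sir, (0, 150), [0.99, 0.01, 0.0])
print(f"peak infected fraction: {sol.y[1].max():.2f}")
```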
Gaining control is particularly difficult when data is scarce, according to Ahmadi. In the event of a damaged aircraft, “the plane has changed,” he said, “and you have less than a minute to come up with a new model of control.”
It is frequently difficult to predict how a system will behave from extremely sparse data. When little is known about how a disease spreads, for instance, it is challenging to recommend the best course of action.
In an article published in SIAM Review, Ahmadi’s research team recently presented an approach that uses extra knowledge to react quickly to changing circumstances when little data is available for decision-making. Mathematicians call this extra knowledge “side information,” and it functions much as experience or expert knowledge does in a human. A doctor’s years of experience, for instance, will enable her to treat a patient effectively even if she has never seen the disease in question.
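As a rough illustration of the idea (a sketch under simplifying assumptions, not the authors’ method), side information can enter a fit as constraints on the model. Below, a scalar vector field is fitted to four noisy samples; the side information is that x = 0 is an equilibrium (encoded by omitting the constant basis term) and that growth near zero saturates at larger x (sign constraints on the coefficients):

```python
# A minimal sketch of fitting with side information (illustrative; the
# published approach is more general). True system: x' = 0.3*x - 0.1*x^2.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(0)
x = np.array([0.5, 1.2, 2.0, 2.8])                 # sparse observations
xdot = 0.3 * x - 0.1 * x**2 + rng.normal(scale=0.01, size=x.size)

# Basis {x, x^2}: leaving out the constant term encodes the side
# information f(0) = 0, i.e., an equilibrium at zero.
Phi = np.column_stack([x, x**2])

# More side information as sign constraints: growth near zero (a >= 0),
# saturation at larger x (b <= 0).
fit = lsq_linear(Phi, xdot, bounds=([0.0, -np.inf], [np.inf, 0.0]))
a, b = fit.x
print(f"fitted model: x' = {a:.3f} x {b:+.3f} x^2")
```

Even with only four samples, the constraints rule out models that contradict what is already known about the system, which is the role side information plays.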
“This project is entirely focused on that,” Ahmadi said. “The goal is to eventually learn a system from very little data and control it the way we want.”
Simple beginnings
The current project is not intended to pursue long-term objectives like aircraft controls. Instead, the work, funded by an Air Force grant, concentrates on much simpler examples in order to learn more about managing a system with many unknowns.
“Using standard control theory, you can comprehend what the controls do. We’re attempting to create a more potent variation of that theory where you don’t know what the controls do, but you learn by using them,” Fefferman said. One example he is working on with Rowley is to bring to a stop an object that is moving straight ahead at a constant speed. As a pilot would want to do in a plane with limited fuel, the researchers also want to exert control with the least amount of energy possible.
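A hedged toy version of that problem (an assumed setup, not the team’s algorithm): discretize the motion as a double integrator and look for the control sequence that brings the velocity to zero while minimizing the total energy. With one linear constraint and many free control inputs, the minimum-norm least-squares solution is exactly the minimum-energy one:

```python
# Toy minimal-energy stop: v' = u, drive v from v0 to 0 in N steps while
# minimizing sum(u_k^2). Illustrative sketch only.
import numpy as np

v0, N, dt = 5.0, 50, 0.1
# One constraint on the whole sequence: v0 + dt * (u_0 + ... + u_{N-1}) = 0
A = dt * np.ones((1, N))
b = np.array([-v0])
u, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimum-norm => minimum-energy
print(f"constant deceleration u = {u[0]:.2f} at every step")
print(f"total energy = {np.sum(u**2):.2f}")
```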
Another problem they might take on is controlling an inverted pendulum, which is akin to balancing a broomstick in your hand; it is an advanced version of a problem frequently given to undergraduate mechanical engineering majors. Without knowing the system’s center of mass, the controller would have to learn the system’s behavior almost immediately. To accomplish that, it would develop control equations based on a brief period of observation, modify the controls after observing what happened, and record the results. The model is designed to complete these learn-and-control iterations quickly.
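One such learn-and-control iteration might look like the following sketch (a hypothetical, linearized setup, not the project’s actual method): the controller watches a brief uncontrolled run, estimates the unknown parameter by least squares, and then designs a stabilizing feedback gain by pole placement.

```python
# Hedged sketch of a learn-and-control iteration on a linearized inverted
# pendulum theta'' = a*theta + u, where a = g/l is unknown to the controller.
import numpy as np
from scipy.signal import place_poles

a_true, dt = 9.8, 0.01                 # true (hidden) parameter, time step
theta, omega = 0.05, 0.0               # small initial tilt, at rest
thetas, accels = [], []
for _ in range(30):                    # brief passive observation, u = 0
    accel = a_true * theta
    thetas.append(theta); accels.append(accel)
    omega += accel * dt
    theta += omega * dt

# Learn: least-squares estimate of a from theta'' = a*theta
a_hat = np.dot(thetas, accels) / np.dot(thetas, thetas)

# Control: state feedback u = -K x for x = (theta, theta'), with poles
# placed in the left half-plane so the upright position becomes stable.
A = np.array([[0.0, 1.0], [a_hat, 0.0]])
B = np.array([[0.0], [1.0]])
K = place_poles(A, B, [-2.0, -3.0]).gain_matrix
print(f"estimated a = {a_hat:.2f}, feedback gain K = {K.ravel()}")
```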
Control versus knowledge
The problems the team investigates involve trade-offs between exploring how a system behaves and exploiting the knowledge already acquired, according to Rowley. “If you use your knowledge too soon, the model might not be good enough to land a plane. However, if you spend too much time studying its behavior, the plane might crash.”
Ufuk Topcu, a team member and associate professor at the University of Texas, said that there is no one method for controlling a system with unknown dynamics, but one of the secrets is to focus on the data that is most valuable. To determine what is worth learning, he said, “you have to approach it from multiple angles and chop the big problem into more manageable pieces.”
The researchers anticipate developing algorithms for controlling at least some elements of a dynamical system. Although the model might not be fast enough to operate in real time, Ahmadi said, it should be able to demonstrate which controls are feasible in a changing system and with what degree of certainty they can succeed.
The article, “Learning Dynamical Systems with Side Information,” appeared in the February issue of SIAM Review.
More information: Amir Ali Ahmadi et al., Learning Dynamical Systems with Side Information, SIAM Review (2023). DOI: 10.1137/20M1388644