The feeling will be familiar to anyone who has played soccer with a robot: the smell of fresh grass fills the air as the sun shines down on your face. You scan your surroundings. A four-legged robot is hurrying toward you, dribbling with determination.
The bot’s dribbling doesn’t match Lionel Messi’s level of skill, but it is impressive in the real world all the same. Researchers from MIT’s Improbable Artificial Intelligence Lab, part of the Computer Science and Artificial Intelligence Laboratory (CSAIL), have created a legged robotic system that can dribble a soccer ball under the same conditions as humans. Their paper will be presented at the 2023 IEEE International Conference on Robotics and Automation (ICRA).
The bot had to navigate a range of natural terrains, including sand, gravel, mud, and snow, and adapt to their differing effects on the ball’s motion using a combination of onboard sensing and computing. Like any dedicated athlete, “DribbleBot” was able to get back up and retrieve the ball after falling.
Programming robots to play soccer has long been an active research topic. Rather than hand-scripting behaviors, the team wanted the robot to automatically learn how to actuate its legs during dribbling, enabling the discovery of hard-to-script skills for responding to varied terrains like snow, gravel, sand, grass, and pavement. Enter simulation.
The simulation is a digital replica of the real world, containing a robot, a ball, and a landscape. You load in the bot and other assets, set the physics parameters, and the simulator handles the forward simulation of the dynamics from there. Simulating 4,000 versions of the robot in parallel and in real time makes data collection 4,000 times faster than using a single robot. That is a lot of data.
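To make that speed-up concrete, here is a minimal sketch of batched data collection, assuming a vectorized simulator in which every array carries a leading environment axis so that a single step advances all worlds at once. The class, dimensions, and placeholder dynamics below are invented for illustration; they are not the team’s actual (GPU-accelerated) simulator.

    import numpy as np

    NUM_ENVS = 4000            # parallel copies of the robot, ball, and terrain
    OBS_DIM, ACT_DIM = 48, 12  # hypothetical observation/action sizes
    DT = 0.02                  # simulation timestep in seconds

    class BatchedDribbleSim:
        """Toy stand-in for a parallel physics simulator: every array has a
        leading NUM_ENVS axis, so one step() call advances all worlds at once."""

        def __init__(self, num_envs):
            self.num_envs = num_envs
            self.obs = np.zeros((num_envs, OBS_DIM))

        def step(self, actions):
            # A real simulator would integrate rigid-body contact dynamics here;
            # this placeholder just nudges the state with the actions plus noise.
            self.obs[:, :ACT_DIM] += DT * actions
            self.obs += 0.01 * np.random.randn(self.num_envs, OBS_DIM)
            rewards = -np.linalg.norm(self.obs[:, :2], axis=1)  # toy reward
            return self.obs, rewards

    sim = BatchedDribbleSim(NUM_ENVS)
    for _ in range(100):  # 100 batched steps = 400,000 transitions of experience
        actions = np.random.uniform(-1.0, 1.0, size=(NUM_ENVS, ACT_DIM))
        obs, rewards = sim.step(actions)

Each loop iteration collects one transition per environment, which is where the “4,000 times faster” arithmetic comes from.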
The robot doesn’t know how to dribble the ball when it first starts; it is only given positive reinforcement when it succeeds and negative reinforcement when it fails. So it is essentially trying to figure out what sequence of forces its legs should apply. “One part of this reinforcement learning approach is that we must design a good reward to facilitate the robot learning a successful dribbling behavior,” says MIT Ph.D. student Gabe Margolis, who co-led the work along with Yandong Ji, research assistant in the Improbable AI Lab. “Once we’ve created that reward, the robot practices: a couple of days in real time, and hundreds of days in the simulator. Over time, it learns to control the soccer ball to match the desired velocity, getting better and better at it.”
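As an illustration of what such a reward might look like, the sketch below scores how closely the ball’s velocity tracks a commanded velocity and penalizes falls. The functional form, scale, and penalty weight are invented for the example, not taken from the paper.

    import numpy as np

    def dribbling_reward(ball_vel, target_vel, fell_over,
                         tracking_scale=0.25, fall_penalty=10.0):
        """Hypothetical reward: near 1 when the ball's velocity matches the
        commanded velocity, strongly negative when the robot falls."""
        tracking_error = np.linalg.norm(np.asarray(ball_vel) - np.asarray(target_vel))
        reward = np.exp(-tracking_error / tracking_scale)   # in (0, 1]
        if fell_over:
            reward -= fall_penalty                          # discourage falling
        return reward

    # Ball moving close to the commanded 1 m/s forward velocity: high reward.
    print(dribbling_reward([0.9, 0.1], [1.0, 0.0], fell_over=False))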
The team also added a recovery controller to the bot’s system. When the robot falls on challenging terrain or is knocked over by an out-of-distribution disturbance, this controller lets it stand back up and quickly switch to its dribbling controller to keep pursuing the ball.
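A minimal sketch of that switching logic might look like the following, assuming a supervisor that watches an uprightness signal such as the projected gravity vector. The state key, threshold, and policy interfaces here are hypothetical.

    def choose_action(state, dribble_policy, recovery_policy, upright_threshold=0.6):
        """Arbitrate between the two controllers at each timestep (illustrative)."""
        # state["gravity_z"]: projected gravity along the body z-axis, near 1.0
        # when the robot stands upright and near 0 or negative after a fall.
        if state["gravity_z"] < upright_threshold:
            return recovery_policy(state)   # stand back up first
        return dribble_policy(state)        # then resume chasing the ball

    # Demo with trivial stand-in policies and a fallen robot state.
    action = choose_action({"gravity_z": 0.2},
                           dribble_policy=lambda s: "dribble",
                           recovery_policy=lambda s: "recover")
    print(action)  # -> "recover"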
“If you look around, most robots in use today are wheeled. But suppose there were an earthquake or a flood, and we wanted robots to assist humans in search and rescue. We need the machines to travel over uneven terrain, and wheeled robots can’t do that,” says Pulkit Agrawal, MIT professor, CSAIL principal investigator, and head of the Improbable AI Lab. “The whole point of studying legged robots is to explore terrain outside the reach of current robotic systems. Our aim in creating algorithms for them is to provide autonomy in challenging and complex terrains that are currently beyond robots’ reach.”
This work taps into a deep-seated fascination with soccer-playing robots. The idea was first raised by Canadian professor Alan Mackworth in his 1992 paper, “On Seeing Robots,” presented at VI-92. Japanese researchers later organized a workshop on “Grand Challenges in Artificial Intelligence,” where they discussed using soccer to advance science and technology. The Robot J-League launched a year later, near-instant worldwide enthusiasm followed, and “RoboCup” was born soon after.
When dribbling a soccer ball, DribbleBot is more constrained in its motion and in the terrain it can traverse than when walking alone. The robot has to modify its locomotion to apply forces to the ball in order to dribble, and the ball may interact with a given environment, such as dense grass or pavement, differently than the robot does. For instance, a soccer ball experiences a drag force on grass that is absent on pavement, and an incline applies an acceleration force that alters the ball’s usual path. The robot’s own locomotion, by contrast, is often less affected by these differences in dynamics (as long as it doesn’t slip), so the soccer test is sensitive to variations in terrain that locomotion alone is not.
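To see why the ball’s dynamics change with terrain, consider a one-dimensional toy model in which the terrain contributes a velocity-opposing drag and an incline contributes a gravity component along the slope. The coefficients below are made up for illustration, not measured values.

    import numpy as np

    G = 9.81  # gravitational acceleration, m/s^2

    def ball_accel(vel, rolling_drag, slope_rad):
        """Toy 1-D ball acceleration: drag is large on grass and near zero on
        pavement; a slope pulls the ball along the incline."""
        return -rolling_drag * vel - G * np.sin(slope_rad)

    print(ball_accel(vel=2.0, rolling_drag=1.5, slope_rad=0.0))            # grass
    print(ball_accel(vel=2.0, rolling_drag=0.1, slope_rad=0.0))            # pavement
    print(ball_accel(vel=2.0, rolling_drag=0.1, slope_rad=np.deg2rad(5)))  # incline

Running this shows the same kick decaying far faster on grass than on pavement, which is exactly the variation the dribbling controller must absorb.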
“Previous methods simplified the dribbling problem by assuming flat, hard ground for the modeling. The motion is also designed to be more static; the robot isn’t trying to run and manipulate the ball at the same time,” says Ji. “That’s where more difficult dynamics enter the control problem. We tackled this by extending recent advances that have enabled better outdoor locomotion into this compound task, which combines aspects of locomotion and dexterous manipulation.”
From a technical standpoint, the robot carries a set of sensors that let it perceive its surroundings: to feel where it is, “understand” its position, and “see” some of its environment. It also has a set of actuators that let it apply forces to itself and to objects. Between the sensors and actuators sits the computer, or “brain,” which is responsible for translating sensor data into actions applied through the motors. The robot can’t see snow when it is running on it, but it can feel it through its motor sensors. Because playing soccer is harder than walking, though, the team added cameras to the robot’s head and body, giving it a new sensory capability: vision. Then came the dribbling.
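That pipeline amounts to a fixed-rate sense-compute-act loop. The sketch below shows the shape of such a loop with stand-in sensor and motor interfaces; none of these classes or rates come from the actual system.

    import time
    import numpy as np

    class Sensors:
        def read(self):
            return np.zeros(48)       # stand-in for joint, IMU, and camera data

    class Motors:
        def apply(self, commands):
            pass                      # stand-in for sending joint commands

    def control_loop(sensors, policy, motors, hz=50, steps=500):
        """Sense-compute-act at a fixed rate: the onboard 'brain' translates
        sensor readings into motor commands."""
        period = 1.0 / hz
        for _ in range(steps):
            obs = sensors.read()
            commands = policy(obs)    # observations in, joint commands out
            motors.apply(commands)
            time.sleep(period)

    control_loop(Sensors(), lambda obs: np.zeros(12), Motors(), steps=5)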
“Our robot can operate in the wild because it carries all of its sensors, cameras, and compute on board. Some innovations were needed to fit the whole controller onto this onboard computer,” Margolis says. “That’s one area where learning helps: we can train a lightweight neural network to handle the cluttered sensor data seen by the moving robot. This is in stark contrast to most robots in use today, where a robot arm typically sits on a fixed base at a workbench with a giant computer plugged directly into it. Neither the computer nor the sensors are inside the arm itself, so the whole setup is heavy and hard to move around.”
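For a sense of scale, a policy network small enough for an embedded computer might look like the PyTorch module below. The layer sizes and input/output dimensions are illustrative, not DribbleBot’s actual architecture.

    import torch
    import torch.nn as nn

    # Illustrative sizes: 48 observation features in, 12 joint targets out.
    policy = nn.Sequential(
        nn.Linear(48, 128), nn.ELU(),
        nn.Linear(128, 128), nn.ELU(),
        nn.Linear(128, 12),
    )

    n_params = sum(p.numel() for p in policy.parameters())
    print(f"{n_params:,} parameters")   # ~24k parameters: kilobytes of weights

    with torch.no_grad():
        action = policy(torch.zeros(1, 48))   # one forward pass per control step

A network this small evaluates in well under a millisecond on a modest CPU, which is what makes running the controller entirely on board plausible.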
Some terrains proved difficult for DribbleBot, and there is still a long way to go before these robots are as agile as their counterparts in nature. The controller is not currently trained in simulated environments that include slopes or stairs, and the robot does not sense the terrain’s geometry; it only estimates its material contact properties, such as friction. If there is a step up, for instance, the robot will get stuck and be unable to lift the ball over it, a limitation the team wants to keep working on. The researchers are also eager to apply lessons learned in developing DribbleBot to other tasks that involve combined locomotion and object manipulation, such as swiftly moving a variety of objects from place to place using arms or legs.
Provided by Massachusetts Institute of Technology