
FRIDA, an AI-Powered Robot, Collaborates with Humans to Create Art

Carnegie Mellon University’s Robotics Institute has a new artist-in-residence.

FRIDA, a robotic arm with a paintbrush affixed to it, collaborates on artwork with people using artificial intelligence. Ask FRIDA to paint a picture, and it gets to work putting brush to canvas.

“There’s this one painting of a frog ballerina that I think turned out really nicely,” said Peter Schaldenbrand, a School of Computer Science Ph.D. student in the Robotics Institute working with FRIDA and exploring AI and creativity. “It is really silly and fun, and I think the surprise of what FRIDA generated based on my input was really fun to see.”

FRIDA is an acronym that stands for Framework and Robotics Initiative for Developing Arts and is named after Frida Kahlo. Schaldenbrand is leading the initiative, which includes RI faculty members Jean Oh and Jim McCann, and has drawn students and researchers from around CMU.

Users can instruct FRIDA by providing a text description, contributing other pieces of art to inspire its style, or uploading a photograph and asking it to paint its likeness. The team is experimenting with other inputs as well, including audio. They played ABBA’s “Dancing Queen” and asked FRIDA to paint it.

“FRIDA is a robotic painting system, but FRIDA is not an artist,” Schaldenbrand said. “FRIDA is not generating the ideas to communicate. FRIDA is a system that an artist could collaborate with. The artist can specify high-level goals for FRIDA and then FRIDA can execute them.”

The robot uses AI models similar to those powering tools like OpenAI’s ChatGPT and DALL-E 2, which generate text or images, respectively, in response to a prompt. FRIDA simulates brush strokes to plan its paintings and uses machine learning to evaluate its progress as it works.

FRIDA’s final products are impressionistic and whimsical. The brushstrokes are bold and lack the precision so often demanded of robotic tasks. FRIDA riffs on mistakes, blending an accidental splatter of paint into the final effect.

“FRIDA is a project exploring the intersection of human and robotic creativity,” McCann said. “FRIDA is using the kind of AI models that have been developed to do things like caption images and understand scene content and applying it to this artistic generative problem.”

FRIDA taps into AI and machine learning several times during its artistic process. First, it spends an hour or more practicing with its paintbrush. Then, to analyze the user’s input, it employs large vision-language models trained on massive datasets of paired text and images scraped from the internet, such as OpenAI’s Contrastive Language-Image Pre-Training (CLIP). AI systems use these models to generate new text or images from a prompt.
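
As a concrete illustration (not FRIDA’s actual code), the sketch below shows how a CLIP-style model can score how well a canvas photograph matches a text goal, using the publicly available openai/clip-vit-base-patch32 checkpoint via Hugging Face; the file name and prompt are hypothetical.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def clip_score(canvas: Image.Image, prompt: str) -> float:
    """Score how well the current canvas matches the text goal."""
    inputs = processor(text=[prompt], images=canvas, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)
    # logits_per_image is the scaled cosine similarity between image and text embeddings
    return outputs.logits_per_image.item()

canvas = Image.open("canvas_photo.jpg")  # hypothetical snapshot of the work in progress
print(clip_score(canvas, "a frog ballerina"))
```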

Other image-generating tools, such as OpenAI’s DALL-E 2, use large vision-language models to produce digital images. FRIDA takes this a step further, producing tangible artworks with its embodied robotic system.

Reducing the simulation-to-real gap, the difference between what FRIDA composes in simulation and what it actually paints on the canvas, is one of the most difficult technical hurdles in producing a physical painting.

FRIDA uses an approach known as real2sim2real: the robot’s actual brush strokes are used to train the simulator so that it reflects and mimics the physical capabilities of the robot and its painting materials.
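
The project’s real simulator is differentiable and far more sophisticated; as a rough sketch of the real2sim2real idea under assumed stroke parameters and image sizes, one could fit a small neural network that maps stroke parameters to images of strokes the robot actually painted (the random tensors below stand in for that real data).

```python
import torch
import torch.nn as nn

class StrokeSimulator(nn.Module):
    """Maps stroke parameters (e.g., length, bend, thickness) to a grayscale stroke image."""
    def __init__(self, n_params: int = 3, size: int = 32):
        super().__init__()
        self.size = size
        self.net = nn.Sequential(
            nn.Linear(n_params, 256), nn.ReLU(),
            nn.Linear(256, size * size), nn.Sigmoid(),
        )

    def forward(self, params: torch.Tensor) -> torch.Tensor:
        return self.net(params).view(-1, 1, self.size, self.size)

sim = StrokeSimulator()
opt = torch.optim.Adam(sim.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for real data: parameters the robot executed and photos of the resulting strokes.
real_params = torch.rand(64, 3)
real_strokes = torch.rand(64, 1, 32, 32)

for step in range(200):
    opt.zero_grad()
    loss = loss_fn(sim(real_params), real_strokes)  # match simulated strokes to real ones
    loss.backward()
    opt.step()
```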

The FRIDA team is also working to address shortcomings in current large vision-language models by continually updating the ones they use. To reduce American and Western bias, the researchers fed the models headlines from news articles to give them a sense of current events, and then trained them on images and language more representative of other cultures.

This multicultural collaboration effort is led by Zhixuan Liu and Beverley-Claire Okogwu, first-year RI master’s students, and Youeun Shin and Youngsik Yun, visiting master’s students from Dongguk University in Korea. Their efforts include training data contributions from China, Japan, Korea, Mexico, Nigeria, Norway, Vietnam, and other countries.

Once FRIDA’s human user has specified a high-level concept for the painting they want to create, the robot uses machine learning to build its simulation and plan a sequence of brush strokes that satisfies the user’s goals. FRIDA then displays a color palette on a computer screen for a human to mix and provide to the robot.

Automatic paint mixing is currently being developed, led by Jiaying Wei, a master’s student in the School of Architecture, with Eunsu Kang, faculty in the Machine Learning Department.

Armed with a brush and paint, FRIDA makes its first strokes. The robot periodically uses an overhead camera to photograph the work in progress. The image helps FRIDA evaluate its progress and, if necessary, revise its plan. The whole process takes hours.
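
Putting these pieces together, FRIDA’s paint-photograph-replan cycle might be sketched as the loop below; every callable here (plan_strokes, execute_stroke, capture_canvas, evaluate) is a hypothetical stand-in, not the project’s real API.

```python
def painting_loop(plan_strokes, execute_stroke, capture_canvas, evaluate,
                  goal_score=30.0, max_rounds=10):
    """Sketch of a FRIDA-style feedback loop: plan, paint, photograph, re-plan.

    Assumed interfaces: plan_strokes(canvas) returns a list of stroke commands,
    execute_stroke(cmd) moves the arm, capture_canvas() photographs the canvas
    with the overhead camera, and evaluate(canvas) scores progress toward the
    user's goal (e.g., a CLIP similarity score).
    """
    canvas = capture_canvas()
    for _ in range(max_rounds):
        for stroke in plan_strokes(canvas):   # commit the current plan to paint
            execute_stroke(stroke)
        canvas = capture_canvas()             # see what actually landed on the canvas
        if evaluate(canvas) >= goal_score:    # close enough to the goal: stop painting
            break
    return canvas
```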

“People wonder if FRIDA is going to take artists’ jobs, but the main goal of the FRIDA project is quite the opposite. We want to really promote human creativity through FRIDA,” Oh said. “For instance, I personally wanted to be an artist. Now, I can actually collaborate with FRIDA to express my ideas in painting.”

More information about FRIDA is available on its website. The team will present its latest research from the project, “FRIDA: A Collaborative Robot Painter With a Differentiable, Real2Sim2Real Planning Environment,” at the 2023 IEEE International Conference on Robotics and Automation (ICRA) this May in London.

FRIDA resides in the RI’s Bot Intelligence Group (BIG) lab in the Squirrel Hill neighborhood of Pittsburgh.
