Researchers at Carnegie Mellon University’s Robotics Institute (RI) have developed a robotic system that interactively co-paints with people. Collaborative FRIDA (CoFRIDA) can work with users of any artistic ability, inviting collaboration to create art in the real world.
“It’s like the drawing equivalent of a writing prompt,” said Jim McCann, an associate RI professor who runs the RI’s Textiles Lab. “If you’re stuck and you don’t know what to do, it can put something on the page for you. It can break the barrier of an empty page. It’s a really interesting way of enhancing human creativity.”
CoFRIDA builds on past work with FRIDA, a multilab collaboration in the School of Computer Science.
Named after the artist Frida Kahlo, FRIDA (Framework and Robotics Initiative for Developing Arts) can use a paintbrush or a Sharpie to create a painting from a human user’s text prompts or image examples. The project was founded by Jean Oh, an associate research professor in the RI and head of the Bot Intelligence Group (BIG), jointly with McCann and Ph.D. student Peter Schaldenbrand.
To support a more collaborative artistic creation experience, RI Ph.D. student Gaurav Parmar and Assistant Professor Jun-Yan Zhu joined the FRIDA team to develop CoFRIDA. The new system allows users to provide text inputs to describe what they want to paint. They can also participate in the creation process, taking turns painting directly on the canvas with the robot until they’ve realized their artistic vision.
“CoFRIDA requires a higher level of intelligence than the original FRIDA, which creates an artwork alone from start to completion,” Oh said. “Co-painting is analogous to working with another person, constantly needing to guess what they want. CoFRIDA has to understand the human user’s high-level goals to make that user’s strokes meaningful toward the goal.”
Co-painting is by its nature collaborative, and collecting data that teaches a robot to collaborate is difficult and time-consuming. To get around this complication, CoFRIDA uses self-supervised training data generated with FRIDA's stroke simulator and planner.
The researchers created a self-supervised, fine-tuning dataset by having FRIDA simulate paintings that consisted of a sequence of brush strokes, from which some strokes could be removed to produce examples of partial paintings.
The team had to determine how to remove elements from drawings in the training data while leaving enough of the image for CoFRIDA to recognize it. For example, researchers took away details like the rim of a wheel or windows in a car but left the outline of the vehicle.
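The data-construction idea above can be sketched in a few lines: render a full painting as a stroke sequence, then drop subsets of strokes to produce (partial, full) training pairs. This is an illustrative sketch only; the function and field names are assumptions, not the authors' actual code, and a real pipeline would remove detail strokes preferentially rather than uniformly at random.

```python
import random

def make_partial_paintings(strokes, n_examples=3, keep_fraction=0.5):
    """Produce (partial, full) painting pairs by removing strokes.

    strokes: a full painting as an ordered list of stroke descriptors
             (e.g. dicts holding path, width, and color -- hypothetical schema).
    Returns a list of (partial_painting, full_painting) training pairs.
    """
    pairs = []
    for _ in range(n_examples):
        # Keep a random subset of strokes, preserving their drawing order.
        k = max(1, int(len(strokes) * keep_fraction))
        kept_indices = sorted(random.sample(range(len(strokes)), k))
        partial = [strokes[i] for i in kept_indices]
        pairs.append((partial, strokes))
    return pairs
```

In the paper's actual setup, the choice of which strokes to drop matters: removing a car's window strokes while keeping its outline leaves a partial painting the model can still recognize and complete.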
“We tried to simulate different states of the drawing process,” Zhu said. “It’s easy to get to the final sketch, but it’s quite hard to imagine the intermediate stage of this process.”
Using the dataset of partial and complete paintings, the researchers fine-tuned InstructPix2Pix, a text-conditioned image-editing model, enabling CoFRIDA to add brush strokes that build on the existing content of the canvas. Because the training data comes from FRIDA's stroke simulator, the generated paintings respect the robot's real constraints, such as its limited set of tools.
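The fine-tuning data described above pairs a current canvas and a text prompt with a more complete target canvas. A minimal sketch of that record format, assuming hypothetical field names (the paper's actual schema may differ):

```python
from dataclasses import dataclass

@dataclass
class CoPaintExample:
    """One fine-tuning record for an InstructPix2Pix-style editing model:
    the model learns to map (current canvas, text instruction) -> next canvas."""
    canvas_before: str   # path to the rendered partial painting
    instruction: str     # the user's text prompt, e.g. "a sailboat at sunset"
    canvas_after: str    # path to the rendered, more complete painting

def build_examples(render_pairs, prompt):
    """Turn (partial_image, full_image) rendered pairs into editing examples."""
    return [CoPaintExample(before, prompt, after)
            for before, after in render_pairs]
```

At inference time, the same interface supports turn-taking: whatever the human has drawn becomes the next `canvas_before`, and the model proposes strokes toward the prompted goal.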
Outside the lab, researchers hope CoFRIDA can teach people about robotics and expand creativity, encouraging people who may doubt their artistic abilities. CoFRIDA can also help make users’ visions come to life or take the artwork in a whole new direction.
“If you start from a very simple sketch, CoFRIDA takes the artwork in vastly different directions. If you ask for six different drawings, you’ll get six very different options,” Schaldenbrand said.
“It’s nice to be able to make decisions at a high level because it makes me feel like an art director. The robot makes these low-level decisions of where to put the marker, but I get to decide what the overall thing will look like. I still feel in control of the creative process, and in a world where artists fear replacement by AI, CoFRIDA as an example of a robot designed to support human creativity is incredibly relevant.”
Researchers hope further work can integrate personalization into CoFRIDA, giving users even more control over the style of the finished product.
The team’s paper, “CoFRIDA: Self-Supervised Fine-Tuning for Human-Robot Co-Painting,” won the Best Paper Award on Human-Robot Interaction at the 2024 IEEE International Conference on Robotics and Automation (ICRA) in Yokohama, Japan. An accompanying CoFRIDA demonstration was a finalist for Best Demo at the ICRA EXPO. The paper is available on the arXiv preprint server.
More information:
Peter Schaldenbrand et al., “CoFRIDA: Self-Supervised Fine-Tuning for Human-Robot Co-Painting,” arXiv (2024). DOI: 10.48550/arXiv.2402.13442
Citation: Research brings together humans, robots and generative AI to create art (2024, May 31), retrieved 31 May 2024 from https://techxplore.com/news/2024-05-humans-robots-generative-ai-art.html