Scientists from the University of Surrey are teaching artificial intelligence models to re-light images of people with different body types and clothing without the need for 3D modeling. The work could benefit the film, television and gaming industries in the near future.
Current AI approaches to re-lighting people often require a geometric 3D model of the subject, such as a triangle mesh or point cloud, to be built before correct shadowing effects can be applied.
The new model, in contrast, requires only 2D images of people, together with a description of the new lighting, to render self-shadowing effects. The paper is published by The Eurographics Association.
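The article does not detail the paper's architecture, but the general idea of predicting self-shadowing directly from a 2D image and a target light can be sketched as below. This is a minimal, hypothetical PyTorch example: the network layout, the `ShadowPredictor` name and the light encoding are assumptions for illustration, not the authors' method.

```python
# Illustrative sketch only: assumes a simple encoder-decoder that maps a
# 2D image of a person plus a target light direction to a per-pixel
# self-shadow map. Not the architecture from the paper.
import torch
import torch.nn as nn

class ShadowPredictor(nn.Module):
    def __init__(self):
        super().__init__()
        # Input: 3 RGB channels + 3 channels broadcasting the light direction.
        self.encoder = nn.Sequential(
            nn.Conv2d(6, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, image, light_dir):
        # image: (B, 3, H, W); light_dir: (B, 3) unit vector for the new light.
        b, _, h, w = image.shape
        light_map = light_dir.view(b, 3, 1, 1).expand(b, 3, h, w)
        x = torch.cat([image, light_map], dim=1)
        # Sigmoid keeps the output in [0, 1]: 0 = fully shadowed, 1 = fully lit.
        return torch.sigmoid(self.decoder(self.encoder(x)))

# Example: predict a self-shadow map for one 256x256 image.
model = ShadowPredictor()
shadow = model(torch.randn(1, 3, 256, 256), torch.tensor([[0.0, 0.5, 0.866]]))
print(shadow.shape)  # torch.Size([1, 1, 256, 256])
```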
Farshad Einabadi, a senior researcher at the University of Surrey's Centre for Vision, Speech and Signal Processing (CVSSP), said, "This AI4ME project is a great example of how AI can be used to help increase the accessibility and realism of compositing techniques in film and TV by teaching computer models what human body types, articulations and clothing look like in different lighting settings without relying on time-consuming and expensive 3D modeling. It allows an alternative approach to (re)lighting actors and presenters.
“We can considerably reduce the time and cost needed to render (re-light) full-body human images by training AI models on how to adapt to different lighting, body type, and clothing scenarios. This research and its findings should be useful, for example, for studios to augment their green screen production pipelines.”
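As a rough illustration of how a predicted self-shadow map might feed a green screen compositing step, consider the sketch below. The function name, the ambient term and the blend formula are assumptions made for this example, not the studio pipeline described in the article.

```python
# Hedged illustration: darken the keyed actor where the new light would be
# occluded, then composite over the new background.
import torch

def composite_with_shadow(foreground, alpha, background, shadow, ambient=0.35):
    # foreground, background: (3, H, W) RGB; alpha: (1, H, W) matte from keying;
    # shadow: (1, H, W) predicted self-shadow map (0 = shadowed, 1 = lit).
    lit_foreground = foreground * (ambient + (1.0 - ambient) * shadow)
    return alpha * lit_foreground + (1.0 - alpha) * background
```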
Professor Adrian Hilton, the Director of the Surrey Institute for People-Centred AI, said, "AI4ME's research is pioneering a new approach using AI to enable advanced video editing of lighting and shadows, which normally requires expensive 3D modeling.
“This advance will impact UK creative industries, introducing a new generation of AI-enabled creative tools for film and TV production which maintain video realism, and allowing consumers affordable access to advanced creative technologies and personalized media experiences.”
More information: Farshad Einabadi et al., Learning Self-Shadowing for Clothed Human Bodies, The Eurographics Association (2024). DOI: 10.2312/sr.20241159. openresearch.surrey.ac.uk/espl … odies/99892866602346
Provided by University of Surrey
Citation: A new generation of AI-enabled tools for accessible, personalized media experiences (2024, November 14), retrieved 14 November 2024 from https://techxplore.com/news/2024-11-generation-ai-enabled-tools-accessible.html