While some may think Snapchat is fading, the app continues to attract a considerable number of active users.
Acknowledging past shortcomings in machine learning utilisation, Snap’s CEO Evan Spiegel announced a new, assertive strategy to integrate AI and machine learning technologies into its services, marking a substantial shift from the company’s long-running focus on overhauling its advertising approach.
In an interview with Bloomberg, Spiegel emphasised the need to improve their machine learning capabilities to reach cutting-edge standards. “We needed to improve there and bring together some of our most senior machine learning folks to just talk about what it would look like for us to get to state of the art and really invest,” he stated.
Soon afterward, Snap debuted its newest generative AI technology that allows phone cameras to create more lifelike lenses—the features on the app that let you turn into a dog or have giant bug eyes—when recording videos and taking photos. Snapchat hopes that this change will help it compete more effectively with other social media platforms.
Snap has been a pioneer in augmented reality (AR) technology, which layers digital effects onto real-world images or videos. Although Snap still operates in the shadow of larger rivals such as Meta, the company is making a significant bet on more sophisticated and, frankly, more fun AR lenses, hoping they will attract new users and advertisers to the Snapchat platform.
The company also announced that AR developers can now create AI-powered lenses, which Snapchatters will be able to use extensively in their content. Additionally, Snap unveiled a new iteration of its developer software, Lens Studio. This more advanced version of the tool, introduced late last year, initially allowed creators to build their own AR experiences for Snapchat; it now extends to websites and other apps.
Snap’s CTO Bobby Murphy said the improved Lens Studio would dramatically reduce the time required to create AR effects, from weeks to minutes or hours, and would also facilitate the development of more sophisticated work. “What’s fun for us is that these tools both stretch the creative space in which people can work, but they’re also easy to use, so newcomers can build something unique very quickly,” Murphy explained in an interview with Reuters.
The new Lens Studio includes a suite of generative AI tools, such as an AI assistant that can answer developers’ questions. Another tool lets artists type a prompt and automatically generate a three-dimensional asset to use in their AR lenses, eliminating the need to build a 3D model from scratch.
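To make the prompt-to-asset idea concrete, here is a minimal sketch in Python of how such a workflow might look from a developer’s side. The endpoint URL and the `generate_3d_asset` helper are hypothetical illustrations, not Snap’s actual Lens Studio interface; the point is simply that a text prompt goes in and a ready-to-use 3D file comes out.

```python
import requests

# Hypothetical endpoint for illustration only: Snap has not published
# a public text-to-3D API, and this is not the Lens Studio interface.
GENERATION_URL = "https://example.com/api/v1/text-to-3d"

def generate_3d_asset(prompt: str, out_path: str = "asset.glb") -> str:
    """Send a text prompt and save the returned 3D asset (e.g. a glTF binary)."""
    response = requests.post(
        GENERATION_URL,
        json={"prompt": prompt, "format": "glb"},
        timeout=120,  # asset generation can take a while
    )
    response.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(response.content)
    return out_path

if __name__ == "__main__":
    # An artist types a prompt instead of modelling the asset from scratch
    print(generate_3d_asset("a cartoon wizard hat with gold stars"))
```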
Early AR technologies only allowed users to perform simple tasks, such as placing a hat on someone’s head in a video. According to Murphy, however, Snap’s improvements will make it hard to tell whether a digital hat is actually being worn: the hat moves seamlessly with the person, and the lighting on the hat matches the video perfectly.
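As a rough illustration of the tracking problem Murphy describes, the sketch below anchors a virtual hat to noisy per-frame head positions and smooths them with an exponential moving average so the hat follows movement without jitter. This is a toy example under simplified assumptions (positions only, no rotation or lighting estimation), and none of it reflects Snap’s actual implementation.

```python
import numpy as np

# Illustrative only: a toy version of the "digital hat" anchoring problem.
# Real AR pipelines combine face tracking, 3D pose estimation and lighting
# estimation; here we just smooth noisy head positions frame by frame.

HAT_OFFSET = np.array([0.0, 0.12, 0.0])  # hat sits 12 cm above the tracked point
ALPHA = 0.4  # smoothing factor: higher = more responsive, lower = steadier

def smooth_positions(raw: np.ndarray, alpha: float = ALPHA) -> np.ndarray:
    """Exponential moving average over per-frame head positions (N x 3)."""
    smoothed = np.empty_like(raw)
    smoothed[0] = raw[0]
    for i in range(1, len(raw)):
        smoothed[i] = alpha * raw[i] + (1 - alpha) * smoothed[i - 1]
    return smoothed

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated head track: slow drift plus per-frame tracker noise
    drift = np.linspace(0, 1, 30)[:, None] * np.array([0.3, 0.05, 0.0])
    noisy = drift + rng.normal(scale=0.01, size=drift.shape)
    hat_positions = smooth_positions(noisy) + HAT_OFFSET
    print(hat_positions[:3])  # hat placement for the first three frames
```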
Snap also eventually plans to create AR lenses that cover everything from your head to your toes, not just your face. “Building a new wardrobe for individuals is really hard to do right now,” said Murphy. Through its generative AI capabilities, Snap aims to deliver advanced AR experiences that distinguish Snapchat from its peers and attract new users, even as it competes against far larger rivals such as Meta.