Apple's latest AI tool can animate an image based on your description

Tim Hardwick

Apple has made another addition to its growing artificial intelligence repertoire, creating a tool that uses large language models (LLMs) to animate static images based on user text prompts.

MacRumors image created with DALL·E
Apple describes the tool in a new research paper titled “Keyframer: Empowering Animation Design Using Large Language Models.”

“While interfaces with one-shot prompts are common in commercial text-to-image systems such as DALL·E and Midjourney, we argue that animations require a more complex set of user considerations, such as timing and coordination, that are difficult to fully specify in a single prompt, so alternative approaches may be needed that enable users to iteratively construct and refine generated designs, especially for animation.”

“We combined emerging principles for prompting design artifacts in natural language with the code-generation capabilities of LLMs to build a new AI-powered animation tool called Keyframer. With Keyframer, users can create animated illustrations from static 2D images via natural language prompting. Using GPT-4, Keyframer generates CSS animation code to animate an input Scalable Vector Graphic (SVG).”

To create an animation, a user uploads an SVG image—say, a space rocket—and then enters a prompt such as “create three designs where the sky shimmers in different colors and the stars twinkle.” Keyframer then generates the CSS code for the animation, which the user can refine either by editing the code directly or by entering additional text prompts.
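To illustrate the kind of output described above, here is a hypothetical sketch of CSS a tool like Keyframer might generate for the twinkling-stars prompt. This is not code from the paper; the `#star-1`/`#star-2` selectors assume correspondingly named elements inside the uploaded SVG.

```css
/* Hypothetical example: a twinkle effect targeting star elements
   assumed to exist in the uploaded SVG. Each star fades between
   full and low opacity on its own cycle. */
@keyframes twinkle {
  0%, 100% { opacity: 1; }
  50%      { opacity: 0.2; }
}

#star-1 { animation: twinkle 1.5s ease-in-out infinite; }
#star-2 { animation: twinkle 2s ease-in-out infinite 0.5s; } /* staggered start */
```

A user could then iterate on a design like this either by editing the durations and delays directly or by issuing a follow-up prompt such as “make the stars twinkle more slowly.”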

“Keyframer allowed users to iteratively refine their designs through sequential prompts, rather than having to consider the entire design upfront,” the authors explain. “Through this work, we hope to inspire future motion design tools that combine the powerful generative capabilities of LLMs to speed up design prototyping with dynamic editors that allow creators to maintain creative control.”

According to the paper, the study was informed by interviews with professional animation designers and engineers. “I think it was a lot faster than a lot of what I've done,” said one study participant quoted in the paper. “I think doing something like this before would have taken hours.”

This innovation is just the latest in a series of breakthroughs from Apple in artificial intelligence. Last week, Apple researchers released an artificial intelligence model that leverages the power of multimodal LLMs to edit images at the pixel level.

In late December, Apple also announced that it had made progress deploying LLMs on the iPhone and other memory-constrained Apple devices by inventing a technique that leverages flash memory.

Both The Information and analyst Jeff Pu claim that Apple will have some kind of generative AI feature available on the iPhone and iPad later this year, when it releases iOS 18. According to Bloomberg reporter Mark Gurman, the next version of Apple's mobile software will include an enhanced version of Siri with generative AI functionality similar to ChatGPT, and it could be one of the “biggest” updates in the ‌iPhone‌'s history.

(Via VentureBeat.)

Tag: Apple GPT