
Adobe launches AI-powered Character Animator features in beta

Image Credit: Adobe



Adobe today announced the beta launch of new features for Adobe Character Animator (version 3.4), its desktop software that combines live motion capture with a recording system to control 2D puppets drawn in Photoshop or Illustrator. Several of the new features, including Speech-Aware Animation and Lip Sync, are powered by Sensei, Adobe's cross-platform machine learning technology, and use algorithms to generate animation from recorded speech and align mouth movements with speaking parts.

AI is becoming increasingly central to film and television production, particularly as the pandemic necessitates resource-constrained remote work arrangements. Pixar is experimenting with AI and generative adversarial networks to produce high-resolution animation content, while Disney recently detailed in a technical paper a system that creates storyboard animations from scripts. And stop-motion animation studios like Laika are employing AI to automatically remove seam lines from frames.

Speech-Aware Animation, which was previewed as Project SweetTalk at last year's Adobe Max conference, generates head and eyebrow movements that correspond to a character's recorded speech. It's available on macOS 10.15 or newer and allows animators to blend the computed movements with their own recordings for greater refinement, adjusting parameters like head position, scale, tilt, and turn strength, as well as eyebrow strength and parallelism.



Lip Sync, the latest release of Character Animator's lip-sync engine, improves automatic lip-syncing and the timing of the mouth shapes known as visemes. Both viseme detection and audio-based muting can be adjusted via the settings menu, where users can also fall back to an earlier iteration of the engine.
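For readers unfamiliar with the term, a viseme is the visual counterpart of a phoneme: many distinct sounds share a single mouth shape. The sketch below illustrates the general idea behind viseme-based lip sync; it is not Adobe's implementation, and the phoneme symbols, viseme names, and timing threshold are assumptions chosen for demonstration.

```python
# Illustrative sketch only: a toy phoneme-to-viseme mapper, not Adobe's
# Lip Sync engine. The ARPAbet-style phoneme symbols and viseme names
# below are assumptions for demonstration.

# Many lip-sync systems collapse roughly 40 phonemes into a dozen or so
# visemes, since distinct sounds often share the same mouth shape.
PHONEME_TO_VISEME = {
    "AA": "Aa", "AE": "Aa", "AH": "Aa",
    "B": "M", "M": "M", "P": "M",   # bilabials share a closed-lip shape
    "F": "F", "V": "F",             # labiodentals
    "IY": "Ee", "IH": "Ee",
    "UW": "Oh", "OW": "Oh",
    "S": "S", "Z": "S",
    "sil": "Neutral",               # silence maps to a resting mouth
}

def phonemes_to_viseme_track(phonemes, min_hold=0.04):
    """Convert (phoneme, start_sec, end_sec) tuples into viseme keyframes,
    merging consecutive identical visemes and dropping segments shorter
    than min_hold so the mouth doesn't flicker."""
    track = []
    for phoneme, start, end in phonemes:
        viseme = PHONEME_TO_VISEME.get(phoneme, "Neutral")
        if track and track[-1][0] == viseme:
            # Extend the previous keyframe instead of adding a duplicate.
            track[-1] = (viseme, track[-1][1], end)
        elif end - start >= min_hold:
            track.append((viseme, start, end))
    return track

# Example: timing data as a speech recognizer might emit it.
segments = [("sil", 0.0, 0.1), ("B", 0.1, 0.16), ("AA", 0.16, 0.3),
            ("B", 0.3, 0.36), ("IY", 0.36, 0.5), ("sil", 0.5, 0.7)]
for viseme, start, end in phonemes_to_viseme_track(segments):
    print(f"{start:.2f}-{end:.2f}s  {viseme}")
```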

Limb IK, an expansion of the previous Arm IK feature, also arrives with the new Character Animator. It controls the bend direction and stretching of legs as well as arms, allowing artists to pin hands in place while moving the rest of the body. Pin Feet When Standing complements Limb IK; the new option keeps characters' feet grounded when they're not walking, leading to more realistic mid-body poses, like squats.
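Adobe hasn't published how Limb IK works internally, but limb pinning is commonly built on the classic two-bone inverse-kinematics solution. As a rough, hypothetical sketch of that textbook technique, the following solves it with the law of cosines: given a root joint (say, a shoulder), two segment lengths, and a pinned end target (a hand), it computes where the middle joint (the elbow) should sit, with a parameter choosing the bend direction.

```python
import math

def two_bone_ik(root, target, len1, len2, bend_dir=1.0):
    """Classic two-bone IK via the law of cosines: place the middle joint
    (e.g. an elbow or knee) so the chain root..joint..end reaches `target`.
    `bend_dir` (+1 or -1) flips which side the joint bends toward."""
    dx, dy = target[0] - root[0], target[1] - root[1]
    dist = math.hypot(dx, dy)
    # Clamp so the target stays reachable; this straightens or folds the
    # limb instead of failing when the target is too far or too close.
    dist = max(abs(len1 - len2) + 1e-6, min(dist, len1 + len2 - 1e-6))
    # Law of cosines gives the angle at the root between the root->target
    # line and the first bone.
    cos_a = (len1**2 + dist**2 - len2**2) / (2 * len1 * dist)
    angle_root = math.acos(max(-1.0, min(1.0, cos_a)))
    base = math.atan2(dy, dx)
    joint_angle = base + bend_dir * angle_root
    return (root[0] + len1 * math.cos(joint_angle),
            root[1] + len1 * math.sin(joint_angle))

# Pin a hand at (3, 1) with an upper arm of length 2 and a forearm of 2.5;
# flipping bend_dir mirrors the elbow to the other side of the arm.
print(two_bone_ik((0, 0), (3, 1), 2.0, 2.5, bend_dir=1.0))
print(two_bone_ik((0, 0), (3, 1), 2.0, 2.5, bend_dir=-1.0))
```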

The new Set Rest Pose option animates characters back to their default position when performers recalibrate, so it can be used during a live performance without making the character jump abruptly. Merge Takes lets users combine multiple Lip Sync or Trigger takes into a single row. And revamped organization tools allow filtering the timeline to focus on individual puppets, scenes, audio, or keyframes.

Character Animator 3.4 beta is available for download via the Creative Cloud desktop application and includes an in-app library of starter puppets and tutorials. It installs separately, maintains its own user preferences, and can run alongside the release version.
