Our struggles (and minor wins) with Rive for character animation in 2025

Inspired by Duolingo, we also gave the beta Rive Editor a try to make our AI virtual friends in MindChat come alive. TL;DR: it didn't go too well.

Hey, Solly!

Grayhat first met the Rive team at GDC 2024. Early startup energy, cool booth, tons of hype. At the time, we already knew about Rive, but we hadn’t actually battle-tested it in any real production scenario. It looked promising, and we filed it away as a “maybe one day” tool.

Fast forward to this year... We finally decided to give it a real shot.

Solly, Our First Rive Experiment

Our first test was simple. We built an animated mascot named Solly for MindChat. Just a cute, lively animation. It turned out pretty solid.

An animated Solly, the mascot for MindChat.

In our hubris, we decided to go big. Not just animated characters. We wanted full lip-sync and animated avatars reacting in real time during video calls.

That’s when our rabbit hole journey began.

Research and Learnings

To understand what “good” looks like, we researched teams already doing this at scale.

We studied Duolingo’s approach to visemes and character speech, which instantly made them our north star. We also pored over Guido Rosso's breakdown of how Duolingo uses GenAI and Rive for more technical insight. It's an excellent blend of art and tech; I highly recommend watching it:

For character structure and workflow inspiration, we found Tolan’s process incredibly useful, detailed in their post on designing their character.

We also dove into the broader concepts of phonemes and visemes, looking at established solutions in the 3D space like Reallusion's iClone for AI-powered lip-sync.
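For the unfamiliar: phonemes are the individual units of sound in speech, and visemes are the mouth shapes a character makes while producing them. Several phonemes share one mouth shape, so the standard trick is a many-to-one lookup table. Here's a minimal Dart sketch of that idea; the phoneme labels (loosely ARPAbet-style) and viseme indices are illustrative, not any particular standard:

```dart
/// A toy many-to-one phoneme-to-viseme table. Real pipelines map a
/// full phoneme set (e.g. ARPAbet) onto roughly 10-15 viseme shapes;
/// the labels and indices below are made up for illustration.
const Map<String, int> phonemeToViseme = {
  'sil': 0, // silence: mouth at rest
  'p': 1, 'b': 1, 'm': 1, // bilabials share a closed-lips shape
  'f': 2, 'v': 2, // labiodentals: lower lip against upper teeth
  'aa': 3, 'ae': 3, // open vowels
  'ow': 4, 'uw': 4, // rounded vowels
};

int visemeFor(String phoneme) => phonemeToViseme[phoneme] ?? 0;
```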

Enough theory, let's build.

Intersecting Art and Tech (Rive + Flutter)

To get hands-on, we learned the basics through foundational tutorials like this Rive in Flutter video and this more advanced follow-up. Community tips (like this Reddit thread on a better way to animate) and the official Rive best practices guide were essential reading.

Then we got our hands dirty by running our first interactive test. We took the Interacting Bear Demo from GitHub, swapped the bear for the Monster Mouth (found on the Rive Marketplace), and lip-synced it.
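For context, the Flutter side of a test like this is fairly small. Below is a minimal sketch of the wiring, assuming the rive 0.x Flutter runtime we were on at the time; the asset path, state machine name, and input name are placeholders for whatever your rig exposes:

```dart
import 'package:flutter/material.dart';
import 'package:rive/rive.dart';

class MonsterMouthTest extends StatefulWidget {
  const MonsterMouthTest({super.key});

  @override
  State<MonsterMouthTest> createState() => _MonsterMouthTestState();
}

class _MonsterMouthTestState extends State<MonsterMouthTest> {
  // Number input on the rig's state machine that selects the mouth shape.
  SMINumber? _mouth;

  void _onInit(Artboard artboard) {
    // 'State Machine 1' and 'mouth' are placeholder names.
    final controller =
        StateMachineController.fromArtboard(artboard, 'State Machine 1');
    if (controller == null) return;
    artboard.addController(controller);
    _mouth = controller.findInput<double>('mouth') as SMINumber?;
  }

  @override
  Widget build(BuildContext context) {
    return RiveAnimation.asset('assets/monster_mouth.riv', onInit: _onInit);
  }
}
```

From there, lip-syncing is just writing viseme indices into `_mouth?.value` as the audio plays.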

At this point, we were feeling pretty confident. We started exploring Rive's marketplace for rigs, especially mouth rigs and lip-sync templates.

We built our own character in a flat, simple style so iteration would be fast.

Nested Artboard Hell

Then came the pain. Nested artboards. Calling inputs inside nested rigs from the parent board. This should have been trivial; most of our designers came from an Adobe After Effects and Sony Vegas background, where nesting compositions is everyday workflow. Instead, it became a multi-day excavation.

We even built a minimal reproducible Rive file, with data binding and nested artboards, just to isolate the problem.
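In code terms, the gist of the problem: a StateMachineController only sees inputs on its own artboard's state machine, so an input living on a nested artboard comes back null from the parent's controller. One workable pattern (not necessarily the exact fix we eventually found) is to expose a forwarding input on the parent state machine and wire it to the nested rig inside the editor, so runtime code only ever touches the parent. A sketch, with hypothetical input names:

```dart
import 'package:rive/rive.dart';

/// Called after the parent artboard's controller has been attached.
void wireMouthInput(StateMachineController parentController) {
  // This comes back null: 'jaw_open' lives on the nested mouth artboard,
  // and the parent's controller cannot see into nested state machines.
  final nested = parentController.findInput<double>('jaw_open');
  assert(nested == null, 'Nested inputs are invisible to the parent.');

  // Workaround: expose a forwarding input ('mouth_open', a made-up name)
  // on the PARENT state machine and wire it to the nested rig inside the
  // Rive editor. Runtime code then only touches the parent input.
  final mouthOpen =
      parentController.findInput<double>('mouth_open') as SMINumber?;
  mouthOpen?.value = 0.75; // 0.0 = closed, 1.0 = fully open (our convention)
}
```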

We searched everywhere for answers. We scoured Stack Overflow for solutions, read through Reddit threads where users lamented broken nested animations, and followed a critical GitHub issue in the rive-flutter repository that highlighted our exact problem. The Rive community forum had a thread on passing input values that never got a definitive answer.

Finally, we found the answer in a random deep-internet video. It was from Rive themselves, but buried. Not in the official docs. Just some obscure YouTube video that succeeded where the actual product failed to explain itself.

And that’s when it hit me.

Rive is not designed for professionals yet.

Rive is built for early adopters, vibers, Behance-Dribbble motion designers, landing page razzle dazzle, and social media loops. Not structured engineering-grade animation pipelines.

Anyone doing real character systems will eventually hit architecture problems. Rive isn’t there yet.

But we were too deep in this to give up.

Building Our Own Playground

Once we finally got our rig working, we wanted to test it with our custom lip-sync algorithm.

We tried using the third-party playground, rive.rip. Cool idea, but feature-limited. Not useful for live algorithmic testing.

So we built our own playground in Flutter, specifically for real-time lip-sync with AI voices and Rive characters. You can find the source code for our Rive Playground on GitHub.
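At its core, the playground's loop is simple: take whatever viseme stream the lip-sync algorithm emits for the AI voice and push it into the rig's exposed number input, frame by frame. A stripped-down sketch of that idea (the real implementation lives in the repo; names here are placeholders):

```dart
import 'dart:async';
import 'package:rive/rive.dart';

/// Drives a Rive mouth input from a stream of viseme indices.
/// [visemes] would come from the lip-sync algorithm, e.g. one event per
/// phoneme boundary in the synthesized audio; [mouth] is the SMINumber
/// exposed on the character's state machine.
StreamSubscription<int> driveLipSync(Stream<int> visemes, SMINumber mouth) {
  return visemes.listen((viseme) {
    // The state machine blends between mouth shapes keyed by this index.
    mouth.value = viseme.toDouble();
  });
}
```

In practice, timing matters more than the mapping itself: viseme events have to be scheduled against audio playback, or the mouth visibly drifts out of sync.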

And here is the deployed version you can play with:


Final Result

We managed to make our animation work, with nested artboards and lip-sync. Check it out here:

It's still a work-in-progress, but we continue to learn and improve.

The Rive Reality Check

After this entire journey, here’s the honest breakdown.

The Cons

  • The interface is unintuitive and tries too hard to cosplay as Figma without earning it.
  • Features like components and preview mode feel half-baked.
  • Pricing is preposterous, especially when they suddenly made exports paid mid-project.
  • Performance is unoptimized and multiplayer is buggy.
  • Browser sync pauses make it feel clunky instead of modern.

The Pros

  • It works, at least by beta standards.
  • Web-based.
  • Collaborative.
  • Fun for simple animation use cases.

My Verdict

If you already have working pipelines in Blender, After Effects, Spline, or your own tooling, stick with them. Rive is cool, but it is not ready for serious production beyond funky assets, splash animations, and loading screens.

I still believe Rive should eventually be acquired by Figma and merged into a full pipeline. But if I see them pull a “we’ll stay independent forever” move like Figma did with Adobe, my remaining hope will evaporate.

To anyone from the Rive team: If you're reading this, I'd love to talk and have a feedback session about our team's experience on the paid plan. Maybe we can find a better way forward!

Peace ✌️

Credits to:

  • Umayma binte Saleem, our Web, Interactions and Graphics intern, for the first Solly animation
  • Ayesha Aamir (UI/UX Designer) for her research, character design, and animation implementation
  • AbdurRehman Subhani (Lead Engineer) for building the open-source Playground and developing the lip-sync algorithm with AI
  • Aniqa Sadaf (Product Designer) for world-building