Fabien Rousselot

Senior Product/UX Designer

Drone Smoothie

AI, Software
Work In Progress

Drone Smoothie is video analysis software that automatically isolates cinematic segments from drone footage. It is a full-cycle project I’m building solo, leveraging modern AI workflows to ship production-ready software independently.

Editing Assistant Software

Building software that detects cinematic drone shots to jump-start your creative process.

Vibe Coding with Claude

Using Claude Code to build initial prototypes, refine the smooth detection logic, and ship and maintain production-level code.

Figma MCP + Code Connect

Using Figma MCP to implement finished designs and Code Connect to easily maintain components.

Vibe-Coded Prototype

The current state of the vibe-coded prototype. The analysis pipeline is functional with the latest smooth detection script, along with XML and direct clip export.

Figma Design

The design has been developed in parallel, starting as soon as core functionality was confirmed. Its implementation has begun and will be gradual.

Overview

Drone pilots routinely spend hours sorting and trimming raw footage before any creative work can begin

With roughly 8% of Americans owning a drone, the volume of content is massive, but the real opportunity lies in the commercial segment, where editing complexity is higher and time directly impacts revenue. These professionals are far more likely to pay for tools that remove friction from their workflow. The data points to a clear gap in the market: intelligent video sorting and automated cinematic segment detection that accelerates the most frustrating part of editing without taking creative control away.

The Idea

This project is a personal experiment in modern product building in the age of AI

As a senior UX designer with coding literacy, I’ve watched the designer-developer divide narrow dramatically. With today’s AI tools and coding agents, it’s possible to take full ownership of the product lifecycle. Drone Smoothie is software I genuinely want for myself, but it’s also a deliberate attempt to go end-to-end: market analysis, prototyping, algorithm research, implementation, and ultimately a production-ready product with monetization potential. My goal is not just to ship software, but to prove that thoughtful execution, assisted by AI, can turn a real pain point into a viable, focused product.

Solution

Automating the most painful step of drone video editing without replacing creativity

The problem isn’t editing itself; it’s everything that comes before it. Drone Smoothie focuses on removing friction from the workflow by targeting the slow, repetitive work that blocks creative momentum.

Where existing tools fall short:

  • DJI LightCut prioritizes auto-generated edits, producing fast but generic results that limit creative control.
  • Gyroflow excels at stabilization for advanced users, but doesn’t address footage selection or narrative pacing.
  • Nothing targets the middle ground: good pilots who want ready-to-use clips but still want to edit their own story.

What Drone Smoothie does differently:

Drone Smoothie takes a different approach by automatically identifying smooth, usable, cinematic segments within raw drone footage, turning hours of manual sorting and trimming into ready-to-edit clips. By exporting directly into professional tools like Premiere, Final Cut, and DaVinci Resolve, it preserves creative control while dramatically accelerating workflows for both hobbyists and professionals, with particular value for time-sensitive, revenue-driven work.
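
To make that hand-off concrete, here is a deliberately simplified sketch of what a timeline export can look like, using the FCP7-style XML (xmeml) interchange format that Premiere, Final Cut, and Resolve can all import. The element set, helper name, and paths are illustrative, not the app’s actual exporter, which handles far more metadata.

```python
# Illustrative sketch only: writes a minimal FCP7-style (xmeml) sequence
# containing one clip per detected smooth segment. A real exporter needs
# more fields (rates, formats, audio tracks) than shown here.
import xml.etree.ElementTree as ET

def export_xmeml(segments, video_path, fps=30, out_path="smooth_segments.xml"):
    xmeml = ET.Element("xmeml", version="4")
    seq = ET.SubElement(xmeml, "sequence")
    ET.SubElement(seq, "name").text = "Drone Smoothie export"
    track = ET.SubElement(ET.SubElement(ET.SubElement(seq, "media"), "video"), "track")

    timeline_pos = 0
    for i, (start_s, end_s) in enumerate(segments):
        in_f, out_f = int(start_s * fps), int(end_s * fps)
        clip = ET.SubElement(track, "clipitem", id=f"clip-{i}")
        ET.SubElement(clip, "name").text = f"Smooth segment {i + 1}"
        ET.SubElement(clip, "in").text = str(in_f)        # source in-point (frames)
        ET.SubElement(clip, "out").text = str(out_f)      # source out-point (frames)
        ET.SubElement(clip, "start").text = str(timeline_pos)
        ET.SubElement(clip, "end").text = str(timeline_pos + (out_f - in_f))
        file_el = ET.SubElement(clip, "file", id="source-1")
        ET.SubElement(file_el, "pathurl").text = f"file://{video_path}"
        timeline_pos += out_f - in_f

    ET.ElementTree(xmeml).write(out_path, xml_declaration=True, encoding="UTF-8")

export_xmeml([(12.4, 19.8), (31.0, 44.2)], "/footage/DJI_0042.MP4")
```

The point is that the output is a plain editorial timeline: the pilot opens it in their NLE of choice and keeps full control over the edit.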

Technical Specs

Drone Smoothie uses a hybrid desktop architecture designed to balance performance, flexibility, and scalability.

Heavy video analysis is handled locally using Python and OpenCV, while the interface is built with modern web technologies and packaged as a cross-platform desktop app. This approach allows the system to process large video files efficiently while maintaining a fast, responsive UI and a single shared codebase across platforms.
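
As a rough illustration of that split, the sketch below shows the pattern of a local analysis endpoint the desktop UI could call; the route name, payload shape, and stubbed detection function are placeholders for illustration, not the real Drone Smoothie API.

```python
# Minimal sketch of the local analysis server (hypothetical endpoint name).
# The Electron UI would POST a local file path to http://127.0.0.1:5000/analyze
# and render the returned segments; nothing ever leaves the machine.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/analyze", methods=["POST"])
def analyze():
    video_path = request.json["path"]                # local path sent by the UI
    segments = detect_smooth_segments(video_path)    # OpenCV pipeline (see below)
    return jsonify({"path": video_path, "segments": segments})

def detect_smooth_segments(video_path):
    # Placeholder for the optical flow pipeline; returns start/end times in seconds.
    return [{"start": 12.4, "end": 19.8}, {"start": 31.0, "end": 44.2}]

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=5000)  # bound to localhost: offline-first by design
```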

Tech Stack

Frontend
Electron • Vanilla JavaScript • Custom Design System

Backend
Python • Flask • OpenCV • NumPy • FFmpeg

Key Features
Sparse optical flow analysis • DaVinci Resolve/Premiere/Final Cut XML export • Offline-first architecture

Why This Stack?
Python/OpenCV handles intensive video processing. Web technologies enable rapid UI development. Everything runs locally, no upload delays, no cloud costs.
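
For the direct clip export mentioned earlier, the local-first approach boils down to handing detected segments to FFmpeg. The sketch below shows the general idea using lossless stream copy; the helper name, file paths, and segment values are placeholders.

```python
# Illustrative sketch of direct clip export: each detected smooth segment is
# cut out of the source file with FFmpeg stream copy (no re-encoding).
# With "-c copy", cuts snap to the nearest keyframes.
import os
import subprocess

def export_clips(segments, video_path, out_dir="clips"):
    os.makedirs(out_dir, exist_ok=True)
    for i, (start_s, end_s) in enumerate(segments, start=1):
        out_file = os.path.join(out_dir, f"smooth_{i:02d}.mp4")
        subprocess.run(
            [
                "ffmpeg", "-y",
                "-ss", str(start_s),           # fast seek to segment start
                "-i", video_path,
                "-t", str(end_s - start_s),    # segment duration
                "-c", "copy",                  # lossless stream copy
                out_file,
            ],
            check=True,
        )

export_clips([(12.4, 19.8), (31.0, 44.2)], "/footage/DJI_0042.MP4")
```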

Progress Tracking

Version Alpha
40%

Version Alpha is the point where the product is operational enough to be tested by a few selected users, collecting feedback for a final iteration before the MVP.

Smooth Detection Algorithm
75%
  • Core motion detection working across most footage types
  • Performance optimized (20x faster with sparse optical flow)
Design System Implementation
25%
  • Design tokens and foundations established
  • First component implemented (video file upload)
Feature Completeness
55%
  • Video analysis pipeline functional
  • Smooth segments preview and video scrubbing
  • DaVinci Resolve XML export working

The Technical Heart

Smooth Detection Algorithm

The smooth detection logic is essential to the success of Drone Smoothie. Getting it right meant building a separate testing environment to rapidly iterate without breaking the main app.

Creating the playground allowed me to experiment with the many optical flow parameters available and fine-tune the smooth detection.

Lining up the video playback with the Frame Classification Timeline allowed me to clearly see how the smooth detection script was performing.
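
To give a sense of what the timeline visualizes, here is a minimal sketch of turning per-frame motion scores into smooth/rough labels and then into candidate segments. The threshold and minimum length are arbitrary illustration values, not the tuned parameters from the playground.

```python
# Illustrative sketch: classify frames from a per-frame motion score, then
# group consecutive smooth frames into candidate segments (start_s, end_s).

def classify_frames(motion_scores, smooth_threshold=2.0):
    # A frame counts as "smooth" when its motion score stays under the threshold.
    return [score < smooth_threshold for score in motion_scores]

def group_segments(labels, fps=30, min_len_s=2.0):
    segments, start = [], None
    for i, smooth in enumerate(labels + [False]):    # sentinel closes a trailing run
        if smooth and start is None:
            start = i
        elif not smooth and start is not None:
            if (i - start) / fps >= min_len_s:       # keep only usable-length runs
                segments.append((start / fps, i / fps))
            start = None
    return segments

labels = classify_frames([0.8, 1.1, 0.9, 5.2, 6.0, 1.0, 0.7, 0.9])
print(group_segments(labels, fps=2, min_len_s=1.0))  # -> [(0.0, 1.5), (2.5, 4.0)]
```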

The Challenge

Drone motion detection isn't just about measuring speed; it's about understanding context.

Accommodating vastly different pilot styles:

  • Beginners record entire flights (20+ minutes) with erratic movements mixed with usable segments.
  • Professionals shoot intentional clips (30-60 seconds) that are already mostly smooth.
  • The algorithm needs to adapt to both—strict enough to filter amateur footage without over-filtering pro work.

The Apparent Motion Problem:

  • Low altitude: ground rushes past quickly, small movements create large visual motion.
  • High altitude: drone flying 30 mph but distant landscape barely moves in frame.
  • Traditional motion detection sees “slow motion = hovering”, misclassifying smooth high-altitude shots as unusable (the quick calculation below shows why).
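
A quick pinhole-camera calculation illustrates the problem, assuming a roughly downward-facing view and purely lateral motion; the focal length and altitudes are illustrative numbers, not measurements from real footage.

```python
# Why "slow apparent motion" does not mean "hovering": with a pinhole camera,
# pixel motion grows with focal length and ground speed but shrinks with
# distance to the scene. Values below are illustrative placeholders.

def apparent_flow_px(speed_mps, altitude_m, focal_px=1000, fps=30):
    # Pixels of image motion per frame for lateral motion over flat ground.
    return focal_px * (speed_mps / fps) / altitude_m

speed = 13.4  # ~30 mph in m/s
print(apparent_flow_px(speed, altitude_m=10))   # ~44.7 px/frame: reads as "fast"
print(apparent_flow_px(speed, altitude_m=120))  # ~3.7 px/frame: reads as "slow"
```

Same drone, same speed, roughly a 12x difference in apparent motion, which is why a single fixed threshold misreads high-altitude flight as hovering.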

Solutions

Rather than building one algorithm for all scenarios, I built a system that adapts.

Performance + intelligent detection:

  • Optimized from dense to sparse optical flow: 20x faster processing with comparable accuracy (a minimal sketch of the sparse approach follows this list).
  • Worked with Claude to extract insights from computer vision research papers.
  • Introduced smart pattern coherence to distinguish intentional camera movement from chaotic corrections.
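
For reference, this is what a sparse optical flow pass looks like in OpenCV (Shi-Tomasi corners tracked with pyramidal Lucas-Kanade), the general technique named above. Parameter values are illustrative, and the production script is more involved.

```python
# Minimal sketch of a sparse optical flow pass: track a handful of strong
# corners between frames instead of computing dense flow for every pixel,
# and reduce each frame to a single robust motion score.
import cv2
import numpy as np

def per_frame_motion(video_path, max_corners=200):
    cap = cv2.VideoCapture(video_path)
    ok, prev = cap.read()
    prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
    scores = []

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

        # Sparse feature set (Shi-Tomasi corners) on the previous frame.
        pts = cv2.goodFeaturesToTrack(prev_gray, max_corners, qualityLevel=0.01, minDistance=10)
        if pts is None:
            scores.append(0.0)
            prev_gray = gray
            continue

        # Pyramidal Lucas-Kanade tracks those features into the current frame.
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
        good = status.ravel() == 1
        flow = (nxt[good] - pts[good]).reshape(-1, 2)

        # Median displacement magnitude is a robust per-frame motion score.
        scores.append(float(np.median(np.linalg.norm(flow, axis=1))) if len(flow) else 0.0)
        prev_gray = gray

    cap.release()
    return scores
```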

Visibility + iteration:

  • Built Frame Classification Timeline with interactive metrics: hover over any frame to see real-time data.
  • Iterated through 10+ versions, each addressing specific edge cases from real-world footage.
  • Currently refining high-altitude hover detection to solve the apparent motion edge cases (one possible coherence-based check is sketched below).
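
As one example of a coherence-style check, the sketch below measures how aligned the sparse flow vectors are: deliberate pans produce a consistent direction, while hover corrections point every which way. This is an illustrative metric, not the shipped algorithm.

```python
# Illustrative coherence metric over sparse flow vectors: the length of the
# mean unit vector is ~1.0 when all features move the same way (intentional
# camera motion) and near 0.0 when directions are random (hover jitter).
import numpy as np

def flow_coherence(flow_vectors, eps=1e-6):
    # flow_vectors: (N, 2) per-feature displacements from the sparse flow pass.
    magnitudes = np.linalg.norm(flow_vectors, axis=1)
    unit = flow_vectors / (magnitudes[:, None] + eps)
    return float(np.linalg.norm(unit.mean(axis=0)))

pan = np.tile([3.0, 0.2], (50, 1))                         # consistent rightward motion
hover = np.random.default_rng(0).normal(0, 0.5, (50, 2))   # jittery corrections
print(flow_coherence(pan))    # ~1.0
print(flow_coherence(hover))  # closer to 0
```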

User Personas

Our Typical Users

AI was used to accelerate early persona exploration, helping synthesize patterns from pilot communities and my own professional experience.

Michael

The Hobbyist Explorer

A casual drone pilot who flies for fun and personal enjoyment with no editing skills.

Michael brings his drone on trips, hikes, or vacations and treats it as a high-end adult toy rather than a filmmaking tool. Flights are often recorded from takeoff to landing with little planning or shot intention. Editing is usually handled by automated tools like DJI’s built-in apps, and the final videos are shared on social platforms or with friends. While the results may be rough, finishing a video feels empowering and rewarding, which is what matters most to him.

Fabian

The Enthusiast Editor

An experienced drone pilot who enjoys editing and has developed a personal workflow.

Fabian has been filming and editing for years, often holds a Part 107 license, and has completed paid work such as real estate or event coverage. He flies with more intention than a beginner and cares about smooth motion, pacing, and storytelling. However, reviewing long flights and trimming footage can feel tedious and time-consuming. He is looking for ways to speed up the editing process without giving up creative control.

Thomas

The Professional Pilot

A seasoned drone professional who approaches flying and editing with intention.

Thomas plans his flights, understands camera movement, shoots in high-quality formats, and color grades his footage. Editing is a core part of his craft, and he maintains a large archive of drone material for both personal and client work. While he values precision and control, the volume of footage can become a bottleneck. He expects tools to integrate cleanly into professional post-production workflows and support efficiency without compromising quality.

96% of Pilots are men

Drone Smoothie is built for pilots who enjoy editing their own footage.
From casual hobbyists to experienced professionals, these users handle their entire workflow themselves and value speed, clarity, and creative control. Drone Smoothie supports this range of skill levels by adapting to different volumes of footage and editing intent, without taking authorship away from the creator.

Market Analysis

Crunching the Numbers

I used Perplexity to build a full US drone market analysis and to identify the main trends in editing habits and preferred video editing tools.

  • +2.5M worldwide drone shipments (2024-25)
  • 80% of drones are DJI
  • +855,000 registered drones (37% professional, 63% hobbyist)
  • +500,000 certified pilots
  • +316,000 commercial drone registrations

Commercial Drone Usage

The photography/real-estate category (which includes creative aerial imaging) clearly dominates the registered commercial fleet.

Video Editing Tools

Most drone footage is edited in general-purpose video editors like Premiere, Final Cut, and DaVinci Resolve.

Typical Editing Workflow Timeline

The sorting crisis is real: Phases 1-2 alone (Organization + Culling) consume 5-7 hours, or 35-50% of total project time. This is the highest-value automation opportunity.

What it means

The data confirms an opportunity for Drone Smoothie

The drone market is large, growing, and dominated by DJI, allowing Drone Smoothie to focus on a well-defined ecosystem. Editing remains a major pain point, especially the time-consuming work of sorting and trimming footage, which many pilots still do manually. By prioritizing XML exports for Premiere Pro, Final Cut Pro, and DaVinci Resolve, Drone Smoothie fits directly into existing workflows for both hobbyists and professionals. Within this space, real estate drone operators stand out as a strong early niche, where faster editing translates directly into higher productivity and clear willingness to pay.

Design System

Starting with Strong Foundations

I’m intentionally keeping the design system lean right now, but debt-proof for easy scalability. Once the core motion detection is dialed in, I’ll expand the component library and refine the visual design.

The color primitives form an extensive palette that covers all use cases. I wanted a vibrant palette and had a little fun picking names in relation to the concept!

The semantic color tokens for surfaces, borders, icons, and text have been defined, with light and dark modes available from the start.

First Component Implementation

Starting Simple on Purpose

The video file card was the first component implemented with MCP, and it initiated the code/design connection.

Video File Card

I chose the video file card component (the display after a file is dropped) as my first implementation test. The component uses a minimal set of colors, fonts and spacing variables, making it a good candidate to establish the token workflow.

The process:

  1. Imported the finished Figma component using MCP.
  2. Asked Claude to extract only the tokens needed for this specific component.
  3. Stored tokens in styles.css for now (will refactor to separate token files later).
  4. Implementation matched the design with minimal iteration.

Why this matters: Starting with a straightforward component let me validate the Figma MCP workflow before tackling complexity. It’s a foundation for what comes next.

What’s Next?

Connecting Design and Code

Code Connect UI helps you link design components in your Figma libraries to the corresponding code components in your codebase.

Linking complex components to code would make them easy to manage if and when the design needs to evolve.

Closing the Gap

As the design system for Drone Smoothie starts to take shape, my next focus is reducing the gap between design intent and implementation. I want components to act as a shared contract, not static visuals that slowly drift away from the code. Even in a solo project, this mindset helps keep decisions explicit and scalable from day one.

Designing for AI-Assisted Development

To move in that direction, I plan to experiment with Figma’s Code Connect UI. The goal is simple: explicitly link design system components in Figma to their real counterparts in the codebase, directly from Dev Mode. A component in Figma is no longer just a reference, it points to the exact source developers or tools should use.

This approach helps:

  • Clear mapping between design components and source files
  • Less ambiguity around which component is “the right one”
  • A single place to document usage rules, props, and accessibility notes

Even as a one-person team, this creates a cleaner feedback loop between design, code, and tooling. It reflects how I think about design systems: fewer handoffs, less translation, and stronger alignment between intent and execution.

Prototyping

with Figma Make

After building the main components, I was able to experiment with what the smooth analysis would feel like using Figma Make.

Becoming a Digital Architect

Why this workflow matters

The future of product work isn’t “designers who code” or “engineers who design”; it’s digital architects who do both, using AI to execute at the speed of thought.

Speed without sacrificing quality

Traditional timeline: 6+ months with a small team
AI-assisted timeline: 6 weeks, solo

The difference? AI handles implementation while I focus on product decisions, design quality, and user experience. This shift unlocks rapid prototyping and validation. I can test ideas with working software in days instead of weeks.

Human engineers remain invaluable for architectural thinking, optimization expertise, and problem-solving at scale. But for solo product exploration and rapid validation? AI as a development partner changes everything.

Bridging disciplines operationally

I’m not “learning to code”—I’m using AI to execute at engineering speed while maintaining design craft. This develops an engineering mindset: thinking in systems, data flows, and technical constraints makes me a better designer.

But here’s what excites me most: taste still matters deeply in UX. Efficient code is efficient code, but crafting an experience that feels right requires human judgment, empathy, and iteration (for now?). The result is production-ready software, not just prototypes.


Fabien Rousselot

Senior UX/Product Designer

With 12+ years of experience designing user-centered digital products, I create intuitive and engaging interfaces across mobile, web, and desktop. My work spans the full design process—from research to prototyping—with a strong focus on scalable design systems that ensure consistency and efficiency.
