
Disco Elevator – Oopsie Daisy feat. Peder Pan

Oopsie Daisy is one of my new AI-assisted musical projects, exploring the intersection of classic disco fundamentals with contemporary electronic production. “Disco Elevator” marks my latest experiment, featuring my alter ego Peder Pan as creative collaborator: the pink bunny-suited character who has appeared across multiple Stimulus projects as both visual anchor and creative catalyst.
The musical experiment channels the euphoric energy of classic disco while incorporating modern electronic production techniques that give the sound a distinctly contemporary edge. “Disco Elevator” embodies a philosophy of taking listeners on an unexpected journey, much like stepping into an elevator that doesn’t quite follow the laws of physics, or even life itself. The track draws inspiration from the pioneering work of disco legends like Chic, Donna Summer, and Earth, Wind & Fire, while incorporating the playful electronic sensibilities found in artists like Justice and Daft Punk.
The collaboration with Peder Pan brings an additional layer of creative unpredictability to the mix. Known for his appearances across various Stimulus projects, Peder Pan represents the intersection of established character IP and experimental music creation. His influence can be heard in the track’s more adventurous sonic territories — the moments where conventional disco structure gives way to unexpected electronic flourishes and rhythmic surprises.
Why I finally decided to test the full AI pipeline
Sometimes the stars align in unexpected ways. When Midjourney dropped its significantly improved video generation capabilities around the same time Suno finally cracked the code on audio quality, I knew I had to test these tools against a real creative challenge. For the first time, a complete AI-assisted music video, from concept to finished product, seemed genuinely achievable rather than merely aspirational.
Suno’s Quality Revolution proved to be the primary enabler for this project. Previous iterations of AI music generation suffered from two critical limitations that rendered them unsuitable for serious creative work: inconsistent composition and song structure, and most notably, persistent audio artifacts. Earlier versions were plagued by a characteristic high-pitched ambient white noise that permeated every track, creating an unmistakable “AI signature” that broke immersion immediately.
The latest Suno update represented a quantum leap in both areas. Not only did the platform demonstrate a more sophisticated understanding of song structure, properly developing verses, choruses, and bridges with musical logic, but the audio quality also reached a threshold where the technology became invisible. The elimination of those telltale artifacts meant that, for the first time, an AI-generated track could stand alongside human-produced music without immediately revealing its origins.
Midjourney’s video capabilities completely surprised me; I doubt anyone expected this level of quality this quickly. The new video module is genuinely fast, and using image references actually enhances otherwise lesser images. What really impressed me was the movement and character consistency: no morphing into something else, facial expressions that feel natural and real, and compositions that don’t end up with subjects cut off or quirky clip endings.
The best part? Almost every rendered video clip is actually workable. You’re not spending several evenings trying to get a few decent clips, as with previous AI video tools; your real work becomes crafting prompt descriptions for what should actually happen in each scene. This was my first experience with a GenAI video tool that actually delivers on its promises. Combined with the established visual reference library of Peder Pan from previous projects, this created an unprecedented opportunity to test the full AI creative pipeline.
This wasn’t just about using AI tools—it was about testing whether they had matured enough to support a complete creative vision from concept to finished product.

It’s still a lot of manual work behind the magic
While “Disco Elevator” leverages multiple AI tools throughout its production pipeline, the reality of working with these technologies reveals a crucial truth: every part of the production requires manual work, real skills, and strategic thinking. AI tools function as powerful accelerators and creative partners, but they don’t eliminate the need for human expertise — they amplify it.
And the planning still matters a lot
The project began with traditional creative planning that no AI tool could handle: developing a basic storyline, mapping out where verses and choruses would fall, marking beat tempos, and identifying key musical moments that needed visual emphasis to create a coherent audio-visual experience. This foundational work required understanding both musical structure and visual storytelling — skills that remain fundamentally human.
The storyline turned out to be crucial. Even if you start with just a vague idea of what should happen, sooner or later everything comes down to how strong that storyline is. That’s why it’s important to nail this down as early as possible in a project like this. If you’re planning to spend weeks of evening work, you don’t want to shoot from the hip—unless you’re prepared to spend double the time catching up later.
Tool Integration Strategy:
- Claude: Lyrical brainstorming and creative direction
- Midjourney: Character-consistent visual generation in 2:3 portrait format (optimized for mobile viewing since 50% of video consumption happens on mobile devices)
- Magnific AI: Image enhancement and quality refinement
- Suno: Complete song composition and production
- Cubase 14: Audio post-production, intro trimming, LFO/EQ sweep + reverb effects to create the “closed room” sound, plus mastering with compressor and limiter to meet streaming loudness standards
- CapCut: Video editing and post-production
- Affinity Photo/Design: Graphics and final visual polish
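As a rough illustration of the mastering step in the Cubase stage above, a loudness-normalization pass can be sketched in a few lines of Python. This is a simplified RMS-based stand-in, not the LUFS metering a real mastering chain uses, and the -14 dB target is an assumption based on common streaming-platform playback levels:

```python
import numpy as np

def normalize_loudness(audio: np.ndarray, target_db: float = -14.0) -> np.ndarray:
    """Scale audio so its RMS level hits target_db (dBFS), then hard-limit peaks.

    Crude stand-in for a real mastering chain: streaming services typically
    normalize playback to roughly -14 LUFS, which RMS only approximates.
    """
    rms = np.sqrt(np.mean(audio ** 2))
    current_db = 20 * np.log10(rms)               # RMS level relative to full scale
    gain = 10 ** ((target_db - current_db) / 20)  # linear gain to reach the target
    # Naive brick-wall limiter; a real limiter uses look-ahead and release
    return np.clip(audio * gain, -1.0, 1.0)

# Example: a quiet 440 Hz sine, one second at 44.1 kHz
t = np.linspace(0, 1.0, 44100, endpoint=False)
quiet = 0.07 * np.sin(2 * np.pi * 440 * t)
louder = normalize_loudness(quiet)
```

In practice a DAW compressor/limiter chain does far more than this, but the gain math is the same idea.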
The video editing: where I started making a mess
The most significant challenge, and learning experience, came during the video editing phase. The abundance of AI-generated content created a deceptive sense of ease that masked the complexity of a proper post-production workflow.
The initial mistake: I began adding transitions and effects far too early in the process, before establishing the foundational structure. While this approach occasionally sparked creative ideas, it primarily created chaos. With dozens of video clips requiring integration, starting with effects rather than structure led to an increasingly messy timeline that became difficult to navigate and modify.
How I finally got organized: The project transformed when I stepped back and approached it systematically:
- Storyline first: Having a solid narrative foundation made all other decisions clearer and prevented endless revisions
- Trim first, enhance later: All video clips were carefully trimmed and organized before any effects were applied
- Align with musical structure: Clips were precisely placed within the established timeframes of verses, choruses, and instrumental breaks
- Establish stable foundation: Only after achieving a coherent basic cut did I begin applying transitions and effects
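The “align with musical structure” step is mostly arithmetic: once you know the tempo and time signature, every bar boundary maps to a fixed timestamp on the editing timeline. A minimal sketch of that mapping (the 120 BPM tempo and the section layout here are made-up illustration values, not the actual track’s):

```python
def bar_start_seconds(bar: int, bpm: float = 120.0, beats_per_bar: int = 4) -> float:
    """Timestamp in seconds where a given 1-indexed bar begins."""
    seconds_per_beat = 60.0 / bpm
    return (bar - 1) * beats_per_bar * seconds_per_beat

# Hypothetical section layout: which bar each section starts on
sections = {"verse 1": 1, "chorus 1": 9, "verse 2": 17}
cue_points = {name: bar_start_seconds(bar) for name, bar in sections.items()}
```

With cue points computed like this, trimming each clip to land exactly on a verse or chorus boundary becomes a mechanical check rather than guesswork.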
This restructuring created what I call “stable experience flow”: a solid foundation that made it much easier to apply the right amount of enhancement without overdoing it. While I may still have overreached slightly in the final effects application, the mess factor was dramatically reduced compared to the chaotic earlier approach.
What actually worked and what didn’t
Working through this complete AI-assisted pipeline revealed several important insights:
Midjourney video breakthrough:
- Almost every generated clip was actually usable (a first for AI video tools)
- Character consistency stayed solid without morphing issues
- Image references significantly enhanced even lesser source images
- Natural facial expressions and proper composition framing
- Focus shifted to crafting better prompts rather than generating dozens of clips hoping for one good one
Suno’s audio quality revolution:
- Complete elimination of the high-pitched white noise that plagued earlier versions
- Sound artifacts that made tracks instantly recognizable as AI-generated are gone
- Song structure and composition finally follow musical logic
- Audio quality reached the threshold where the technology becomes invisible
Workflow discoveries:
- Storyline preparation prevents exponential time waste later in the project
- Starting with effects before structural organization creates chaos quickly
- Having established character references (like Peder Pan) dramatically accelerates AI visual workflows
- The “stable experience flow” principle: solid foundation first, enhancement second
Integration insights:
- Different AI tools complement each other when approached strategically
- Format decisions matter: choosing 2:3 portrait over traditional 16:9 because mobile viewing accounts for 50% of consumption
- Professional audio post-production remains essential even with high-quality AI generation
- Human workflow knowledge becomes more critical, not less, when managing AI-generated abundance
- Creative vision and aesthetic judgment remain the binding elements that make everything coherent

Final thoughts: Where this leaves us
“Disco Elevator” represents a successful test of the current AI creative pipeline, but more importantly, it reveals where we stand in the ongoing evolution of human-AI collaboration. The technology has reached a threshold where serious creative work is possible, but it demands more sophisticated human skills, not fewer.
The project succeeds not because AI tools did everything automatically, but because they enabled a level of creative experimentation and production value that would have been impossible with traditional resources. Peder Pan’s pink bunny suit may be digitally generated, but the creative vision that brings him to life in this disco universe remains distinctly human.
Listen to “Disco Elevator” and decide for yourself: How close are we to seamless human-AI creative collaboration?
How this project was made…
This music video was created through a collaboration between myself (Michael Käppi) and multiple AI tools: Claude for creative brainstorming, Suno for music generation, Midjourney for visual content, Magnific AI for enhancement, CapCut for editing, and the Affinity suite for final polish. The Peder Pan character builds on established visual references from previous Stimulus projects, demonstrating how character IP can accelerate AI-assisted creative workflows.