Simulating Reactions To The Astronomer Ad

Before flying a paraglider 430 kilometers across the Pyrenees, I rehearsed the entire journey in my head using nothing but Google Earth on an iPhone. It changed how I understand preparation forever. That same mindset led me to build a tool that lets you watch how people might react to your message before it matters, turning AI personas into a flight simulator for human empathy.

July 29, 2025

In 2017, I flew a non-powered paraglider across the Pyrenees with two friends, over 430 kilometers from Saint-Jean-de-Luz to the Mediterranean. The expedition threw curveballs at us: unexpected storms, emergency landings, weather that forced us miles off course. But here's what struck me: despite the inherent danger of mountain flying, I never felt truly out of control. The reason? I had already flown this route hundreds of times in my mind.

In the weeks before departure, I spent my nights mentally rehearsing every ridge, every valley, every potential landing zone: just raw visual pattern recognition using Google Earth on an iPhone. When I'd lose my bearings in the simulation, I'd zoom out, recalibrate, and restart, just like playing a video game. No real consequences. By the time I launched into actual thermic air, it all felt strangely rehearsed.

That experience taught me something profound about the power of simulation. Mental rehearsal doesn't just prepare you, it fundamentally changes how you perceive and respond to reality.

So when Mike Taylor showed me the first prototype of AskRally, I instinctively knew the magnitude of what we were building. If simulating my own thought processes could help me navigate physical mountains, what could simulating human behavior do for our collective challenges? The ability to safely explore different perspectives, to test empathy in low-stakes environments, to practice difficult conversations before they matter: this might be the key to collaboration at the scale humanity so desperately needs.

Whether we're reaching Mars, preventing conflicts, or building sustainable civilizations, our biggest obstacle isn't technical, it's learning to work together despite our differences. Maybe these AI systems can teach us something about understanding each other. And maybe that understanding is exactly what we need to get to the future we're all hoping for.

You often can’t watch how humans react in the moments that matter, so we’ve been simulating it. But when Veo3 dropped, I wondered if we could simulate more than just text responses. What if it were possible to build an experience that lets creators and communicators rehearse their messages the same way I rehearsed that mountain flight? 

After some testing and iteration, we finally have the ability to upload any concept as a file and have AI personas react to it. So I tested it on the recent Astronomer ad.

This article is the behind-the-scenes story of how the video reaction simulator came to be.

Cloning Defender

The breakthrough came from an unexpected request. A Twitter micro-influencer and former Snapchat developer who goes by Defender reached out, asking if I could clone him using our AskRally system. It seemed like a perfect test case.

I fed over 800 of his tweets into our clone-via-text persona generator, watching as the AI distilled years of his digital personality into behavioral patterns and response tendencies. The resulting clone was unnervingly accurate.
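For the technically curious, the distillation step is conceptually simple. Here's a minimal sketch of what it can look like, assuming an OpenAI-style chat API; the function and prompt wording are my illustrative stand-ins, not AskRally's actual pipeline.

```python
# Hypothetical sketch of a clone-via-text step: compress a tweet archive
# into a persona description an LLM can later role-play. Illustrative only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def distill_persona(tweets: list[str], handle: str) -> str:
    """Summarize years of tweets into behavioral traits and response tendencies."""
    corpus = "\n".join(tweets)  # ~800 tweets fit comfortably in a long-context model
    prompt = (
        f"Below are tweets by @{handle}. Distill them into a one-paragraph "
        "persona: tone, recurring topics, values, humor style, and how they "
        "typically react to new ideas.\n\n" + corpus
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```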

When I shared the results with his community, something fascinating happened. They didn't just recognize Defender in the AI persona description; they became acutely aware of how easily any of them could be replicated. The realization that their entire online personality could be compressed into a paragraph of behavioral traits sparked a mix of excitement and fear.

But I wanted to push further. If AskRally could clone someone so that their predicted thinking patterns closely approximated reality, what would happen if we plugged that into a video generator? This became my first real experiment with Veo3, Google's video generation model.

I used an LLM to infer Defender's appearance and scene based on the persona data we'd gathered. The result was a short clip that captured something essential about him: not just how he might look, but how he might carry himself, react, gesture. It wasn't perfect, but I recognized him in a way that went beyond mere visual similarity. AI Defender was born on June 27, 2025.
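That inference step was itself just another LLM call: persona text in, video prompt out. A rough sketch under the same assumptions as above; the scene instructions here are my guesses, not the exact prompt behind AI Defender.

```python
# Illustrative second hop: turn a persona paragraph into a prompt a
# text-to-video model like Veo3 can render. Wording is an assumption.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def persona_to_video_prompt(persona: str, stimulus: str) -> str:
    """Infer appearance, setting, and mannerisms from the persona, then
    phrase them as a single-shot video description."""
    instruction = (
        "From this persona description, infer a plausible appearance, "
        "setting, and body language. Then write one paragraph describing a "
        f"selfie-style video of this person reacting to: {stimulus}. Include "
        "facial expression, gestures, and one short spoken line in their voice.\n\n"
        f"Persona:\n{persona}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": instruction}],
    )
    return response.choices[0].message.content
```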

After sending him a few examples of his clone reacting to memes, Defender's response said everything.

Morpheus & Mintos

Even with a working persona generation system, I noticed something frustrating. People struggle with “what to ask”. The issue wasn't intelligence; it was structure. I'd learned that complex questions become manageable when you break them down using the Minto framework: situation, complication, question, resolution. It's a simple mental scaffold that transforms mental noise into well-framed situations people can act on fast.
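To make the scaffold concrete, here's what a Minto-framed simulation request could look like as data. The field names just mirror the framework; they're not a real AskRally schema.

```python
# A Minto brief as a plain data structure. Field names follow the
# framework itself, not any actual AskRally API.
from dataclasses import dataclass

@dataclass
class MintoBrief:
    situation: str      # the stable context everyone agrees on
    complication: str   # what changed or went wrong
    question: str       # the single decision the simulation should inform
    resolution: str     # the proposed answer the personas will react to

brief = MintoBrief(
    situation="We're launching a new observability product for data teams.",
    complication="Our last campaign got attention but almost no signups.",
    question="Will a humor-led ad land with skeptical data engineers?",
    resolution="Test a self-aware, meme-style ad concept before spending on media.",
)
```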

So I built Morpheus, an AI agent designed to act like air traffic control for simulations. I then turned v2 into a custom GPT so I could trigger simulations directly from within my favorite LLMs, without leaving flow. Adding video generation felt like the natural next step: why just read what personas thought when you could watch them react?

With the third iteration of Morpheus, I could submit a form in Minto structure, and it would orchestrate everything: collaborate with other AI agents to refine the question, design the study parameters, run multiple persona simulations, parse the results, and even trigger reruns if the data seemed off. The final output? A clean recommendation you could actually act on, paired with video reactions.
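In pseudocode terms, the loop looked roughly like the sketch below. Every helper passed in is a hypothetical stand-in for an agent or API call in the real system; only the control flow reflects what Morpheus v3 actually did.

```python
# High-level shape of the Morpheus v3 orchestration loop. The callables
# are hypothetical stand-ins for the real agents and API calls.
from typing import Any, Callable

def run_study(
    brief: dict,
    refine: Callable[[dict], str],        # agents collaborate to sharpen the question
    simulate: Callable[[str], list],      # runs the persona simulations
    looks_off: Callable[[list], bool],    # sanity-checks the results
    render: Callable[[list], list],       # turns reactions into video clips
) -> dict[str, Any]:
    question = refine(brief)
    results = simulate(question)
    if looks_off(results):                # e.g. degenerate or contradictory answers
        results = simulate(question)      # trigger a rerun
    return {"recommendation": results, "reactions": render(results)}
```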

I was proud of the system's sophistication. Then I demoed it to my cofounder.

His expression said everything before he even spoke. When he finally did, his words cut straight through my pride.

“Every extra thinking step the user needs to take is our fuckup getting bigger.”

He wasn't wrong. I'd spent years watching CEOs struggle to write proper Mintos. The framework that seemed obvious to me would actually be a barrier for most.

I needed to dumb it down and reduce the failure points. 

Finding The Core

I stripped everything back to basics. No more air traffic control, no more framework requirements, no more thinking steps. Just one thing: video reactions.

But without Morpheus figuring out what to ask, users would need to formulate their own questions. Or would they?

I found myself browsing through our AskRally playbook gallery when I spotted something Mike had added months earlier: a concept testing prompt. Simple and direct; with a small tweak, I could use it to get AI personas to evaluate any piece of content. Perfect.

The solution became elegantly straightforward: I rebuilt the n8n workflow around this single use case. Upload a file, run the concept testing simulation, generate the video reactions, output a reaction reel. No Mintos required, no air traffic control, just results.
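From the user's side, everything above collapses into a single request to the workflow. A minimal sketch, with a placeholder URL and field names rather than our real endpoint:

```python
# One POST to an n8n webhook kicks off the whole chain: concept test,
# persona reactions, video generation, reel assembly. URL is a placeholder.
import requests

def request_reaction_reel(file_path: str) -> dict:
    """Upload a concept file and get back links to the finished reaction reel."""
    with open(file_path, "rb") as f:
        response = requests.post(
            "https://example.com/webhook/video-reactions",  # placeholder endpoint
            files={"concept": f},
        )
    response.raise_for_status()
    return response.json()  # e.g. {"reel_url": "...", "reactions": [...]}
```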

When I started sharing examples, the response was immediate. Comments lit up, DMs flooded in, and new AskRally customers were genuinely excited by what they were seeing. The reactions felt real, the signal was actionable, and the format was instantly shareable.

But then I started getting on calls with potential users. A pattern emerged that I should have seen coming: the moment I mentioned "API" or "auth" and started explaining the n8n setup process, I could watch the enthusiasm drain out of a whole segment of people. Their eyes would glaze over, and anxiety would creep into their voices.

I'd solved the wrong problem again. The barrier hadn't disappeared; it had just shifted from complex frameworks to technical setup. I needed something even simpler.

Vibe Coding A Front End

I'd been "vibe coding" in Cursor for over 12 months, building dozens of scripts on pure intuition and caffeine: G2 review trend spotters, automated content production tools, data scrapers that would make compliance lawyers nervous. But this was my first attempt at building an actual front end and connecting it to my n8n backend.

It was surprisingly easy. Not without mistakes, though.

After my first late-night coding session, which stretched until 3am, I woke up the next morning and decided to back everything up to GitHub. I'd forgotten the basic Git commands, so I installed the GitHub Desktop app, created a new folder, and casually asked Cursor to move my project into it.

What happened next still haunts me. Within seconds, every folder on my desktop vanished. Gone. All I could think of was that scene from Silicon Valley.

A few frantic hours on calls with full-stack developers later, I had to accept reality. Some scripts were backed up elsewhere. Many weren't. Goodbye, beautiful automations. Time to get serious about version control.

By some miracle, however, I managed to restore a backup of the app I'd been building, with about 70% of it intact. I spent that day reconstructing what I'd lost (and making it better) and, with some tips from my cofounder, finally got the front end demo working. It looked like this:

To Be Continued...

What started as a mental rehearsal for flying across mountains became a tool for rehearsing human understanding. The journey from those late nights with Google Earth to watching AI personas react to The Astronomer ad taught me something fundamental: the most powerful simulations aren't the most sophisticated ones, they're the ones people actually use.

Each iteration stripped away complexity that I thought was essential. Morpheus and his Minto frameworks felt brilliant until my cofounder reminded me that every extra step is a barrier. The n8n workflows were technically elegant until real users showed me that "API setup" triggers anxiety faster than excitement. Even my desktop disaster was a lesson in disguise. Sometimes you have to lose what you've built to understand what actually matters.

The video reaction simulator we ended up with is deceptively simple: upload content, get authentic reactions from diverse AI personas, understand how your message lands before it matters. But simple isn't the same as easy. Behind that clean interface is everything we learned about human behavior, technical constraints, and the delicate balance between capability and accessibility.

The Astronomer ad test proved something important: we can simulate not just what people think, but how they feel, how they react, the subtle ways their backgrounds shape their responses. Whether it's preventing a PR disaster, crafting a message that truly resonates, or just understanding why people see the same content so differently, simulation gives us a safe space to explore before the stakes get real.

Maybe that's what we're really building: a flight simulator for human empathy. And if we get it right, maybe those late-night Google Earth sessions across the Pyrenees were just the beginning of learning to navigate much more complex terrain together.

PS: The video reaction simulator is ready to be used as a custom project (and soon will be available self-serve). DM me if interested.

 


Rhys Fisher

Rhys Fisher is the COO & Co-Founder of Rally. He previously co-founded a boutique analytics agency called Unvanity, crossed the Pyrenees coast-to-coast via paraglider, and now watches virtual crowds respond to memes. Follow him on Twitter @virtual_rf
