Talking to customers is priceless, but pricey: a round of interviews can eat four weeks and a five-figure budget. Point the same questions at a large language model, spin up a crowd of “synthetic personas”, and you’ll get answers in minutes for the cost of a latte. The math is almost embarrassing: easily 99% savings. It's so much faster and cheaper than any focus group that businesses can’t resist.
This isn’t sci-fi. It’s possible today. Harvard Business Review now lists these AI role-plays among the “next big frontiers” in market research. What does that mean for people like you and me? Instead of begging eight humans to join our focus groups or complete surveys, you click generate and an always-on panel of AI critics lights up, ready to roast your ideas and give feedback 24/7.
Early adopters are already running with this idea, using personas-of-thought prompts to model whole markets and pull weeks of insight in minutes. At the end of the day, you can’t test every idea in your head with real people, because interrupting humans is slow and expensive. But you can let a virtual crowd cast a low-cost vote. Let silicon surface the initial insights, then take the survivors back to real humans for truth-testing. Speed from bots, sanity from people. That’s the balance.
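For a flavour of what that virtual-crowd vote looks like in practice, here's a minimal sketch using the OpenAI Python SDK. The personas, model name, and prompt wording are all my illustrative assumptions, not a fixed recipe:

```python
# Minimal sketch of a "virtual crowd" vote. Persona briefs, model
# name, and prompts are illustrative assumptions, not a recipe.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical persona briefs; real ones would carry far more context
# (demographics, media diet, purchase history, interview quotes).
personas = [
    "Budget-conscious parent of two, shops weekly, hates subscriptions.",
    "Early-adopter engineer who buys gadgets on launch day.",
    "Retired teacher who distrusts online checkout flows.",
]

idea = "A $15/month meal-planning app that auto-orders groceries."

for brief in personas:
    reply = client.chat.completions.create(
        model="gpt-4o",  # assumed model; swap in whatever you use
        messages=[
            {"role": "system", "content": f"You are this customer: {brief} "
             "Answer in character, bluntly, in one or two sentences."},
            {"role": "user", "content": f"Would you pay for this? {idea} "
             "Start your answer with YES or NO."},
        ],
    )
    print(f"- {brief}\n  {reply.choices[0].message.content}\n")
```

Tally the YES/NO votes across a few dozen personas and you have a crude, cheap directional signal before a single human has been interrupted.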
What kinds of things can you simulate?
My co-founder and I have tested flipping virtual elections, modelling gamer preferences, predicting comment sentiment, converting impossible buying committees, predicting shopping-cart drop-off, speed-running PMF, anticipating churn, optimizing book titles, and watching bots expose the same double standards we humans do.
By now you probably fall into one of two camps:
“Rhys, you’ve just handed me the keys to the kingdom.”
Or
“What a bunch of BS!”
If you’re skeptical, I get it. I was too. When executive strategist John James and I pitched a qual-research agency, everyone we spoke to across 30-40 interviews said the same thing:
“Market insights. We need more of them.”
But with a mid-five-figure price tag, we got no bites. So when I first heard of synthetic research, it felt like a direct threat to the craft I’d tied my self-worth to. Cheap, instant, and good enough? It sounded like vaporware. But no money, no honey, and I began to wonder how reliable surveys with real humans really were.
Prospect: "Yeah but how accurate is it compare to humans?"
— Rhys Fisher (@virtual_rf) April 24, 2025
Me: "How are you assessing the accuracy of your humans."
Prospect: "surveys"
THE SURVEY: pic.twitter.com/bzJLXtcbdl
I started asking GPT research tools to crunch through the science, and read about one Stanford-led study which found that synthetic personas matched real people’s survey answers about 85% as well as those same people matched their own answers when they retook the survey two weeks later. That’s a level far higher than you get from prompting ChatGPT with one-liners like “answer as a UX researcher”.
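To unpack that “85% as close” figure: the study scores the personas' answers against each person's real answers, then normalizes by how consistently people match themselves on a retake. The inputs below are made-up illustrative numbers, not the study's raw data; only the ratio logic is the point:

```python
# Illustrative arithmetic only; these inputs are invented, not the
# study's raw figures. The normalization logic is what matters.
persona_vs_human = 0.68  # personas match a person's answers 68% of the time
human_vs_self = 0.80     # people match their own answers 80% of the time on a retake

normalized_accuracy = persona_vs_human / human_vs_self
print(f"{normalized_accuracy:.0%}")  # 85% -> "85% as close"
```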
Separately, I found out that some big household names like Google (and 63 others) had poured money into product bets based on online-panel insights that turned out to come from fraudulent respondents shamelessly sold as real. The irony: synthetic personas would have been more "real" than whatever they bought.
You likely don't have a Google-sized budget to burn on fake surveys.
So your options are simple:
A. Do nothing
B. Spend $100k on traditional research
C. Run synthetic research (fast, cheap, ~85% right)
Synthetic research lets anyone make smarter day-to-day calls with “good-enough” precision and opens new earning power for people who know where to apply it. Yet LinkedIn is still full of loud objections.
To understand why, I dug around and found the same counter-narrative kept resurfacing, almost word-for-word, across multiple people and threads.
So let’s talk about it.
Synthetic Personas: Game Changer or Just Hype?
Hala’s critique is straightforward: synthetic personas aren’t real people, you still have to sell to humans, and off-the-shelf AI is too polite to deliver the nuanced feedback that real customers provide. Her conclusion: keep talking to customers. Sounds sensible, and I agree that you shouldn’t just give humans the silent treatment. But viewing it as an either-or choice misses how top teams will actually work in the post-AGI era.
First, synthetic audiences aren’t literal flesh and blood, but with the right context they behave uncannily like real humans. They can mimic subtle cultural behaviours pretty much out of the box and get better with intentional design, serving as a rapid-fire proxy for moments when real conversations would be slow, costly, or unethical. They exist to extend, not replace, your contact with living customers.
Second, it’s true we still sell to humans. At least today. Yet the edge of commerce is already blurring: I have friends who let autonomous agents comparison-shop on their behalf while they sleep. As those agents mature, the line between “selling to humans” and “selling to AI role-playing as humans” will keep shifting. Stress-testing ideas in a synthetic arena prepares you for that transition.
Third, the “AI is too nice” objection only applies to default chat settings. When you work through the API, value-adding wrappers can side-step the default eagerness to please and instead tap into a sort of unhinged mode where messy human bias is treated as a feature, not a bug.
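As a rough sketch of what such a wrapper can look like: nothing exotic, just a system prompt that overrides the assistant's default agreeableness. The prompt wording, model name, and temperature below are my assumptions; tune them to your own stack:

```python
# Sketch of side-stepping default politeness via the API. The system
# prompt, model, and temperature are assumptions, not a fixed recipe.
from openai import OpenAI

client = OpenAI()

BLUNT_CRITIC = (
    "You are role-playing a real, impatient customer, not an assistant. "
    "Never soften feedback. Keep your biases, pet peeves, and snap "
    "judgments; they are the point. If the pitch bores you, say so and "
    "stop reading, exactly as a human would."
)

def harsh_feedback(pitch: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4o",   # assumed model
        temperature=1.0,  # keep some messy human variance
        messages=[
            {"role": "system", "content": BLUNT_CRITIC},
            {"role": "user", "content": pitch},
        ],
    )
    return reply.choices[0].message.content

print(harsh_feedback("Our app uses AI to revolutionize synergy..."))
```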
“Don’t Stop Talking With Customers”
Far from killing face-to-face research, synthetic testing helps build a business case for doing more of it. The bots blitz through all the cheap, fast “what-ifs,” then flag the ideas worth the price of real interviews or experiments. Every live session with humans pumps fresh context back into the system, so the next simulation comes out sharper.
Picture this loop (sketched in code below):
- A human interview digs up raw customer slang and quirky edge cases.
- Transcripts (along with social media engagement, media diets, and purchase history) are used to recalibrate synthetic personas.
- Product and marketing teams hammer questions at an up-to-date virtual audience, testing dozens of angles before lunch.
- The top ideas get tested in the real world; some head straight back into interviews the next morning.
- Rinse → refine → repeat.
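Here's that flywheel as a minimal code skeleton. Every function body is a stub standing in for real work (live interviews, an LLM-backed panel), so treat it as a sketch of the loop's shape, not an implementation:

```python
# Skeleton of the calibration flywheel. All function bodies are stubs
# standing in for real work; the shape of the loop is the point.
import random

def run_human_interviews(n: int) -> list[str]:
    # Stand-in for real interviews yielding transcripts.
    return [f"transcript {i}: slang, objections, edge cases" for i in range(n)]

def recalibrate(personas: list[str], transcripts: list[str]) -> list[str]:
    # Stand-in for folding fresh context back into persona briefs.
    return [p + f" (updated with {len(transcripts)} transcripts)" for p in personas]

def simulate(personas: list[str], ideas: list[str]) -> dict[str, float]:
    # Stand-in for an LLM-backed synthetic panel; random scores here.
    return {idea: random.random() for idea in ideas}

def top_ideas(scores: dict[str, float], keep: int) -> list[str]:
    return sorted(scores, key=scores.get, reverse=True)[:keep]

personas = ["budget parent", "early adopter", "retired teacher"]
ideas = ["idea A", "idea B", "idea C", "idea D", "idea E"]

for _ in range(3):
    transcripts = run_human_interviews(n=5)          # humans surface context
    personas = recalibrate(personas, transcripts)    # personas get sharper
    ideas = top_ideas(simulate(personas, ideas), 3)  # bots vote, survivors advance

print("Ideas worth real interviews:", ideas)
```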
Each spin of the flywheel makes decisions cheaper, faster, and more grounded in reality. Researchers shift from being interview bottlenecks to becoming self-serve enablers and architects of a calibration engine that runs at the speed of the market.
On the left side of this feedback loop, you have a business that takes action, generates data, and ideally earns enough profit to fund ongoing discovery work. This real-world signal moves more slowly than simulations because developing, delivering, and consuming products, and establishing consumer habits, all take time.
On the right side, there is a faster, synthetic feedback loop. Here, researchers test ideas in silicon and use the results to guide real-world decisions, such as what to test next. Any discrepancies between real-world results and simulated outcomes are then fed back in for further calibration, refining the accuracy of the simulations over time. The more you do this, the bigger your edge.
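A hedged sketch of that calibration step: score the simulation against reality and watch the gap shrink over time. The numbers and the metric (mean absolute error) are illustrative assumptions; any scoring rule you trust works:

```python
# Hedged sketch: score simulation accuracy against real-world results.
# The figures are invented and MAE is an arbitrary choice of metric.

simulated = {"idea A": 0.62, "idea B": 0.35, "idea C": 0.18}  # predicted uptake
observed  = {"idea A": 0.55, "idea B": 0.41, "idea C": 0.09}  # real-world uptake

gaps = {k: abs(simulated[k] - observed[k]) for k in simulated}
mae = sum(gaps.values()) / len(gaps)

print(f"Per-idea gaps: {gaps}")
print(f"Calibration error (MAE): {mae:.3f}")  # a falling MAE means a growing edge
```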
None of this eliminates classic qualitative research. It simply repositions it as the calibration layer that keeps the synthetic mirror honest. The future isn’t “real or synthetic”; it’s a flywheel powered by both.
Ready to give synthetic personas a go?
Sign up at askrally.com and ask your (virtual) audience anything. You won't be the only one.