Synthetic Research: The Secret Ingredient is People

Ask Rally evolved its AI research tool from purely generative models to a system grounded in real human data. Facing issues like AI-sounding responses, bias, and a lack of nuanced preferences, they refined AI personas using diverse data and real human examples. Key developments include "survey boosting" and "GenPop," a virtual panel built on authentic human interviews. This approach ensures realistic responses, reduces hallucinations, and makes market research more accessible and insightful. Real people are the "secret ingredient" for authentic AI personas.

October 31, 2025

"My God, what if the secret ingredient is people?" "No, there's already a soda like that - Soylent Cola" "Oh. How is it?" "It varies from person to person" –– Futurama

When we launched Rally back in February, we built what seemed like the obvious solution: a pure generative AI tool for synthetic research. You'd describe your target audience, our system would generate personas that fit that description, and those AI personas would answer your research questions. Simple, right?

The problem with purely generative AI personas is that they don’t vary enough from person to person. It turns out the secret to making synthetic research actually work is counterintuitive: you need to start with real people.

We built our own panel called GenPop™, with 100 personas (to start), each trained on the responses of a real person until a superhuman LLM judge can't tell the difference. Here's our launch video:

https://www.youtube.com/watch?v=t452CQZPsSw

You should watch that video and go check it out by creating an account. Or keep reading to hear the story of how we got there:

The Journey from Pure AI to Human-Grounded Personas

Our initial approach came from a prompt I developed called "Personas of Thought" - you'd ask ChatGPT to generate five different personas to answer your question to capture a more diverse range of opinions, then combine their responses into a summary. We turned this into a product where you could generate audiences and reuse them, solving the latency issues of regenerating personas each time.

The response was promising: 1,500+ people tried it, we landed 50 paying customers, and we completed several large-scale projects, including a couple with 5,000 AI personas. The promise was clear: why spend thousands on traditional market research when AI can get you 80-90% accuracy for one-hundredth the cost? But we kept hitting a wall: people didn't know if they could trust the results.

Three Core Problems To Solve

The fundamental issue: people worried that pitching AI-based market research to their boss would get them laughed out of the room. When something matters enough to research, companies typically spring for real human interviews. Market researchers actively dismissed AI; it was even trendy to be against it on LinkedIn. (Partly self-preservation, I suspect.)

Setting aside performative ideological stances against AI, we dug into why people don’t trust AI results, and it came down to three things:

Problem #1: AI-Sounding Responses

The responses were too obviously AI: too smart, too pedantic, littered with telltale phrases like "you're absolutely right" and overusing em dashes. They just didn't sound like actual humans talking. Like your mother, they'd always tell you your idea is great and sure to work.

This breaks the suspension of disbelief, and makes people embarrassed to show the results to the rest of the organization, even in cases where AI really did provide predictive or useful results. For people to take action on the result of research, it doesn’t have to just be correct, it also has to be believable.

My area of expertise is prompt engineering (I published an O'Reilly book on the topic last year), so we managed to stamp out a lot of the sycophancy early on, and we frequently hear from customers that they're shocked by how human our personas sound. However, this wasn't enough by itself to close the trust gap.
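
As a toy illustration (not Rally's actual pipeline), a heuristic pass over generated answers can flag the telltale phrases described above before they reach a user:

```python
# Toy heuristic, not Rally's actual pipeline: flag generated answers that
# contain common AI telltales, so they can be regenerated or rewritten.
AI_TELLTALES = [
    "you're absolutely right",
    "as an ai",
    "great question",
    "i hope this helps",
]

def sounds_like_ai(answer: str) -> bool:
    """Return True if the answer contains a telltale phrase or leans on em dashes."""
    lowered = answer.lower()
    if any(phrase in lowered for phrase in AI_TELLTALES):
        return True
    # Heavy em-dash use is another giveaway mentioned above.
    return answer.count("\u2014") >= 2

print(sounds_like_ai("You're absolutely right, this idea is sure to work!"))  # True
print(sounds_like_ai("eh, I'd probably still grab the sandwich tbh"))         # False
```

In practice a filter like this only catches the obvious cases; the deeper fix described in this article is grounding personas in real human examples.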

Problem #2: Political Correctness

We ran a study optimizing prompts to predict the 2024 election. Without optimization, 90% of our AI personas would vote for Kamala Harris, even the ones marked as Republican. This matched findings from the paper "A Persona is a Promise with a Catch," which showed AI personas biased toward politically correct responses: eco-friendly cars over regular ones, La La Land over Transformers. Then there's sycophancy (parodied by South Park): ChatGPT's tendency to tell you your idea is great even if it's not. We used DSPy, a prompt optimization library, to tune our prompts and stamp out much of this bias, but it remains a challenge.

https://askrally.com/article/correcting-bias-in-llms-with-dspy 

We started offering this calibration service on a case-by-case basis to large enterprises, and we're pretty good at optimizing a set of AI personas to replicate the results of existing studies. The problem is that this becomes a game of whack-a-mole, because every customer has a different go-to question they use to check accuracy against what they know.

Problem #3: Preference Convergence

LLMs do often pick the right winner: when testing different ideas for marketing copy or designs, they would usually identify which variation got the most votes. But the results were too spiky. You'd see 66% of AI personas vote for option C and none for option A, when real humans would show a more nuanced spread: 44% for C, 18% for A, and so on.

This convergence made results look fake. And when people don't trust that AI can predict real-world behavior, they won't adopt your tool. We had evaluation metrics that measured diversity of thought, but we could only push it so far with our existing approach. 
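
One simple way to quantify that spread is the Shannon entropy of the vote distribution (a sketch, not necessarily the metric Rally used): spiky, convergent results score low, while a more nuanced human spread scores higher.

```python
import math

def vote_entropy(votes: dict) -> float:
    """Shannon entropy (in bits) of a vote distribution; higher = more spread."""
    total = sum(votes.values())
    probs = [n / total for n in votes.values() if n > 0]
    return -sum(p * math.log2(p) for p in probs)

spiky_ai = {"A": 0, "B": 34, "C": 66}   # the convergent AI result described above
human    = {"A": 18, "B": 38, "C": 44}  # the more nuanced human spread

print(round(vote_entropy(spiky_ai), 2))  # ~0.92 bits
print(round(vote_entropy(human), 2))     # ~1.50 bits
```

A metric like this gives you something to optimize against, but as the article notes, prompt tweaks alone could only push the diversity so far.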

What We Learned About Building Better Personas

Through extensive testing (we’ve run hundreds of experiments), we discovered what actually goes into an effective AI persona. There are three components to work with: the system prompt, message history, and user prompt. Each can be optimized to bring a persona to life.

We tested different types of persona information:

  • Demographics (age, gender, etc.) - contrary to industry skepticism, these predicted behavior as well as other options

  • Psychographics (Big Five personality traits) - useful but not sufficient alone

  • Behavioral (recent actions with your business) - the best single predictor when you only have one

  • Contextual (jobs-to-be-done questions, current market position) - the most useful addition to demographics, and source of the most unique data on a person

https://askrally.com/article/whats-predictive-in-a-persona 
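
A sketch of how those four information types might be composed into a persona system prompt (the field names and wording are illustrative, not Rally's actual schema):

```python
# Illustrative sketch: composing the four information types above into one
# persona system prompt. Field names are invented, not Rally's actual schema.
def persona_system_prompt(p: dict) -> str:
    parts = [
        f"You are a {p['age']}-year-old {p['gender']} living in {p['location']}.",  # demographics
        f"Personality (Big Five): {p['big_five']}.",                                # psychographics
        f"Recently you {p['recent_behavior']}.",                                    # behavioral
        f"Context: {p['job_to_be_done']}.",                                         # contextual
        "Answer in your own voice, as you would in a casual interview.",
    ]
    return " ".join(parts)

prompt = persona_system_prompt({
    "age": 34, "gender": "woman", "location": "Austin",
    "big_five": "high openness, low neuroticism",
    "recent_behavior": "cancelled a meal-kit subscription",
    "job_to_be_done": "looking for quick lunches while working remotely",
})
```

Stacking the contextual and behavioral fields on top of demographics is what gives the persona something specific to draw on instead of generic averages.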

We also found that few-shot examples were crucial - showing lots of examples of how real humans actually answer, with all the AI-sounding language stripped out. By providing three different examples showing three very different response styles, we increased diversity in the results. While all of these insights were useful in incrementally improving the product, we needed a big bang to wake everybody up to the value of synthetics.
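
Concretely, the few-shot trick amounts to prepending stripped-down human Q&A pairs ahead of the actual question (the example answers below are invented for illustration):

```python
# Invented examples for illustration: three real-human answers in three very
# different response styles, with AI-sounding language stripped out.
FEW_SHOT_EXAMPLES = [
    ("What did you think of the ad?", "eh, it's fine I guess. wouldn't click it tho"),
    ("What did you think of the ad?", "Love the colors!! But what's it even selling??"),
    ("What did you think of the ad?", "Too busy. I tuned out after two seconds."),
]

def build_messages(system_prompt: str, question: str) -> list:
    """Prepend the few-shot human examples before the real question."""
    messages = [{"role": "system", "content": system_prompt}]
    for q, a in FEW_SHOT_EXAMPLES:
        messages.append({"role": "user", "content": q})
        messages.append({"role": "assistant", "content": a})
    messages.append({"role": "user", "content": question})
    return messages

msgs = build_messages(
    "You are a survey respondent. Match the tone of the example answers.",
    "What did you think of the landing page?",
)
```

Because the model sees three very different registers before answering, it is less likely to collapse every persona into the same polished assistant voice.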

The Turning Point: Survey Boosting

Working with skeptical market researchers and trying to convince them, we finally hit upon an idea that made their eyes light up: survey boosting. You conduct real research - surveys, customer interviews - but use AI to boost the ROI invested in that research. AIs don’t get tired, so you can keep asking questions after the interview ends, exploring angles you didn't think of during the original conversation. 

The answers you get back are far more trustworthy because they're grounded in the data you just gathered. We rolled out a few custom projects to do this for clients, and responses improved dramatically. Now market researchers had something AI-related to pitch that wasn't replacing their role but actually helping them make the case for more investment in market research. If you can now ask 100 questions instead of 10 in your survey, you're getting 10x the value for very little incremental cost.

The funny thing is that this aligned with how we were already using our own product. We'd conducted 126 real customer interviews from our initial 1,500 signups, then cloned those people from the interview transcripts in order to ask questions of them in a private Rally panel we created. We've been querying that audience ever since for feedback on pricing changes, new features, and strategic decisions, and found it immensely valuable.

The Problem: Most People Don't Have Good Data

Here's where we hit another wall. Most people coming to us didn't have quality existing research. If they had budget constraints that brought them to us in the first place, they couldn't afford to run their own surveys. And those already in market research worried about GDPR and consent issues with repurposing old data.

The answer became obvious: we needed to build our own panel. Vertical integration makes sense in an innovative new industry, because there’s no real established supply chain for virtual panels yet. All our top competitors are black box and not keen on sharing their secret sauce, and the open-source libraries available require a level of technical expertise and experience in synthetics that most companies just don’t have. So we’ll do it for them.

Introducing GenPop: A Virtual Panel Built on Real People

We've spent the last few months testing and optimizing the process of creating authentic AI personas based on interviews with real people. We know what interview questions to ask, how to structure the data, and how to deploy personas that actually sound real.

Last month, we ran an initial test batch of 20 real people. The results were a breath of fresh air: they sounded far more human and represented a range of diverse opinions. What pleased me most is that they made references that purely AI-generated personas just weren't making before. There's still room to improve, but this feels like a huge leap forward.

We're now launching our first batch of 100 people in what we call GenPop - a virtual panel based on real people. Every person has given consent and recorded video interviews, so we have proof they're real humans. We don't suffer from the problem traditional research has where you can't be sure if someone used AI to respond to your survey (ironically, a lot of people are getting synthetics even if they’re paying for real human studies!).

Why We Think This Solves Everything

GenPop addresses all three of our major problems:

  • No more convergence - responses show realistic distribution across options

  • Responses sound authentically human - because they're grounded in how real people actually talk

  • Rich contextual memories - when you ask if someone prefers a sandwich or a wrap, the AI might recall that this specific person mentioned their lunch preferences, or can make accurate inferences from dozens of other data points rather than hallucinating

The profiles are rich - not just demographics and psychographics, but deep unstructured discussions on different topics. When someone talks about their last trip to get lunch, they naturally mention brand names, describe their context (working remotely, living nearby), share what they listened to on the walk (podcasts), and reveal countless attributes about their preferences and habits.

That's the arbitrage opportunity: it's much easier for people to give unstructured interviews than to fill out structured surveys, and AI excels at converting unstructured text into queryable personas. Most of the time the AI doesn't even need to generate something new; it's retrieving something the real person said and reformatting it. This massively decreases the chance of hallucinations and makes the hallucinations that remain far more consistent with something the real person would have said.

The Human Element Makes AI Work

This is ultimately a positive story about AI and employment. In this case, AI isn't coming for researchers' jobs - it's amplifying the ROI of every piece of research you conduct. We can help researchers build panels from their own data, or interview their customers and run the results through our system as a service. They can keep asking AI personas long after the real-world survey or panel finishes. The business case for market research just got more compelling – you’re getting answers to 10x or 100x more questions per dollar spent.

It's a different approach from competitors who offer black-box systems claiming 80-90% accuracy. When you get to choose how prediction is measured, hitting high accuracy numbers is easy. What's actually valuable is building persona-level models based on real people that generate responses authentic enough to change your mind. What’s the point in replicating past studies to 80% accuracy when you could be asking questions you couldn’t dream of asking in real life?

I get it: I come from an economics background and ran a data-driven marketing agency that executed over 8,000 A/B tests per year. I love that rigorous approach. But the truth is that executives make decisions based on anecdotes. The elevator conversation with the CEO drives strategy more often than statistical reports. When you hear an anecdote, you can evaluate it against your own mental model. You can decide if it rings true. And when you hear something that makes you think, "Oh crap, I didn't consider that" - that's when strategy actually changes. Now you can do that qual research by typing into a chat box during a meeting, rather than spending three months in the field.

That's what we're building toward: authentic synthetic research that delivers those "aha" moments at scale. Lowering the cost and increasing the ROI of market research will make it accessible to more decision-makers. The vast majority of decisions are backed by precisely zero research; making those decisions more informed is the prize.

What's Next

We're making significant changes to Rally over the coming months. We now have a compelling use case and process. We're happy to work with people who want us to build custom panels for them, but even regular Rally customers will benefit as our pool of personas continues growing and our techniques keep improving.

If you want to be cloned and join our persona pool, reach out and I can add you to the survey. We anonymize everything so it won't be linked back to you; we train on your preferences, not your identity.

We have just over 100 people in the panel now, but it’ll be over 300 next month with the next batch, and we're expanding continuously. We'll also be offering more technology and services for people wanting to build their own custom panels trained on real people.

Because it turns out, if you want your AI personas to truly vary from person to person, the secret ingredient is people.

Mike Taylor

Mike Taylor is the CEO & Co-Founder of Rally. He previously co-founded a 50-person growth marketing agency called Ladder, created marketing & AI courses on LinkedIn, Vexpower, and Udemy taken by over 450,000 people, and published a book with O’Reilly on prompt engineering.
