I Fed My AI Personas a Critic's Post, Then My Article. Here's What Happened to Their Buying Intent.

It's possible to feed AI personas different pieces of content to measure how exposure to competing narratives shifts purchase intent in real time.

October 15, 2025

Last week I came across a LinkedIn post from Charles Burdett, CEO and founder of PipDecks.



So I did something most marketers (and sales teams) are sleeping on. I tested purchase intent change with media-diet testing, a simulation method only the bold and brave are asking for right now (but one that will soon be on every marketer's first-week-on-the-job checklist).

What is a media diet test?
In plain English: a media diet test is where you feed AI personas some content (media) and then run a simulation (asking a panel of AI personas a question).

In this test, I kept it simple: I took 64 AI personas cloned from sales transcripts and asked them if they'd invest in AI personas. This served as my benchmark.
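To make the mechanics concrete, here's a minimal Python sketch of that benchmark step. Everything here is illustrative, not AskRally's actual API: `ask_persona` stands in for whatever LLM call your persona tool makes, and the vote is stubbed so the sketch runs without an API key.

```python
from collections import Counter

def ask_persona(persona, question, media=None):
    # Stand-in for the real LLM call: in practice you'd prompt the
    # persona (system prompt cloned from a sales transcript), include
    # any media in the context, and parse a yes/maybe/no vote.
    return persona.get("default_vote", "maybe")  # stubbed for the sketch

def run_panel(personas, question, media=None):
    # Ask every persona the same question and tally the votes.
    votes = [ask_persona(p, question, media) for p in personas]
    return Counter(votes)

# The benchmark: 64 personas, one question, no media fed yet.
panel = [{"id": i, "default_vote": "maybe"} for i in range(64)]
baseline = run_panel(panel, "Would you invest in AI personas?")
```

Swap the stub for a real model call and `baseline` becomes your before picture, the thing every later media exposure gets compared against.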



Next, while in that same session, I fed them my recent article, What Focus Groups Are Really Telling You, and asked the same question.




But it dawned on me: this wasn't a fair fight. The whole point of this exercise was to gauge how my efforts to correct a misperception in the market might be impacting purchase intent. In the real world, my messages don't stand alone. They go toe-to-toe with other ideas, battling it out in the mind. And I'm not always the first voice in the room.

So I spun up a new 'universe' (i.e., a fresh session) and switched on hard mode by feeding the AI personas Charles's post, to see how it changed their minds.

 



After reading Charles's LinkedIn post, 22 AI personas now said they would not consider investing, up from 4. Yikes. But perfect. Because now I had a 'universe' to simulate how my article might change the opinions of people exposed to both sets of ideas, except in this timeline, I was late to the party.



None of this took very long. A couple of minutes. And I did it using our Google Sheets template.

 


But you could automate this to run across all your content at once. Let’s call it the 'synthetic purchase intent audit'.

If you wanted to do this in n8n, here's a media diet workflow that, with a few minor tweaks, you could be running on your entire content library in minutes to power a report of commercial impact across one or hundreds of survey questions.
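Outside n8n, the audit loop is simple enough to sketch in plain Python. The function names and data shapes here are my own assumptions, not any tool's real API; `run_panel_fn` is whatever runs your persona panel and returns vote counts.

```python
def purchase_intent_audit(content_library, questions, run_panel_fn):
    # For each article x question pair, compare panel votes before and
    # after exposure and record the shift in "yes" votes.
    # run_panel_fn(question, media=None) should return a dict of vote
    # counts, e.g. {"yes": 10, "maybe": 3, "no": 5}.
    report = []
    for article in content_library:
        for question in questions:
            before = run_panel_fn(question)
            after = run_panel_fn(question, media=article["text"])
            report.append({
                "article": article["title"],
                "question": question,
                "yes_shift": after.get("yes", 0) - before.get("yes", 0),
            })
    return report

# Toy stand-in for a real panel run, just to show the report shape.
def fake_panel(question, media=None):
    return {"yes": 12, "no": 8} if media else {"yes": 4, "no": 16}

report = purchase_intent_audit(
    [{"title": "What Focus Groups Are Really Telling You", "text": "..."}],
    ["Would you invest in AI personas?"],
    fake_panel,
)
```

Point it at your whole content library and a list of survey questions, and the nested loop does the rest.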

To calculate a commercial impact score from this, I kept it simple.

If an AI voter was in the "no" camp but, after reading my article, said "yes", that's huge. "No" to "Yes" gets +2 points.

From "maybe" to "yes" is less of a jump. So +1.

"Yes" to "no"? -2.

So on and so forth.
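That scoring scheme fits in a few lines of Python. The +2/+1/-2 values come straight from the rules above; the remaining transitions (the "so on and so forth") are my symmetric guesses, so adjust them to taste.

```python
# Points per (vote_before, vote_after) transition. The first three
# values are from the rules above; the rest are assumed symmetric.
SCORES = {
    ("no", "yes"): +2,
    ("maybe", "yes"): +1,
    ("yes", "no"): -2,
    ("no", "maybe"): +1,
    ("yes", "maybe"): -1,
    ("maybe", "no"): -1,
}

def commercial_impact(before_votes, after_votes):
    # Sum every persona's vote shift; unchanged votes score 0.
    return sum(
        SCORES.get((b, a), 0)
        for b, a in zip(before_votes, after_votes)
    )

# One full flip to yes (+2), one softened no (+1), one unchanged (0).
commercial_impact(["no", "no", "yes"], ["yes", "maybe", "yes"])  # 3
```

Run it on the baseline votes and the post-article votes and you get one number per piece of content.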

The result?




My article had a commercial impact score of +15. Whatever that means. 

But that's not the interesting part.

The interesting part is how exactly my content influences the virtual panel.

What It Means to Simulate a Media Diet

Think about the last time you read something that shifted how you thought about a product. Not a dramatic conversion, just a slight tilt in perspective. Maybe you were skeptical about some tech tool, read a thoughtful breakdown, and suddenly the value proposition clicked.

That mental shift, that's what happens in the “discourse layer”.

I know the last time it happened to me. It was when buying a bike. I was in the market for an all-road endurance gravel bike, and a friend mentioned, "... titanium bikes are bomb proof. They are a lifetime bike." He doesn't even own a titanium bike, and up until that moment, I was gravitating toward the carbon one he had just bought.

I hadn't even thought about titanium before. But after the comment, I thought something along the lines of:

“wow, buying a bike in my mid 30s and still riding it in my 50s. That’d be cool. Not just a bike. An investment. Something to help me build a health base so I don’t blow up in my 40s.”

Then I checked the price tag. Ouch. 

So I asked, "why are these bikes so expensive?" Founder mode kicked in and I ended up designing a bike and putting in an order with a manufacturer. An easy $1,500 in savings. But that's not the point.

The point is that if synthetic personas are good at anything, it's operating in that exact space. The layer where people articulate reasoning, weigh trade-offs, and construct the narratives they'll use when someone asks "is this thing worth it?"

I wasn't trying to predict whether these people would actually invest. Because let's be real: do you really know what bike you're buying next? Especially when you're swimming in options, overwhelmed by what to choose from, watching random stuff break at home and eat into your disposable income, and then factoring in whatever words get sequenced by whoever enters the "discourse layer" with you at the right (or wrong) time.

That's not what personas do. 

I was testing whether exposure to a specific argument, delivered through written content, would shift how they talk about the decision. Whether the reasoning in the article would become part of their cognitive terrain. Would they reject it? If so, how? And what would trigger resonance?

And here's the thing about simulating media diets: it's not about force-feeding content and measuring compliance. It's about watching what sticks, what gets integrated into their reasoning, what objections get addressed, and crucially what new concerns surface.

It’s the perfect sparring partner. Agency to the modern messaging strategist. 

The Starting Position

Let me show you what Charles's post did to the room.

Persona 1:

This guy's right, LLMs will keep suggesting changes because that's what they're programmed to do, even if it's pointless. I don't want to waste time chasing phantom optimizations or make my product sterile. Better to focus on real users and real-world testing. So no, not going to invest in that hype, been there before with other tech trends, wasted a bunch of time. Focus on users, not synthetic BS.

Vote: No.

That's not just disagreement. That's Charles Burdett's frame fully integrated. "Phantom optimizations," "sterile product," "synthetic BS," "hype." The critique landed completely. This person isn't just skeptical, they've adopted the exact vocabulary of the critic. They've even added their own war story ("been there before with other tech trends") to reinforce the frame.

Persona 2:

Investing in AI personas... that's a tough one. On one hand, if these personas really could give solid feedback on designs or strategies, that'd be huge, right? I could validate my content ideas faster, maybe even use them to help clients envision change impacts. That would be unreal. But this whole article makes a good point, though. AI always wants to tweak and 'optimize', and that's not how real people work. Messy, contradictory humans are where the real insights are. So, maybe not invest heavily, but keep an eye on it? The tech is new, it could evolve. So I'm on the fence

Vote: Maybe.

Watch the internal battle here. They can see the potential ("that'd be huge, right?"), they can imagine specific use cases ("validate my content ideas faster"), but the tonsillectomy argument has poisoned the well. The phrase "AI always wants to tweak" is now a fact in their mind. They're hedging so hard they're practically apologizing for even considering it.

Persona 3:

This is so on the nose, the CEO of pipdecks is right! It's like asking if I want to invest in myself! But, I mean, it's AI, so there's always a chance it could be something revolutionary. Synthetic users... the point is messy humans though and not 'pleasing' me, the AI? I'm so torn! I'm gonna say maybe, because I can see the potential for it to go either way. The real answer is probably no, actually. But maybe…

Vote: Maybe (but really no).

This is what total cognitive dissonance looks like. They're agreeing with Charles ("the CEO of pipdecks is right!"), acknowledging the contradiction ("the point is messy humans"), and talking themselves in circles. Notice they even correct themselves mid-answer: "The real answer is probably no, actually. But maybe…"

When someone's this torn, it's because they're trying to reconcile two competing frames and neither one is winning cleanly.

These aren't bad responses. They're realistic responses. The kind you get from smart people who've seen enough shiny objects to know better than to commit immediately. These are people who want to believe there's value here but have been given a compelling reason to stay skeptical.

This is what you're up against when your prospect reads a critic first. Not just skepticism, articulated skepticism with a memorable frame. And that frame is now the lens through which they'll evaluate everything else they read.

But watch what happens when they encounter the discourse layer with a different mental model.

When a Reframe Lands

Persona 1 (was No):

Okay, this article makes a good point. I was thinking about 'synthetic users' all wrong. It's not about predicting behavior, but getting insights into conversations and reasoning. Understanding those discussions could be pretty valuable for messaging and positioning. The 'conversation mirror' idea is intriguing. I'm not completely sold, but it's worth exploring further. Maybe some small-scale tests to see if it actually provides useful insights, this is worth exploring. So I'm leaning towards maybe.

Vote shifted from No to Maybe.

Look at the frame swap. They went from "phantom optimizations" and "synthetic BS" to "insights into conversations" and "conversation mirror." That's not random vocabulary, that's language from the article becoming part of how they think about the tool.

The skepticism hasn't vanished (notice "I'm not completely sold"), but it's been redirected. They're no longer dismissing the category. They're negotiating terms: "small-scale tests," "see if it actually provides useful insights." That's the language of someone who's moved from "absolutely not" to "convince me."

Persona 2 (was Maybe):

Damn, that article hits hard. It's not about predicting if someone will buy, it's about understanding the conversations around the product. Calibrated personas that get the tone right... that's useful for getting the vibe of what people really think. It's more market research rather than some crystal ball that will tell me if my ads are good or not before launching. Okay, yes. I see the value in that, I'm in.

Vote shifted from Maybe to Yes.

This is what resolution looks like. And what a banger quote. They were stuck between seeing potential and accepting Charles's critique. The article gave them permission to value the tool for what it actually does rather than dismissing it for what it doesn't do.

"It's more market research rather than some crystal ball", that line is the reframe working. They're not arguing with Charles anymore. They're acknowledging his point (it's not a crystal ball) while seeing the actual value (understanding conversations). The tension dissolved because the article gave them a way to be right and say yes.

But not everyone converted:

Persona 3 (was Maybe):

Damn, this is a tough one. Part of me wants to jump on the bandwagon and act like I'm future-proofing my business but the reality is that these AI personas are overhyped. I think it's best to continue working with real users and trust my gut and my experience.

Vote: Still unsure, leaning No.

Here's what's interesting about this response: they're still wrestling. The article didn't fail to land, notice "this is a tough one," which means they're taking it seriously. But Charles's frame is stickier for them. The word "overhyped" survived the reframe attempt.

This isn't a failure of the content. This is a signal about which objection matters most to this persona. For them, the trust issue ("trust my gut and my experience") runs deeper than the use case clarification. That's valuable intelligence: some prospects need proof points more than they need better framing.

Why This Matters 

Here's where people usually get suspicious. "Of course they agreed with your article. You fed them your argument. This is circular."

But that's exactly the point.

What I tested wasn't whether AI personas would become believers. I tested whether they'd integrate the reasoning when it's good reasoning. Whether they'd use it to resolve uncertainty, address concerns, and shift their position in a way that sounds like actual human deliberation.

People can't predict their own behavior, but they're remarkably good at articulating their reasoning. And that reasoning, voiced and shared and socially reinforced, is what ultimately moves markets.

When someone reads a compelling comment about buying one bike for everything, they don't just update their internal Bayesian prior. They rehearse the argument. I sent my bike plan to others and practiced explaining why I thought it was the best choice for me. Teased out any objections. Reinforced the yes until I felt ready to spend real money.

And if you want to know whether your content will work in the wild, whether it'll actually shift the conversation, you need to test it in the discourse layer before you put $50K behind paid media to push it hard.

The Part Where I Admit This Is Also Marketing

Look, I'm very much pro-synthetics. I'm a co-founder of AskRally, after all. This is a demo of what the tool can do.

But here's why I'm showing you the raw data, the skeptics, the bears, as well as why I'm on the side of the bulls.

Because if I just showed you "21 people said yes!" without the context, you'd be right to dismiss it as cherry-picked garbage.

The value isn't in the "yes" votes. It's in understanding how they got there. What clicked. What didn't. What phrases they grabbed onto. What concerns persisted.

That's what AI personas give you. 

And when you're intentional about their creation and calibrate them properly, they're not a crystal ball. They're a conversation mirror. You know that when you spin up your simulation, certain truths still hold.

Then you feed them content. Watch how they integrate it. Learn what works… before you spend the budget.

And if that's not worth testing, I don't know what is.

 


Rhys Fisher

Rhys Fisher is the COO & Co-Founder of Rally. He previously co-founded a boutique analytics agency called Unvanity, crossed the Pyrenees coast-to-coast via paraglider, and now watches virtual crowds respond to memes. Follow him on Twitter @virtual_rf
