Your Reviews Are AI Source Material, But Do They Actually Say Anything?
By Ali de Bold, Founder and CEO, Butterly
Before I started ChickAdvisor in 2006, I had a bathroom counter covered in products that didn’t work. Makeup I bought based on packaging or a splashy ad. A pump that fell apart. A moisturizer that made me break out because nobody mentioned it was terrible for sensitive skin. Numerous failed brushes and applicators, and countless other gimmicks. I’d spent real money on all of it, and the information that would have saved me, information from someone who’d actually used these things, simply wasn’t available.
That was the gap. Not a lack of marketing, but a lack of honest detail from real people about real user experiences.
Twenty years later, ChickAdvisor is now Butterly, and I’m still working on the same problem. The conditions around it have changed completely. People don’t browse product pages crafted by the brand the way they used to. Increasingly, what they see first is an AI-generated summary, a synthesized recommendation, or a snippet pulled from sources they never visit directly. The actual reviews and forum threads are still out there, but they’re another click or two away.
For marketers, this changes what feedback needs to do. What you’re collecting from customers isn’t just content anymore. It’s source material for AI systems that decide what gets surfaced, summarized, and recommended. The quality of that material matters in ways it didn’t 18 months ago.
That’s what prompted us to study it.
Butterly surveyed more than 2,100 Canadian consumers about how they experience trust when sharing product feedback. We wanted to understand what makes people comfortable being honest, what makes feedback feel authentic to them, and why they choose to recommend products without being paid.
Some of the findings confirmed what I’ve been seeing for years. Others genuinely surprised me.
What the survey said
Nearly all respondents said they’re comfortable being fully honest in reviews, including when their experience is negative. That, on its own, is worth thinking about. Most marketers assume consumers are holding back, or that honesty has to be coaxed out of people. The data says otherwise. People are ready to be candid. The question is whether brands are creating the conditions for it.
Just as striking: most respondents believe brands actually listen to and use their feedback. That level of trust is higher than I expected. It creates a real opportunity. It also creates an obligation. If people believe their input matters and then feel like it disappeared into a void, that trust erodes quickly.
More than 80 percent said they organically share product experiences when they think the information could help someone else. Not because they were prompted. Not because there was an incentive. Because they thought it would be useful. And when we asked what makes feedback feel authentic, they didn’t point to enthusiasm or polish. They pointed to honesty, balance, and real-life specificity.
The gap between volume and value
Here’s where I think a lot of brands are getting it wrong.
Most feedback programs are still built for volume. More reviews, more star ratings, more content to deploy across channels. I understand the pressure behind that. Teams need visible proof. Dashboards need numbers. But volume and value aren’t the same thing.
The goal isn’t less content. It’s better content, built consistently over time through programs that generate real consumer experiences and genuine word of mouth, not one-off campaigns that produce a burst of reviews and then go quiet.
A five-star review that says “Love it, would buy again” looks fine in a report. But it doesn’t tell you what problem the product solved, what almost stopped the person from buying, or what kind of consumer it’s actually right for. It doesn’t give you anything you can use for messaging, positioning, or product development. And it doesn’t give AI systems anything real to work with when they’re deciding what to surface.
Too many brands still treat anything below four or five stars as a problem to manage rather than a chance to learn. It creates a kind of Truman Show effect in the boardroom. Everything looks positive, everyone’s smiling, and none of it is especially real. The result is a feedback culture quietly shaped around comfort instead of usefulness.
The feedback that actually helps, the kind that builds trust with other consumers and carries weight in AI-mediated discovery, sounds a lot more human than that. It includes texture and a bit of imperfection.
The person who says a cleaning product worked exactly as promised, but the scent was so strong it gave them a headache in a small bathroom. The person who says the formula worked great but wouldn’t recommend it for very sensitive skin. That’s the kind of detail people trust, because it sounds like real life. It’s also the kind of detail brands can learn from, if they’re set up to hear it.
What makes the difference
After nearly 20 years of watching how consumer feedback works in practice (and now with research to back it up), I’ve seen four things that consistently separate useful feedback from noise.
The first is fit. When the right product gets to the right person, the feedback is richer because the experience is relevant. If someone has no real use or need for the item, the review might still be positive, but it’s unlikely to tell you anything you didn’t already know.
The second is expectations. People give better feedback when they understand what kind of feedback is actually helpful. Not brand language or forced enthusiasm, but specifics. What worked, what didn’t, what they noticed first, what kind of person they’d recommend it to. When you tell people that honesty and detail are what you’re looking for, they deliver.
The third is making honesty genuinely welcome. This one is subtle but powerful. If consumers sense that the real goal is praise, they adjust. They keep it safe. They smooth out the edges. But when people feel that constructive criticism is valued, the quality shifts. Feedback becomes more candid, more textured, and more useful.
The fourth is showing that feedback goes somewhere. Our research found that one reason people were so willing to be honest is that they believed brands were listening. That’s a trust signal in itself. When people feel their input matters, they contribute differently.
Why this matters now
These four conditions aren’t new. Good researchers and product teams have understood them for a long time. What’s new is what happens when you get them wrong.
AI systems surface insight more reliably when the source material is honest, balanced, specific, and grounded in real use. Those are the same qualities consumers say make feedback feel authentic. The trust question and the discoverability question have converged. Brands that create better conditions for honest feedback aren’t just getting better insight. They’re producing the kind of signal that carries weight in how products are now discovered and recommended.
That’s not a soft brand value. It’s a practical input with real consequences for how your product shows up.
I started this work because of a bathroom counter full of bad purchases and the need for honest information that would have prevented them. I simply wanted real people to help each other make better purchasing decisions. The tools have changed enormously since then. The core of it hasn’t. People trust specificity, honesty, and real experience. They always have.
The difference today is that those qualities don’t just build consumer trust. They determine how your brand is understood at scale.
Ali de Bold is the founder and CEO of Butterly and a longtime consumer advocate. The research referenced in this article is from the Butterly 2026 Trust Index, a survey of more than 2,100 Canadian consumers.