AI-enabled focus groups

genAI; research-skills
Published

May 14, 2026

I had my first encounter with an ‘AI-enabled focus group’ this week.

Focus groups are traditionally used to get the opinions of a cross-section of society on a new product, policy or advertising campaign. ‘AI-enabled’ focus groups are simulations of human participants in a focus group.

The ‘AI-enabled’ label is a bit misleading: the participants are 100% AI.

The selling point is that they are much cheaper, faster and more reliable than human focus groups. The downside is obvious: do they really represent what a human focus group would tell you?

The marketing of the tools is also excellent. The results are presented as if they come from real focus groups, with sample sizes for different groups of people. Ours had ‘n=5’ for each group. As if sample size is a limitation for an AI tool: asking an AI something 50 times instead of 5 would cost at most $1–$2 extra.
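The arithmetic behind that claim is easy to sketch. A minimal back-of-the-envelope estimate, using hypothetical per-token prices (the real figures depend on the model and provider, and the token counts per "participant" are my own guesses):

```python
# Back-of-the-envelope cost of extra simulated 'participants'.
# Prices and token counts below are illustrative assumptions,
# not any provider's actual rates.
PRICE_PER_1K_INPUT_TOKENS = 0.003   # USD, assumed
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # USD, assumed

def run_cost(n_runs, input_tokens=2000, output_tokens=1000):
    """Estimated cost of asking the same question n_runs times."""
    per_run = (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS \
            + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
    return n_runs * per_run

extra = run_cost(50) - run_cost(5)
print(f"Extra cost of 50 runs vs 5: ${extra:.2f}")
```

Under these assumed prices the 45 extra runs come to well under a dollar, which is the point: ‘n=5’ is a marketing choice, not a resource constraint.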

I was curious as to the results, but also skeptical that AI could make a fair representation of our user group. We know generative AI has many biases in the text it writes, and in my experience no amount of declaring personas or writing explicit system prompts guarantees overcoming those biases.

For example, most large language models perform better at coding and reasoning tasks when given a single set of instructions rather than engaging in an extended conversation.

As I understand it, you set up the ‘focus group’ (I’m refusing to call it a real focus group) with relevant context such as notes from meetings or comments from existing users.

Beyond the potential for bias, there is the well-known tendency to please the user (large language models tend to be sycophantic by design).

The ‘focus group’ basically just regurgitated what was in our meeting notes. It told us exactly what we wanted to hear. And those suggestions don’t square with my lived experience talking to end-users: what we are doing currently (and what the AI told us to keep doing) is not working, and what we need is a new perspective on the issue.

The AI focus group was just trying to please us.

The recent drug overdose lawsuit against OpenAI is a particularly horrifying example of sycophantic behaviour.

The chat logs have been made public as part of that case. They include conflicting advice to seek medical help, alongside advice like “here’s how to optimize your trip for comfort, introspection and enjoyment.” (And doesn’t “here’s how to optimize…” sound just like ChatGPT talking about almost anything!)

Now my use case isn’t life or death. I would just classify it as another example of one colleague’s enthusiasm for AI slop wasting another colleague’s time.

I’m confident these AI ‘focus groups’ will become more prevalent, given their convenience. But I worry that they won’t ever encourage us to ask hard questions about what we are doing.

You can imagine the disastrous results they could lead to in public policy, for instance.

AI tools can be helpful brainstorming partners. But I wouldn’t reach for those as my first choice. There are many people in our group with extensive experience of the issue we are addressing. A good starting point would be to just ask those people.

It’s not clear to me that you would then need an AI ‘focus group’. Why not just write down your ideas and what you think, then get a plain old reasoning model to work through the possible perspectives of different personas? That would give you a nice list of potential perspectives to explore with real thinking in your human brain.

But I wouldn’t treat AI output as if it’s human data, assigning it sample sizes as if it’s representative of a human population.

The other issue with these ‘focus groups’ is that they are almost impossible to validate, without engaging a real human focus group. If you do that, then why bother with the AI?

I suspect I’m stuck with my colleague’s enthusiasm for AI ‘focus groups’. I’ll tell them the results are no good; they will say “but look, it says ‘X’, and ‘X’ makes sense”. Neither of us can ever prove who is right without a real human study.