Everything you should not do when using language models
December 16, 2025
Do you work with language models in qualitative analysis? Then here is a guide to what you should do if you want to create meaningless filler, ignore GDPR, and show little concern for responsibility in the research process. Ironic distance may occur.
Language models read interviews from your fieldwork, create overviews of observation notes, identify patterns across data, and formulate insights at impressive speed.
But they can also do the exact opposite: disrupt, distort, and oversimplify the analysis. And it can be surprisingly hard to tell when this happens — at least if our use of language models is not grounded in solid professional reflection.
As humans, we tend to ascribe human traits and behaviours to all kinds of objects and beings. Cars have “faces,” animals can be “cute,” and AI can “write, create, and analyse.”
But ultimately, the output of language models is the logical result of an automated, machine-driven process. If you are indifferent to concepts such as uniquely human contextual understanding and perception, analytical training, and interpretive sensitivity — then by all means, read on.
Because this is a guide for those of you who wish to get lost and produce meaningless filler in your qualitative research work.
1. Let AI define the analysis, the truth, the data, and the methodology for you
Start by delegating the choice of research methods and study design to your language model — and then ask it to generate an interview guide.
Once your fieldwork is complete, have the model quickly produce a thematic analysis and a fully written insights report. Be sure to reassure yourself that the text is comprehensive and adds value to your project. Everything is under control, right?
Whether or not the insights are actually grounded in the empirical material, the answers sound so confident that it feels wrong to question them. And if the model happens to invent a few patterns or quotes that never appeared in your interviews — well, it is still well written and semantically coherent. Isn’t that what really matters?
Bonus tip: If you avoid giving the model a professional framework or structured prompts, the analysis becomes even more creative.
And to complete the recipe for an irresponsible research process: drop any shared prompting standards or internal guidelines for AI use within your organisation. The more varied prompting styles, unstructured inputs, and individual workflows, the better. That way, it becomes nearly impossible to reproduce or even understand how the AI arrived at its insights in the first place. Who needs methodological accountability anyway?
In short: If you let AI control method, data, and judgement, you may get polished results — but not necessarily correct ones.
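For the record, the unglamorous alternative exists. A shared, versioned prompt template is the simplest form of methodological accountability: every analyst sends the model the same structure, and an output can be traced back to the exact instructions that produced it. A minimal sketch in Python, where the template fields, names, and wording are illustrative rather than any fixed standard:

```python
# A minimal sketch of a shared prompting standard, using nothing beyond
# the standard library. Field names and wording are illustrative, not a
# fixed convention; the point is that the template is versioned and shared.

THEMATIC_ANALYSIS_PROMPT_V1 = """\
You are assisting a qualitative thematic analysis.

Project background: {background}
Research question: {research_question}
Analytical framework: {framework}

Task: suggest candidate themes for the interview excerpt below.
For every theme, quote the exact passage it is grounded in.
If a theme cannot be grounded in the text, do not suggest it.

Interview excerpt:
{excerpt}
"""

def build_prompt(background: str, research_question: str,
                 framework: str, excerpt: str) -> str:
    """Fill the shared template so every analyst sends the same structure."""
    return THEMATIC_ANALYSIS_PROMPT_V1.format(
        background=background,
        research_question=research_question,
        framework=framework,
        excerpt=excerpt,
    )
```

The point is not the wording but the discipline: version the template, log which version produced which output, and reproducing an analysis stops being a mystery.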
2. Context is king – and the difference between generative and verificatory use
It is essential to be conscious of whether you are using a language model to generate content or to verify your analysis. A key concept here is context.
Back in the Tumblr days of the early 2010s, people used to say: “Content is king.”
In the age of language models, this has become: “Context is king.”
In practice, this means that language models must be thoroughly introduced to the project context in order to succeed in both generating and verifying your analytical work.
The generative role:
When the model operates in a generative role, you should think of it as an analytical co-author that can, for example:
- suggest themes, categories, and segments
- generate summaries, theoretical ideas, or concepts
- formulate user stories, metaphors, or perspectives
- create structure and overview in large amounts of unstructured data
The generative role works best early in the process, when you are still open and exploratory. Here, the model can help you notice things you do not yet have language for — much like skilled analysts can, but across far larger volumes of text.
However, generation still requires context. Without it, the model will suggest themes based on its general knowledge. And since you are the expert, it is up to you to provide that context.
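What providing that context can look like is less mysterious than it sounds. Below is a minimal sketch, assuming the OpenAI Python client purely as a stand-in for whatever model you actually run; the study, the model name, and the prompt wording are all invented, and with real interview data the call must go through a closed, compliant deployment (more on that in section 3).

```python
from openai import OpenAI

# Illustrative only: the client and model name are assumptions, and with
# real interview data the endpoint must be a closed, compliant deployment,
# never a public consumer tool.
client = OpenAI()  # expects an API key in the environment

PROJECT_CONTEXT = (
    "Study: commuter experiences of a regional bus reform (invented example). "
    "Method: 12 semi-structured interviews, analysed with reflexive thematic "
    "analysis. Focus: perceived reliability and trust in public transport."
)

def suggest_themes(excerpt: str) -> str:
    """Generative use: candidate themes, grounded in explicit project context."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whatever closed model you run
        messages=[
            # Without this system message, the model falls back on its
            # general knowledge and suggests plausible-sounding generic themes.
            {"role": "system", "content": PROJECT_CONTEXT},
            {"role": "user", "content": (
                "Suggest three to five candidate themes for this excerpt, "
                "quoting the passages each theme is grounded in:\n\n" + excerpt
            )},
        ],
    )
    return response.choices[0].message.content
```

Keeping the project context in one explicit system message also makes it easy to version alongside the shared templates from section 1.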
The verificatory role:
When the model operates in a verificatory role, it assesses whether your interpretation linguistically and thematically resembles the material you have provided. The distinction between resembling the data and verifying against it is crucial.
AI cannot verify analysis in a classical methodological sense. It compares language — not truth. In short, it works with semantic meaning patterns, not empirical validation.
This means that when you use AI for verification, you get:
- a semantic mirror showing whether your reading resembles the language of the data
- an alternative perspective on your own interpretations
- an opportunity to spot patterns or nuances you may have overlooked
An important nuance should be added here: AI’s way of “verifying” is actually closer to human analysis than many assume. When we read and categorise texts ourselves, we also work with meaning patterns, intuition, and linguistic similarity. We may disagree, overlook elements, or categorise material differently. And in several areas, research shows that strong language models can be just as good as, or even better than, humans at certain types of text classification.
But they still operate statistically, not methodologically. And that is the difference: AI can reflect our analysis — but it cannot replace it.
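To make the “semantic mirror” concrete: the sketch below scores how closely a theme formulation resembles the wording of individual interview segments. It uses TF-IDF and cosine similarity from scikit-learn as a deliberately crude stand-in for the richer representations a language model works with, and all data is invented for illustration.

```python
# A deliberately crude sketch of the "semantic mirror": score how closely a
# theme formulation resembles the wording of individual interview segments.
# TF-IDF stands in for the richer representations a language model uses;
# all data below is invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

theme = "Passengers experience the new timetable as unpredictable and stressful."
segments = [
    "Honestly, the new timetable is so unpredictable that mornings are stressful now.",
    "The drivers are friendly and I have no complaints about the staff.",
    "I stopped trusting the schedule; some days the bus simply never comes.",
]

matrix = TfidfVectorizer().fit_transform([theme] + segments)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

for segment, score in sorted(zip(segments, scores), key=lambda pair: -pair[1]):
    # A high score means the theme echoes the segment's wording,
    # not that the interpretation is empirically valid.
    print(f"{score:.2f}  {segment}")
```

Notice the third segment: it arguably supports the theme, yet it scores lowest because it uses different words. Embedding models handle paraphrase far better than TF-IDF, but the principle stands: what gets measured is resemblance in meaning patterns, not truth.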
3. Ignore GDPR, data security, and client confidentiality
If you want to make your work both easy and extremely risky, feel free to paste sensitive interview data into public, consumer-grade AI tools. It is fast, convenient, and completely incompatible with sound data ethics.
Assume that all platforms handle data in the same way. Assume that everything is secure simply because it is popular. Assume that no third parties gain access. Assume that your client does not care about business-critical information — and that your organisation’s reputation will easily survive a data breach.
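If you would rather not gamble: before interview text goes anywhere near a model, strip the obvious direct identifiers. The Python sketch below catches only emails and phone-like numbers with regular expressions; it is a floor, not a solution, and a real project needs dedicated PII tooling, a data processing agreement, and a closed deployment. All patterns and data are illustrative.

```python
import re

# A minimal, deliberately incomplete sketch of pseudonymising interview
# text before it is sent anywhere. These regexes catch emails and
# phone-like numbers only; names, addresses, and indirect identifiers
# need proper PII tooling. Patterns and data are illustrative.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
PHONE = re.compile(r"\+?\d[\d \-]{7,}\d")

def redact(text: str) -> str:
    """Replace obvious direct identifiers with placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(redact("Reach me at jens@example.com or +45 12 34 56 78."))
# -> Reach me at [EMAIL] or [PHONE].
```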
Conclusion: A good and understanding sparring partner
Okay — sarcasm aside.
At Epinion, this is exactly why we work with a closed, GDPR-compliant model. Because data security is not an afterthought. It is the foundation.
Language models are not a shortcut to deep analyses and meaningful insights. But they can be a shortcut to better questions and a broader analytical overview, provided the analyst has thoroughly understood the project context, problem definition, and purpose. That kind of understanding requires a high level of judgement, contextual sensitivity, and curiosity, and it is worth investing in.
So by all means, prompt away — and use language models where they genuinely add value.
Do you have ideas, experiences, or questions about using AI in qualitative analysis? As always, feel free to reach out.