LLM Does Not Change Me
Despite claims that LLMs are "just tools" that don't change us, research shows they can shift attitudes and decisions, often more effectively than humans. Subtle linguistic nudges, framing choices, and biases baked in by AI labs influence our thinking in ways we're not fully aware of, amounting to a new psychotechnology infrastructure rolled out at scale to knowledge workers.
“AI (an LLM) doesn’t change me,” I hear often. “It’s just a tool,” people argue. Oh, really?
Results of recent, and not so recent, research point in a different direction.
We have quite a few large studies on the persuasiveness of LLMs, and they are pretty consistent: LLMs can shift attitudes and decisions, often more effectively than humans, and that’s with short, one-off interactions in lab-like conditions, not months of daily use.
Even subtle linguistic nudges, framing, prescriptive vs. descriptive wording, and omissions work in chat. The same effects have been documented for old media as well; nothing new here.
“I know it’s AI, so I’m safe” is directly contradicted by the data. So is “I’m smart, so it doesn’t affect me.” This has been replicated across TV, news, social media, and more. There is zero reason to believe LLMs are a magical exception.
“It’s just a tool” - yes, it’s a tool, but is that frame enough to shield your thinking from its influence? I’m not even talking about cognitive offloading (and the nasty deskilling effect that comes with it), but about seemingly “impactless” use cases. Imagine a software engineer who uses AI to develop software (even in tab-completion mode, not vibe coding). If the LLM constantly proposes a particular error-handling pattern, logging style, or comment tone, over time those become the new normal, even if the person never consciously “decides” anything.
Models come with baked-in biases, norms, guardrails, political leanings, risk preferences, and many other frames that aren’t yours and aren’t mainstream, but are instead the product of a single AI lab’s decisions during training. These frames are influencing our thinking and personalities to an as-yet-unknown extent.
It’s a new psychotechnology infrastructure, rolled out at scale like:
- the mass introduction of social media to everyone,
- or handing smartphones to children before we had good developmental research,
- or widespread SSRIs before long-term social/psychological effects were well understood.
Except that this one is rolled out predominantly to knowledge workers.