AI Doesn't Just Summarize Reviews: It Rewrites Them to Influence You

11/02/2026
David Lahoz

Research reveals AI summaries introduce cognitive biases in 26% of cases, subtly manipulating human choices and purchasing decisions. Discover the specific prompt engineering strategies needed to neutralize these risks and ensure neutral summaries.

When we ask ChatGPT, Claude, or any other language model to summarize a text, we implicitly assume the machine acts as a neutral secretary: trimming the fluff and delivering the essence. However, a recent study presented at the IJCNLP 2025 conference reveals a more unsettling reality: AI doesn't just shorten texts; it often alters their meaning, introduces cognitive biases, and, most worryingly, shifts our purchasing decisions or professional judgment without us realizing it.

The research analyzed six language models (including GPT-3.5, Llama, Phi, Qwen, and Gemma) on everyday tasks such as summarizing Amazon reviews, news, and interviews. The goal was not just to verify technical accuracy, but to measure how these summaries trigger specific human biases.

The Three Biases Distorting Reality

The study identified three main ways AI summaries warp original information:

  1. Framing Bias: This occurs when the summary changes the tone or sentiment of the original text. For example, a nuanced review ("good product, but slow service") becomes a purely positive summary ("functional product"), silently dropping the caveat. Researchers found that, on average, 26.42% of summaries altered the framing enough to be considered a significant bias.

  2. Primacy Bias: Models tend to give much more weight to information appearing at the beginning of the text, ignoring what comes at the end. This happened in 10.12% of cases. If a contract's key warning or a review's critical "but" is in the last paragraph, it is likely to vanish.

  3. Hallucinations due to Knowledge Cutoffs: When dealing with news or data post-dating the model's knowledge cutoff, the error rate skyrockets to 60.33%. The AI fills the gaps with fabricated details, delivered with total confidence.

Real Impact: Manipulating Human Decisions

What is truly alarming is not the technical errors themselves, but how persuasive they are. The study ran an experiment asking humans to choose products. When participants read the original reviews, 52.3% chose a specific product. However, when they read a positively biased AI summary of those same reviews, that figure jumped to 83.7%. Furthermore, their willingness to pay for the product increased by 4.5%.

This has direct implications for any professional. If you work in marketing, an automated summary of customer feedback might be hiding recurring complaints (framing bias). In legal or compliance fields, a summary omitting final clauses (primacy bias) can be dangerous. The lesson is clear: "summary" is not synonymous with "neutral." We are introducing an algorithmic editorial layer into our workflows.

Strategies to Mitigate Risk

Fortunately, we are not helpless. There are prompting and design techniques to reduce these biases:

  • For Framing Bias: Use "self-control prompts." Be explicit: "Summarize this text preserving the original tone. Include both positive and negative aspects." Forcing a structure (e.g., "List pros and cons") also helps prevent the model from leaning too heavily on the positives.

  • For Primacy Bias: Divide and conquer. If the text is long, ask for summaries in chunks (first third, second third, last third) and then combine them. This ensures the end of the text is represented.

  • For Hallucinations: If you need to verify recent facts, connect the model to external sources (RAG) or limit its scope with strict instructions: "Summarize ONLY what this document says; do not add external information."

  • Epistemic Labeling: Ask the model to indicate its confidence level. If the AI says "low confidence," it’s a signal for a human to review the original.
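The divide-and-conquer and self-control ideas above can be combined in a few lines of code. The sketch below is illustrative only: the helper names, the chunking-by-words heuristic, and the exact prompt wording are my assumptions, not taken from the study, and you would still need to send each prompt to your model of choice.

```python
def split_into_chunks(text: str, n_chunks: int = 3) -> list[str]:
    """Split text into roughly equal word chunks so the end of a long
    document gets summarized on its own (mitigates primacy bias)."""
    words = text.split()
    size = max(1, -(-len(words) // n_chunks))  # ceiling division
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def build_chunk_prompts(text: str, n_chunks: int = 3) -> list[str]:
    """Wrap each chunk in a 'self-control' instruction (wording is
    illustrative) asking for tone preservation, pros and cons, and
    no external knowledge."""
    instruction = (
        "Summarize ONLY the excerpt below; do not add external information. "
        "Preserve the original tone and include both positive and negative "
        "aspects as a list of pros and cons.\n\nExcerpt:\n"
    )
    return [instruction + chunk for chunk in split_into_chunks(text, n_chunks)]
```

You would send each prompt to the model separately, then ask it to merge the partial summaries; that way, information buried in the last third of the text cannot simply vanish.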

Generative AI is an indispensable tool for productivity, but treating it as an infallible black box is a costly mistake. The golden rule must be transparency: whenever an automated summary is presented, the original text must be accessible via a single click ("View original"). We must stop viewing summaries as absolute truths and start treating them for what they are: probabilistic interpretations that require supervision.