Google has removed a number of AI-generated health summaries from its search results after an investigation found that the company’s flagship AI Overviews feature was serving misleading and potentially harmful medical advice that experts described as “dangerous” and “alarming”.
The move intensifies scrutiny of how generative artificial intelligence is being folded into everyday search, especially when the answers concern life‑and‑death health decisions.
The change follows reporting by the Guardian, which tested AI Overviews on a series of medical queries and then had the responses reviewed by clinicians and health organizations.
Investigators found that, in several cases, Google’s AI summaries either omitted critical context or offered advice that directly contradicted accepted medical guidance, creating a risk that patients could delay diagnosis, ignore symptoms, or adopt unsafe behaviors.
Liver blood tests at the center of the controversy
Among the most troubling examples were searches related to liver function tests, such as “what is the normal range for liver blood tests” and “what is the normal range for liver function tests”.
For these queries, AI Overviews presented clean numerical ranges without explaining that “normal” values depend on factors such as age, sex, ethnicity, health history, medication use, and laboratory reference standards.
Experts warned that such simplified outputs could lead people with serious liver disease to wrongly believe their results were within a healthy range, potentially discouraging them from attending follow‑up appointments or seeking specialist care.
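To make that concern concrete, here is a minimal sketch of what a demographic‑aware reading of a single liver enzyme result might look like, compared with a one‑size‑fits‑all range. Every function name and threshold below is invented for illustration; the numbers are not clinical values.

```python
def alt_upper_limit(sex: str, bmi: float) -> float:
    """Return a hypothetical laboratory upper limit for ALT in U/L."""
    base = 33.0 if sex == "female" else 45.0  # invented values, not clinical guidance
    if bmi >= 30:
        base *= 1.1  # some labs adjust intervals by patient factors; purely illustrative
    return base

def interpret(alt_result: float, sex: str, bmi: float) -> str:
    limit = alt_upper_limit(sex, bmi)
    return "within reference range" if alt_result <= limit else "elevated: discuss with a clinician"

# The same ALT result of 40 U/L reads differently depending on patient context:
print(interpret(40, "female", 24))  # elevated: discuss with a clinician
print(interpret(40, "male", 24))    # within reference range
```

Under these made‑up thresholds, an identical result is reassuring for one patient and a warning sign for another, which is precisely the nuance a single quoted range erases.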
Following publication of the investigation, AI Overviews for those liver‑related searches disappeared, replaced once again by traditional lists of links.
Google has not confirmed the specific triggers for the removals.
In a statement responding to questions about the change, the company said it does “not comment on individual removals within Search” but added that, in cases where AI Overviews “miss some context,” it works to make “broad improvements” and takes action under its policies “where appropriate”.
The company also said an internal team of clinicians reviewed the examples highlighted and found that, in many instances, the information itself was not strictly inaccurate and was supported by high‑quality websites.
Critics argue that such a defense misses the point: medical advice that is technically correct yet stripped of nuance can still be dangerous when presented as a definitive, personalized answer at the top of a search page.
“Dangerous” advice on cancer, tests, and mental health
The liver examples were not isolated. The Guardian’s investigation and subsequent coverage by other outlets documented several additional failures that raised alarm among clinicians.
In one case involving pancreatic cancer, Google’s AI Overview advised patients to avoid high‑fat foods, advice that specialists said was the opposite of what many people with the condition require.
For some patients, a high‑fat diet, under medical supervision, is crucial to maintaining weight and strength; incorrect guidance to avoid fat could increase the risk of malnutrition and even death.
In another instance, the system reportedly provided incorrect information about vaginal cancer tests, listing Pap tests as a diagnostic tool for vaginal cancer when experts noted that Pap smears are designed to detect cervical, not vaginal, cancer.
Such confusion could lead people to misinterpret their screening history and downplay concerning symptoms.
The problems extended into mental health. Summaries related to psychosis and eating disorders were described by Stephen Buckley, head of information at mental health charity Mind, as offering “very dangerous advice” that was “incorrect, harmful, or could lead people to avoid seeking help”.
Some responses were said to minimize the seriousness of symptoms or imply that professional treatment might not be necessary, despite clinical guidance stressing early intervention.
Despite these concerns, AI Overviews remain active for a range of other health‑related searches, including some about cancer and mental health. Google has argued that, in these cases, the summaries link to reputable sources and include prompts encouraging people to seek professional care when appropriate.
That selective approach has fueled criticism that the company is addressing isolated symptoms of a broader safety problem rather than the underlying design of the system.
Health charities welcome removal but warn of systemic risk
Patient organizations and medical charities have welcomed the removal of the most clearly problematic outputs while stressing that the fix is partial at best.
Vanessa Hebditch, director of communications and policy at the British Liver Trust, called the withdrawal of the liver‑related AI summaries “excellent news” but emphasized that simply disabling responses for a handful of specific queries does not address the wider risk created by automated health advice.
She warned that if similar questions are phrased differently, AI Overviews may still generate misleading answers, and that “other AI‑produced health information can be inaccurate and confusing”.
Other experts echoed that concern, arguing that Google’s ability to turn off AI Overviews for individual queries should not distract from the more fundamental question of whether a general‑purpose search assistant is an appropriate vehicle for quasi‑clinical guidance.
Several specialists quoted in coverage of the investigation said they feared it was only a matter of time before an AI‑generated health summary contributed to a serious adverse outcome.
How AI Overviews work — and why health advice is so fraught
Launched widely in 2024, AI Overviews use large language models to generate short, synthesized answers that appear above the traditional list of links on a Google results page.
These snapshots are designed to distill what the system identifies as key information about a topic, pulling from multiple web sources and presenting the material in natural language.
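In outline, this is the pattern often called retrieval‑augmented generation: fetch candidate passages for a query, then have a language model compress them into one short answer. The sketch below is a deliberately simplified, hypothetical illustration of that retrieve‑then‑summarize flow; the function names and stub data are invented here and do not reflect Google’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Passage:
    url: str
    text: str

def retrieve(query: str) -> list[Passage]:
    # Stand-in for web retrieval and ranking; a real system would query an index.
    return [
        Passage("https://example.org/liver-tests", "ALT and AST are common liver enzymes..."),
        Passage("https://example.org/reference-ranges", "Reference intervals vary by lab and patient..."),
    ]

def summarize(query: str, passages: list[Passage]) -> str:
    # Stand-in for the language-model call that compresses several sources
    # into one answer. The risk clinicians describe lives in this step:
    # compression can silently drop the caveats the source pages carried.
    joined = " ".join(p.text for p in passages)
    return f"Summary for '{query}': {joined}"

query = "normal range for liver blood tests"
passages = retrieve(query)
print(summarize(query, passages))
print("Sources:", [p.url for p in passages])
```

The clinicians’ concern maps onto the summarize step: condensing several sources into a single confident paragraph is exactly where context and caveats can vanish.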
Google has promoted AI Overviews as a way to save time and reduce information overload, promising “helpful” and “reliable” answers with prominent links to underlying sources for those who want to dig deeper.
The feature is now visible for billions of queries, from recipe ideas to travel tips and complex technical questions.
Health information, however, presents a uniquely difficult challenge. Test ranges, risk factors, treatment options, and red‑flag symptoms vary according to personal characteristics and clinical context that a general search query does not supply.
Even small omissions or generalizations can cause outsized harm when people interpret the result as tailored medical advice.
Generative AI models also have a known tendency to “hallucinate” — to confidently state inaccurate or invented claims — particularly in domains where data is sparse, conflicting, or heavily influenced by opinion and satire.
Google has already faced embarrassment over AI Overviews that suggested people eat rocks for their health, use glue on pizza, or rely on false claims about public figures, often after misinterpreting jokes or social‑media posts as authoritative sources.
In response to those earlier incidents, the company said it introduced “better detection mechanisms” for nonsensical or satirical queries and promised to “pause” some answers on health topics.
More than a dozen technical changes were reportedly made to reduce the likelihood of bizarre or clearly wrong outputs. The latest medical‑advice controversy suggests that, even with these safeguards, the line between everyday search help and unlicensed medical guidance remains difficult to police.
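The reporting does not describe how a “pause” works internally, but a per‑query blocklist is one plausible mechanism. The sketch below, with invented patterns and no relation to Google’s real safeguards, shows why critics call that approach brittle: only queries matching the listed phrasings are suppressed.

```python
import re

# Invented patterns; not Google's real blocklist.
HIGH_RISK_PATTERNS = [
    r"\bliver (blood|function) tests?\b",
    r"\bnormal range\b.*\btests?\b",
    r"\bpancreatic cancer\b.*\b(diet|food|fat)\b",
]

def should_show_ai_overview(query: str) -> bool:
    """Fall back to plain links when a query matches a high-risk pattern."""
    q = query.lower()
    return not any(re.search(pattern, q) for pattern in HIGH_RISK_PATTERNS)

print(should_show_ai_overview("what is the normal range for liver blood tests"))  # False
print(should_show_ai_overview("liver test results explained"))                    # True: slips past the list
```

As the British Liver Trust warned above, a rephrased question that matches none of the patterns would still receive an AI‑generated answer.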
Growing reliance on AI for healthcare information
The stakes are high because large numbers of people already turn to search engines and AI chatbots for health guidance before contacting a doctor.
One recent estimate cited in coverage of the Guardian probe suggested tens of millions of people globally use tools such as ChatGPT for healthcare‑related questions. For many, Google remains the first point of contact with the medical system, shaping whether symptoms are ignored, monitored, or escalated.
Clinicians interviewed about AI Overviews stress that the problem is not only outright falsehoods but also a shift in how people experience search. Rather than scanning several links, comparing sources, and encountering caveats about uncertainty, users are now greeted with a single confident‑sounding paragraph that feels more like an answer than a starting point.
That change in framing, they argue, can encourage over‑reliance on a system that has no access to medical records and no formal duty of care.
Several doctors quoted in recent reports said they already spend time in consultations correcting misunderstandings that originate from online searches.
Generative AI risks deepening that burden by producing information that is polished in tone but subtly misleading in content or emphasis — particularly when it relates to test interpretation or borderline symptoms, areas where nuance is essential.
Regulatory and ethical questions for AI in search
The episode also feeds into a wider debate about how AI‑driven services should be regulated when they intersect with health.
Regulators in Europe, the United States, and elsewhere are still determining when a digital tool crosses the line from general information provider into a medical device subject to stricter oversight.
AI Overviews occupy a legally and ethically ambiguous space: they are part of a consumer search engine, not a certified diagnostic product, but in practice they deliver advice that some users may treat as medical guidance.
That tension raises questions about liability when an automated summary is wrong, the adequacy of current disclosures, and the degree to which companies should restrict AI answers on clinically sensitive topics.
Privacy advocates add another dimension, warning that the blending of search history with AI‑powered personalization could, over time, encourage highly tailored health‑related outputs without the explicit protections that govern doctor‑patient interactions.
Health organizations are increasingly pushing for clearer guardrails: stronger disclaimers, narrower use of AI summaries on medical queries, default prioritization of links to official guidelines or public‑health agencies, and transparent auditing of high‑risk categories such as cancer, mental health, and chronic disease management.
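As a thought experiment only, those demands could be written down as a per‑category policy table. Every field, domain, and value in the sketch below is an assumption made for illustration, not a real schema used by Google or mandated by any regulator.

```python
# Hypothetical guardrail configuration for clinically sensitive query categories.
HEALTH_QUERY_POLICY = {
    "cancer": {
        "ai_summary": "disabled",                     # narrower use of AI on medical queries
        "pinned_sources": ["who.int", "cancer.gov"],  # official guidance shown first
        "disclaimer": "This is not medical advice. Consult a clinician.",
        "audit_interval_days": 30,                    # routine review of high-risk outputs
    },
    "mental_health": {
        "ai_summary": "links_first",
        "pinned_sources": ["nhs.uk", "mind.org.uk"],
        "disclaimer": "If you are in crisis, contact local emergency services.",
        "audit_interval_days": 30,
    },
}
```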
Google under pressure to prove AI Overviews are safe
For Google, the latest controversy underscores the difficulty of deploying generative AI at the core of a product that billions of people treat as a neutral gateway to information. The company insists that most AI Overviews provide high‑quality results and says feedback on errors is helping it refine the system.
Yet each viral failure — from advice to eat rocks to misleading summaries on cancer and liver disease — chips away at trust in both the feature and the broader AI strategy behind it.
By quietly removing AI responses for some high‑risk medical queries while keeping the feature active for many others, Google appears to be pursuing a calibrated approach: reducing obvious dangers without abandoning its bet that AI‑written summaries are the future of search.
Health experts and patient advocates, however, continue to warn that, in medicine, partial fixes and incremental safeguards may not be enough.
The words “dangerous” and “alarming,” used by specialists to describe some of the AI‑generated advice, capture the core concern. A system that excels at packaging information but has no deep understanding of human biology, no awareness of individual circumstances, and no accountability for outcomes is now embedded at the top of the world’s most influential search engine.
Until its medical limitations are fully addressed, each adjustment or quiet removal is likely to be seen less as reassurance than as further evidence that the technology was pushed into a critical domain before it was ready.