Google has quietly removed AI Overviews from certain health-related search queries following an investigation that found the feature returning potentially misleading medical information.
The change comes after reporting showed that when users asked questions like “what is the normal range for liver blood tests,” Google’s AI Overviews presented simplified numerical ranges that failed to account for key variables such as age, sex, ethnicity, or nationality. Critics warned that this could falsely reassure users that their results were normal — when they might not be.
In response, AI Overviews no longer appear for some of those exact queries. However, variations like “lft reference range” initially continued to surface AI-generated summaries, highlighting how hard it is to fully contain AI behavior once it’s deployed at scale.
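One way to picture the problem: if removals are implemented as a literal blocklist of query strings, any rewording slips through. The sketch below is a hypothetical illustration of that failure mode, not Google's actual system; the blocklist contents and the function are invented for this example.

```python
# A minimal sketch of why removing AI Overviews query by query leaks.
# The blocklist and queries are hypothetical illustrations, not
# Google's actual implementation.

BLOCKED_QUERIES = {
    "what is the normal range for liver blood tests",
}

def should_suppress_overview(query: str) -> bool:
    # Naive exact-string matching: only the literal entries above are caught.
    return query.strip().lower() in BLOCKED_QUERIES

print(should_suppress_overview("what is the normal range for liver blood tests"))  # True
print(should_suppress_overview("lft reference range"))  # False: same intent, different wording
```

Catching intent rather than strings would require some form of semantic matching, which is exactly the harder, systemic fix that query-by-query removals sidestep.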
According to follow-up testing, Google has removed AI Overviews from several of the most prominent liver test queries, with traditional search results now taking their place. In some cases, the top result is coverage explaining why the AI summaries were removed in the first place.
That said, Google hasn’t shut off AI health summaries entirely. Users are still prompted to try AI Mode, and Google continues to invest in healthcare-focused AI models and improved medical search features.
A Google spokesperson said the company does not comment on individual removals, adding that internal clinicians reviewed the flagged queries and found that “in many instances, the information was not inaccurate and was also supported by high-quality websites.”
This episode highlights a fundamental tension in AI-powered search:
Medical information is context-dependent. A single “normal range” rarely applies universally, yet AI models are optimized to provide concise, confident answers (see the sketch after this list).
Accuracy isn’t enough — safety matters. Even information that is technically correct can be misleading without proper medical framing.
Search carries authority. When AI Overviews appear at the top of results, users may treat them as definitive guidance rather than general information.
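A small sketch makes the first point concrete. The numeric ranges below are hypothetical placeholders rather than clinical values, and the lookup is deliberately simplified; it only shows how the same lab result can be normal for one patient and flagged for another.

```python
# A minimal sketch of why a single "normal range" can mislead.
# All numbers below are hypothetical placeholders, not clinical values.

# Hypothetical reference ranges for one liver marker, keyed by (sex, age band).
REFERENCE_RANGES = {
    ("female", "adult"): (10, 35),
    ("male", "adult"): (10, 50),
    ("female", "senior"): (10, 30),
    ("male", "senior"): (10, 45),
}

def reference_range(sex: str, age: int) -> tuple[int, int]:
    """Look up the (hypothetical) range for this patient's context."""
    band = "senior" if age >= 65 else "adult"
    return REFERENCE_RANGES[(sex, band)]

# The same lab value reads differently depending on who the patient is:
result = 40
for sex, age in [("male", 40), ("female", 40)]:
    low, high = reference_range(sex, age)
    status = "within range" if low <= result <= high else "outside range"
    print(f"{sex}, age {age}: result {result} is {status} ({low}-{high})")
```

An AI summary that compresses this into one range necessarily drops the context that determines whether a reader should be reassured or concerned.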
Health organizations welcomed the removals but warned that this is only a partial fix. The concern isn’t one query — it’s whether AI Overviews should be summarizing sensitive medical topics at all without stricter safeguards.
For Google, this is another reminder that AI in health search is a high-risk zone:
Manual rollbacks don’t scale. Turning off AI Overviews query by query is reactive, not systemic.
Regulatory pressure is looming. Misleading health information could draw scrutiny from regulators, especially in regions with strong consumer protection laws.
Trust is fragile. Search engines rely on credibility, and AI mistakes in healthcare can erode that trust quickly.
For the broader AI industry, the lesson is clear: AI summaries work best where context loss is harmless. Health isn’t one of those areas.
Google’s rollback doesn’t signal the end of AI in medical search — but it does suggest a recalibration. As AI Overviews expand, companies are learning that not all information should be compressed into a single answer, especially when real-world health decisions are involved.
The future of AI in healthcare search may depend less on speed and convenience — and more on restraint, transparency, and clear boundaries.