Google scraps AI search feature that crowdsourced amateur medical advice
- Google discontinued its AI-powered “What People Suggest” feature that aggregated health advice from non-experts.
- The removal is part of a broader simplification of Google’s search interface, not due to safety or quality concerns.
- Concerns over AI-generated health content accuracy and user safety continue to challenge tech companies.
- Google continues to explore AI innovations in health but faces scrutiny over misinformation risks.
Google has discontinued its AI search feature known as “What People Suggest,” which provided users with crowdsourced health advice from individuals sharing similar medical experiences. The feature aimed to enhance user engagement by offering perspectives beyond traditional expert sources, using artificial intelligence to organize and present these insights.
Despite initial enthusiasm, the feature was quietly removed as part of a broader effort to simplify Google’s search results page. This decision comes amid growing concerns about the reliability and safety of AI-generated health information, highlighting the challenges tech giants face when integrating healthcare AI into mainstream search platforms.
What was the “What People Suggest” feature?
The “What People Suggest” AI feature was introduced by Google as a way to provide users with health advice crowdsourced from online communities and forums. Unlike traditional search results that prioritize expert medical content, this feature aimed to surface real-life experiences and tips from people who had lived through similar health conditions. For example, someone searching for arthritis management could see how others with arthritis exercised or coped with symptoms.
Google used advanced AI algorithms to analyze and categorize diverse perspectives from online discussions, presenting them in easy-to-understand themes. This approach was designed to complement authoritative medical information by adding a human dimension to health searches, acknowledging that patients often value peer insights alongside professional advice.
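Google has not disclosed how “What People Suggest” worked internally. As a rough illustration of the general idea of grouping crowdsourced comments into themes, here is a minimal Python sketch using simple keyword matching; the theme names and keyword lists are invented for the example and are not Google’s actual method.

```python
# Illustrative sketch only: bucket crowdsourced health comments into themes.
# Theme names and keywords are hypothetical; a production system would use
# far more sophisticated NLP (embeddings, clustering, safety filtering).

THEME_KEYWORDS = {
    "exercise": {"exercise", "yoga", "swimming", "walking", "stretching"},
    "diet": {"diet", "food", "sugar", "anti-inflammatory"},
    "medication": {"medication", "ibuprofen", "prescription", "dose"},
}

def categorize_comments(comments):
    """Assign each comment to every theme whose keywords it mentions."""
    themes = {theme: [] for theme in THEME_KEYWORDS}
    for comment in comments:
        words = set(comment.lower().split())
        for theme, keywords in THEME_KEYWORDS.items():
            if words & keywords:  # any keyword overlap -> comment fits theme
                themes[theme].append(comment)
    return themes

comments = [
    "Daily swimming really helped my arthritis",
    "Cutting sugar from my diet reduced flare-ups",
    "My doctor adjusted my medication dose last year",
]
grouped = categorize_comments(comments)
```

A real pipeline would also need moderation and source labeling, but even this toy version shows the core pattern: turning a flat stream of anecdotes into browsable, themed groups.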
Why did Google remove the crowdsourced AI health advice feature?
Google confirmed that the “What People Suggest” feature was discontinued months ago as part of a broader simplification of the search results page. The company emphasized that the removal was not related to concerns about the feature’s quality or safety. Instead, it was a strategic decision to streamline the user experience on its search platform.
However, the decision comes at a time when Google faces increased scrutiny over the accuracy and potential risks of AI-generated health content. Earlier investigations revealed that some AI-generated health summaries on Google’s platform contained misleading or false information, which could potentially harm users seeking medical advice.
How does AI impact health information on search engines?
Artificial intelligence in healthcare search has the potential to revolutionize how people access medical information by providing personalized, context-aware, and comprehensive answers. AI can analyze vast amounts of data, including medical literature, patient forums, and clinical guidelines, to generate summaries and recommendations quickly.
However, the integration of AI into health search results also introduces risks. AI models may inadvertently amplify misinformation, fail to distinguish between expert and amateur advice, or present unverified user-generated content as credible. This can lead to confusion, misdiagnosis, or inappropriate self-treatment by users relying on search engines for health guidance.
What challenges do companies face when deploying AI for health advice?
- Data reliability: Ensuring AI systems use accurate, up-to-date, and evidence-based medical data is critical to avoid spreading misinformation.
- User safety: AI must prioritize patient safety by clearly distinguishing professional advice from anecdotal experiences and encouraging consultation with healthcare providers.
- Content moderation: Filtering out harmful or misleading content from crowdsourced inputs requires sophisticated moderation and ethical guidelines.
- Regulatory compliance: Navigating healthcare regulations and privacy laws adds complexity to deploying AI in medical contexts.
- User trust: Building and maintaining trust requires transparency about AI capabilities, limitations, and data sources.
What is Google’s current approach to AI and health information?
Following the removal of “What People Suggest,” Google continues to invest in AI-driven health technologies but with a more cautious and measured approach. The company still provides AI-generated health summaries, known as AI Overviews, which appear above traditional search results. These summaries aim to synthesize information from reputable sources and encourage users to seek professional medical advice.
Google’s health leadership has publicly stated its commitment to combining AI research, technological innovation, and partnerships to address global health challenges. Future updates are expected to focus on improving the reliability and safety of AI-powered health features.
How can businesses leverage AI in healthcare search responsibly?
Organizations looking to integrate AI into healthcare search or advice platforms should adopt a multi-faceted strategy:
- Implement rigorous data validation processes to ensure information accuracy.
- Use AI to augment, not replace, expert medical advice and clearly communicate this to users.
- Develop transparent AI models that explain how recommendations are generated.
- Engage healthcare professionals in content review and oversight.
- Continuously monitor and update AI outputs to reflect the latest medical knowledge.
- Prioritize user privacy and comply with healthcare regulations.
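To make the checklist above concrete, here is a minimal Python sketch of one possible guardrail: validating an AI-generated health summary before display. The approved-source list, disclaimer wording, and function names are illustrative assumptions, not any real product’s rules.

```python
# Hypothetical guardrail sketch: require at least one trusted citation and
# always attach a consult-a-professional disclaimer before showing a summary.
# Source domains and wording below are assumptions for illustration only.

APPROVED_SOURCES = {"who.int", "cdc.gov", "nih.gov", "mayoclinic.org"}
DISCLAIMER = "This summary is informational; consult a healthcare professional."

def validate_summary(text, cited_domains):
    """Return (approved, output_text). Summaries without a trusted source
    are held back (for human review) rather than published."""
    has_trusted_source = any(d in APPROVED_SOURCES for d in cited_domains)
    if not has_trusted_source:
        return False, None  # route to human review instead of publishing
    return True, f"{text}\n\n{DISCLAIMER}"

ok, shown = validate_summary(
    "Regular low-impact exercise can ease arthritis symptoms.",
    ["nih.gov"],
)
```

Simple gates like this do not make AI output trustworthy on their own, but they operationalize two items from the list: distinguishing vetted from unvetted content, and consistently directing users toward professional care.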
By balancing innovation with caution, businesses can harness the power of AI in medical search to improve patient outcomes while minimizing risks.
What are the implications for the future of AI in health information?
The discontinuation of Google’s crowdsourced AI health advice feature underscores the complexities of deploying AI in sensitive domains like healthcare. While AI offers unprecedented opportunities to democratize access to health information, it also demands rigorous safeguards to protect users from misinformation and harm.
Future developments will likely emphasize hybrid models that combine AI’s analytical power with human expertise and community oversight. Advances in natural language processing, trustworthiness metrics, and ethical AI frameworks will be essential to create reliable and scalable health information systems.
For businesses and consumers alike, staying informed about the evolving landscape of AI healthcare tools and maintaining a critical perspective on AI-generated content will be crucial as these technologies continue to mature.
Explore how your business can responsibly integrate AI-driven health information tools to enhance user engagement while maintaining trust and safety in medical content delivery.

