Grammarly Is Facing a Class Action Lawsuit Over Its AI ‘Expert Review’ Feature
- Grammarly’s AI feature used names and identities of authors without consent, sparking legal action.
- The lawsuit alleges misappropriation of personal names and likenesses for commercial gain, with claimed damages exceeding $5 million.
- Superhuman, Grammarly’s parent company, has discontinued the controversial AI “Expert Review” tool.
- The case raises critical questions about AI ethics, privacy, and intellectual property rights.
The recent class action lawsuit against Grammarly highlights the growing legal and ethical challenges surrounding artificial intelligence applications in content creation and editing tools. Grammarly’s AI-powered “Expert Review” feature, which presented writing suggestions as if coming from famous authors and experts, has been accused of misusing their names and reputations without permission.
This controversy underscores the complexities of integrating AI writing assistants into mainstream software, especially when leveraging the identities of well-known figures. The lawsuit not only demands accountability but also sparks a broader conversation about the responsible use of AI-generated content and the protection of intellectual property in the digital age.
What Is Grammarly’s ‘Expert Review’ Feature and Why Is It Controversial?
Expert Review was an AI-driven tool integrated into Grammarly’s platform that offered users editing suggestions framed as if they came from renowned authors, journalists, and academics. The feature simulated the writing styles and critiques of figures such as Julia Angwin, Stephen King, and Neil deGrasse Tyson, among others.
The controversy arose because none of these individuals consented to the use of their names or likenesses in this way. The AI-generated feedback attributed to them was not only unauthorized but at times inaccurate, potentially damaging their reputations and misleading users about the true source of the advice.
Legal Grounds for the Class Action Lawsuit
The lawsuit, filed in the Southern District of New York, alleges that Grammarly and its parent company Superhuman violated laws in New York and California by commercially exploiting the names and identities of hundreds of professionals without permission. The complaint emphasizes that this misappropriation is unlawful regardless of the individuals’ fame.
Julia Angwin, the named plaintiff and investigative journalist, represents a class of affected individuals whose names and reputations were used to generate profits for Grammarly. The suit demands an end to this practice and claims damages exceeding $5 million collectively.
Key Legal Issues Highlighted:
- Unauthorized use of personal identity for commercial purposes
- Violation of privacy and publicity rights under state laws
- Misrepresentation of AI-generated content as expert advice
- Potential harm to professional reputations due to inaccurate AI feedback
Superhuman’s Response and Discontinuation of the Feature
Following significant public backlash and expert criticism, Superhuman announced the immediate disabling of the Expert Review feature. In a statement, the company acknowledged missing the mark in how the feature represented experts and committed to redesigning it with greater respect for individual control over identity representation.
Superhuman’s product management director, Ailian Gan, emphasized the intent to help users access expert insights while giving those experts real authority over how their identities are used. CEO Shishir Mehrotra also publicly addressed the concerns, recognizing the importance of scrutiny in improving AI products.
Implications for AI Ethics and Intellectual Property
This lawsuit against Grammarly is emblematic of broader challenges in the AI ethics landscape. As AI systems increasingly generate content based on vast datasets, questions arise about the ownership of styles, ideas, and identities embedded within those datasets.
Experts argue that using a person’s name or likeness without consent in AI-generated outputs infringes on personal rights and intellectual property. This case may set a precedent for how companies develop and deploy AI tools that simulate human expertise.
Considerations for Businesses Using AI Tools:
- Ensure compliance with privacy laws and intellectual property rights when incorporating AI features.
- Obtain explicit consent from individuals whose identities or work inform AI outputs.
- Maintain transparency with users about the nature and source of AI-generated suggestions.
- Implement mechanisms allowing subjects to control or opt out of AI representation.
How This Lawsuit Reflects Broader AI Industry Trends
The Grammarly case is part of a growing wave of legal scrutiny targeting AI companies over data use, consent, and transparency. As AI-powered writing tools, chatbots, and content generators become mainstream, regulators and users demand clearer ethical standards.
Companies face increasing pressure to balance innovation with respect for individual rights, especially in creative and professional domains. The outcome of this lawsuit could influence future regulatory frameworks and industry best practices for AI content creation.
What Users Should Know About AI Writing Tools and Privacy
Users of AI writing assistants should be aware that some tools may incorporate data, styles, or personas derived from third parties, sometimes without explicit disclosure or consent. This raises concerns about the authenticity of AI feedback and the ethics of relying on such tools.
Consumers are encouraged to:
- Review the terms of service and privacy policies of AI platforms.
- Seek transparency about how AI suggestions are generated.
- Be cautious about attributing AI-generated advice to real individuals.
- Advocate for ethical AI practices that respect creators’ rights.
Future Directions for AI in Writing and Editing
The discontinuation of Grammarly’s Expert Review feature signals a shift toward more responsible AI integration. Future AI writing tools are likely to emphasize:
- Ethical AI development with user and expert consent.
- Enhanced transparency about AI-generated content origins.
- Greater user control over AI personalization and expert representation.
- Collaboration between AI developers, legal experts, and creators to establish fair standards.
These changes aim to build trust and ensure that AI enhances rather than undermines the value of human expertise.
Call To Action
Protect your business and reputation by ensuring your AI tools comply with privacy and intellectual property laws. Contact us today to develop ethical and legally sound AI solutions tailored to your needs.