Grammarly Defends Using Authors’ Names in AI Feature Without Permission
Key Facts
- Grammarly’s “Expert Review” feature uses real names of journalists and authors to lend credibility to AI-generated writing suggestions without obtaining prior permission.
- The company will allow affected individuals to opt out but has not removed the feature or issued an apology.
- Grammarly parent company Superhuman’s VP Alex Gay stated the feature does not claim endorsement and only highlights publicly available, widely cited works.
- Verge staff including Nilay Patel, David Pierce, Tom Warren, and Stevie Bonifield discovered their names appearing in the tool.
- Even after opting out, Grammarly may continue using certain data such as usage statistics.
Lead paragraph
Grammarly is proceeding with a new AI “Expert Review” feature that invokes the names of prominent writers and journalists to present writing suggestions, despite not securing their permission beforehand. The company, now under parent Superhuman, responded to widespread backlash by offering an opt-out mechanism rather than discontinuing the practice. Critics argue the feature misappropriates personal identities to add unearned authority to AI output, raising fresh questions about consent in the rapidly expanding generative AI writing tools market.
Discovery of unauthorized name usage
The controversy surfaced when Verge reporter Stevie Bonifield tested Grammarly’s Expert Review and received AI-generated comments attributed to her editor-in-chief Nilay Patel. Further checks revealed the system also invoked the names of Verge colleagues David Pierce, Tom Warren, and others. The issue was first reported by Wired, which identified numerous high-profile authors whose names were similarly used without consent.
According to The Verge’s reporting, the feature presents feedback “inspired by” subject matter experts. However, the AI suggestions appear under the real names of these individuals, creating the impression of direct involvement or endorsement that does not exist. Verge staff confirmed none of them had been contacted by Grammarly or Superhuman prior to the feature’s launch.
Grammarly’s response and defense
In a statement to The Verge, Alex Gay, vice president of product and corporate marketing at Superhuman, defended the approach. “The Expert Review agent doesn’t claim endorsement or direct participation from those experts; it provides suggestions inspired by works of experts and points users toward influential voices whose scholarship they can then explore more deeply,” Gay said.
When asked whether the company considered notifying the named individuals or seeking permission, Gay replied that “the experts in Expert Review appear because their published works are publicly available and widely cited.”
Rather than retracting the feature, Grammarly has introduced an opt-out option for those who object to their names being used. However, reporting from The Tech Buzz indicates that even after opting out, Grammarly may still utilize certain data, including usage statistics. The company has not publicly detailed the full scope of what data remains accessible post-opt-out.
Broader implications for consent and AI training
The incident highlights growing tension between AI developers’ desire to leverage publicly available information and individuals’ rights over their name, likeness, and professional reputation. While the works cited may be public, using a person’s real name to lend authority to AI suggestions raises distinct legal and ethical questions around identity rights and implied endorsement.
Industry observers note that many generative AI tools already scrape vast amounts of online content for training. This case stands out because Grammarly’s product has direct access to users’ private documents through its browser extension and desktop applications, amplifying concerns about how personal data and third-party identities intersect within the same platform.
The Verge article and subsequent coverage, including on forums like ResetEra, have sparked discussion about whether such practices should face stricter regulation. Some users expressed discomfort that their writing could be evaluated by an AI system claiming inspiration from real people who never agreed to participate.
Competitive landscape and Grammarly’s position
Grammarly has long positioned itself as a helpful writing assistant used by millions of professionals and students. Its integration of generative AI features reflects broader industry trends, with competitors like Microsoft Editor, Google’s various writing tools, and newer AI-native platforms racing to add more sophisticated feedback capabilities.
Superhuman’s acquisition of Grammarly signaled ambitions to expand the product’s AI capabilities. However, the Expert Review rollout demonstrates the friction that can occur when scaling personalized AI features that touch upon real human identities.
Critics argue the company should have proactively reached out to prominent experts whose names carry significant weight in their fields, particularly journalists whose professional credibility is central to their careers. The decision to implement first and offer opt-out later follows a pattern seen in other AI controversies where companies prioritize rapid deployment over prior consent.
Impact on writers, developers, and users
For writers and journalists, the development underscores a new vulnerability: published work can be used not just to train models but to attach their personal brand to AI output in ways that may not align with their views or standards. This could dilute an author’s voice or associate them with AI-generated content they disagree with.
Developers building similar features now face clearer expectations around transparency and consent. The backlash may encourage more companies to implement pre-launch review processes involving named individuals, especially when those names appear directly in user-facing output.
For everyday Grammarly users, the feature promised more authoritative writing suggestions. Yet the revelation that these suggestions derive credibility from unconsulted experts may undermine trust in the product. Some users have questioned whether the AI feedback holds genuine value if it relies on this approach.
What’s next
Grammarly has not announced further changes to the Expert Review feature beyond the opt-out mechanism. It remains unclear how many individuals have exercised the opt-out or whether the company plans to expand the pool of “experts” featured.
The episode may prompt closer scrutiny from regulators and privacy advocates regarding the use of personal names and identities in AI products. Legal experts suggest future challenges could center on rights of publicity, false endorsement, or deceptive trade practices, though no lawsuits have been reported as of the latest coverage.
As AI writing tools become more ubiquitous, the balance between leveraging public knowledge and respecting individual consent will likely remain a central point of contention. Grammarly’s stance that the public availability of a work justifies using its author’s name without permission is likely to be tested in the court of public opinion and potentially in policy discussions.
Writers who wish to opt out are encouraged to review Grammarly’s support documentation for the latest instructions, though the process and its limitations continue to draw criticism.