Grammarly Defends Using Authors’ Identities Without Permission, Offers Opt-Out
Key Facts
- What: Grammarly’s “Expert Review” feature uses real authors’ names to lend credibility to AI-generated writing suggestions without obtaining prior permission.
- Response: Grammarly's parent company, Superhuman, is neither apologizing nor removing the feature; instead, it will allow affected experts to opt out by emailing expertoptout@superhuman.com.
- When: The backlash emerged in early March 2026 after Wired and The Verge reported the unauthorized use; Grammarly issued its statement on March 10, 2026.
- Company Stance: Superhuman claims the feature surfaces “influential perspectives and scholarship” from publicly available works and does not claim direct endorsement.
- Criticism: Journalists and observers argue an opt-out model is insufficient because most authors will never discover their names are being used.
Lead paragraph
Grammarly is continuing to use the names and identities of real authors and journalists to promote its AI writing suggestions without first obtaining their permission, the company confirmed on March 10, 2026. Instead of pulling the controversial “Expert Review” feature or issuing an apology, its parent company Superhuman is offering an opt-out process that requires experts to email expertoptout@superhuman.com. The move has drawn sharp criticism from affected writers at The Verge and elsewhere, who say the practice misleads users and exploits personal reputations without consent.
The Feature and How It Works
Grammarly’s Expert Review is designed to give users more authoritative-sounding feedback on their writing. When the feature is enabled, the AI generates comments and suggestions that appear under the names of well-known writers, editors, and subject-matter experts. According to reporting by The Verge and Wired, these simulated reviews have appeared under the names of The Verge editor-in-chief Nilay Patel, editor-at-large David Pierce, senior editors Sean Hollister and Tom Warren, and many other prominent authors.
The company maintains that the suggestions are merely “inspired by” the experts’ publicly available works and scholarship. In a statement provided to The Verge and Platformer, Alex Gay, Vice President of Product & Corporate Marketing at Superhuman, said: “The agent was designed to help users discover influential perspectives and scholarship that add value to their work. We want the people behind those perspectives to have greater control over whether their name is used, while providing new ways for influential voices to reach new audiences.”
Notably, the statement contains no mention of seeking permission before using someone’s name. When asked whether Superhuman considered notifying the named individuals or requesting consent, Gay reportedly replied that the experts appear because “their published works are publicly available and widely cited.”
Backlash and Reporting
The issue first gained widespread attention after Wired reported that Grammarly was using famous authors’ identities. The Verge then tested the feature internally and discovered that its own staff members had been turned into unwitting AI “editors.” Sean Hollister, a senior editor at The Verge, wrote that he, his boss, and several colleagues had their real names attached to AI-generated comments without any prior knowledge or approval.
Critics argue the practice raises serious trust and ethical concerns. Grammarly is a widely used writing assistance tool with access to millions of users' private documents. By associating AI output with real human names, the feature potentially lends undue credibility to machine-generated advice: users may believe they are receiving direct insight from named experts rather than algorithmically produced text.
Additional coverage from SFist and TechBuzz highlighted that even after users opt out, Grammarly may continue to use certain data, including usage statistics. The reports suggest the company’s response represents a minimal concession rather than a meaningful policy shift.
Company Response and Limitations of Opt-Out
In response to the criticism, Superhuman told Platformer’s Casey Newton that experts can opt out by emailing expertoptout@superhuman.com. A spokesperson, Jen Dakin, later told The Verge that the company is “working on further refining the feature in addition to the opt-out option.”
Hollister and other journalists have called the email-based opt-out inadequate. They point out that most authors will never learn their names are being used unless they actively test Grammarly’s product or happen to see media coverage. “How would we have known our names were being appropriated unless we tried the product ourselves?” Hollister asked. “Shouldn’t people deserve to have their names protected even if they’ve never heard of Grammarly?”
The Verge article notes that requiring individuals to proactively protect their own identities places an unfair burden on authors, especially those who do not use the service or move in circles where it is commonly discussed.
Impact on Developers, Users, and the Industry
For users of Grammarly, the revelation may undermine confidence in the tool’s transparency. Many rely on the service for professional writing, academic work, and business communication. If the AI’s suggestions are presented as coming from named experts when they are not, it could mislead users about the quality and origin of the advice.
The controversy also highlights broader tensions in the AI industry around the use of personal identity, likeness, and reputation. As generative AI tools increasingly mimic or reference real people, questions about consent, attribution, and intellectual property are becoming more urgent. Grammarly’s decision to default to inclusion and require opt-out mirrors approaches taken by some other AI companies, but it has drawn particular scrutiny because the company has long positioned itself as a helpful, trustworthy writing assistant.
For developers and AI companies, the episode serves as a case study in reputational risk. Attaching real names to AI output without clear disclosure or consent can trigger significant backlash, even when the underlying data is publicly available. It also raises potential legal questions around right of publicity and false endorsement, although no lawsuits have been reported as of this writing.
What’s Next
Superhuman says it intends to improve the Expert Review feature to give experts “greater control” over their names. However, the company has not committed to a permission-based model or to notifying individuals before using their identities. Further refinements to the feature are expected, but no specific timeline or technical details have been shared.
The incident is likely to fuel ongoing debates about ethical AI development, particularly regarding the commercialization of personal reputation. Journalists and creators may become more vigilant about how their bylines and expertise are used by AI services. Advocacy for clearer consent standards and “opt-in” frameworks could gain momentum in response to cases like this.
For now, affected authors can attempt to remove their names by emailing expertoptout@superhuman.com, but many in the industry view this as an incomplete solution to a deeper problem of transparency and respect for individual identity in AI products.
Sources
- Grammarly is using our identities without permission | The Verge
- Grammarly’s New AI Tools Use Experts’ Identities Without Their Permission | SFist
- Grammarly Caught Using Real Identities Without Consent | TechBuzz
- Platformer (via The Verge reporting)