The New York Times Drops Freelance Journalist Who Used AI to Write Book Review
- Freelance journalist Alex Preston was dropped by The New York Times after using AI to write a book review.
- The review contained unattributed language similar to a Guardian review of the same book.
- The incident highlights ethical challenges and risks of using AI tools in journalism.
- It raises questions about editorial standards and AI’s role in content creation.
The New York Times recently severed ties with freelance journalist Alex Preston following the discovery that he used artificial intelligence to assist in writing a book review. The review, published in January, contained language and descriptions strikingly similar to a Guardian review of the same book, “Watching Over Her” by Jean-Baptiste Andrea. This overlap was flagged by a reader, prompting an internal investigation by The New York Times.
Preston admitted to using an AI tool that incorporated material from the Guardian review into his draft, failing to identify and remove the overlapping content before submission. The incident has sparked widespread discussion about the ethical use of AI in journalism, editorial oversight, and the potential risks of relying on AI-generated content without proper verification and attribution.
What Happened with The New York Times and Alex Preston?
The New York Times published a book review in January 2026 authored by Alex Preston, a freelance journalist and author. The review covered “Watching Over Her” by Jean-Baptiste Andrea. Shortly after publication, a reader noticed significant similarities between Preston’s review and a prior review of the same book published by the Guardian in August 2025, written by Christobel Kent. These similarities included specific phrases, character descriptions, and the overall tone of the concluding assessment.
Upon receiving the alert, The New York Times launched an investigation. Preston admitted to using an AI writing tool to assist with the review. This AI tool had incorporated material from the Guardian review into Preston’s draft. Unfortunately, Preston did not catch or remove the AI-inserted content that closely mirrored the Guardian’s wording before submitting the piece.
How Did The New York Times Respond?
The New York Times publicly acknowledged the issue by adding an editor’s note to the online review. The note explained that the review contained language and details similar to the Guardian’s earlier piece and that Preston had used an AI tool which incorporated that material. The Times stated that this use of AI and unattributed content violated their editorial standards.
Following the investigation, The New York Times decided to end its relationship with Preston. A Times spokesperson confirmed that Preston would no longer contribute reviews to the paper. Preston had written six reviews for the Times between 2021 and 2026 but said he had not used AI in any other articles.
What Exactly Was Copied or Paraphrased?
Examples of overlapping language include the description of a character named Stefano as “lazy, Machiavellian,” a phrase that appeared almost identically in both reviews. The concluding assessments were also strikingly similar. The Guardian described the book as “a song of love to a country of contradictions, battered, war-torn, divided, misguided and miraculous,” while the Times’ version called it “a love song to a country of contradictions: battered, divided, misguided and miraculous.”
These similarities suggest the AI tool Preston used pulled directly from the Guardian’s review and integrated the text into his draft without proper attribution or modification.
What Does This Incident Reveal About AI Use in Journalism?
This case highlights several critical issues surrounding the integration of artificial intelligence in journalism. First, it exposes the risks of relying on AI tools without rigorous human oversight. AI can inadvertently replicate or paraphrase existing content, leading to plagiarism concerns and ethical breaches.
Second, it raises questions about editorial policies for freelance contributors and the use of AI-generated content. Many news organizations are still developing guidelines for AI use, and this incident underscores the need for clear standards to ensure originality and proper attribution.
Finally, it demonstrates the challenges journalists face in balancing efficiency gains from AI with maintaining integrity and trustworthiness in their work.
Who Is Alex Preston?
Alex Preston is a British author and journalist with a notable career writing for major publications including the Observer, Financial Times, Guardian, and Economist. He is the author of six books; his latest, “A Stranger in Corfu,” was published in February 2026. Preston also serves as head of advisory at the investment management firm Man Group, where he has written on AI topics such as “The AI Bubble: Hidden Risks and Opportunities.”
Despite his extensive experience, Preston acknowledged the mistake and expressed regret. In a statement, he said, “I made a serious mistake in using an AI tool on a draft review I had written, and I failed to identify and remove overlapping language from another review that the AI dropped in. I am hugely embarrassed by what happened and truly sorry.”
What Are the Broader Implications for the Publishing Industry?
The incident serves as a cautionary tale for publishers and freelance writers about the integration of AI in content creation. As AI tools become more prevalent, the publishing industry must adapt its editorial workflows and ethical guidelines to address new challenges, including:
- AI content verification to detect plagiarism or unintentional duplication.
- Clear policies on the acceptable use of AI in writing and research processes.
- Training for journalists and editors on how to responsibly use AI tools.
- Transparency with readers about the role of AI in content production.
Failure to address these issues could damage trust in media outlets and undermine the credibility of freelance contributors.
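As an illustration of what basic duplication screening might look like, the sketch below flags overlapping word n-grams between two passages, using the two concluding sentences quoted earlier in this article. This is a hypothetical, minimal example in plain Python; production plagiarism-detection systems are far more sophisticated (paraphrase models, stemming, large reference corpora), and the function names here are ours, not from any real editorial tool.

```python
# Minimal sketch of n-gram overlap screening for editorial workflows.
# Assumption: a naive lexical check only; real tools also catch paraphrase.
import re


def ngrams(text: str, n: int = 3) -> set:
    """Lowercase word n-grams of a text, punctuation stripped."""
    words = re.findall(r"[a-z']+", text.lower())
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}


def overlap_score(candidate: str, reference: str, n: int = 3) -> float:
    """Fraction of the candidate's n-grams that also appear in the reference."""
    grams_c, grams_r = ngrams(candidate, n), ngrams(reference, n)
    if not grams_c:
        return 0.0
    return len(grams_c & grams_r) / len(grams_c)


guardian = ("a song of love to a country of contradictions, battered, "
            "war-torn, divided, misguided and miraculous")
times = ("a love song to a country of contradictions: battered, "
         "divided, misguided and miraculous")

score = overlap_score(times, guardian)
print(f"3-gram overlap: {score:.0%}")  # a high score would warrant human review
```

Even this crude check scores the two quoted sentences as sharing over half their three-word phrases, which is the kind of signal that could prompt an editor to look closer before publication.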
How Can Journalists Use AI Responsibly?
To harness the benefits of AI while maintaining journalistic integrity, writers should:
- Use AI tools as assistants rather than primary content creators.
- Thoroughly review and fact-check AI-generated text to ensure originality.
- Attribute sources properly and avoid copying existing content.
- Disclose AI usage to editors and, where appropriate, to readers.
Implementing these best practices can help journalists improve productivity without compromising ethical standards.
What Does This Mean for Readers and Media Consumers?
For readers, this incident is a reminder to critically evaluate the sources and originality of published content, especially as AI-generated writing becomes more common. Media consumers should expect transparency from publishers regarding the use of AI and trust that editorial teams uphold rigorous standards.
Ultimately, maintaining trust in journalism requires a collaborative effort between writers, editors, publishers, and readers to navigate the evolving landscape of AI-assisted content creation.
Summary of Key Takeaways
- The New York Times dropped Alex Preston after AI-assisted plagiarism was discovered in a book review.
- The AI tool used incorporated unattributed material from a Guardian review.
- The case highlights ethical, editorial, and operational challenges of AI in journalism.
- Responsible AI use requires transparency, attribution, and rigorous human oversight.
- Publishers must develop clear policies and verification methods to maintain content integrity.
Ensure your organization adopts clear guidelines and robust editorial processes for AI-assisted content to maintain trust and quality in the evolving digital media landscape.

