Cyber-Authorship: AI vs. Human Submissions

First-time letter contributors to The San Diego Union-Tribune are often taken aback when they receive a verification phone call from an editor. Some react with defensiveness or suspicion; others worry they have made a mistake. By the conversation's end, though, most are reassured by the human connection on both sides. We place that call to respect privacy and to establish that the letters are genuine: submitted by real people, addressing concerns that matter to them and their communities.

In a rapidly evolving world dominated by automation and anonymity, the letter to the editor remains a better form of audience engagement than most interactions on news or social media platforms. Those who send in letters can be confident that their messages are read by several editors before they go to press and are published online. The editors' names appear alongside those of the letter writers, further solidifying the publication's credibility.

Until recently, having a letter published required only a name, a community, and a contact phone number for verification. Content was expected to be accurate, fair, and civil, and to adhere to the publication's policies. Whether the work was actually the author's own was rarely questioned; most people understand that authenticity is a basic requirement for reputable publications.

There have been instances, however, when this fundamental principle has been breached. Batches of near-identical submissions point to an organized letter-writing campaign in which participants copy the original message and affix their names as its authors. In those cases, we ask the writers to restate and resubmit their messages in their own words.

Determining the uniqueness of standalone letters is not always straightforward, but we have tricks for identifying borrowed content being passed off as original work. In such cases, we either collaborate with the author to rectify the issue or opt for another letter on the same topic that does not present similar concerns.
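For illustration only, here is one way a near-duplicate check could work in software. This is a minimal, hypothetical sketch, not a description of our actual process; the similarity threshold, function names, and sample letters are assumptions made up for the example.

```python
from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    """Return a rough 0-to-1 similarity score between two letters."""
    # Normalize case and whitespace so trivial edits don't hide copying.
    normalize = lambda text: " ".join(text.lower().split())
    return SequenceMatcher(None, normalize(a), normalize(b)).ratio()

def flag_near_duplicates(letters, threshold=0.85):
    """Compare every pair of submissions and report pairs above the threshold."""
    flagged = []
    names = list(letters)
    for i, first in enumerate(names):
        for second in names[i + 1:]:
            score = similarity(letters[first], letters[second])
            if score >= threshold:
                flagged.append((first, second, round(score, 2)))
    return flagged

# Hypothetical submissions: the first two share most of their wording.
submissions = {
    "reader_a": "The council must fund the new park. Our children deserve green space.",
    "reader_b": "The council must fund the new park, because our children deserve green space.",
    "reader_c": "Bike lanes on Fifth Avenue would make commuting safer for everyone.",
}
print(flag_near_duplicates(submissions))
```

A check like this only surfaces candidates; a human editor still makes the final call.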

With the advent of advanced technology, the administration of this open, honest forum has become more complicated. The rise of artificial intelligence (AI) software and its widespread accessibility raise questions about the authenticity of written submissions. It is entirely possible that the words we receive have been crafted in part or entirely by a computer program instead of the individual who claims authorship.

It is not uncommon for busy executives and politicians to employ public relations or communications personnel to translate their thoughts and talking points into coherent speeches, commentaries, or books. Now, AI programs like ChatGPT offer similar assistance to the general public, enabling even novices to produce professionally written content from a few basic parameters.

When we engaged with ChatGPT to discuss the implications of such technology in a recent Your Say essay, we were struck by its clear and concise framing of the issues. The AI-generated response was published verbatim, sparking curiosity and a twinge of envy in our editorial team.

This leads us to ask whether AI-generated content should cause concern in the realm of commentary and letters to the editor. Is it fair to let AI-written content compete with human authors? Would it constitute cheating, much as many teachers regard students using AI to complete assignments?

Few take issue with using internet search engines for research, or apps for spelling and grammar. Is employing a program to craft persuasive arguments that much of a leap? Might this become the new normal?

We may soon find ourselves in a situation where the only way to identify AI-generated content is through another AI program. This notion raises a whole new set of quandaries for the editorial team.

For now, our approach is to rely on the honor system upheld by our writers. That may not be sustainable, however, as technology continues to advance and to reshape the way people communicate.

As futurists, we understand the power of AI and its potential to reshape various sectors, including journalism. Addressing these ethical concerns and the implications of AI-generated content requires amending our current strategies and adapting to the fast-paced technological landscape.

We encourage our readers to voice their opinions on this matter. Is there a better way to tackle the challenges posed by AI technology in the arena of letters to the editor? We’re all ears for intelligent solutions that factor in our rapidly evolving world.

As society moves deeper into the cyberpunk realm, we must keep our discussions genuine, truthful, and human. The quest for a solution that safeguards the integrity of our letters to the editor while embracing new technology is more critical now than ever.