
Authenticity In The AI Content Era Will Not Come Cheap

Journalism brands can learn something from recent reports of Sports Illustrated (SI) running “product review” articles by synthetic authors. Futurism’s piece identified several fake, non-existent authors, including “Drew Ortiz” and “Sora Tanaka”, and traced their profile pictures to an AI headshot marketplace, where they sat in plain sight.

The pages SI deleted remain available for review, thanks to the Internet Archive’s Wayback Machine. The prose has the stilted, synthetic feel that generative AI text often exhibits. But that was the lesser problem here. The real exposé was the non-transparent use of fake authors to get readers to click on “Drew Ortiz’s” volleyball reviews and the like. That authorship claim is where journalistic ethics failed the test.

As it turned out, The Arena Group, SI’s publisher, removed the content when Futurism reached out for comment on the story. “After we reached out with questions to the magazine’s publisher, The Arena Group, all the AI-generated authors disappeared from Sports Illustrated’s site without explanation,” wrote Maggie Harrison in her piece.

A Clarification That Justified The Exposé

But it was SI’s hastily issued clarification on its X (formerly Twitter) account that not only muddied the waters further but also underscored how the generative AI era is driving publishing and disclosure standards to an altogether new low. “The articles in question were product reviews and were licensed content from an external, third-party company, AdVon Commerce,” said a spokesperson for Arena.

Let’s stop right there. Completely missing from that statement is any recognition that people expect product reviews to be written by human beings who have actually assessed the product. The act of “reviewing” has a critical aspect to it; it is undertaken on behalf of others. The writing may fall to a journalist, a critic, or a knowledgeable professional in the field.

The real problem was the cloaking of the content as legitimate product reviews by legitimate-looking humans. The author pages in question used the title “Product Reviews Team Member”. This is how journalistic bylines usually work: you tell the reader who the critic, reviewer, or journalist is, with a brief biography. Except here the authors were not real. The Arena Group, publisher of Sports Illustrated, had tried to humanize third-party AI content and fake, synthetic authors with a journalistic veneer.

The second giveaway was this line in SI’s explanation: “However, we have learned that AdVon had writers use a pen or pseudo name in certain articles to protect author privacy – actions we strongly condemn – and we are removing the content while our internal investigation continues and have since ended the partnership.” That comment alone generated a backlash on X.

The idea that product reviews derive their credibility from having an actual human author seems to have been eviscerated by Arena’s novel argument. Its explanation on X appeared to suggest that the “pen name” practice somehow extends to visiting AI headshot sites and creating avatars: conjure up an imaginary “Drew Ortiz” who spends his time “camping, hiking, or just back on his parents’ farm”, or the “joyful” Sora Tanaka, who “loves to try different foods and drinks”. Behold, “pen names” have gone modern, with an AI touch.

The Lessons

Paid content relationships between publishers and contractors are nothing new. Before the generative AI era, the ethical principle was to make a clear disclosure to the reader or viewer that a piece was an advertorial or an advertiser feature. Authoring credits were usually not in question. Placement strategies pursued by publishers in the interest of sales and commissions are neither inherently illegal nor even unethical, and Arena did disclose, in an italicized disclaimer, that the arrangement was with a third party for a content package.

What has changed now is that publishers are trying to find ways to dress up synthetic content funnels with journalistic authenticity.

One key lesson is that authenticity is as valuable to readers as it is to the publisher’s journalistic brand.

When anything presumed to have value to news readers and consumers is communicated in the public square, its authenticity should matter. Generative AI technology is driving down the cost of inauthenticity: the marginal cost of generating such content, amortized over the hundreds of thousands of clicks it funnels, is close to nothing. Used carelessly, though, it risks real harm to the publisher’s brand.

The good news is that, in the process, the value of human authenticity and oversight is only being reaffirmed. Journalistic product reviews just happen to be the unethical AI use case here. The cost of authenticity is the price publishers pay to justify the trust real consumers place in product reviews. Take consumer-written reviews. What would happen tomorrow if you and I found out that the product or restaurant reviews we were reading on Amazon, Yelp, or Google were machine-posted by human-like avatars to simulate fellow consumers’ perspectives? The “experiences” underlying those posts would not even be real. This is the kind of trickery that could destroy community-generated trust overnight.

As old-fashioned as this may seem, journalistic watchdogging does matter. It took a human journalism team to break the story about the fake profiles of “Drew Ortiz”, “Sora Tanaka” and their synthetic headshots.
