AI and Ethics in Journalism: The Sports Illustrated Scandal

In summary: the Sports Illustrated scandal highlights growing concern over the use of artificial intelligence (AI) in journalism and its ethical implications. The controversy stemmed from the publication of AI-generated sports articles under fabricated author profiles, raising questions about the role of human journalists and the potential for inaccurate reporting. The case underscores the need for responsible and transparent use of AI in the media industry.
  • #1
russ_watters
TL;DR Summary
If you can't tell it's fake, does it even matter?
Interesting article about an AI writing scandal at Sports Illustrated:
https://www.cnn.com/2023/11/29/opinions/sports-illustrated-ai-controversy-leitch/index.html

I hadn't heard about it in real time, which is probably indicative of how far SI has fallen*. In short, the article discusses how SI was caught using AI and, worse, fake reporter photos and profiles to write game summaries. Game summaries are the short articles that recap last night's Phillies game. They are so formulaic that AI can write them without people noticing. I think that's fine except, ya know, the lying.

This is juxtaposed against an NFL sideline reporter who made up reports when coaches wouldn't talk to her (and she wasn't allowed to report on what she overheard). She got away with it for years because even when the coaches do talk they always use the same bland platitudes. Here in Philly it was a running joke how Andy Reid (now in KC) always says "we gotta do a better job" in response to any question about negative performance:

[Video unavailable: the owner has disabled embedding.]

So I get it. I don't like being lied to, but I sympathize with a BS job you're just trying to do while you can. Of course, I don't sympathize enough to be very sad for them when that job disappears because it's so pointless that AI can do it just as well and cheaper. Maybe I will when the jobs are more... human.

*I don't subscribe anymore, but at least I don't have to worry if there's AI content when I buy the Swimsuit Edition in an airport bookstore...yet.

Disclaimer 1: I'm not sure if this thread is about AI, journalism, ethics or economics.
Disclaimer 2: This isn't really AI, but I can't relitigate that point every time the term is misused, which is basically all the time right now.
 
  • #2
There was some buzz about Journatic, a company that used computer-generated content and overseas writers to produce articles for local newspapers.

They got into hot water when they started adding bylines with the names of reporters who didn't write the articles. The practice became known as pink-slime journalism.

https://en.wikipedia.org/wiki/Pink-slime_journalism?wprov=sfti1

So an AI doing it could cause similar ripples.
 
  • #3
russ_watters said:
TL;DR Summary: If you can't tell it's fake, does it even matter?

Disclaimer 1: I'm not sure if this thread is about AI, journalism, ethics or economics.
AI - a misused term, coming from the AI community itself.
Journalism - journalism schools seem to have been sideswiped. Perhaps a Society of Professional Journalists could solve the problem.
Ethics - unethical behavior from the "AI" community in promoting their products. (Just wondering if the AI industry should be held to a standard as high as, say, the pharmaceutical or aviation industries before "release".)
Economics - the publishing industry is hurting, so they will try whatever might work, including substandard AI releases. Then they have to backpedal with egg on their faces when trouble hits. I would think the "AI" people should be the ones red-faced, but they seem to control the messaging and face little scrutiny in their marketing.
 
  • #4
jedishrfu said:
There was some buzz about Journatic, a company that used computer-generated content and overseas writers to produce articles for local newspapers.

They got into hot water when they started adding bylines with the names of reporters who didn't write the articles. The practice became known as pink-slime journalism.

https://en.wikipedia.org/wiki/Pink-slime_journalism?wprov=sfti1

So an AI doing it could cause similar ripples.
The lie matters, sure, but I think the people lying about it do it because they know that the identity of the human reporters matters. I think it's about humans knowing humans are humans.

Hollywood doesn't really need actors anymore to make movies. So why do they use them? Because people know the identity of the actors and want to see those actors in movies. Real Tom Cruise has been a bankable movie star for almost 40 years and people want to see him in movies. So even if Digital Generic Action Hero is taller and better looking than Real Tom Cruise, it doesn't matter. I went to see 40 Years Later Top Gun in large part because Real Tom Cruise was the star. If the studio had decided Real Tom Cruise was too old and substituted Digital Generic Action Hero Son of Tom Cruise nobody would have bothered to go see it.

The risk in media, therefore, is if people stop caring about the human involvement. If I don't recognize the name on the byline of a news article, does it matter whether that's a person or an AI bot? This issue doesn't just apply to print media; it's already starting in video. I was watching my favorite aviation vlogger today. I'm reasonably certain he's real. But he was critiquing another aviation vlogger's video that was clearly AI-generated (really badly) and didn't seem to know it. That fake vlogger video had something like a million views. But someone who doesn't know anything about aviation, watching a video made by someone (or some AI) that also doesn't know anything about aviation, with poor writing and speech synthesis on top, isn't going to be able to identify it.
 
  • #5
256bits said:
(Just wondering if the AI industry should be held to a standard as high as, say, the pharmaceutical or aviation industries before "release".)
Probably. But even in heavily regulated industries such as cars, the regulation of new capabilities has lagged.
256bits said:
Economics - the publishing industry is hurting, so they will try whatever might work, including substandard AI releases. Then they have to backpedal with egg on their faces when trouble hits.
Sure. And linked to your prior point about ethics, they are sacrificing their ethics for the money. I think they don't realize that even if ethics doesn't matter to them, it matters to the consumer. Or maybe they do, but hope they don't get caught in the lie.
256bits said:
I would think the "AI" people should be the ones red-faced, but they seem to control the messaging and face little scrutiny in their marketing.
I really don't blame OpenAI and the others for this. They do what works to market their product. Heck, I blame it more on the public, since we're the ones falling for the salesmanship. OpenAI's website is an awful corporate-marketing caricature, like an evil company from an '80s/'90s movie. It's Initrode. But wow, do people buy into the schtick. So why not keep doing it?
 
  • #6
russ_watters said:
Hollywood doesn't really need actors anymore to make movies. So why do they use them? Because people know the identity of the actors and want to see those actors in movies. Real Tom Cruise has been a bankable movie star for almost 40 years and people want to see him in movies.

This doesn't need to be an actor, though. "Pixar" was a guarantee of quality for a while, and that kept guaranteeing profits for a while after the actual quality started slipping. It could be that an actor has enough pull that a) they can influence things beyond acting, like the script and story, and b) they know how to use that influence to improve quality. That could result in the audience recognizing their name as an indication of quality.
 
  • #7
russ_watters said:
I don't have to worry if there's AI content when I buy the Swimsuit Edition in an airport bookstore...yet.
How do you know? And is AIbrushing better or worse than airbrushing?
 
  • #8
Algr said:
This doesn't need to be an actor though. "Pixar"
Yeah, I was mulling that over after I posted it. It's a tough one. It's definitely true that, for the most part, the actors in cartoons don't exist or don't matter. I think cartoons are different, but it's tough to put my finger on why. And Pixar's formula is somehow unique in that, and I don't know why either. Maybe the uniqueness of the technology helped? Some thoughts that may or may not go anywhere:
  1. Some cartoon characters do become iconic: Mickey > Tom Cruise
  2. Pixar doesn't really create iconic characters, but in the beginning it didn't seem to matter.
  3. Nobody cares who the voice actors are in cartoons, for the most part. Nobody knows who voices most of them. Woody? Tom who?
  4. But, Shrek; Iconic. And he had to be Mike Myers. To a lesser extent, The Simpsons.
Can you replace the entire staff and contractors for Pixar with a computer program? Maybe. Cartoons are fake to begin with though. There's no lie. But I don't think you can replace human celebrities with AI look-alikes. I don't think people will accept that. I think that's the problem with the story in the OP.
 
  • #9
Vanadium 50 said:
How do you know? And is AIbrushing better or worse than airbrushing?
I was talking about the articles.
[edit] Eh, maybe I wasn't. I don't even remember anymore. Point taken though. There's probably a threshold, but I don't know what it is.
 
  • #10
Not sure this is the right thread for this comment, which concerns the risk of AI generated content in the public domain, perhaps a version of journalism.

I searched today for a current update on wildfire management in wilderness areas in my state, and was puzzled that the most recent update was dated Nov. 6, 2024, since that date does not occur until next month. It turned out the update I read had been summarized by AI from an official government source, but the AI read the date 6-11-2024 in the European manner, i.e., as Nov. 6, instead of June 11. I think this sort of thing could easily be dangerous to travelers.
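The failure mode described above is easy to reproduce. A minimal Python sketch (the date string is from the post; the two `strptime` format patterns are standard illustrations, not anything the government site actually uses):

```python
from datetime import datetime

date_string = "6-11-2024"  # ambiguous: month-day-year or day-month-year?

# US convention: month-day-year -> June 11, 2024
us_reading = datetime.strptime(date_string, "%m-%d-%Y")

# European convention: day-month-year -> November 6, 2024
eu_reading = datetime.strptime(date_string, "%d-%m-%Y")

print(us_reading.strftime("%B %d, %Y"))  # June 11, 2024
print(eu_reading.strftime("%B %d, %Y"))  # November 06, 2024
```

Unambiguous formats like ISO 8601 (2024-06-11) avoid the problem entirely, which is one reason machine-read sources should prefer them.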

Of course, we are used to being consciously misled by bad human actors online, but this sort of AI misinformation is beginning to be embraced by, or disguised as coming from, trusted sources like local governments, making it less obviously suspect. I personally dismiss any information labeled as AI-generated as unreliable, but the summary I found today was mislabeled as coming directly from the local government source. Just another hazard, but one with the potential to render virtually all online information unreliable, at least in my opinion.
 
  • #11
This whole thing reminds me of the decay of HowStuffWorks.com. It used to be a fun site for finding information (never terribly accurate, but OK); now they rehash old articles using LLMs, and it's so obvious that they now declare it in every article.
 
  • #12
Ronald Reagan used to simulate live radio broadcasts of Cubs games for Des Moines, IA, from the news-wire feed. Is this nefarious (other than his occasional loss of contact with reality suffered as president)? It is an entertainment medium, after all. Just a little more suspension of disbelief is required.
 
  • #13
What makes you think they care about the truth? If you buy the soap they advertise and vote the right way, haven't they done their job? Accuracy is only useful insofar as it leads to repeat clicks.
 

FAQ: AI and Ethics in Journalism: The Sports Illustrated Scandal

What is the Sports Illustrated scandal in relation to AI and ethics in journalism?

The Sports Illustrated scandal involves the publication of AI-generated articles under fake author names and headshots, without disclosure to readers. The content was supplied by a third-party vendor, leading to concerns about transparency and ethical journalism practices.

How does AI impact the ethics of journalism in the Sports Illustrated scandal?

AI can impact the ethics of journalism by potentially misleading readers into believing that human journalists wrote the content. Lack of transparency in disclosing the use of AI-generated content can undermine trust between journalists and their audience.

What are the ethical considerations when using AI in journalism, as seen in the Sports Illustrated scandal?

Some ethical considerations when using AI in journalism include ensuring transparency about the use of AI-generated content, maintaining editorial control over the final output, and upholding the integrity and credibility of journalistic practices.

How can journalists and media organizations address ethical concerns related to AI in journalism, following the Sports Illustrated scandal?

Journalists and media organizations can address ethical concerns by being transparent about the use of AI-generated content, providing clear attribution to AI systems, maintaining editorial oversight, and upholding journalistic standards and integrity.

What are the potential implications of the Sports Illustrated scandal on the future of AI and ethics in journalism?

The Sports Illustrated scandal highlights the need for greater transparency and ethical considerations when using AI in journalism. It could lead to increased scrutiny and regulations around the use of AI-generated content in journalism to ensure trust and credibility with readers.
