What The Sports Illustrated Scandal Says About The Reputational Risk Of Using AI Content


You don’t have to search far to find a news story about AI, and one we’ve come across is the scandal engulfing the American publication Sports Illustrated over its undisclosed use of AI content in its magazine and on its digital platforms. The storm broke in late November 2023, when a technology blogger reported finding headshots of apparently human Sports Illustrated journalists on a website of AI-generated images.


Further investigation revealed that several of the images and author biographies on the Sports Illustrated website were allegedly created by AI, alongside a large quantity of AI-fabricated content. In the latest development, Ross Levinsohn, CEO of the Arena Group, which owns Sports Illustrated, has been forced to stand down over the scandal, despite apparently having had nothing personally to do with the AI issue and probably being unaware of it.

The ‘robotgate’ incident has been hugely embarrassing for one of the USA’s leading lifestyle and entertainment magazines, and without claiming to be psychic, we would by no means be surprised if it turns out to be the tip of an iceberg of similar revelations in the coming year.

Closer to home, then, what lessons does this cringe-making incident hold for UK businesses, many of which have embraced AI-generated content over the past year?

Firstly, it’s a useful reminder that artificial intelligence is in a different category to other types of software. It’s a contentious issue at the moment and, as such, should be handled with kid gloves, if at all. People won’t necessarily want to see AI content on your blog or website and may not be happy if they do find it there. Sports Illustrated has landed in deep water because it’s such a public-facing business, but even small businesses should be aware of the potential reputational risk of the software.

Secondly, the incident has added fuel and renewed impetus to the anti-AI lobby, a growing voice across the EU with sympathisers here in the UK. At the same time as the Arena Group was tripping over its words trying to explain away the presence of fake AI journalists, the EU was hammering out a political agreement on its long-awaited Artificial Intelligence Act, the world’s first attempted regulation of AI technology.

The legislation is likely to become law across the EU next year and will enforce greater transparency on businesses that use AI chatbots, ‘deepfake’ images, and ‘high-risk’ AI systems (it is not yet clear whether generative AI tools such as ChatGPT will be covered by the regulations). British businesses that trade within the EU should pay close attention to the progress of these laws, and similar regulations are likely to follow in the UK, the USA, and internationally in the near term.

Does this mean I shouldn’t use AI applications?

Artificial intelligence is a very wide field of software development with many positive uses, so although it’s sensible to go in with your eyes open before adopting any new technology, there’s no reason to reject AI altogether. Let’s be clear: the Sports Illustrated scandal concerns the use of generative AI, not AI in general, and the real problem is that the publication wasn’t transparent; it was allegedly trying to pass off software-generated content under fake names as the work of genuine writers. This isn’t okay. It is the alleged dishonesty, rather than the use of AI itself, that has caused Sports Illustrated such reputational damage.

The episode has also highlighted growing public scepticism and concern about AI that businesses should be aware of. In late 2022, the Alan Turing Institute undertook a survey exploring British public attitudes towards AI technologies and found respondents broadly positive about its potential benefits. Fast forward twelve months, and a new study of public attitudes to AI, published by the UK government’s Centre for Data Ethics and Innovation in December 2023, revealed that while general optimism remained, there was increased concern about the potential of AI to undermine human creativity and fairness, as well as about its data security risks.

In this context, it is worth carefully considering your customers’ likely responses to and attitudes towards AI and AI-generated content (which could vary from sector to sector and even within your buyer personas) before adopting this technology in your digital marketing or any other aspect of your business.

