Does LinkedIn Have a Generative AI Problem?

A recent investigation found a lot of AI-generated content on the platform

Are machines writing more of the internet than you'd expect? (Nikolas Kokovlis/NurPhoto via Getty Images)

As generative AI becomes increasingly widespread, it can be difficult to figure out whether something you’ve seen online originated from the mind of a human being or from an AI system responding to a prompt. With images, at least, there are telltale signs you can look for. When it comes to text, though, things get more complicated — and a new investigation just revealed that one popular social network may have a significant number of posts created via algorithm.

The social network in question is LinkedIn. As Kate Knibbs at WIRED reports, the number of generative AI posts on the platform rose dramatically after ChatGPT gained popularity in 2023. The investigation was conducted by the company Originality AI, which analyzed over 8,500 public LinkedIn posts dating back to 2018. It found that, among posts in English, at least 54% were likely generated using AI.

As Knibbs points out, LinkedIn — like a number of social networks — offers AI features to its users, so it’s not surprising that the number is as high as it is. LinkedIn’s Adam Walkiewicz told WIRED that the company was working to “proactively identify low-quality, and exact or near-exact duplicate content.”

“We see AI as a tool that can help with review of a draft or to beat the blank page problem, but the original thoughts and ideas that our members share are what matter,” he added.


LinkedIn’s abundance of AI-generated text isn’t the only instance of a social network navigating a world in which it’s far easier to generate whole paragraphs and post them ad nauseam. In October, 404 Media’s Emanuel Maiberg discussed an app called Impact that was designed to intervene in political debates online. The app would — in Maiberg’s words — “provide [users] with AI-generated text they can copy and paste in order to flood the replies with counter arguments.”

What happens when you can’t necessarily trust that a person you know or follow actually wrote the post with their name at the top of it? As WIRED’s investigation suggests, we’re getting closer to that reality — and it’s an unnerving prospect.
