
Can a robot write for a newspaper?

Advances in artificial intelligence (AI) programmes mean they are increasingly popular, and so are predictions about their ability to replace workers. Nick Clark investigates how AI affects our lives and analyses its limitations
Issue 2838

Will AI change our lives for the better or worse? (Picture: Mike MacKenzie)

Robots to steal our jobs! AI takeover looms as experts warn of widespread unemployment.

I didn’t write that. A programme called ChatGPT did after I asked it to write this article for me.

It’s an impressive piece of artificial intelligence (AI) software, launched at the end of last November, that anyone can try online for free.

In a matter of seconds, it can write you almost anything you want—a letter, a script, a story, a song, computer code or a newspaper article. And it can often tell you what you want to know faster than a Google search could, and summarise it in a few neat paragraphs.

An equally impressive programme, Dall-E—from the same company, OpenAI—creates images instead of text.

In the days after ChatGPT launched there was a slew of newspaper articles speculating whether AI could replace countless different jobs. More recently they’ve been worrying that clever school students might use it to cheat on their homework. However, I do know of one teacher who says he can get it to write his lesson plans and worksheets.

Like every advance in technology, AI could be used to take the drudgery, stress and hardship out of work, and free us from the long hours we spend doing it. But, like every advance in technology under capitalism, it won’t be.

It’s telling that almost none of the commentary about ChatGPT is about how it could make life better. Instead, much of it is about whether AI will be used to get rid of people’s jobs. Or whether it will “augment” them, as Labour MP Darren Jones put it.

That is, whether it can be used to make us more “efficient”—to squeeze more out of us and make us work harder for less pay. In fact, it’s already doing both. Chatbots now often do jobs that might once have been filled by call centre workers.

And just last week, a court in Canada ordered a worker to compensate her bosses for “time theft” after AI spying software decided she hadn’t spent enough time on “work-related tasks.”

All this is big business. OpenAI, for example, hopes to make $1 billion next year charging other businesses to use its software. 

Yet ultimately the source of all this is still human labour and effort. For a start, ChatGPT and Dall-E both “learn” from masses of information that’s been scraped from the internet.

They also use the information they harvest from their own users, who, simply by interacting with them, provide more text and image data. All of this is information that has been created and put on the internet by human beings.

What’s more, human beings need to programme the software that gathers that information—to tell it what to look for, and how to store and label it. They also write the algorithms that ChatGPT and Dall-E use to interpret the information and respond to a user’s request or command.
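
To make that concrete, here is a minimal, hypothetical sketch in Python of the kind of human-written program the article is describing: a script someone has had to write and tell which pages to fetch, which text to keep, and how to label and store it. The website address, the topic label and the file name are placeholders invented for illustration; this is not a description of OpenAI’s own systems.

```python
# Illustrative sketch only: a human-written script that fetches pages, keeps
# the text it was told to look for, and stores it with labels for later use.
# The URL, label and file name below are made up for the example.
import json
import urllib.request
from html.parser import HTMLParser

class TextExtractor(HTMLParser):
    """Collects the text inside <p> tags and ignores everything else."""
    def __init__(self):
        super().__init__()
        self.in_paragraph = False
        self.paragraphs = []

    def handle_starttag(self, tag, attrs):
        if tag == "p":
            self.in_paragraph = True

    def handle_endtag(self, tag):
        if tag == "p":
            self.in_paragraph = False

    def handle_data(self, data):
        if self.in_paragraph and data.strip():
            self.paragraphs.append(data.strip())

def scrape_and_label(url, topic):
    """Fetch one page and return labelled text records, ready to store."""
    with urllib.request.urlopen(url) as response:
        html = response.read().decode("utf-8", errors="ignore")
    parser = TextExtractor()
    parser.feed(html)
    # A person decided what counts as useful text and what label it gets.
    return [{"source": url, "topic": topic, "text": p} for p in parser.paragraphs]

if __name__ == "__main__":
    # Hypothetical example: the address and topic label are placeholders.
    records = scrape_and_label("https://example.com/news", "news")
    with open("dataset.jsonl", "w") as f:
        for record in records:
            f.write(json.dumps(record) + "\n")
```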

AI software might be able to do things faster or better than people can. But it can’t do it without us. That tells us something about the value and indispensability of human labour.

At the most basic level we have to work to produce the essentials that we need to survive. Under capitalism, we also have to work to produce the commodities that capitalists buy and sell for profit.

Without human labour, nothing would ever be made. Not the physical goods that fill the shelves and warehouses, nor the data stored and traded on the internet, nor the machinery needed to manufacture both.

Nor would there be the infrastructure and services needed to facilitate the production and distribution of all this.

The revolutionary Karl Marx saw all this and realised that all of the labour going into the production of commodities is what determines their value. The more labour needed to produce a commodity (at the socially determined level of the typical worker with typical tools), the more it is worth.

If a boss or firm sells this commodity for what it’s worth, they can pay workers back only a fraction of that value. The rest is the source of profit. And, they figure, they can make even more profit, and outdo their competitors, if they can produce commodities faster and spend less on labour.

So they’re always looking for new technological ways to speed up production, squeeze more out of their workers, or get rid of them entirely. And that might make them more money in the short term.

But when they invest in that technology, what they’re really doing is paying for the labour that’s already gone into producing it. It’s not living, human labour—it’s dead labour—and it doesn’t produce new value.

Instead, as the amount of human labour it takes to produce the commodity falls, so does its value. Eventually, the amount spent on technology and dead labour eats into profits.

That means the more that capitalism tries to replace human labour with technology, the more it runs into crisis.
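
To follow the arithmetic behind that argument, here is a stylised sketch with invented numbers. They are purely illustrative, not drawn from any real workplace; the point is only to show how replacing living labour with machinery lowers both the value of each commodity and the unpaid labour that profit comes from.

```python
# A stylised, made-up arithmetic illustration of the argument above.
# The figures are invented for clarity; they are not real data.

def value_per_item(living_labour_hours, dead_labour_hours):
    """Value of one commodity: the new labour added by workers (living labour)
    plus the past labour embodied in the machinery and materials used up
    (dead labour)."""
    return living_labour_hours + dead_labour_hours

# Before automation: each item takes 4 hours of living labour and uses up
# machinery and materials embodying 1 hour of past labour.
value_before = value_per_item(living_labour_hours=4, dead_labour_hours=1)
# Wages cover only 2 of those 4 hours; the unpaid 2 hours are the profit.
surplus_before = 4 - 2

# After automation: living labour per item falls to 1 hour, while the
# machinery used up per item now embodies 2 hours of past labour.
value_after = value_per_item(living_labour_hours=1, dead_labour_hours=2)
# Wages now cover half an hour; only half an hour of unpaid labour is left.
surplus_after = 1 - 0.5

print(f"value per item: {value_before} hours before, {value_after} hours after")
print(f"unpaid labour per item: {surplus_before} hours before, {surplus_after} hours after")
# Each item is now worth less, and the unpaid labour squeezed out of each one
# shrinks too. That is the pressure on profits the article describes.
```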

This dependence on human labour is why, despite some transformative technological advances, capitalism hasn’t managed to do away with human workers. Even if it manages to destroy one set of workers’ jobs with technology, that same technology depends on another set of workers elsewhere.

And because of that there’s always the potential to fight for a society where technology really does release us from the drudgery and hardship of work.

I asked ChatGPT what it thought about that. It said it was a “noble and important goal”.


Because AI is rooted in human labour, it reflects and amplifies the very worst of what human labour is made to produce under capitalism. This is most obvious in some AI-generated images.

Lensa is an app that sells its users AI-generated “magic avatars”—idealised representations of themselves. The user uploads selfies and the app refers to its own database of images to generate hundreds of fantasy portraits.

Yet while Lensa tends to illustrate men as astronauts or superheroes, women users find it over-sexualises them. It depicts them with large chests, often naked or in sexualised poses.

Black women often find their features Europeanised and their skin tones lightened. Melissa Heikkila, a tech journalist of Asian heritage, says her results were especially sexualised and bore only a passing resemblance to her.

Creators of programmes such as Lensa say the AI is only reflecting what it gathers from the internet. And that’s true to an extent.

Lensa and other popular apps use an open database of images called Laion-5B. Heikkila said that when she searched the database for “Asian”, “All I got back was porn. Lots of porn. The only thing the AI seems to associate with the word ‘Asian’ is naked East Asian women.”

The internet is home to a vast porn industry that turns women’s bodies and sexuality into products to be bought and sold. It reflects the worst sexist and racist stereotypes and takes them to extremes.

And it’s not just pornography. Advertising, the media, every bit of society is touched and shaped by racist ideas and by sexist ideas about the role of women.

AI programmes devour it all, then regurgitate it. But that shouldn’t let their creators off the hook.

They ultimately control the data their programmes gather. They can filter what information goes in, determining what comes out.

Perhaps the reason their AI programmes keep reproducing pornography is that it complements their purpose so well. Just like the material they consume, they’re also commodifying bodies for profit.

Is it any wonder their AI seeks out and reproduces the worst and most blatant examples of what that means?
