By Zoe Kleinman, BBC Technology editor
“Move fast and break things” is a motto that continues to haunt the tech sector, some 20 years after it was coined by a young Mark Zuckerberg.
Those five words came to symbolise Silicon Valley at its worst – a combination of ruthless ambition and a rather breathtaking arrogance – profit-driven innovation without fear of consequence.
I was reminded of that phrase this week when the actor Scarlett Johansson clashed with OpenAI. Ms Johansson claimed that both she and her agent had declined requests for her to voice its new ChatGPT product – and that when it was unveiled, it sounded just like her anyway. OpenAI denies that it was an intentional imitation.
It’s a classic illustration of exactly what the creative industries are so worried about – being mimicked and eventually replaced by artificial intelligence.
There are echoes in all this of the macho Silicon Valley giants of old – seeking forgiveness rather than permission as an unofficial business plan.
The tech firms of 2024 are extremely keen to distance themselves from that reputation.
And OpenAI wasn’t cast from that mould. It was founded as a non-profit organisation committed to investing any extra profits back into the business.
In 2019, when it formed a profit-making arm, the company said it would be led by the non-profit side, and there would be a cap on the returns for investors.
Not everybody was happy about the shift – it was said to have been a key reason behind co-founder Elon Musk’s decision to walk away. And when OpenAI CEO Sam Altman was suddenly fired by the board late last year, one of the theories was that he wanted to move further away from the original mission. We never found out for sure.
But even if OpenAI has become more profit-driven, it still has to face up to its responsibilities.
Stuff of nightmares
In the world of policy-making, almost everyone is agreed on the need for clear boundaries to keep companies like OpenAI in line before disaster strikes.
So far, the AI giants have largely played ball on paper. At the world’s first AI Safety Summit six months ago, a group of tech bosses signed a voluntary pledge to create responsible, safe products that would maximise the benefits of AI technology and minimise its risks.
Those risks they spoke of were the stuff of nightmares – this was Terminator, Doomsday, AI-goes-rogue-and-destroys-humanity territory.
Last week, a draft UK government report from a group of 30 independent experts concluded that there was “no evidence yet” that AI could generate a biological weapon or carry out a sophisticated cyber attack. The plausibility of humans losing control of AI was “highly contentious”, it said.
And when the summit reconvened earlier this week, the word “safety” had been removed entirely from the conference title.
Some people in the field have been saying for quite a while that the more immediate threats from AI tools are that they will replace jobs or fail to recognise skin colours. These are the real problems, says AI ethics expert Dr Rumman Chowdhury.
And there are further complications. That report claimed there was currently no reliable way of understanding exactly why AI tools generate the output that they do – even their developers aren’t sure. And the established safety testing practice known as red teaming, in which evaluators deliberately try to get an AI tool to misbehave, has no best-practice guidelines.
And at that follow-up summit this week, hosted jointly by the UK and South Korea in Seoul, tech firms committed to shelving a product if it didn’t meet certain safety thresholds – but these will not be set until the next gathering in 2025.
While the experts debate the nature of the threats posed by AI, the tech companies keep shipping products.
The past few days alone have seen the launch of GPT-4o from OpenAI, Project Astra from Google, and Copilot+ from Microsoft. The AI Safety Institute declined to say whether it had the opportunity to test these tools before their release.
OpenAI says it has a 10-point safety process, but one of its senior safety-focused engineers resigned earlier this week, saying his department had been “sailing against the wind” internally.
“Over the past years, safety culture and processes have taken a backseat to shiny products,” Jan Leike posted on X.
There are, of course, other teams at OpenAI who continue to focus on safety and security. But there’s no official, independent oversight of what any of these companies are actually doing.
“Voluntary agreements essentially are just a means of firms marking their own homework,” says Andrew Strait, associate director of the Ada Lovelace Institute, an independent research organisation. “It’s essentially no replacement for legally binding and enforceable rules which are required to incentivise responsible development of these technologies.”
“We have no guarantee that these companies are sticking to their pledges,” says Professor Dame Wendy Hall, one of the UK’s leading computer scientists.
“How do we hold them to account on what they’re saying, like we do with drugs companies or in other sectors where there is high risk?”
Tougher rules are coming. The EU has passed its AI Act, the first law of its kind, with tough penalties for non-compliance, but some argue it will impact users – who will have to risk-assess AI tools themselves – rather than those who develop the AI.
But this doesn’t necessarily mean that AI companies are off the hook.
“We need to move towards legal regulation over time but we can’t rush it,” says Prof Hall. “Setting up global governance principles that everyone signs up to is really hard.”
“We also need to make sure it’s genuinely worldwide and not just the Western world and China that we are protecting.”
The overriding issue, as ever, is that regulation and policy move a lot more slowly than innovation.
Prof Hall believes the “stars are aligning” at government levels.
The question is whether the tech giants can be persuaded to wait for them.