RISHI Sunak today defended inviting China to Britain’s first ever AI Safety Summit, amid fears the communist state will use the event for an intelligence-gathering spree.
The PM insisted that keeping close to communist officials on such a dominant and high-stakes issue is safer for the UK than disengaging.
Rishi Sunak delivers a speech on AI at the Royal Society in Carlton House Terrace, London
Mr Sunak will host world leaders at the first ever international AI Safety Summit at Bletchley Park in Buckinghamshire next week.
Tory hawks had spent weeks urging the PM against inviting China.
They don’t want the authoritarian regime to learn how Britain intends to regulate and apply complex technologies in the future.
Ex-Tory chief Sir Iain Duncan Smith said: “China are a threat and until we wake up to that threat, engaging with them only makes us look weak.
“The summit is an engagement for people who are in the free world – that’s what it should be.”
But in a major speech this morning, a defiant Mr Sunak hit back: “We’re bringing together the world’s leading representatives, from civil society to the companies pioneering AI and the countries most advanced in using it.
“And yes, we’ve invited China.
“I know there are some who will say they should have been excluded but there can be no serious strategy for AI without at least trying to engage all of the world’s leading AI powers.
“That might not have been the easy thing to do but it was the right thing to do.”
In his speech Mr Sunak issued a no-holds-barred warning about the risks of AI.
He said: “Get this wrong and it could make it easier to build chemical or biological weapons. Terrorist groups could use AI to spread fear and disruption on an even greater scale.
“Criminals could exploit AI for cyber attacks, disinformation, fraud or even child sexual abuse.
“And in the most unlikely but extreme cases, there is even the risk that humanity could lose control of AI completely through the kind of AI sometimes referred to as ‘super intelligence’.”
To mitigate the risks, Mr Sunak said he will use the summit to try to convince world leaders to establish “a truly global expert panel” that can publish a report on the state of AI.
He said: “Next week I will propose we establish a truly global expert panel nominated by the countries and organisations attending to publish a ‘state of AI science’ report.
“Of course, our efforts also depend on collaboration with the AI companies themselves.”
It comes as spooks have warned in a shocking new report that AI programmes have already learnt to be “persuasive liars” and could massively increase the powers of cyber hackers and terrorists.
New scams such as fake kidnappings and “sextortion” using AI-generated images and video are also looming.
Intelligence agencies have mapped the “catastrophic” threat the technology poses to humanity.
In a stark warning, the British spooks report says: “Given the significant uncertainty in predicting AI developments, there is insufficient evidence to rule out that highly capable future frontier AI systems, if misaligned or inadequately controlled, could pose an existential threat.”
It adds: “AI models can maintain coherent lies in simple deception games, and larger models are more persuasive liars.”
The report also raises a host of concerns that AI will boost the powers of hackers, warning: “Offensive cyber capabilities could allow AI systems to gain access to money, computing resources, and critical infrastructure.”
It continues: “AI systems can be used by potentially anyone to create faster paced, more effective and larger scale cyber intrusion via tailored phishing methods or replicating malware.”
It adds: “AI might increase the harms in the above categories and may also create novel harms, such as emotional distress caused by fake kidnapping or sextortion scams.”
Based on sources including UK spies, the report says many experts believe the existential threat is a “risk with very low likelihood and few plausible routes”, and one that would need the technology to “outpace mitigations, gain control over critical systems and be able to avoid being switched off”.
And it maps a number of pathways to “catastrophic” or existential risk, such as a self-improving system that can achieve goals in the physical world without oversight and works against human interests.
Without new regulation, the report warns, humans risk an “over-reliance” on AI as we grant it more control over critical systems we no longer fully understand, leaving us “irreversibly dependent”.