UK announces £8.5m grant ‘to push boundaries of AI safety research’

Michelle Donelan, UK Secretary of State for Science, Innovation and Technology, unveiled the funding programme during the second day of the AI Seoul Summit.

The UK’s AI Safety Institute (AISI) will lead the new initiative, which it will deliver in collaboration with UK Research and Innovation (UKRI) and The Alan Turing Institute.

Shahar Avin, an AI safety researcher joining the institute on secondment from the Centre for the Study of Existential Risk (CSER), will lead the charge within AISI.

The government says Avin’s experience at Google and CSER positions him perfectly to guide proposals that safeguard the public from AI risks while harnessing its potential.

As part of the initiative, researchers across the UK will compete for grants to tackle pressing issues like deepfakes and cyberattacks, while also exploring how AI can be harnessed for positive impacts like increased productivity.

Proposals with the strongest potential will be nurtured into long-term projects, opening the door for additional funding.

The programme extends beyond immediate threats, with the government emphasising “systemic AI safety,” a new area of focus for the AISI. This research will explore AI’s broader societal implications and how institutions, systems and infrastructure can adapt to this transformative technology.

“We will begin by offering a round of seed grants, followed by future rounds with more substantial awards,” said AISI. “We expect to provide successful applicants with ongoing support, computing resources, and access to a community of AI and sector-specific domain experts. We designed our grant process to be maximally inclusive – applying for a grant should be fast and easy.”

While grant applicants must be based in the UK, international collaboration is actively encouraged.

Donelan expects the new programme will ensure the UK remains at the forefront of developing innovative approaches to keep AI as a force for good.

The funding announcement follows the AISI’s release of its first AI safety test results last week.

The results highlighted concerning vulnerabilities: none of the five unnamed models tested could handle complex tasks without human oversight, and all proved vulnerable to attempts to bypass their built-in safeguards.

Furthermore, the AISI identified instances where models produced harmful outputs even without deliberate attempts to bypass safeguards.

The first day of the AI Seoul Summit saw 16 international AI companies sign the Frontier AI Safety Commitments, a voluntary pledge to develop and deploy AI responsibly. In addition, ten nations and the EU signed the “Seoul Statement of Intent toward International Cooperation on AI Safety Science,” agreeing to establish the first international network of AI safety institutes.

A key element of the companies’ pledge is a commitment to refrain from developing or deploying AI systems where risks cannot be adequately mitigated.

 
