The UK AI Safety Institute’s Inspect evaluations platform has been made available globally, in a move the organisation believes will lay an international foundation for safe AI development.
International stakeholders will now be able to access Inspect, a software library enabling testers to assess specific capabilities of individual models and produce a score based on results.
The platform can be used by start-ups, academia and AI developers. Models are assessed on factors such as core knowledge, ability to reason and autonomous capabilities. Inspect can be accessed free of charge under an open source licence.
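To illustrate the general pattern, the sketch below shows how an evaluation library of this kind typically works: a model is run against a set of test samples and a score is produced from the results. All names here (`Sample`, `run_eval`, `stub_model`) are hypothetical and purely illustrative; they are not Inspect's actual API.

```python
# Hypothetical sketch of a capability evaluation: run a model over
# prompt/target pairs and score the fraction answered correctly.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Sample:
    prompt: str
    target: str

def run_eval(model: Callable[[str], str], samples: list[Sample]) -> float:
    """Ask the model each prompt; return the proportion of exact matches."""
    correct = sum(1 for s in samples if model(s.prompt).strip() == s.target)
    return correct / len(samples)

# A stub "model" standing in for a real LLM under test.
def stub_model(prompt: str) -> str:
    return {"What is 2 + 2?": "4", "Capital of France?": "Berlin"}.get(prompt, "")

samples = [Sample("What is 2 + 2?", "4"), Sample("Capital of France?", "Paris")]
print(run_eval(stub_model, samples))  # 0.5 — one of two answers correct
```

Real frameworks layer dataset loading, model adapters and richer scorers (graded answers, tool use, multi-turn tasks) on top of this basic loop.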
AI Safety Institute Chair, Ian Hogarth, said: “As Chair of the AI Safety Institute, I am proud that we are open sourcing our Inspect platform. Successful collaboration on AI safety testing means having a shared, accessible approach to evaluations, and we hope Inspect can be a building block for AI Safety Institutes, research organisations, and academia.
“We have been inspired by some of the leading open source AI developers – most notably projects like GPT-NeoX, OLMo or Pythia which all have publicly available training data and OSI-licensed training and evaluation code, model weights, and partially trained checkpoints. This is our effort to contribute back.
“We hope to see the global AI community using Inspect to not only carry out their own model safety tests, but to help adapt and build upon the open source platform so we can produce high-quality evaluations across the board.”
This marks the first time an AI testing platform developed by a state-backed body has been released for wider international use, and it aligns with the government’s ambition for the UK to become a global digital and tech leader.
The UK is striving to be the ‘next Silicon Valley’, as Jeremy Hunt, Chancellor of the Exchequer, put it in his Spring Budget statement earlier this year. Cultivating safe AI development is key to this, alongside Open Banking, cryptocurrencies and digital assets.
Taking Inspect international is not the AI Safety Institute’s first cross-border foray this year, however. The Institute notably partnered with its US counterpart in an information sharing agreement last month.
With Canada now working on its own AI Safety Institute, and with the two nations having a history of economic cooperation, a similar agreement could be on the cards in the future.
These developments are of great significance to the UK fintech and finance sectors, which PM Rishi Sunak and his government are counting on to play a continuing role in the UK’s economic recovery.
AI’s use in finance and payments has been widely discussed. The technology’s potential for anti-money laundering (AML) and fraud prevention and detection is a particular area of interest, while other stakeholders have pointed to the possibilities of linking it with Open Banking.
UK Secretary of State for Science, Innovation, and Technology, Michelle Donelan, added: “As part of the constant drumbeat of UK leadership on AI safety, I have cleared the AI Safety Institute’s testing platform – called Inspect – to be open sourced.
“This puts UK ingenuity at the heart of the global effort to make AI safe, and cements our position as the world leader in this space.
“The reason I am so passionate about this, and why I have open sourced Inspect, is because of the extraordinary rewards we can reap if we grip the risks of AI. From our NHS to our transport network, safe AI will improve lives tangibly – which is what I came into politics for in the first place.”