(Image: www.surrey.ac.uk)

With interest in generative artificial intelligence (AI) systems growing rapidly worldwide, researchers at the University of Surrey (Guildford, England) have created software that can verify how much information an AI system has gleaned from an organization’s digital database.

The verification software can be used as part of a company’s online security protocol, helping an organization understand whether an AI system has learned too much — or even accessed sensitive data.

In addition, the software can ascertain whether an AI system has identified — and is capable of exploiting — flaws in software (e.g., it can tell whether the AI has learned to always win in online poker by exploiting a coding fault).

“In many applications, AI systems, such as self-driving cars on a highway or hospital robots, interact with each other or with humans,” said lead author Dr. Solofomampionona Fortunat Rajaona. “Working out what an intelligent AI system knows is an ongoing problem that took us years to find a working solution for.”

“Our verification software can deduce how much AI systems can learn from their interactions, whether they have enough knowledge to enable successful cooperation, and whether they know so much that they will break privacy. Through the ability to verify what AI has learned, we can give organizations the confidence to safely unleash the power of AI in secure settings.”

“Over the past few months there has been a huge surge of public and industry interest in generative AI models fueled by advances in large language models such as ChatGPT,” said Professor Adrian Hilton. “Creation of tools that can verify the performance of generative AI is essential to underpin its safe and responsible deployment. This research is an important step toward maintaining the privacy and integrity of datasets used in training.”

Here is a Tech Briefs interview, edited for length and clarity, with Rajaona.

Tech Briefs: What were some of the biggest technical challenges you faced along the way? And how did you overcome them?

Rajaona: To find our solution, we needed to build a theoretical foundation for programs that give knowledge to AI agents. Traditional programming theories, such as the well-known Hoare calculus, do not deal with agents learning from programs. Developing that programming theory was challenging and took time, but once we had it right, we could develop a verification technique and write the code to implement it.
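For readers unfamiliar with the formalism: a classical Hoare triple {P} C {Q} says that if precondition P holds before program C runs, postcondition Q holds afterward, and it speaks only about program state. Epistemic verification additionally needs postconditions about what an agent knows. The sketch below uses standard epistemic-logic notation (the knowledge operator K_a) as our own illustration of that gap, not the authors' exact calculus.

```latex
% Classical Hoare triple: a fact about program state only.
\{\, x > 0 \,\}\quad y := x + 1 \quad \{\, y > 1 \,\}

% Epistemic-style triple (illustrative notation, not the authors' exact
% calculus): after C runs, agent a must not have learned the secret s.
\{\, \top \,\}\quad C \quad \{\, \neg \exists v.\ K_a\,(s = v) \,\}
```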

Tech Briefs: Can you explain in simple terms how the verification software works?

Rajaona: We describe an AI system as a program. The program specifies which agent is allowed to see which data; it also describes the operations performed on the data and the possible interactions between agents. Then we describe what this program should not leak. This is the privacy specification. The program and its privacy specification are transformed into a logical formula that is checked by an external verifier called an SMT (satisfiability modulo theories) solver.
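To make the pipeline concrete, here is a minimal sketch in Python using the Z3 SMT solver. It is our illustration, not the Surrey tool: the "program" reveals out = a + b to an observer who already knows its own input a, and the privacy question is encoded as a satisfiability query asking whether two different secrets could produce the same observation. If the solver reports unsat, the observation uniquely determines the secret and the privacy specification is violated.

```python
# Minimal sketch of the "compile to a formula, ask an SMT solver" step.
# Hypothetical example, not the Surrey verifier itself.
from z3 import Ints, Solver, sat

a, b1, b2, out = Ints("a b1 b2 out")

s = Solver()
# Program: the observer sees out = a + b and already knows its own input a.
# Privacy query: can two *different* secrets b1 != b2 yield the same view?
s.add(out == a + b1, out == a + b2, b1 != b2)

if s.check() == sat:
    print("Secret not determined by the observation: no leak in this model.")
else:
    print("unsat: the observation pins down the secret, so privacy is broken.")
```

Real verifiers encode far richer programs and knowledge properties, but the core move is the same: translate "what can this agent deduce from what it sees?" into a formula and hand it to the solver.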

Tech Briefs: How soon do you see the verification software becoming commercially available?

Rajaona: We plan to make it available in 8 to 12 months, but not commercially. We are still developing the features that would make it commercially appealing.

Tech Briefs: What are your next steps? Are there any set plans for further research? Testing?

Rajaona: We are testing different use cases with our verification software. After that, we will extend our approach to attract more interest from the AI engineering community. The next step is to implement a quantitative measure of an AI agent’s knowledge.
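One standard way to make knowledge quantitative, offered here purely as an assumption about what such a measure could look like rather than the team's stated plan, is model counting: tally how many secret values remain consistent with an observation. The fewer that remain, the more bits the agent has effectively learned.

```python
# Illustrative model-counting sketch of a quantitative knowledge measure.
# A generic information-flow idea, not necessarily the Surrey team's approach.
import math

SECRETS = range(16)      # hypothetical 4-bit secret
observed_parity = 1      # the agent observes the secret's parity

# Secret values still consistent with the observation.
consistent = [v for v in SECRETS if v % 2 == observed_parity]

# Bits of knowledge gained = log2(before) - log2(after).
leaked_bits = math.log2(len(SECRETS)) - math.log2(len(consistent))
print(f"{len(consistent)} secrets remain consistent; {leaked_bits:.1f} bit(s) learned")
```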

Tech Briefs: Do you have any advice for engineers aiming to bring their ideas to fruition?

Rajaona: There is still a lot of room for creativity in the domain of AI and in software engineering in general. When you have a good idea and work on it hard and long enough, you can achieve great things.

Tech Briefs: Anything else you’d like to add?

Rajaona: Scientists and engineers bear a moral responsibility for the tools they develop. With the ever-increasing capabilities of AI systems, the protection of individuals’ privacy must be taken seriously.