New Cyber Software Can Verify How Much Information an AI Actually Knows

Researchers at the University of Surrey have developed software that can verify how much data an AI system has drawn from an organization’s digital database, a capability that addresses the growing worldwide interest in generative artificial intelligence (AI) systems.

Organizations can use Surrey’s verification software as part of their online security strategy to determine whether an AI has learned too much, or has even accessed sensitive data.

The software can also tell whether an AI has found, and can exploit, flaws in software code. For instance, in the context of online gambling, it can determine whether an AI has learned to win consistently at online poker by exploiting a coding error.

Dr. Solofomampionona Fortunat Rajaona is a Research Fellow in formal verification of privacy at the University of Surrey and the lead author of the paper. He said:

“In many applications, AI systems interact with each other or with humans, such as self-driving cars on a highway or hospital robots. Working out what an intelligent AI data system knows is an ongoing problem that has taken us years to find a working solution for.”

“Our verification software can deduce how much AI systems can learn from their interactions, whether they know enough to cooperate successfully, and whether they know too much, in a way that breaks privacy. Through the ability to verify what AI has learned, we can give organizations the confidence to safely unleash the power of AI into secure settings.”
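The research frames this as a formal verification question about what agents know. As a loose, hypothetical sketch of the idea in the quote above (this is not Surrey’s actual tool, and every name below is invented for illustration), one could model an agent’s knowledge as the set of facts revealed across its interactions, then check two properties: whether it knows enough to cooperate, and whether it knows anything it should not:

```python
# Toy illustration (not the Surrey software): model an agent's knowledge
# as the set of facts it could have learned from observed interactions,
# then verify the two properties the article describes --
# "knows enough to cooperate" and "knows too much, breaking privacy".
# All fact names here are hypothetical.

REQUIRED_FOR_TASK = {"route_map", "traffic_rules"}   # needed for cooperation
PRIVATE_FACTS = {"patient_record", "payroll_data"}   # must remain unknown

def knowledge_from_interactions(interactions):
    """Collect every fact the agent could have learned from its messages."""
    learned = set()
    for message in interactions:
        learned.update(message["facts_revealed"])
    return learned

def verify(agent_knowledge):
    """Return (can_cooperate, privacy_broken) for one agent's knowledge set."""
    can_cooperate = REQUIRED_FOR_TASK <= agent_knowledge   # subset check
    privacy_broken = bool(agent_knowledge & PRIVATE_FACTS)  # any overlap
    return can_cooperate, privacy_broken

if __name__ == "__main__":
    interactions = [
        {"facts_revealed": {"route_map"}},
        {"facts_revealed": {"traffic_rules", "patient_record"}},  # a leak
    ]
    knowledge = knowledge_from_interactions(interactions)
    ok, leak = verify(knowledge)
    print(f"knows enough to cooperate: {ok}, privacy broken: {leak}")
```

The actual verification software reasons formally about program semantics rather than enumerating facts as this toy does, but the shape of the check, enough knowledge to cooperate versus too much knowledge to preserve privacy, is the same.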

The study of Surrey’s software won the best paper prize at the 25th International Symposium on Formal Methods.

Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey, said:

“Over the past few months there has been a huge surge of public and industry interest in generative AI models, fuelled by advances in large language models such as ChatGPT. Creating tools that can verify the performance of generative AI is essential to underpin their safe and responsible deployment. This research is an important step towards maintaining the privacy and integrity of datasets used in training.”
