
New software can verify how much an AI system really knows.

Amid growing worldwide interest in generative artificial intelligence (AI) systems, researchers at the University of Surrey have developed software that can verify how much information an AI system has drawn from an organization’s digital database.

An organization can use Surrey’s verification software as part of its online security protocol to determine whether an AI has learned too much, or even accessed sensitive data.

The software can also determine whether an AI has discovered, and is capable of exploiting, flaws in software code. In online gaming, for example, it could identify whether an AI had learned to win consistently at online poker by exploiting a coding error.
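To give a flavour of the idea (the paper’s formal machinery is not reproduced here), knowledge checks of this kind are often framed epistemically: an agent “knows” a fact if the fact holds in every state consistent with what the agent has observed. The Python sketch below is a toy illustration of that reading, not the Surrey team’s technique; the states, attributes, and function names are all invented for the example.

```python
# Toy epistemic knowledge check: an illustrative sketch only, not the
# Surrey verification software. The model and all names are invented.
from itertools import product

# World states: each pairs a public attribute the AI may legitimately
# observe with a secret attribute it should not be able to infer.
states = [
    {"public": p, "secret": s}
    for p, s in product(["a", "b"], ["low", "high"])
]

def observation(state):
    """What the AI can see of a state (its 'view' of the world)."""
    return state["public"]

def knows_secret(actual_state):
    """Standard epistemic reading: the AI knows the secret if every
    state consistent with its observation agrees on the secret's value."""
    view = observation(actual_state)
    candidates = {s["secret"] for s in states if observation(s) == view}
    return len(candidates) == 1

for s in states:
    print(s, "-> secret leaked?", knows_secret(s))
```

In this toy model the public attribute carries no information about the secret, so the check reports no leak in any state; if `observation` also exposed the secret, every state would be flagged.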

“There has been a major rise in public and commercial interest in generative AI models in recent months, fuelled by developments in large language models such as ChatGPT. Developing tools that can check the performance of generative AI is essential for their safe and responsible deployment. This study is a critical step towards ensuring the privacy and integrity of training datasets.”

Professor Adrian Hilton, Director of the Institute for People-Centred AI at the University of Surrey

Dr Fortunat Rajaona, Research Fellow in formal verification of privacy at the University of Surrey and lead author of the paper, said: “AI systems interact with each other or with humans in many applications, such as self-driving cars on the highway or hospital robots. For years we have been working on a solution to the long-standing problem of determining what knowledge an intelligent AI system possesses.

“Our verification software can determine how much an AI can learn from its interactions, whether it knows enough to enable effective cooperation, and whether it knows too much and would compromise privacy. By enabling organizations to validate what the technology has learned, we can give them the confidence to safely unleash AI’s power in secure environments.”
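One crude way to make “how much an AI can learn from its interactions” concrete, again purely as an illustration rather than the paper’s method, is to measure how far an observation shrinks the set of secrets consistent with it, expressed in bits of information gained. All values below are hypothetical.

```python
# Illustrative sketch: quantifying knowledge gain as entropy reduction.
# Not the Surrey software; the secrets and observation are hypothetical.
import math

def entropy(candidates):
    """Shannon entropy (bits) of a uniform distribution over candidates."""
    return math.log2(len(candidates)) if candidates else 0.0

secrets = ["pin-0001", "pin-0002", "pin-0003", "pin-0004"]

# Hypothetical interaction: the AI observes that the secret ends in an
# even digit, which rules out half of the candidates.
consistent = [s for s in secrets if int(s[-1]) % 2 == 0]

leaked_bits = entropy(secrets) - entropy(consistent)
print(f"knowledge gained: {leaked_bits:.1f} bits")  # 1.0 bits here
```

A privacy check could then compare such a figure against a policy threshold, while a cooperation check could ask whether the remaining uncertainty is small enough for the task at hand.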

The paper describing Surrey’s software won the best paper award at the 25th International Symposium on Formal Methods.


More information: Fortunat Rajaona et al., Program Semantics and Verification Technique for AI-centred Programs (2023). openresearch.surrey.ac.uk/espl … tputs/99723165702346
