
A new architecture combines deep neural networks with vector-symbolic models.

Researchers at IBM Research Zürich and ETH Zürich have recently combined two of the best-known artificial intelligence techniques, deep neural networks and vector-symbolic models, into a new architecture. Their architecture, presented in a paper published in Nature Machine Intelligence, could overcome the limitations of both methods, solving progressive matrices and other reasoning problems more effectively.

“Our recent paper is based on our earlier research works aimed at augmenting and enhancing neural networks with the powerful machinery of vector-symbolic architectures (VSAs),” Abbas Rahimi, one of the researchers who carried out the study, told Tech Xplore. “This combination was previously applied to few-shot learning tasks as well as few-shot continual learning tasks, achieving state-of-the-art accuracy with lower computational complexity. In our most recent paper, we extend this idea beyond perception by focusing on visual abstract reasoning tasks, particularly the popular IQ tests known as Raven’s progressive matrices.”

Non-verbal tests such as Raven’s progressive matrices are frequently used to gauge a person’s IQ and capacity for abstract thought. Each test presents items in sets, with one or more of the items missing.

To solve Raven’s progressive matrices correctly, respondents must choose the missing item from a list of alternatives. This requires advanced reasoning skills, such as the ability to recognize abstract relationships between objects that may concern their size, shape, color, or other characteristics.

The neuro-vector-symbolic architecture (NVSA) that Rahimi and his colleagues created combines deep neural networks, which are well known to perform well on perception tasks, with VSAs: distinctive computational models that carry out symbolic computations using distributed, high-dimensional vectors.
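
To give a sense of how such vectors support symbolic computation, the sketch below implements two generic VSA primitives, binding and similarity, on random bipolar vectors. It is a minimal illustration of the general technique, with placeholder names and values, not the learned representations used in NVSA itself.

```python
import numpy as np

D = 10_000  # dimensionality of the hypervectors
rng = np.random.default_rng(0)

def random_hv():
    """A random bipolar hypervector in {-1, +1}^D."""
    return rng.choice([-1, 1], size=D)

def bind(a, b):
    """Binding (element-wise multiplication) composes two symbols.
    For bipolar vectors, binding is its own inverse."""
    return a * b

def sim(a, b):
    """Cosine similarity: ~1 for matching symbols, ~0 for unrelated ones."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Represent the attribute pair "red circle" as a single bound vector.
red, blue, circle = random_hv(), random_hv(), random_hv()
red_circle = bind(red, circle)

print(sim(bind(red_circle, red), circle))   # 1.0: unbinding with `red` recovers `circle`
print(sim(bind(red_circle, blue), circle))  # ~0.0: the wrong key yields noise
```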

“While our approach might sound a little bit like neuro-symbolic AI approaches,” said Rahimi, “neuro-symbolic AI has inherited the limitations of its individual deep learning and classical symbolic AI components. Our main goal in NVSA is to use a common language between the neural and symbolic components to address these limitations, specifically the exhaustive search problem and the neural binding problem.”

The team’s combination of deep neural networks and VSAs rests on two key architectural design elements: a novel neural network training procedure and a method for performing VSA transformations.

“We developed two key enablers of our architecture,” Rahimi stated. “The first is the use of a novel neural network training method as a flexible means of representation learning over VSA. The second is a method for performing proper VSA transformations, so that simple algebraic operations in the VSA vector space can take the place of laborious probability computations and searches.”
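
The algebra behind that second enabler is the distributivity of binding over superposition (element-wise addition). The toy snippet below, which uses the same assumed bipolar-vector conventions as the earlier sketch rather than the paper’s actual implementation, shows why one operation on superposed vectors can stand in for a sum over individual combinations.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(1)
hv = lambda: rng.choice([-1, 1], size=D)  # random bipolar hypervector

a, b, c = hv(), hv(), hv()

# Binding distributes over superposition: binding a superposed vector once
# equals superposing the individual bindings.
lhs = (a + b) * c      # one binding applied to the superposition
rhs = a * c + b * c    # each combination bound separately, then summed
print(np.array_equal(lhs, rhs))  # True

# Hence a query can be matched against many candidates at once: similarity
# to a superposition is the sum of the individual similarities, so no
# explicit loop over combinations is needed.
query = a * c
print(query @ rhs / D)  # ~1.0: the matching term dominates the score
```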

In early evaluations, the architecture created by Rahimi and his colleagues showed very promising results, solving Raven’s progressive matrices faster and more effectively than previously developed architectures. It achieved new record accuracies of 87.7% on the RAVEN dataset and 88.1% on the I-RAVEN dataset, outperforming both state-of-the-art deep neural networks and neuro-symbolic AI approaches.

“Solving a Raven test requires probabilistic abduction,” Rahimi said. “This entails searching for a solution in a space defined by prior background knowledge about the test. The prior knowledge is represented by describing all possible rule realizations that could govern the Raven tests. Purely symbolic reasoning approaches compute the rule probability for each possible combination and then sum the results. In the large search space, this search becomes a computational bottleneck, because the sheer number of combinations would be cost-prohibitive to test.”

In contrast to existing architectures, NVSA can carry out these complex probabilistic calculations in a single vector operation. This in turn enables it to solve abstract reasoning and analogy problems, such as Raven’s progressive matrices, faster and more accurately than AI methods based solely on deep neural networks or on VSAs.
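
On a toy scale, the principle can be illustrated as follows: because random high-dimensional codes are nearly orthogonal, a single matrix-vector product scores every candidate rule realization at once, where a symbolic solver would test each combination in turn. The codebook, noise model, and numbers here are hypothetical stand-ins, not the paper’s setup.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(2)

# Hypothetical codebook: one random bipolar vector per candidate rule realization.
num_candidates = 1_000
codebook = rng.choice([-1, 1], size=(num_candidates, D))
true_idx = 42  # toy ground truth: the realization actually present in the test

# A noisy query derived from the panels: the true vector with 20% of its bits flipped.
flip = rng.random(D) < 0.2
query = np.where(flip, -codebook[true_idx], codebook[true_idx])

# One matrix-vector product scores all candidates simultaneously,
# replacing an explicit search over every combination.
scores = codebook @ query / D
print(scores.argmax())   # 42: the true realization wins
print(scores[true_idx])  # ~0.6, i.e. 1 - 2 * (flip rate)
```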

“Our approach also addresses the neural binding problem, enabling a single neural network to separately recognize distinct properties of multiple objects in a scene concurrently,” Rahimi added. “Overall, NVSA offers transparent, fast, and efficient reasoning. It is also the first example of how the distributed representations and operators of VSAs can efficiently perform probabilistic reasoning (as an upgrade over pure logical reasoning). NVSA’s probabilistic reasoning is two orders of magnitude faster than the symbolic reasoning of neuro-symbolic approaches, with less expensive operations on the distributed representations.”
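
As a rough illustration of the binding idea, the snippet below encodes a two-object scene as a superposition of bound attribute vectors and recovers each object’s shape by unbinding its color. The codebook and query are again assumed placeholders; NVSA’s neural frontend learns its representations rather than drawing them at random.

```python
import numpy as np

D = 10_000
rng = np.random.default_rng(3)
hv = lambda: rng.choice([-1, 1], size=D)  # random bipolar hypervector

def sim(a, b):
    """Cosine similarity between two hypervectors."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

red, blue, circle, square = hv(), hv(), hv(), hv()

# Two objects in one distributed vector: a red circle plus a blue square.
scene = red * circle + blue * square

# "Which shape is red?" Unbind with `red`, then compare against the shapes.
probe = scene * red  # = circle + noise (the blue-square term stays scrambled)
print(sim(probe, circle))  # ~0.7: the red object's shape stands out
print(sim(probe, square))  # ~0.0: the other object's shape stays separate
```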

The team’s new architecture has so far proved very promising for completing challenging reasoning tasks quickly and efficiently. In the future, it could be evaluated on and applied to a variety of other problems, and it could also inspire the development of similar approaches.

“NVSA is a significant step towards encapsulating different AI paradigms in a unified framework to address tasks involving both perception and higher-level reasoning,” Rahimi added. “Interestingly, NVSA has already shown signs of generalization to many previously unseen combinations of objects and object attributes. Pushing this generalization further remains an open problem.”

More information: Michael Hersche et al, A neuro-vector-symbolic architecture for solving Raven’s progressive matrices, Nature Machine Intelligence (2023). DOI: 10.1038/s42256-023-00630-8
