
Researchers at the University of Cambridge applied a very simple physical constraint to an artificial intelligence system. Interestingly, this constraint led the AI to develop some characteristics of the human brain.
Danyal Akarca, Duncan Astle, Jascha Achterberg, and John Duncan have been awarded funding from the Medical Research Council (MRC), the Gates Cambridge Trust, the James S McDonnell Foundation, and the Templeton World Charity Foundation to pursue neuroscience research using artificial intelligence (AI). The researchers work at the MRC Cognition and Brain Sciences Unit and at Robinson College, Cambridge.
Akarca’s research will focus on using AI to develop new treatments for neurological disorders. Astle’s research will focus on using AI to understand how the human brain controls movement. Achterberg’s research will focus on using AI to develop new methods for diagnosing and treating mental health disorders. Duncan’s research will focus on using AI to understand how the brain processes language.
The researchers’ work has the potential to significantly advance our understanding of the brain and to support the development of new treatments for a wide range of neurological and mental health disorders, making a real difference in the lives of people affected by them.
Scientists at the University of Cambridge imposed physical constraints on an artificial intelligence system, in much the same way that the brains of humans and other animals have to develop and operate within physical and biological constraints. The system subsequently developed some features of complex biological brains in order to solve its tasks.
In a study published today in the journal Nature Machine Intelligence, Jascha Achterberg and Danyal Akarca from the Medical Research Council Cognition and Brain Sciences Unit (MRC CBSU) at the University of Cambridge, together with colleagues, developed a simplified version of the human brain and applied physical constraints to it before assigning the system a task. The technique could be used both to develop more efficient AI systems and to better understand the human brain itself.
“Not only is the brain great at solving complex problems, it does so while using very little energy.”
– Jascha Achterberg
Developing a system with the same limitations as the human brain
Instead of actual neurons or brain cells, the researchers used computational nodes, because the two behave in similar ways: both take an input, transform it, and produce an output, and a single node or neuron can connect to several others, all of which exchange information.
The physical limits they imposed on their system of computational nodes were similar to those experienced by neurons in the brain. Each node was assigned a specific position in a virtual space, and the further apart two nodes were, the harder it was for them to communicate.
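To make the idea concrete, here is a minimal Python sketch of such a spatial embedding. It is illustrative only; the network size, coordinates, and variable names are assumptions, not the authors' code.

```python
# Illustrative sketch: each node gets a fixed position in a virtual 3D space,
# and the pairwise Euclidean distances will later scale the cost of each
# connection.
import numpy as np

rng = np.random.default_rng(0)
n_nodes = 100  # illustrative network size

# Fixed random positions inside a unit cube of "virtual space"
positions = rng.uniform(0.0, 1.0, size=(n_nodes, 3))

# distance[i, j] = Euclidean distance between node i and node j
distance = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)

print(distance.shape)   # (100, 100)
print(distance[0, :5])  # how "far" node 0 is from its first few peers
```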
After setting this constraint, the system was given a task to complete. The task in this case was a simplified version of the maze-navigation tasks typically used when examining the brains of animals such as rats and monkeys: the system was given pieces of information that it had to combine to determine the shortest route to the end of the maze.
The system initially did not know how to complete the task and kept making mistakes. The researchers kept giving it feedback, and by repeating the task over and over the system gradually learned to perform it correctly.
As mentioned earlier, the constraint placed upon the system meant that the further apart two nodes were in the virtual space, the more difficult it was to build a connection between them in response to the feedback. This mirrors the human brain, where forming and maintaining connections across a large physical distance is more expensive.
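One plausible way to express this wiring cost in code is to weight a per-connection L1-style penalty by the distance between its endpoints, so that learning pays more to maintain long links. The sketch below is a hedged illustration under that assumption, not the published implementation; all names are hypothetical.

```python
import numpy as np

def wiring_cost(weights: np.ndarray, distance: np.ndarray) -> float:
    """L1-style penalty in which each connection is weighted by its length."""
    return float(np.sum(np.abs(weights) * distance))

def total_loss(task_loss: float, weights: np.ndarray,
               distance: np.ndarray, strength: float = 1e-3) -> float:
    # The optimizer now trades task performance against wiring cost:
    # a long connection survives training only if it earns its keep.
    return task_loss + strength * wiring_cost(weights, distance)
```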
Same tricks as the human brain
When the system performed the task under these constraints, it used some of the same “tricks” that real human brains use to solve the same problems. One example is how it worked around the restrictions by developing hubs: highly connected nodes that act as relays for routing information across the network.
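As a rough illustration of how such hubs could be spotted in a trained network, one might rank nodes by their total connection strength (weighted degree). The function below is a hypothetical example, not the study's analysis.

```python
import numpy as np

def find_hubs(weights: np.ndarray, top_k: int = 5) -> np.ndarray:
    """Return the indices of the top_k most strongly connected nodes.

    'weights' is assumed to be an n-by-n matrix of connection strengths;
    a node's weighted degree is the summed magnitude of its incoming
    and outgoing connections.
    """
    strength = np.abs(weights).sum(axis=0) + np.abs(weights).sum(axis=1)
    return np.argsort(strength)[::-1][:top_k]
```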
But what surprised the researchers even more was that the behavior of the individual nodes themselves began to change. Instead of a scheme in which each node codes for one specific feature of the maze task, such as the goal location or the next choice, the nodes developed a “flexible coding scheme.”
This means that the same node can “fire” for different properties of the maze at different times. For example, one node can encode several different locations in the maze, rather than a dedicated node being needed for each specific location. The same behavior is observed in the brains of complex animals.
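The following toy example, built on entirely synthetic data, illustrates what flexible coding means in practice: the same unit's activity tracks one task variable early in a trial and a different one later. Every variable here is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_trials = 200

# Two hypothetical task variables
goal_location = rng.normal(size=n_trials)  # where the maze's goal is
next_choice = rng.normal(size=n_trials)    # which turn to take next

# Early in the trial the unit's activity reflects the goal;
# later in the same trial, it reflects the upcoming choice.
unit_early = goal_location + 0.1 * rng.normal(size=n_trials)
unit_late = next_choice + 0.1 * rng.normal(size=n_trials)

print(np.corrcoef(unit_early, goal_location)[0, 1])  # ~1: encodes the goal
print(np.corrcoef(unit_late, next_choice)[0, 1])     # ~1: now encodes choice
```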
Interestingly, this single simple constraint (making it harder to connect distant nodes) pushed the artificial system to develop complex properties that are also found in biological systems such as the human brain.
Designing more efficient AI systems
An important implication of this study is that it could enable the development of more efficient AI models. Many familiar AI systems, such as the Generative Pre-trained Transformer (GPT) technology used by OpenAI, are resource-intensive, consuming large amounts of computing power (GPUs) and electricity.
“We see great potential in using our findings to create AI models that are internally simpler while still retaining their functionality, and that can run more efficiently, whether on a single computer chip or as an AI model spread across multiple chips in a large computing cluster,” Achterberg said in an interview.
Current implementations of “spatially embedded AI systems” are based on very small and simple models, used to study their effects. However, the approach could be scaled up to build larger AI systems.
Many companies, including Google, Amazon, Meta, and IBM, are developing their own AI chips, but Nvidia dominates the market, accounting for more than 70% of AI chip sales. This, combined with export restrictions that countries such as the US place on AI chips for certain markets, makes the chips expensive and difficult to obtain. They also consume large amounts of electricity, which contributes to climate change.
Therefore, there is great interest in developing sparse AI models that operate with fewer parameters and fewer “neural connections.” In theory, sparse models can run more efficiently, and the results of this Cambridge study could help develop brain-inspired sparse models that solve the same problems more efficiently.
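As a rough sketch of what “fewer connections” means, one common route to sparsity is magnitude pruning: zeroing out the weakest weights and keeping only the strongest few. The helper below is illustrative, not a method from the paper.

```python
import numpy as np

def prune_small_weights(weights: np.ndarray, keep_fraction: float = 0.1):
    """Zero out all but the largest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(weights), 1.0 - keep_fraction)
    pruned = np.where(np.abs(weights) >= threshold, weights, 0.0)
    sparsity = 1.0 - np.count_nonzero(pruned) / pruned.size
    return pruned, sparsity

# Example: keep the strongest 10% of connections
pruned, sparsity = prune_small_weights(np.random.default_rng(2).normal(size=(64, 64)))
print(f"sparsity: {sparsity:.0%}")  # roughly 90% of weights are now zero
```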
Understanding the Human Brain
There is another interesting angle to this technology: perhaps it can also be used to better study real human brains.
“The brain is an amazingly complex organ, and to understand it we need to create simplified models of its function that explain the principles by which it works. A main advantage of such models is that we can study phenomena that are difficult to study in real brains,” Achterberg said. In a real brain, you cannot remove neurons and add them back later to see their exact role. With artificial intelligence systems, that is entirely possible.
“A big problem in neuroscience is that we can usually only study either the structure of the brain (which neurons connect to which other neurons) or its function (which neurons are sending and receiving information at a given moment). We showed that by using our simplified artificial model, we can examine both the structural and the functional principles of the brain and investigate the relationship between brain structure and function,” he added.
It would be incredibly difficult to do what Achterberg described with data recorded from real brains. It might be easier with a simplified artificial brain.
Further development of the rudimentary “artificial brain”
The researchers are now focused on developing the system further in two directions. The first is to make the model more similar to the brain while keeping it simple. “In this direction, we have started using so-called spiking neural networks, which mimic the way information is transmitted through the brain more closely than normal AI models do,” Achterberg said (a minimal illustration of a spiking neuron appears below).
The second is to transfer the insights gained from small, simplified models to the large-scale models used in modern AI systems. In this way, they hope to study the effects of brain-like energy-efficient processing in large-scale systems that require large amounts of energy.
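For readers unfamiliar with the spiking networks mentioned above, here is a minimal leaky integrate-and-fire neuron, the classic building block of such models. This is a generic textbook illustration, not the team's model, and all parameter values are arbitrary.

```python
import numpy as np

def lif_neuron(inputs: np.ndarray, tau: float = 20.0,
               threshold: float = 1.0) -> np.ndarray:
    """Simulate one leaky integrate-and-fire neuron; returns a 0/1 spike train."""
    v = 0.0
    spikes = np.zeros(len(inputs))
    for t, current in enumerate(inputs):
        v += (-v + current) / tau  # leaky integration of the input current
        if v >= threshold:         # fire and reset once the threshold is crossed
            spikes[t] = 1.0
            v = 0.0
    return spikes

# A constant input above threshold makes the neuron spike at a regular rate
print(int(lif_neuron(np.full(200, 1.5)).sum()))  # number of spikes in 200 steps
```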