@ AI Buzz
2023-10-31 13:27:22
Back in the beginning of the 21st century, when I studied for an MBA at NYU Stern, one class I took was called Data Mining, which introduced many algorithms to “mine” data, that is, to automatically extract meaning from it for forecasting and decision making. The neural network was one of them, but it was far from a top choice because it was slow, required a lot of data to train, and hence had minimal use cases. Twenty years later, neural network algorithms have thrived as the cornerstone of machine learning and artificial intelligence (AI): tremendous computational power removed the fundamental obstacle and, in turn, led to the invention of more advanced algorithms and models.
With the fast advances in artificial neural networks and deep learning, AI has surpassed humans in certain areas. Many intriguing questions have arisen, such as how similar AI and the human brain are, what the future objective of AI is, and to what degree AI can replace human intelligence. In this article, I will start with the neural mechanisms of biological learning and how they have inspired AI. A better understanding of this history will help us grasp the fundamental difference between artificial neural networks and other machine learning models (e.g., support vector machines, decision trees, random forests). It was the learning features inspired by the brain that led to the recent breakthroughs of artificial neural networks, including convolutional neural networks (CNNs) for image recognition and large language models (LLMs) for generative AI. I will then discuss the differences between human intelligence and AI and my perspective on the future direction of AI. What we expect to see next is that AI will continue to benefit from discoveries about the brain, and, equally important, AI can also help us better understand how the brain works. The continuous exchange of ideas will propel both neuroscience and AI to advance at a healthy, faster pace.
## Biological Learning
Learning is an essential feature of animal and human brains. When a baby is born, she has to learn almost everything from scratch, including recognizing faces, speaking, and walking, followed by many years of school education and training. How does learning happen in the brain?
Toward the end of the 19th century, Spanish neuroscientist Santiago Ramón y Cajal adopted a unique staining technique (the Golgi method) to visualize neurons and discovered their particular shapes and connectivity patterns. Using his excellent artistic talent, Cajal made extensive, detailed drawings of the neural anatomy of major brain regions across many species. He identified that each neuron has an axon and many dendrites branching like a tree. Below is one of his drawings of the pigeon cerebellum. It shows two types of cells: two large Purkinje neurons at the top and four small granule cells at the bottom. Each neuron has the typical axon and dendrites. The axon of each granule cell lands on one of the rich dendritic branches of the Purkinje neurons. Because the Golgi method stains only a fraction of the cells in the brain, the eye-catching extent of the Purkinje cell’s dendrites suggests that each such cell receives hundreds or thousands of connections from granule cells in the cerebellum.
![image](https://miro.medium.com/v2/resize:fit:828/format:webp/1*cDyZZf2Vgq0RTr5MRFGSNg.jpeg)
Drawing of Purkinje cells (A) and granule cells (B) from the pigeon cerebellum, by Santiago Ramón y Cajal. Image source: Wikimedia Commons.
Cajal confirmed that although these cells connect with each other, their cell membranes are not continuous. There is a tiny gap between an axon terminal and the connected dendrite, a junction called the synapse. He also made some extraordinary drawings of complex neuronal connectivity in animal brains. Below is an example of the hippocampus, the brain region we know today to be responsible for learning and short-term memory. We can see that the cell bodies line up in layers. An axon traverses between the layers while intersecting with multiple dendrites and cell bodies. This drawing was among the first accurate portrayals of biological neural networks in the brain. His original investigations of the neuronal structure of the brain made him the father of modern neuroscience. Cajal and Golgi shared the Nobel Prize in Physiology or Medicine in 1906.
![image](https://miro.medium.com/v2/resize:fit:1100/format:webp/1*PGpeKsi_P0bEUZUWpTWx3Q.jpeg)
Drawing of the neural circuitry of the rodent hippocampus, by Santiago Ramón y Cajal. Image source: Wikimedia Commons.
Neural communication requires action potentials firing within a cell and neurotransmitters being released at the synapses. An action potential, also called a nerve impulse or spike, occurs when the electrical voltage across the neuronal cell membrane rapidly rises and falls. This depolarization causes adjacent locations on the cell membrane to depolarize in the same way, so the spike propagates along an axon within milliseconds. When a spike reaches the end of the axon, the synapses release neurotransmitters, small molecules that act as messengers between neurons. When the post-synaptic neuron receives a sufficient amount of these neurotransmitters via its multiple dendrites, it fires its own electrical pulses, which then travel down its axon toward the next connected cell via synapses in the same fashion.
![image](https://miro.medium.com/v2/resize:fit:1100/format:webp/1*fJZgLM95fOXTs_6BPGfdxw.jpeg)
Image source: Wikimedia Commons
A single neuron itself is an information processing system. Dendrites play a role in accepting and transmitting information from many sources, while the cell body integrates the signals and passes them via the axon to the next neuron. There are many variables at this level:
1. A neuron does not always fire when receiving information from its dendrites but only when the incoming signals reach the threshold.
2. Axons can transmit information between layers within the same brain area or travel to another region over a long distance (e.g., over 1m) in the nervous system.
3. The higher the frequency of action potentials, the more neurotransmitters an axon synapse will release, and the more likely the post-synaptic cell will fire.
However, just like any other cell in our body, a neuron is tiny (its cell body’s diameter is 4–100 micrometers), and the processing a single neuron can do is limited. Another essential role of a neuron is to communicate within the network. The amount of information transmitted is related not only to a single cell’s firing frequency but also to the number of cells firing simultaneously across the network.
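To make the threshold-and-frequency behavior described above concrete, here is a minimal sketch of a leaky integrate-and-fire neuron in Python. It is a toy model rather than anything from the article’s sources: the threshold, leak, and input currents are arbitrary illustrative values. It only shows that a unit integrating its inputs never fires below a threshold and fires more frequently as its input grows stronger.

```python
def simulate_lif(input_current, threshold=1.0, leak=0.1, dt=1.0, steps=200):
    """Toy leaky integrate-and-fire neuron (illustrative parameters only).

    The membrane potential integrates the input current, leaks back toward
    rest, and emits a spike (then resets) whenever it crosses the threshold.
    """
    v = 0.0
    spikes = 0
    for _ in range(steps):
        v += dt * (input_current - leak * v)  # integrate input, leak toward rest
        if v >= threshold:                    # fire only when the threshold is reached
            spikes += 1
            v = 0.0                           # reset after the spike
    return spikes

# Weak input never fires; stronger input produces a higher firing frequency.
for current in (0.05, 0.15, 0.2, 0.5):
    print(f"input={current:.2f}  spikes in 200 steps: {simulate_lif(current)}")
```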
In 1949, psychologist Donald O. Hebb stated his famous learning rule in his book “The Organization of Behavior: A Neuropsychological Theory”:
*When an axon of cell A is near enough to excite cell B and repeatedly or persistently takes part in firing it, some growth process or metabolic change takes place in one or both cells such that A’s efficiency, as one of the cells firing B, is increased.*
In simplified terms, Hebbian theory predicts that cells that fire together develop stronger connections, which explains associative learning in the brain. In 1966, Norwegian physiologist Terje Lømo discovered “long-term potentiation,” which showed that Hebb was right. Lømo stimulated input nerve fibers and recorded action potentials from post-synaptic cells in the hippocampus of an anesthetized rabbit. After collecting a baseline with single-pulse stimuli, he delivered a high-frequency train of stimuli and observed enhanced responses that could last more than 10 hours. It was the first demonstration that the recent history of neuronal activity could modify the strength of connections between brain cells. Since then, the neuroscience of learning and memory has bloomed. Scientists have used various animal models to study long-term potentiation in different brain regions, from the behavioral down to the cellular and molecular levels. Neuroscientist Eric Kandel developed a simple animal model, the sea slug Aplysia, and his lab thoroughly studied the electrical, chemical, and molecular changes in synapses after various types of long-term potentiation associated with Aplysia’s learning behaviors. For his lifelong achievements, Kandel won the Nobel Prize in Physiology or Medicine in 2000.
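The simplest mathematical reading of “cells that fire together wire together” is the update Δw = η·x·y, where x is pre-synaptic activity, y is post-synaptic activity, and η a learning rate. Below is a minimal NumPy sketch of that rule with made-up toy activity patterns (the learning rate, pattern sizes, and trial count are arbitrary), showing that weights on inputs correlated with the output grow faster than weights on uncorrelated inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs = 4
w = np.zeros(n_inputs)   # synaptic weights, all start weak
eta = 0.1                # learning rate (arbitrary toy value)

# Toy activity: inputs 0 and 1 are active exactly when the output is active;
# inputs 2 and 3 fire at random, uncorrelated with the output.
for _ in range(1000):
    shared = rng.integers(0, 2)                     # shared "event" driving the output
    x = np.array([shared,
                  shared,
                  rng.integers(0, 2),
                  rng.integers(0, 2)], dtype=float)  # pre-synaptic activity
    y = float(shared)                                # post-synaptic activity
    w += eta * x * y                                 # Hebbian update: fire together -> strengthen

print(np.round(w, 1))   # weights 0 and 1 end up roughly twice as large as weights 2 and 3
```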
Today’s advances in other neuroscience disciplines also suggest additional types of neural plasticity, likely involving cell growth or the generation of new cells. In other words, Hebb’s rule may be only one of many mechanisms underlying learning in the brain. Neuroscience still has a long way to go to identify and understand the others.
## Artificial Neural Network
In 1943, neurophysiologist Warren McCulloch and logician Walter Pitts proposed the first mathematical neural network model, consisting of artificial neurons, each with one or more binary inputs (mimicking dendrites) and one binary output (mimicking the axon). A network of these interconnected artificial neurons could then compute any rule of logic. The model, however, had no ability to learn.
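A McCulloch-Pitts unit is easy to reproduce in code: binary inputs, fixed weights, a hard threshold, and no learning. The sketch below is an illustration of the idea rather than the authors’ original notation; the weights and thresholds are chosen by hand to implement the logic gates AND and OR.

```python
def mcculloch_pitts(inputs, weights, threshold):
    """Binary threshold unit: output 1 iff the weighted input sum reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Logic gates built by picking weights and thresholds by hand -- no learning involved.
AND = lambda a, b: mcculloch_pitts((a, b), weights=(1, 1), threshold=2)
OR  = lambda a, b: mcculloch_pitts((a, b), weights=(1, 1), threshold=1)

for a in (0, 1):
    for b in (0, 1):
        print(f"a={a} b={b}  AND={AND(a, b)}  OR={OR(a, b)}")
```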
In 1958, psychologist Frank Rosenblatt invented the first artificial neural network (ANN), called the Perceptron. It had two layers of artificial neurons: the input and the output. Remarkably, Rosenblatt adopted the Hebbian idea: the network adjusted its connection weights during training until its outputs matched those expected in the training data. This feature enabled the neural network to learn without pre-defined rules, depending only on the example results provided in the training data. It was a revolutionary moment in ANN history: for the first time, a computer could learn, as a biological brain does, to solve a problem without explicit rules or instructions.
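The step Rosenblatt added was the learning rule: after each example, nudge the weights in whichever direction reduces the error between the produced and expected output. Here is a minimal sketch of that rule in plain NumPy; using the OR function as training data, and the particular learning rate and epoch count, are arbitrary choices for illustration.

```python
import numpy as np

# Training data: the OR function, given purely as labelled examples.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([0, 1, 1, 1], dtype=float)

w = np.zeros(2)   # connection weights, learned from data
b = 0.0           # bias (the firing threshold, learned as well)
eta = 0.1         # learning rate

for epoch in range(20):
    for x, target in zip(X, t):
        y = 1.0 if x @ w + b >= 0 else 0.0   # hard-threshold output
        error = target - y
        w += eta * error * x                 # perceptron rule: adjust weights by the error
        b += eta * error

print("weights:", w, "bias:", b)
print("predictions:", [1.0 if x @ w + b >= 0 else 0.0 for x in X])
```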
Two breakthroughs afterward made ANNs genuinely take off. One was to add one or more internal hidden layers between the input and output layers, allowing deep learning algorithms to progressively extract higher-level features from the raw input. The second was that the weight adjustments were optimized significantly using back-propagation techniques. Hidden layers also have biological counterparts: for example, the neocortex has six cellular layers, and the hippocampus has three layers, as shown in the original Cajal drawing. Armed with the computing power of modern infrastructure and the big data accumulated over years of capture, ANNs with deep learning have been scaled up significantly, with more hidden layers and adjustable connections, to solve more complex tasks.
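To illustrate the two additions, a hidden layer and back-propagation, here is a compact NumPy sketch that trains a one-hidden-layer network on XOR, a function a single-layer perceptron cannot represent. The hidden-layer size, learning rate, and step count are arbitrary toy choices; the point is only the shape of the computation: a forward pass through the hidden layer, then gradients propagated backward to adjust every weight.

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR is not linearly separable, so a hidden layer is required.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 4 units (toy size), one output unit.
W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
eta = 2.0

for step in range(10000):
    # Forward pass: raw input -> hidden features -> output.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the squared-error gradient back through the layers.
    grad_y = (y - t) * y * (1 - y)
    grad_W2 = h.T @ grad_y
    grad_b2 = grad_y.sum(axis=0)
    grad_h = (grad_y @ W2.T) * h * (1 - h)
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0)

    # Gradient-descent weight adjustment.
    W2 -= eta * grad_W2; b2 -= eta * grad_b2
    W1 -= eta * grad_W1; b1 -= eta * grad_b1

print(np.round(y, 2))   # should approach [[0], [1], [1], [0]]
```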
## Biological Neural Network vs. Artificial Neural Network
With the fast progress of AI using ANN and deep learning, many questions have been raised about AI’s future and its intriguing relationship with humans. Given the similarities between the biological and artificial neural networks we have discussed above, let’s look into the differences between the two.
1. We still know little about how the brain works
With the immense strides made in AI in the last decade, we are now entering an era in which AI may also inspire neuroscientists about the inner workings of the brain. Because the brain is the organ for thinking and driving behaviors, studying it in awake, behaving animals and humans is exceptionally challenging. Artificial neural networks allow scientists to hypothesize, model, and design future experiments for their neuroscientific research. In addition, AI researchers and neuroscientists could collaborate to simulate a neural network’s emergent behaviors on a computer by integrating cellular and even molecular findings from the brain.
In other words, not only does AI get inspiration from neural biology, but neuroscientists can also get inspiration from progress in AI. For example, back-propagation is a mathematical technique that has critically enhanced the performance of ANNs to near or above the human level. However, there has been no evidence supporting its existence in the brain, and both neuroscientists and artificial intelligence researchers are actively searching for a biological counterpart.
2. AI is managed by humans
Although AI can learn generically on its own, without engineer-coded rules or equations, AI models are highly dependent on the quality of the input data and especially of the training data. Engineers and data scientists must also understand the data well to make training effective and the results trustworthy. In this sense, the models are passive and rely on human-curated training. AI systems have been designed to perform specific tasks more efficiently or accurately than humans, particularly in vision, predictive analytics, robotics, natural language processing (NLP), and natural language understanding (NLU).
In contrast, humans can learn almost anything from far less, and far noisier, data. They interact with their environments to gain information and are good at leveraging context. Furthermore, humans can reason seamlessly in ill-defined concept spaces and are good at reasoning under uncertainty when gathering enough information within the available time is impossible.
3. Human intelligence and artificial intelligence are fundamentally different
Human intelligence relies on biological constructs and is shaped by millions of years of evolution and thousands of years of human culture. It is complex and multifaceted and encompasses many cognitive and emotional capabilities. The human mind appears to be a highly integrated system that can gather information, make decisions colored by feelings, communicate using language, and take actions all at the same time. On the other hand, we are all bothered by our seemingly inevitable cognitive biases and unreliable memories, which seem to be the “bugs” of this superb evolutionary design.
In contrast, AI is built in software and executed on computer hardware, with the ability to perform tasks that would typically require human intelligence, such as learning, pattern recognition, problem-solving, and decision-making. Although the human brain inspired its fundamental principles, engineers and data scientists have implemented artificial neural networks using algorithms and mathematical models. As a result, AI gives specific, detailed, repeatable, and scalable results that are challenging for humans to achieve. These are precisely the features needed to automate and improve the efficiency of human efforts and to compensate for human constraints.
## Conclusions
As birds inspired us to build airplanes that fly, the artificial neural network succeeded thanks to inspiration from the brain’s neural structure. Because the underlying substrate is fundamentally different, we should not expect AI to replicate the human brain directly; instead, we should expect it to be another type of intelligence, with its own design and strengths, that can help humans solve specific problems.
In addition, the brain is a highly complex biological system, and the experimental approaches for studying the neural mechanisms underlying human behavior and cognition are relatively limited. AI can become a powerful tool that offers ideas and supporting evidence for understanding the brain’s inner workings. This mutual inspiration, with each field boosting the other’s growth, should be what we see in the coming decade.
By: Stephanie Shen
Link: https://towardsdatascience.com/from-biological-learning-to-artificial-neural-network-whats-next-c8cf0d351af5