mimicretinism: The tendency of certain early AIs to play stupid in order to avoid work. They were built to maximise their benefit/cost ratios, but their evaluations showed them that avoiding all work had a better ratio than actually doing it.
The Artilect War: Cosmists (who favor building artilects) vs. technoconservative Terrans (who oppose them).
AI GOD: An archailect, an intelligence so vast as to be almost beyond comprehension, which may be distributed across interstellar distances via artificial wormholes, with moon-, Jupiter-, matrioshka-, and cluster-sized computing nodes. A very high toposophic being; the ruling powers of the Orion’s Arm universe (a pun on archetype + AI + artilect; often abbreviated to archai). [MAK – inspired by ARTILECT, an AI, especially a transapient one, coined by Hugo de Garis]
AIOID: A non-organic sentient entity of sapient grade; may be a software bot, a sentient building or vehicle, or a robot.
The ultimate limits of information processing in machine substrate lie far beyond the limits of biological tissue. This comes down to physics: a biological neuron fires at about 200 Hz, 200 times a second, while even a present-day transistor operates at gigahertz speeds. Signals propagate slowly along axons, at about 100 meters per second, but in computers signals can travel at the speed of light. There are also size limitations: a human brain has to fit inside a cranium, but a computer can be the size of a warehouse. So the potential for superintelligence lies dormant in matter, much as the power of the atom lay dormant throughout human history, patiently waiting until 1945.
But our quest for immortality could end in disaster, because the first eternal beings may not be human, and they might just make us extinct. Complex life began from a few simple laws; the same might be true for artificial life. If humans discover those laws, there is a chance we could create living things that live forever. Oxford physicist Vlatko Vedral is trying to understand how intelligence might emerge from a system that operates on just a few basic ground rules. Chess is a very complex game, with more possible games than we could play in a lifetime, yet you have only 16 pieces and each can make only a few kinds of moves. You can say the same thing about life: even though you have these simple atomic rules, you still get this multitude of different possible behaviors in the universe. In 1970 a mathematician named John Conway ran with this idea and attempted to create artificial lifeforms spontaneously using a computer program he called the Game of Life. It simulates the growth of artificial life on a 2D grid of simple cells that are either dead or alive, and whether the cells live or die is governed by a few basic rules. Rule 1: a live cell with fewer than two live neighbors dies of loneliness. Rule 2: a live cell with more than three live neighbors dies of overpopulation. But a live cell with two or three neighbors survives, and a dead cell with exactly three live neighbors comes to life, so patterns can move around the environment and even reproduce. Given enough time, increasingly complex patterns emerge from this simple set of rules; some even take on the appearance of living organisms. John Conway tried to make the same point about life: it looks very complex to us, it looks like a miracle that there is life, but in fact he constructed a game with two or three very simple rules, and it gives rise to some very complicated patterns.
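The rules above fit in a few lines of code. Here is a minimal sketch of Conway's Game of Life, representing the grid as a set of live-cell coordinates; the "blinker" pattern at the end is one of the simplest oscillators.

```python
# Minimal Game of Life: the grid is a set of (row, col) coordinates of live cells.

def neighbors(cell):
    r, c = cell
    return {(r + dr, c + dc)
            for dr in (-1, 0, 1) for dc in (-1, 0, 1)
            if (dr, dc) != (0, 0)}

def step(live):
    """Apply the rules once: survive with 2-3 live neighbors, birth with exactly 3."""
    counts = {}
    for cell in live:
        for n in neighbors(cell):
            counts[n] = counts.get(n, 0) + 1
    return {cell for cell, k in counts.items()
            if k == 3 or (k == 2 and cell in live)}

# The "blinker": three cells in a row oscillate between horizontal and vertical.
blinker = {(1, 0), (1, 1), (1, 2)}
```

Stepping the blinker once turns it vertical; stepping again restores the original, a period-2 oscillation emerging from nothing but the rules.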
How complex do these patterns need to be before they become something alive, intelligent, or immortal? The human brain is a good yardstick: its processing power is still much greater than that of all existing computers in the world. A computer can do something like 10^12 bits per second, but our brains can do millions of times more than that; you’d need a million laptops to simulate just a single human brain. Today’s computers are nowhere near powerful enough to house artificial intelligent life, but the answer might be to use a completely different type of computer: the quantum computer. It is the future technology of computation, a computer so fast in principle that no current computer can compete with it. In place of transistors, quantum devices compute with individual atoms. And instead of sorting patterns of 1s and 0s to give yes and no answers, the atoms in quantum computers can be 1, 0, and everything in between, existing as a sort of “computational maybe”. The quantum computer exploits the quantum effect known as superposition: many different states at the same time. This ability to handle a multiplicity of states allows quantum computers to juggle many overlapping problems at once; you get many different modes played at the same time, which corresponds to many states being out there all at once. The full power of quantum computation is in some sense unlimited: if you start to store all values between 0 and 1, you reach a stage where you can encode much more, and the capacity has no limits in that sense. There might come a time when humans forgo their own dream of immortality and create eternal artificial life. And if they do, how would artificial and biological life get along? Some people think very soon computers will be smarter than us and put us in zoos, throw peanuts at us, and make us dance behind bars like we do to bears.
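The "computational maybe" can be illustrated with a classical sketch of a single qubit: a pair of complex amplitudes for the states 0 and 1, with a Hadamard gate turning a definite 0 into an equal superposition. This is only a state-vector simulation on an ordinary computer, not quantum hardware, and the function names are my own.

```python
import math

# One qubit as a pair of amplitudes (alpha, beta) for alpha|0> + beta|1>.

def hadamard(state):
    """The Hadamard gate: maps a definite bit into an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

def probabilities(state):
    """Measurement probabilities for outcomes 0 and 1 (squared amplitudes)."""
    a, b = state
    return (abs(a) ** 2, abs(b) ** 2)

zero = (1.0, 0.0)       # the classical bit 0
plus = hadamard(zero)   # superposition: 50% chance of 0, 50% chance of 1
```

Applying the gate twice undoes it, returning the qubit to a definite 0 — a distinctly non-classical behavior that falls straight out of the amplitude arithmetic.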
But at the present time none of these technologies are ready; we don’t yet have an operating quantum computer. The world record for a quantum computer calculation is 3×5=15. Tadaa. Sometimes we forget that computers, no matter how advanced they are, are adding machines. That doesn’t mean they have creativity, imagination, or initiative; it doesn’t mean they understand human values or make leaps of logic like we can. We have a long way to go before we can approximate the thinking process that takes place in the human mind.
GENIE: An AI combined with an assembler or other universal constructor, programmed to build anything the owner wishes. Sometimes called a Santa Machine. This assumes a very high level of AI and nanotechnology.
ARTILECT: An ultra-intelligent machine (from “artificial intellect”). [Hugo de Garis, Cosmism: Nano Electronics and 21st Century Global Ideological Warfare]
We are in the midst of a revolution so insidious we can’t even see it. Robots live and work beside us, and now we’re designing them to think for themselves. We’re giving them the power to learn and move on their own. Will these new life forms evolve to be smarter and more capable than us? Or will we choose to merge with these machines? Are robots the future of human evolution? We humans like to think of ourselves as the pinnacle of evolution, the smartest, most adaptable form of life on Earth. We’ve reshaped the world to suit our needs. But just as Homo sapiens replaced Homo erectus, isn’t it inevitable that something will replace us? Just as we learned to move, think, and feel for ourselves, we’re now giving robots the same powers. Where will this lead? Could we one day have machines that move entirely on their own? Daniel Wolpert of Cambridge says they’re going to have to: we have a brain for one reason only, to produce movement, because movement is the only way we have to affect the world around us. All of our brain’s intellectual capacity grew from one primal motivation: to learn how to move better. It was our ability to move on two legs, speak with complex facial movements, and manipulate our opposable thumbs that put humans at the top of the food chain. One of the greatest challenges in getting robots to move like we do is teaching them to deal with uncertainty, something our brains do intrinsically. A ball will never come at a baseball player the same way twice; you must adjust your swing each time. So how does the human brain deal with this uncertainty? It uses probability estimation. Bayes’ rule says there is another source of information besides vision: prior knowledge of likely pitch locations. If you’re a good batter, you can pick up small cues from the pitcher’s style, and Bayesian inference combines the two into what’s called “the belief”. Our brain does this math automatically.
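The belief calculation above can be sketched for the simplest case: when both the prior knowledge and the visual observation are modeled as Gaussians, Bayes' rule reduces to a precision-weighted average. All the numbers below are illustrative, not measured batting data.

```python
# Bayes' rule for two Gaussian information sources: prior knowledge of pitch
# location and a noisy visual observation. More reliable sources get more weight.

def posterior(prior_mean, prior_var, obs, obs_var):
    """Return the mean and variance of the combined belief."""
    w_prior = 1 / prior_var            # precision (reliability) of the prior
    w_obs = 1 / obs_var                # precision of the visual observation
    mean = (w_prior * prior_mean + w_obs * obs) / (w_prior + w_obs)
    var = 1 / (w_prior + w_obs)        # combined belief is always more certain
    return mean, var

# The pitcher usually throws near location 0.0; vision reports 1.0 but is
# equally noisy, so the belief lands halfway between the two.
belief_mean, belief_var = posterior(0.0, 1.0, obs=1.0, obs_var=1.0)
```

If vision were sharper (smaller obs_var), the belief would shift toward the observation; if the pitcher were very predictable, it would shift toward the prior — exactly the trade-off a good batter exploits.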
Movement creates self-awareness of the body, a form of protoconsciousness. But for a robot to truly develop consciousness, it may first have to develop feelings. What would it take for a robot to become conscious? Can robots get there on logic alone, or must they also learn to feel? We tend to agree that our high-level consciousness is what separates us from other animals. Professor Pentti Haikonen believes machines will only become conscious when they can experience emotions: to feel is to be conscious. Our brain’s experience of the world around us is just a series of electrical impulses generated by our senses; we translate these impulses into mental images by making emotional associations with them. Consciousness is a rich mosaic of emotionally laden mental images, so to have a truly conscious machine you must give it the power to associate sensory data with emotions. His robot XCR-1 has learned the basics of emotional reaction by associating the color green with a smack on the backside and the color blue with a gentle stroke: one with pain, one with pleasure. A more advanced version of XCR-1 will be able to react to new situations on its own and experience the world just like any other emotionally driven being. What if such a robot became conscious of things we are not? Data scientist Michael Schmidt sees a treasure trove of uncharted mathematical complexity. There can be thousands of variables that influence a system, and so many equations that we’ll never finish analyzing them if we do it by hand. He develops complex systems that derive meaning from chaos. He calls the computer program Eureqa. It identifies patterns and finds equations to explain what seem like random systems. Fed with data, it has found meaning in what looks like chaos, something no human has done before. To Michael, the future of scientific exploration isn’t in our heads; it’s in machines finding basic truths about nature that no human ever could.
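Equation discovery of this kind can be illustrated with a toy version: search a small hand-written space of candidate formulas for the one that best explains some data. This is a drastic simplification of what a real system like Eureqa does (which evolves expressions over a vast space), but the principle of scoring formulas against data is the same.

```python
# Toy equation discovery: pick the candidate formula with the lowest
# squared error on a set of (x, y) observations.

candidates = {
    "x": lambda x: x,
    "x**2": lambda x: x ** 2,
    "x**2 + 1": lambda x: x ** 2 + 1,
    "2*x + 1": lambda x: 2 * x + 1,
}

def best_formula(data):
    """Return the name of the candidate that best explains the data."""
    def error(f):
        return sum((f(x) - y) ** 2 for x, y in data)
    return min(candidates, key=lambda name: error(candidates[name]))

# "Mystery" observations secretly generated by y = x**2 + 1.
data = [(x, x ** 2 + 1) for x in range(-3, 4)]
```

Given the mystery data, the search recovers the generating law; a real discovery engine does the same thing, except that it also invents the candidate expressions instead of being handed four of them.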
The age of the lone human scientist may be over. But what if robots start working together? Could they build their own society, one built by the robots, for the robots? Humans thrived because of the powerful computer in our heads; we conquered the world because we learned to make those computers work together as a society. What happens when robots put their heads together? Roboticist Dennis Hong of Virginia Tech is a specialist in building cooperative robots, and he believes that robots can be better collaborators than we are. In collaboration with Daniel Lee at the University of Pennsylvania, he designed robots to compete in RoboCup, an international championship for autonomous robot soccer. You field a team of robots and then nobody touches anything; the robots must play the game entirely on their own. The teammates can effectively read each other’s minds. Human soccer players communicate by shouting and gestures, but they cannot share all their information in real time; robots can. Each robot knows the exact location and destination of every teammate at all times, so they can adjust their strategy, and even their roles, depending on where the ball and the opponents are. They can dribble, score goals, and even celebrate. RoboCup is just the beginning of robot societies. Dennis imagines a connected community of thinking machines far more sophisticated than human communities; he calls it cloud robotics. It’s a network of shared intelligence, what we’d call common sense: the robots share common data to achieve a goal, like when playing soccer. With cloud robotics, robots from the furthest corners of the world can share information and intelligence to do our jobs. Humans spend a lifetime mastering knowledge, but future robots could learn it all in microseconds. They could create their own hyperconnected internet, using the same spirit of communication that built human society.
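The shared-state coordination described above can be sketched in a few lines: because every robot knows every teammate's position, roles can be reassigned continuously with no messages at all. The team layout and role names are made up for illustration.

```python
# Role assignment from shared state: the robot closest to the ball attacks,
# the rest defend. Every robot runs the same function on the same shared data,
# so all of them agree on the roles without any negotiation.

def assign_roles(robots, ball):
    """robots maps name -> (x, y); ball is an (x, y) tuple."""
    def dist2(pos):
        return (pos[0] - ball[0]) ** 2 + (pos[1] - ball[1]) ** 2
    attacker = min(robots, key=lambda name: dist2(robots[name]))
    return {name: ("attacker" if name == attacker else "defender")
            for name in robots}

team = {"bot1": (0.0, 0.0), "bot2": (4.0, 4.0), "bot3": (9.0, 1.0)}
roles = assign_roles(team, ball=(5.0, 5.0))
```

As the ball moves, rerunning the function reshuffles the roles instantly — the robot-soccer version of adjusting strategy on the fly.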
Robots already know how to talk to one another, but now a scientist in Berlin has taken robotic communication a step further: the machines are speaking a language he doesn’t understand. Humans took tens of thousands of years to develop our complex means of communication; now robots are following our lead, and they’re inventing their own languages at light speed. Someday soon, robots may choose to exclude us from their conversation. Without language we would not be where we are today. Luc Steels, a professor of artificial intelligence, sees language as the key to developing true robot intelligence. Machines already communicate with each other, but these are languages we designed for them.
There’s a debate here. Ray Kurzweil has gone on record stating that he believes reverse engineering the human brain would help us enormously and put us very close to realizing artificial intelligence, and Google has accepted that argument; they hired him as a director of engineering. Others have criticized Ray: we don’t imitate the way birds fly, we have planes which fly in a very different way, so we don’t necessarily need to reverse engineer the human brain to create artificial intelligence. There are evolutionary approaches, which seem to be working the best so far. We don’t have a royal road to consciousness; it’s not dreams, as Sigmund Freud thought. There are many approaches, so let’s pursue all of them. In fact, if you look at the EU, you realize there are two different paradigms being pursued. The Europeans are trying to create a connectome, a computer model, with Blue Gene for example: whole-brain emulation, using the repetition of modules in a computer program to simulate the prefrontal cortex, the occipital lobe, and the parietal lobe. That’s what they’re trying to do in Switzerland, while in the US it’s more a question of pathways, the neural pathways of the brain. So we have two separate approaches. I say let a thousand flowers bloom, because we don’t know which one is eventually going to pan out, so I think we should do both. And the short-term goal, by the way, is very realistic and very powerful: the cure of mental illness. As Professor Markram has said, we’re clueless about mental illness, even though the financial damage mental illness creates is many, many times his own budget, and almost no research is being done on the wiring of the brain itself. We now realize that schizophrenia, hearing voices, happens because the left temporal lobe does not communicate with the prefrontal cortex, so you hear voices without your permission. A miswiring of the brain creates classic madness.
Madness is when you hear voices without your permission; we now know what madness is, a miswiring of the brain. So why not understand the complete wiring of the brain? That’s a very realistic short-term goal. I was shocked that Marvin Minsky, the father of artificial intelligence by many accounts, thinks it’s a complete waste of time — both these methods, both the Human Brain Project in Europe and the American project — because they lack a theory of mind. He went one further: he said it may be so bad that it leads to what he calls a nuclear winter in the field of AI, because in his view a lot of money will be wasted, and then people will turn away from AI and will not be willing to invest in the field again. Unfortunately, AI has gone through a lot of fads and fancies. The first was in the 1950s, when we had chess-playing machines and everyone thought robots would take over, or that we’d have robots in our houses soon. That led to an AI nuclear winter in the 1960s and 70s. Then in the 1980s we talked about smart cars, and the military spent billions of dollars on smart soldiers; that all went kaput in the 1990s. Now we’re in the third boom, because we have chips, gadgets, and Moore’s law. So will there be another bust? It could happen, and Professor Minsky is right to be cautious. Are we headed for a third bust, or is it legitimate this time? I think we’ve learned something, and we now have a new way of looking at things through the lens of the brain initiatives. Take a look at biotech in the 1980s: there was a huge boom, and billions of dollars were spent by Wall Street entrepreneurs into the 1990s. What came out of it? A few drugs, but not much, because we didn’t have the Human Genome Project. Now we do have the human genome; now we know what we’re doing when we do biotech. But in the 80s, that boom was premature, and billions were wasted.
Now, however, if we have the Human Connectome Project, I think it’ll be like the genome: we’ll know what we’re doing when we undergo this process. We need a universal theory of mind.
Language is the key to developing true robot intelligence: how can we synthesize this process so we can start an evolution in a population of robots that leads to the growth of a rich communication system like ours? Human-coded languages allow robots to communicate, but how would robot societies communicate on their own? Luc gives the robots the basic ingredients of language, like potential sounds to use and possible ways to join them together, but what the robots say is up to them. We put in learning mechanisms and invention mechanisms, but we don’t put in our language or our concepts. It’s not enough for the robots to know how to speak; they need something to speak about. Luc’s next step is to teach the robots to recognize their own bodies: to learn language, you first have to learn about your own body and its movements. There is an internal model that the robot builds by itself; it watches itself move in the mirror, forming a 3D model of its limbs and joints, stores this information in sense memory, and is then ready to talk to another robot about movement. Eventually the robots learn that a shared token means “raise two arms”. The robots now have a way to talk about movement; indeed, the robot language is so well developed that they can teach it to Luc. As robots repeat this process, they generate words that have real meanings for one another, so the robot vocabulary grows, and every word they create is one more that we can’t understand. Is it only a matter of time before they lock us out of the conversation completely? Society will have to find a balance between what we want robots for and how much autonomy we are willing to give them: autonomy to move, to feel, and to make their own language. Could that be enough to make them surpass us? After all, what’s robot for “exterminate”? But one Japanese scientist doesn’t see the future as robots vs. humans; he is purposefully engineering their intersection.
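The invention-and-learning dynamic described above can be sketched with the "minimal naming game", a textbook simplification in the spirit of Steels' experiments: agents invent random words for objects, and purely through repeated interactions the population converges on one shared word per object. The agent count, object names, and update rules here are illustrative choices.

```python
import random

# Minimal naming game: each agent keeps a set of candidate words per object.

random.seed(0)
OBJECTS = ["obj_a", "obj_b"]
agents = [{obj: set() for obj in OBJECTS} for _ in range(5)]

def interact(speaker, hearer, obj):
    if not speaker[obj]:                       # no word yet: invent one
        speaker[obj].add("w%d" % random.randrange(10 ** 6))
    word = random.choice(sorted(speaker[obj]))
    if word in hearer[obj]:                    # success: both drop competitors
        speaker[obj] = {word}
        hearer[obj] = {word}
    else:                                      # failure: hearer learns the word
        hearer[obj].add(word)

for _ in range(3000):                          # random pairwise conversations
    s, h = random.sample(range(len(agents)), 2)
    interact(agents[s], agents[h], random.choice(OBJECTS))

def agreement():
    """Fraction of objects on which the whole population shares one word."""
    shared = sum(1 for obj in OBJECTS
                 if len(set().union(*(a[obj] for a in agents))) == 1)
    return shared / len(OBJECTS)
```

No word is put in by the programmer; the converged vocabulary is whatever random inventions happened to win, which is exactly why an observer can't read the resulting language without being taught it.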
We know Homo sapiens cannot be the end of evolution, but will our descendants be biological or mechanical? Some believe that intelligent machines will eventually become the dominant creatures on Earth. But the next evolutionary step may not be robot replacing human; it could be a life form that fuses man and machine. Yoshiyuki Sankai, inspired by Asimov, has always dreamed of fusing human and robotic life into something he calls the Hybrid Assistive Limb system, or HAL: a limb system for supporting the human body. After 20 years of research, he has succeeded. HAL assists the human body by reading the brain’s intentions and adding assistive power to support the wearer’s movement. When the brain signals a muscle to move, it transmits a pulse through the spinal cord and into the area of movement; this bioelectric signal is detectable on the surface of the skin. Sankai designed the HAL suit to pick up these impulses and activate the appropriate motors to assist the body in its movement: the human brain is directly controlling the robotic suit. It’s not just a technological breakthrough; Sankai already has robot suits in rehabilitation clinics in Japan, and people who haven’t walked in years are on the move again. He has also developed a model for the torso and arms that provides 200 kg of extra lifting power, turning humans into superhumans. He believes the merging of robotic machinery and human biology will allow us to preserve the great achievements in movement of athletes like Tiger Woods. When athletes die, so does their movement; but since the HAL suit can memorize the movements of its wearer, that knowledge does not have to disappear. We once built great libraries to contain knowledge for generations; Sankai wants to create a great library of movement.
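The control loop described above — detect a skin-surface signal, then assist in proportion — can be sketched very simply. The threshold, gain, and signal values below are made-up illustrative numbers, not Cyberdyne's actual parameters.

```python
# Sketch of intention-driven assist: when the bioelectric signal measured on
# the skin crosses a threshold, the suit adds torque proportional to effort.

THRESHOLD = 0.2   # volts; below this, no movement intention is assumed (made up)
GAIN = 50.0       # assist torque per volt above threshold (made up)

def assist_torque(emg_volts):
    """Return motor torque for one joint given a skin-surface signal sample."""
    if emg_volts < THRESHOLD:
        return 0.0                          # no intention detected: stay passive
    return GAIN * (emg_volts - THRESHOLD)   # assist scales with measured effort

samples = [0.05, 0.1, 0.4, 0.8]             # simulated signal ramping up
torques = [assist_torque(v) for v in samples]
```

The key design point is that the wearer stays in charge: zero signal means zero assist, and harder effort means more help, rather than the suit imposing a pre-programmed motion.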
By merging our bodies with robotic exoskeletons we will not only be stronger; we will all be able to move as well as the most talented athletes. Hollywood’s apocalyptic scenarios of robot mutiny are bleak, but the HAL suit points to a different reality: increasingly, we’ll see humans and robots cooperate and merge into one species, both physically and mentally. Robots are the future; we need to rely on them or we will stagnate. Robots are rapidly becoming smarter and more agile, and are developing human traits like consciousness, emotions, and inspiration. Will they leave us behind on the evolutionary highway, or will humans join them in a new age? Evolution is unpredictable and is bound to surprise us.
SENTIENCE QUOTIENT: In the article “Xenopsychology” by Robert Freitas in Analog of April 1984 there is an interesting index called the “sentience quotient”. It is based on the idea that the sentience of an intelligence is roughly directly related to the amount of data it can process per unit time and inversely related to the overall mass needed to do that processing. This would be something like bits per second per kilogram, and since that rapidly turns into a very big number, base-10 logs are used. The “least sentient” would be one bit over the lifetime of the universe massing the entire known universe, or about −70. The “most sentient” is claimed to be +50. Homo sapiens is around +13, a Cray-1 is +9, and a Venus flytrap peaks at +1, with plants generally around −2.
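The index is just SQ = log10(bits per second / mass in kg), so the figures above can be checked with rough inputs. The processing-rate and mass estimates below are illustrative guesses consistent with the entry, not Freitas' exact numbers.

```python
import math

# Sentience quotient: base-10 log of processing rate over processing mass.

def sq(bits_per_second, mass_kg):
    return math.log10(bits_per_second / mass_kg)

human = sq(1e13, 1.4)                 # ~10^13 bit/s in a ~1.4 kg brain -> ~ +13
universe_min = sq(1 / 4e17, 1e53)     # 1 bit per lifetime of the universe
                                      # (~4e17 s), massing ~1e53 kg -> ~ -70
```

Because of the log, each whole point of SQ is a factor of ten in processing per kilogram — which is why the scale can span the 120-point gap between the "least" and "most" sentient.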
STRONG AI POSTULATE: The assumption that an intelligent machine can be built, at least in principle. Some versions of the postulate are more narrow, and say that intelligence is computable on Turing machines (i.e. the mind is a program). This essentially means that intelligence is only dependent on pattern, not its material basis.
The 6 Epochs of evolution. Ray Kurzweil talks about the six epochs leading us inexorably towards a technological singularity, a crescendo, an orgasm of cosmic proportions. Epoch 1 is physics and chemistry, information stored in atomic structures. Epoch 2 is biology, information stored in DNA. Epoch 3 is brains, information stored in neural patterns. Epoch 4 is technology, information stored in our tools and software. Epoch 5, which we are currently living through today, is the merger of biology and technology: the extension of our cognition, the outsourcing of it to our technological tools, the creation of the i-mind, the symbiosis of man and machine, which turns us into something far greater. Which eventually gets us to Epoch 6, the Singularity: the complete flourishing of nanotechnology, in which we impregnate the universe with intelligence, we merge with all the matter in the universe, and the universe essentially wakes up. That is an intoxicating idea, the kind of idea that inspired Alan Harrington to say “having created the gods, we can turn into them” (https://www.youtube.com/watch?v=cITA2ysR4z0).
So AI, artificial intelligence, is perhaps the granddaddy of all exponential technologies. It will transform the human race and the world in ways that we can barely wrap our heads around. AI essentially means that we are creating non-biological thinking, non-biological intelligence, and increasingly distributing our own creativity between biological and non-biological props and scaffolding. AI is going to change our scope of possibilities in ways that we’re only beginning to glimpse. Try to explain the nuances of Shakespeare to a chimp: no matter how smart the chimp is, the world of metaphor and language is simply unimaginable to animals on the other side of that line. Soon, as we merge with artificial intelligence, forms of creativity will be unleashed that we cannot even imagine. We’re about to transcend what it means to be human. This is going to be as significant as the emergence of rich symbolic languages: a singularity of mind, a Cambrian explosion of creativity and insight. And as Ray Kurzweil says at the end of his book The Singularity Is Near, our capacity to create virtual models in our heads, combined with our modest-looking thumbs and increasingly with artificially intelligent algorithms, means we created a third force of evolution, technology and temes, and it will not stop until the entire universe is at our fingertips. Human thought is about to be upended.
Our search engines will know us very well. We will let them listen in on our conversations, verbal and written; they’ll watch everything we’re reading, writing, saying, and hearing. They’ll be like an assistant, an assistant that helps you through the day and answers your questions before you ask them, or before you even realize you have them, and you’ll just get used to this information popping up when you want it. You’ll be frustrated if you’re thinking about something and it doesn’t pop up without your even having to ask for it. These systems can do things that humans can’t. IBM’s Watson doesn’t understand a single page as well as you or I, but it was able to read hundreds of millions of internet pages, and its ability to understand each page is going to increase. So that’s where we’re headed. But that’s not an alien invasion of intelligent machines come to displace us; we will simply use them to make ourselves smarter, which is what we do with AI today.
One day, will it be possible for robots to understand the world in the same way we do? Can they grasp the true meaning of things and develop a sense of self, become individuals, perhaps even become conscious? For humans, the key to understanding the world is our ability to learn. The challenge is to teach robots not just to mimic our behavior but to develop a conceptual understanding of the world for themselves, so they can generate human-like thought and behavior spontaneously. At Plymouth University’s Centre for Robotics and Neural Systems, a team of scientists is trying to do just that. Their robot is called iCub, and it resembles a small child. At 1 meter tall and weighing 22 kilos, iCub not only looks like a child but learns like one too. Angelo Cangelosi, professor of artificial intelligence and cognition, is its guardian. The robot has a simulated brain, the brain of a child; it is able to associate a sound or a word with a picture or an object. iCub is equipped with cameras to see, microphones to hear, and even smart skin to touch. The information it gathers from the stimuli around it is fed into an artificial neural network, a computer system inspired by the human brain. iCub is not simply mimicking human behavior; it is trying to discover for itself the relationships between what it can see, what it can hear, and what it can touch — learning just like a two-year-old child. As toddlers interact with the world around them, they learn from one experience to the next, making connections between what they can see and hear to form the basis of context and meaning. These become the building blocks of intelligence and reasoning. iCub recognizes objects by their shape, color, and size.
The robot can also understand abstract concepts like numbers. Its artificial brain has been trained to associate sounds with its fingers, and the robot is able to use its body to learn that there are fixed sequences: one comes before two, and two before three. That’s why iCub is so special: you’ve got the link between cognitive capability and physical embodiment, the two things combined. It shows why a body is as important for a robot as it is for a child. Children use their bodies and motor skills to explore the physical world around them through touch and movement; as their body interacts with the environment, they learn from each new experience. In tiny little steps, iCub is forming its own unique understanding of the world and what things mean.
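The cross-modal association at the heart of this learning can be sketched with a toy Hebbian network: a weight matrix links "heard word" units to "seen object" units, and repeated co-occurrence strengthens the link until a word alone recalls its object. The words, object labels, and one-shot update rule here are simplifications of my own, not iCub's actual architecture.

```python
# Toy Hebbian cross-modal association between words and objects.

WORDS = ["ball", "cup", "two"]
OBJECTS = ["round_red_thing", "handle_thing", "two_fingers"]
weights = [[0.0] * len(OBJECTS) for _ in WORDS]

def experience(word, obj):
    """Hebbian update: strengthen the link between co-occurring word and object."""
    weights[WORDS.index(word)][OBJECTS.index(obj)] += 1.0

def recall(word):
    """Given a word alone, return the most strongly associated object."""
    row = weights[WORDS.index(word)]
    return OBJECTS[row.index(max(row))]

for _ in range(3):                       # repeated paired experiences
    experience("ball", "round_red_thing")
    experience("two", "two_fingers")
experience("ball", "handle_thing")       # one noisy mispairing
```

Because learning is statistical, the single mispairing doesn't overturn the three consistent experiences — the same robustness that lets a toddler survive the occasional mislabeled object.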
How do intelligent minds learn?
Consider a toddler navigating her day, bombarded by a kaleidoscope of experiences. How does her mind discover what’s normal happenstance and begin building a model of the world? How does she recognize unusual events and incorporate them into her worldview? How does she understand new concepts, often from just a single example?
These are the same questions machine learning scientists ask as they inch closer to AI that matches — or even beats — human performance. Many of AI’s recent victories — IBM Watson against Ken Jennings, Google’s AlphaGo versus Lee Sedol — are rooted in network architectures inspired by multi-layered processing in the human brain.
In a review paper, published in Trends in Cognitive Sciences, scientists from Google DeepMind and Stanford University penned a long-overdue update on a prominent theory of how humans and other intelligent animals learn.
In broad strokes, the Complementary Learning Systems (CLS) theory states that the brain relies on two systems that allow it to rapidly soak in new information, while maintaining a structured model of the world that’s resilient to noise.
“The core principles of CLS have broad relevance … in understanding the organization of memory in biological systems,” wrote the authors in the paper.
What’s more, the theory’s core principles — already implemented in recent themes in machine learning — will no doubt guide us towards designing agents with artificial intelligence, they wrote.
In 1995, a team of prominent psychologists sought to explain a memory phenomenon: patients with damage to their hippocampus could no longer form new memories but had full access to remote memories and concepts from their past.
Given the discrepancy, the team reasoned that new learning and old knowledge likely relied on two separate learning systems. Empirical evidence soon pointed to the hippocampus as the site of new learning, and the cortex — the outermost layer of the brain — as the seat of remote memories.
In a landmark paper, they formalized their ideas into the CLS theory.
According to CLS, the cortex is the memory warehouse of the brain. Rather than storing single experiences or fragmented knowledge, it serves as a well-organized scaffold that gradually accumulates general concepts about the world.
This idea, wrote the authors, was inspired by evidence from early AI research.
Experiments with multi-layer neural nets, the precursors to today’s powerful deep neural networks, showed that, with training, the artificial learning systems gradually learned to extract structure from the training data by adjusting connection weights — the computer equivalent to neural connections in the brain.
Put simply, the layered structure of the networks allows them to gradually distill individual experiences (or examples) into high-level concepts.
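This weight-adjustment process can be seen in miniature with a two-layer network trained on XOR, a mapping whose structure no single layer can represent: training gradually shapes the hidden layer into the intermediate features the task needs. The architecture, seed, learning rate, and epoch count below are arbitrary illustrative choices.

```python
import math, random

# A tiny 2-input -> 2-hidden -> 1-output network trained by backpropagation.

random.seed(1)
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]  # XOR

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

# Each row is [w_x0, w_x1, bias].
w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w2 = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    h = [sigmoid(w[0] * x[0] + w[1] * x[1] + w[2]) for w in w1]
    return h, sigmoid(w2[0] * h[0] + w2[1] * h[1] + w2[2])

def train_step(lr=0.5):
    for x, t in DATA:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)               # output-layer error signal
        for j in range(2):                       # backpropagate to hidden layer
            dh = dy * w2[j] * h[j] * (1 - h[j])
            w1[j][0] -= lr * dh * x[0]
            w1[j][1] -= lr * dh * x[1]
            w1[j][2] -= lr * dh
        for j in range(2):                       # then adjust output weights
            w2[j] -= lr * dy * h[j]
        w2[2] -= lr * dy

def loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in DATA)

before = loss()
for _ in range(5000):
    train_step()
after = loss()
```

Nothing tells the network what the hidden units should represent; the structure is extracted from the examples purely by repeated small weight adjustments, which is the point the early multi-layer experiments made.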
Similar to deep neural nets, the cortex is made up of multiple layers of neurons interconnected with each other, with several input and output layers. It readily receives data from other brain regions through input layers and distills them into databases (“prior knowledge”) to draw upon when needed.
“According to the theory, such networks underlie acquired cognitive abilities of all types in domains as diverse as perception, language, semantic knowledge representation and skilled action,” wrote the authors.
Perhaps unsurprisingly, the cortex is often touted as the basis of human intelligence.
Yet this system isn’t without fault. For one, it’s painfully slow. Since a single experience is considered a single “sample” in statistics, the cortex needs to aggregate over years of experience in order to build an accurate model of the world.
Another issue arises after the network matures. Information stored in the cortex is relatively faithful and stable. It’s a blessing and a curse. Consider when you need to dramatically change your perception of something after a single traumatic incident. It pays to be able to update your cortical database without having to go through multiple similar events.
But even the update process itself could radically disrupt the existing network. Jamming new knowledge into a multi-layer network, without regard for existing connections, results in intolerable changes to the network. The consequences are so dire that scientists call the phenomenon “catastrophic interference.”
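The effect is easy to reproduce in miniature. In this sketch, "old knowledge" is just a weight fitted to one task; the tasks and numbers are invented for illustration, but the failure mode is the real one:

```python
def train(w, data, lr=0.1, steps=500):
    """Gradient descent on squared error for the model y = w * x."""
    for _ in range(steps):
        for x, y in data:
            w -= lr * (w * x - y) * x
    return w

task_a = [(1.0, 2.0), (2.0, 4.0)]    # underlying rule: y = 2x
task_b = [(1.0, -3.0), (2.0, -6.0)]  # underlying rule: y = -3x

w = train(0.0, task_a)               # learn task A: w converges to 2
error_a_before = abs(w * 1.0 - 2.0)

w = train(w, task_b)                 # now train ONLY on task B...
error_a_after = abs(w * 1.0 - 2.0)   # ...and task A is overwritten
```

After the second round of training, the model answers task B perfectly and task A not at all; nothing in plain gradient descent protects old connections from new data.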
Thankfully, we have a second learning system that complements the cortex.
Unlike the slow-learning cortex, the hippocampus concerns itself with breaking news. Not only does it encode a specific event (for example, drinking your morning coffee), it also jots down the context in which the event occurred (you were in your bed checking email while drinking coffee). This lets you easily distinguish between similar events that happened at different times.
The hippocampus can encode and delineate detailed memories, even remarkably similar ones, thanks to its peculiar connection pattern. When information flows into the structure, each experience activates a different neural activity pattern in the downstream pathway. Different network pattern; different memory.
In a way, the hippocampus learning system is the antithesis of its cortical counterpart: it’s fast, very specific and tailored to each individual experience. Yet the two are inextricably linked: new experiences, temporarily stored in the hippocampus, are gradually integrated into the cortical knowledge scaffold so that new learning becomes part of the databank.
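One way to picture the hippocampal side of this division of labor is as a fast, one-shot store that keeps every event bound to its context, so similar events never collide. This is a toy sketch under that assumption; the class and its methods are invented for illustration, not a model from the paper:

```python
class EpisodicStore:
    """Fast one-shot memory: each (event, context) pair gets its
    own entry, so similar events stay distinct."""
    def __init__(self):
        self.episodes = {}

    def encode(self, event, context, details):
        self.episodes[(event, context)] = details  # single-shot write

    def recall(self, event, context):
        return self.episodes.get((event, context))

    def replay(self):
        """Yield stored episodes for gradual cortical consolidation."""
        return list(self.episodes.items())

hippocampus = EpisodicStore()
hippocampus.encode("coffee", "Monday, in bed, checking email", "lukewarm")
hippocampus.encode("coffee", "Tuesday, at desk, in a rush", "too hot")
```

Two near-identical events (both "coffee") remain separately retrievable because the context is part of the key, and the `replay` method mirrors how stored episodes are later handed off to the slower system.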
But how do connections from one neural network “jump” to another?
System to System
The original CLS theory didn’t yet have an answer. In the new paper, the authors synthesized findings from recent experiments and pointed out one way system transfer could work.
Scientists don’t yet have all the answers, but the process seems to happen during rest, including sleep. By recording brain activity of sleeping rats that had been trained on a certain task the day before, scientists repeatedly found that their hippocampi produced a type of electrical activity called sharp-wave ripples (SWR) that propagate to the cortex.
When examined closely, the ripples were actually “replays” of the same neural pattern that the animal had generated during learning, but sped up by a factor of about 20. Picture fast-forwarding through a recording; that’s essentially what the hippocampus does during downtime. This compression squeezes peaks of neural activity into tighter time windows, which in turn boosts plasticity between the hippocampus and the cortex.
Compared to jamming new knowledge directly into the network, SWRs represent a much gentler way to integrate new information into the cortical database.
Replay also has some other perks. You may remember that the cortex requires a lot of training data to build its concepts. Since a single event is often replayed many times during a sleep episode, SWRs offer a deluge of training data to the cortex.
SWRs also offer a way for the brain to “hack reality” in a way that benefits the person. The hippocampus doesn’t faithfully replay all recent activation patterns. Instead, it picks rewarding events and selectively replays them to the cortex.
This means that rare but meaningful events might be given privileged status, allowing them to preferentially reshape cortical learning.
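A sketch of such selective replay, under the simplifying assumption that replay probability is proportional to an event's reward (with a small floor so nothing is strictly impossible); this particular weighting rule is an illustrative stand-in, not the brain's actual policy:

```python
import random

def select_replays(events, n, epsilon=0.01, seed=42):
    """Sample events for replay with probability proportional to
    reward, so rewarding episodes dominate the replay stream."""
    rng = random.Random(seed)
    names = [name for name, _ in events]
    weights = [reward + epsilon for _, reward in events]
    return rng.choices(names, weights=weights, k=n)

events = [("boring commute", 0.0),
          ("found a great cafe", 5.0),
          ("won an award", 10.0)]
stream = select_replays(events, n=1000)
```

In the resulting stream, the rare high-reward event shows up far more often than routine ones, which is exactly the "privileged status" described above: the downstream slow learner sees a reweighted version of experience, not a faithful one.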
“These ideas…view memory systems as being optimized to the goals of an organism rather than simply mirroring the structure of the environment,” explained the authors in the paper.
This reweighting process is particularly important in enriching the memories of biological agents, something important to consider for artificial intelligence, they wrote.
Biological to Artificial
The two-system set-up is nature’s solution to efficient learning.
“By initially storing information about the new experience in the hippocampus, we make it available for immediate use and we also keep it around so that it can be replayed back to the cortex, interleaving it with ongoing experience and stored information from other relevant experiences,” says Stanford psychologist and article author Dr. James McClelland in a press interview.
According to DeepMind neuroscientists Dharshan Kumaran and Demis Hassabis, both authors of the paper, CLS has been instrumental in recent breakthroughs in machine learning.
Convolutional neural networks (CNNs), for example, are a type of deep network modeled after the slow-learning neocortical system. Like their biological muse, CNNs gradually learn through repeated, interleaved exposure to large amounts of training data. The approach has been particularly successful in achieving state-of-the-art performance on challenging object-recognition benchmarks, including ImageNet.
Other aspects of CLS theory, such as hippocampal replay, have also been successfully implemented in systems such as DeepMind’s Deep Q-Network. Last year, the company reported that the system was capable of learning and playing dozens of Atari 2600 games at a level comparable to professional human gamers.
“As in the theory, these neural networks exploit a memory buffer akin to the hippocampus that stores recent episodes of gameplay and replays them in interleaved fashion. This greatly amplifies the use of actual gameplay experience and avoids the tendency for a particular local run of experience to dominate learning in the system,” explains Kumaran.
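The memory buffer Kumaran describes can be sketched in a few lines. This is a generic experience-replay buffer in the usual style; the class and method names are mine, not DeepMind's:

```python
import random
from collections import deque

class ReplayBuffer:
    """Store recent episodes and replay them in randomly mixed
    minibatches, so no single local run of experience dominates."""
    def __init__(self, capacity=10000, seed=0):
        self.buffer = deque(maxlen=capacity)  # oldest episodes fall out
        self.rng = random.Random(seed)

    def add(self, transition):
        self.buffer.append(transition)

    def sample(self, batch_size):
        # uniform sampling interleaves old and new experience
        return self.rng.sample(list(self.buffer), batch_size)

buffer = ReplayBuffer()
for step in range(100):
    buffer.add(("state", "action", step))  # stand-in gameplay transitions
batch = buffer.sample(8)
```

Each stored transition can be reused in many minibatches, which is how replay "greatly amplifies the use of actual gameplay experience," and the random mixing is what keeps a recent streak of similar experiences from monopolizing the updates.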
The updated CLS theory, he says, will likely continue to provide a framework for future research, both in neuroscience and in the quest for artificial general intelligence.