Remember when phones looked like this? Then phones got faster and faster, and every two years you probably upgraded your phone from 16 gigs to 32 and so on and so forth. This technological progress we've all been participating in for years hinges on one key trend, called "Moore's Law". Intel cofounder Gordon Moore made a prediction in 1965 that integrated circuits were the path to cheaper electronics. "Moore's Law" states that the number of transistors, which are tiny switches that control the flow of an electrical current, that can fit in an integrated circuit will double every two years, while the cost halves. Chip power goes up, cost goes down. That exponential growth has brought massive advances in computing power, hence the tiny computers in our pockets which you are using to watch this. A single chip today can contain billions of transistors, and each transistor is about 14 nanometers across, which is smaller than most human viruses. Now, Moore's Law isn't a law of physics, it's just a good hunch that's driven companies to make better chips. It's even been revised over time. But experts are claiming that the trend is slowing down. Intel recently disclosed that it's becoming more difficult to roll out smaller transistors in that two-year time frame while also remaining affordable. So to power the next wave of electronics, there are a few promising options in the works. One of those is quantum computing. Another, still in the lab stage, is neuromorphic computing: computer chips modeled after our own brains. They're basically capable of learning and remembering all at the same time, at an incredibly fast clip. So let's start with the human brain. Your brain has billions of neurons, each of which forms synapses, or connections, with other neurons. Synaptic activity relies on ion channels, which control the flow of charged atoms like sodium and calcium that make your brain function and process properly.
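The double-every-two-years, cost-halves-every-two-years claim is easy to sketch in a few lines. This is a minimal illustration of the trend as the transcript states it; the starting figures (2,300 transistors at a normalized cost of 1.0) are illustrative assumptions, not numbers from this paragraph:

```python
# Moore's Law as stated above: transistor count doubles every two
# years while cost halves. Starting values are illustrative only.
def moores_law(start_count, start_cost, years):
    """Project transistor count and relative cost `years` years out."""
    doublings = years // 2                 # one doubling per 2-year period
    return start_count * 2**doublings, start_cost / 2**doublings

count, cost = moores_law(2_300, 1.0, 20)   # project 20 years ahead
print(count)   # 2,300 * 2^10 = 2,355,200 transistors
print(cost)    # cost falls to 1/1024 of the original
```

Ten doublings in twenty years is already a thousandfold change in both directions, which is why the trend compounds so dramatically.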
So a neuromorphic chip copies that model by relying on a densely connected web of transistors that mimic the activity of ion channels. Each chip has a network of cores, with inputs and outputs that are wired to additional cores, which all operate in conjunction with each other. Because of this connectivity, neuromorphic chips are able to integrate memory, computation, and communication all together. These chips are an entirely new computational design. Standard chips today are built on Von Neumann architecture, where the processor and memory are separate and data moves between them: a central processing unit runs commands that are fetched from memory to execute tasks. This is what made computers very good at computing, but not as good as they could be. Neuromorphic chips, however, completely change that model by having both storage and processing connected within these artificial neurons that are all communicating and learning together. The hope is that these neuromorphic chips could transform computers from general-purpose calculators into machines that can learn from experience and make decisions. A future where computers wouldn't just be able to crunch data at breakneck speeds, but could do that AND process sensory data in real time. Some future applications of neuromorphic chips include combat robots that decide how to act in the field, drones that could detect changes in their environment, and your car taking you to a drive-through for ice cream after you've been dumped. Basically, these chips could power our future robot overlords. We don't have machines with sophisticated brain-like chips yet, but they're on the horizon. Get ready for a whole new meaning of the term "brain power".
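To get a rough feel for what "neuron-like" computation means, here is a leaky integrate-and-fire neuron, the simplest textbook spiking-neuron abstraction. This is a toy sketch for intuition only; real neuromorphic hardware is far more sophisticated, and every constant below (leak rate, threshold, inputs) is made up for illustration:

```python
def lif_neuron(inputs, leak=0.9, threshold=1.0):
    """Leaky integrate-and-fire: accumulate input each step, let some
    charge leak away, and emit a spike (1) when the potential crosses
    the threshold, then reset. All constants are illustrative."""
    potential, spikes = 0.0, []
    for current in inputs:
        potential = potential * leak + current   # integrate with leak
        if potential >= threshold:
            spikes.append(1)                     # fire
            potential = 0.0                      # reset after spiking
        else:
            spikes.append(0)
    return spikes

print(lif_neuron([0.5, 0.5, 0.5, 0.0, 0.9]))    # → [0, 0, 1, 0, 0]
```

Notice how the neuron's "memory" (the accumulated potential) and its "computation" (the threshold decision) live in the same place, which is the property the transcript attributes to neuromorphic cores.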

SPIKE, THE: Another term for the singularity, suggested by Damien Broderick, since the growth curves look almost like a spike as it is approached. [Damien Broderick, The Spike, 1997]

(MY ORIGINAL PARAGRAPH) In a matter of time, AI will perform medical diagnostics based on thousands of parameters, prescribe treatments, and do it better than humans. Eventually, AI will be capable of understanding texts written in natural languages and even YouTube videos, allowing it to learn from the internet. Once that is possible, AI can make discoveries, generate new ideas, simulate experiments, and prepare scientific articles for publication. Once it can start doing computer science research, AI will be able to design more advanced AI, which can design more advanced AI still. The so-called "intelligence explosion", where superintelligences design successive generations of increasingly powerful minds, might not stop once the agents' cognitive abilities surpass those of any human. The term was popularized by sci-fi writer Vernor Vinge, who used it to describe the phenomenon of technological acceleration causing an eventually unpredictable outcome for society. Even earlier, mathematician John von Neumann spoke of the "ever accelerating progress of technology and changes in the mode of human life", which gives the appearance of approaching a singularity beyond which humanity as we know it cannot continue. Hopefully this will one day create an AI advanced enough to build a map of the human brain, and perhaps even help us upload our consciousness into computers. Ray Kurzweil, who popularized the term, cites von Neumann's book The Computer and the Brain, and predicts the singularity will occur around 2045. (MY ORIGINAL PARAGRAPH)

Behold, the transistor. A tiny switch about the size of a virus that can control the flow of a small electrical current. It's one of the most important inventions ever, because when it's on, it's on, and when it's off, it's off. This either/or situation is incredibly useful because it's a binary system: on or off, yes or no, 1 or 0. With enough transistors working together, we can create limitless combinations of ons and offs, 1s and 0s, to make a code that can store and process any kind of information you can imagine. It's how your computer computes; it's how you're watching me right now. It's all because those tiny transistors can be organized, or integrated, into "integrated circuits", also called microchips or microprocessors, which can orchestrate the operation of millions of transistors at once. And until recently, the only limitation on how fast and smart our computers could get was how many transistors we could pack onto a microchip. Back in 1965, Gordon Moore, cofounder of the Intel corporation, predicted that the number of transistors that could fit on a microchip would double every 2 years. So every 2 years, computers would become twice as powerful. This is known in the tech industry as "Moore's Law", and for 40 years it was pretty accurate. We went from chips with 2,300 transistors in 1972 to chips with 300 million transistors by 2006. But over the last 10 years we've fallen behind the exponential growth that Moore predicted. The processors coming off assembly lines now have about a billion transistors, which is a really big number, but if we were keeping up with Moore's Law we'd be up to 4 billion by now. So why is the trend slowing down? How can we get more transistors onto a chip? Are there entirely new technologies we could be using instead, ones that allow even more switches, like graphene, which doesn't melt like silicon? And how do billions of little on/off switches turn into movies and music and YouTube videos about science that display on a glowing magical box?
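The historical figures quoted above actually check out against the doubling rule, and a two-line sketch makes that concrete. 1972 to 2006 is 34 years, or 17 doublings:

```python
# Sanity-check the transcript's figures: 2,300 transistors in 1972,
# doubling every two years, should land near 300 million by 2006.
def project(count_start, year_start, year_end):
    doublings = (year_end - year_start) // 2
    return count_start * 2**doublings

print(project(2_300, 1972, 2006))   # 301,465,600 — roughly the
                                    # 300 million quoted for 2006
```

2,300 × 2^17 comes out to just over 300 million, which is why those two data points are the standard illustration of the law holding for four decades.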
To understand the device you're using right now, as well as the challenges computer science is facing and what the future of computing might look like, start with the transistor. A transistor is essentially a little gate that can be opened or shut with electricity to control the flow of electrons between 2 channels made of silicon, which are separated by a little gap. They're made of silicon because silicon is a natural semiconductor: it can be modified to conduct electricity really well in some conditions, or not at all in others. In its pure state, silicon forms nice regular crystals. Each atom has 4 electrons in its outer shell that are bonded with the silicon atoms around it. This arrangement makes it an excellent insulator: it doesn't conduct electricity very well, because all of its electrons are spoken for. But you can make that silicon crystal conduct electricity very well if you dope it, which is when you inject one substance into another to give it new properties. The silicon is doped with another element like phosphorus, which has 5 electrons in its outer shell, or boron, which has 3. If you inject these into pure crystal silicon, suddenly you have extra unbonded electrons that can jump across the gap between the 2 strips of silicon. But they're not going to do that without a little kick. When you apply a positive electrical charge to a transistor, that positive charge attracts those electrons, which are negative, out of both silicon strips, drawing them into the gap between them. When enough electrons are gathered, they turn into a current. Remove the positive charge, and the electrons zip back into their places, leaving the gap empty. Thus the transistor has 2 modes: on and off, 1 and 0. All the information your computer is using right now is represented by sequences of open and shut transistors. So how does a bunch of ones and zeros turn into this screen?
Well, imagine 8 transistors hooked up to each other. I say 8 because one byte of information is made of 8 bits, that's 8 on/off switches, the basic unit of a single piece of information inside your computer. The total number of possible on/off configurations for those 8 transistors is 256: that means 256 combinations of 1s and 0s in that 8-bit sequence. In the past 50 years, the biggest obstacle to cramming more and more transistors onto a single chip, and therefore increasing our processing power, has come down to one thing: how small we can make that gap between the 2 silicon channels. Today a state-of-the-art microchip has gaps that are only 32 nanometers across. To give you a sense of perspective, a single red blood cell is 125 times larger than that. 32 nanometers is the width of only a few hundred atoms. So there's a limit to how low we can go. Maybe we can shave that gap down to 22 or 16 or even 10 nanometers using currently available tech, but then you start running into a lot of problems. The first big problem is that when you're dealing with components so small that just a few stray atoms can ruin a chip, it's no longer possible to make chips that are reliable or affordable. The second big problem is heat: that many transistors churning through millions of bytes of data per second in such a small space generates a lot of heat. We're starting to test chips that get so hot they melt through the motherboard. And the third big problem is quantum mechanics. When you start dealing with distances that small, you face the very real dilemma of electrons just jumping the gap for no reason, in a phenomenon known as "quantum tunneling". When that starts happening, your data gets corrupted as it moves around inside your computer. So how can we keep making our computers even faster when atoms aren't getting any smaller? Well, it might be time to abandon silicon.
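Both numbers in that paragraph fall straight out of simple arithmetic. The 256 comes from 2 states per bit raised to 8 bits; the 125× figure follows if you take a red blood cell to be about 4,000 nm wide (a typical textbook value, assumed here since the transcript doesn't state it):

```python
# 8 on/off switches give 2^8 distinct patterns — one byte's worth.
print(2 ** 8)        # 256 possible combinations

# Scale check on the 32 nm gap: assuming a red blood cell is ~4,000 nm
# across, it is 4000 / 32 = 125 times larger, as the transcript says.
print(4_000 // 32)   # 125
```

The same exponential logic is why each extra bit doubles the number of representable values: 16 bits give 65,536 patterns, 32 bits over four billion.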
Graphene, for example, is a more highly conductive material that would let electrons travel across it faster; we just can't figure out how to manufacture it at scale yet. Another option is to abandon electrons, because they are incredibly slow. The electrons moving through the wire that connects your lamp to the wall outlet are moving at about 8 and a half centimeters per hour. That's fast when they only have to travel 32 nanometers, but other stuff can go a lot faster, like light. Optical computers would move around photons instead of electrons to represent the flow of data, and photons are literally as fast as anything can possibly be, so you can't ask for better. But of course there are problems with optical computing, like the fact that photons are so fast it makes them hard to pin down long enough to be used for data, and that lasers, which are probably what optical computing would involve, are huge power hogs and would be incredibly expensive to keep running. The simplest solution to faster computing isn't to switch to graphene or harness the power of light, but to just start using more chips. If you've got 4 chips processing a program in parallel, the computer will be 4 times faster, right? But microchips are super expensive, and it's also hard to design software that makes use of multiple processors. We like our flows of data to be linear, because that's how we tend to process information, and it's a hard habit to break. Then there are exotic options like thermal computing, which uses variations in heat to represent bits of data, or quantum computing, which deals in particles that are in more than one state at the same time, doing away with the whole on/off, either/or system. Wherever computers go next, there will have to be some big changes if we want Moore's Law to keep expanding exponentially.
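That "4 chips, 4 times faster, right?" question has a classical answer the transcript doesn't name: Amdahl's law, which is the standard way to quantify why parallel speedup falls short whenever any part of the program must run serially. A minimal sketch:

```python
def amdahl_speedup(parallel_fraction, n_chips):
    """Amdahl's law: overall speedup when only `parallel_fraction`
    of a program can be split across `n_chips`; the rest stays serial."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / n_chips)

# Even if 90% of the work parallelizes, 4 chips give ~3.1x, not 4x,
# and adding chips beyond that hits diminishing returns fast.
print(round(amdahl_speedup(0.9, 4), 2))     # 3.08
print(round(amdahl_speedup(0.9, 1000), 2))  # caps near 10x
```

The serial fraction sets a hard ceiling (1/serial), which is exactly the "hard to design software that uses multiple processors" problem expressed as a formula.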

Every year, computers are getting more powerful. What used to fill up a room now fits in our pockets. More crucially, the time it takes for computing power to double is also getting shorter. At the outset of computing, the doubling process took 18 months, and this interval appears to be getting smaller. Plot this on a graph and it's not a straight line but an exponential upward curve. We need only project into the future to see that there is a point at which this line is practically vertical: a moment in human history referred to as the technological singularity. The futurist thinker Ray Kurzweil postulates that as these technologies develop, we will likely edit our bodies in order to integrate with computers more and more. This concept should be familiar; we're already in a symbiotic relationship with technology. You can send your thoughts at incredible speeds to recipients on the other side of the planet, find your precise location using satellites, and access the world's repository of recorded human knowledge with a device you carry with you at all times. And all of this was unthinkable 20 years ago. Out of this predicted explosion in computer capability may eventually come artificial intelligence: a simulated consciousness in silicon. Given the rate at which an AI will be able to improve itself, it will quickly become capable of thought with precision, speed, and intelligence presently inconceivable to the human mind. If Kurzweil is right, and we end up integrating ourselves with technology, we could be in private contact with this AI whenever we chose. The result is that we effectively merge with this AI, and its abilities effectively become our own. This would propel the human race into a period of superintelligence. But perhaps, as some argue, no non-biological computer could ever become conscious. Or what if, as in every other dystopian science fiction book, this AI's goals differ from our own?
And what does our increasing reliance on computers mean for our future? Superlongevity and superintelligence are all well and good, but only insofar as they make us happier, more fulfilled, more content. That's the math with today's technology, but 30 years from now, that size factor will have shrunk exponentially. When Ray Kurzweil was at MIT, a computer took up a building. But our cellphones today have more computing power than NASA had when it landed men on the moon. Well, there are problems. First of all, Moore's Law is slowing down now; you can talk to any physicist who works in the field. And the reason is obvious: you're eventually going to bump up against the fact that silicon is incapable of sustaining these calculations at the molecular level. Your laptop computer today may have a layer about 20 atoms across; that's about the limit of what we can do with computers today. In 10 to 50 years it'll be done at 5 atoms. At that point it leaks, and the Heisenberg uncertainty principle comes in. Heat generated and lost through leakage is enough to kill the chip, and that's the reason why we have to go beyond silicon. The silicon era is coming to a close. Just like the vacuum tube era came to a close in the 1950s, Silicon Valley could become a rust belt. The next generation of computers may be molecular computers, quantum computers, atomic computers, DNA computers, protein computers, photon computers, and quantum dot computers. None of them are ready for prime time. Molecular computers are perhaps the best bet, but we don't know how to mass-produce them or wire them; they're molecules, after all. Quantum computers are even worse: if someone were to sneeze a hundred meters away, that sneeze would be enough to vibrate the atoms so they're no longer in synchronization, and you lose coherence. Decoherence is the major problem for quantum computers. That's why the US government said it's not realistic at the present time to build quantum computers.
Geordie Rose from D-Wave has sold quantum computers to Google, the University of California, and McDonnell Douglas, so I think they've sold 3 so far. But still, I have my doubts. I think what they have does not live up to the hype and propaganda, because of the decoherence problem. The world record for a quantum computer calculation is 3×5=15. IBM set that world record, and that doesn't sound like much. We need vats of liquid helium just to get 5 atoms to resonate correctly. We aren't going to sell this commercially.


What is the technological singularity? It depends on how you define it. Some people say it's when robots become smarter than humans. Other people say it's when robots reproduce themselves to create ever-smarter generations of robots. Other people say it's when you can upload the human mind into a computer. So the question is, which singularity are we talking about? What's the proper definition? The word singularity comes from physics, where singularities are black holes. Also, John von Neumann, one of the founders of AI, talked about a coming exponential rise in computing power, but he didn't specifically say what that was. So we should take each of them separately. Can we upload our mind into a computer? The answer, I think, is yes; we'll be able to do it by the end of the century. Not anytime soon, because we always underestimate the complexity of the brain. And we'll also have to have the theological and philosophical debates about consciousness, the soul, who we are, whether we can upload ourselves. But it's going to be a process; it won't happen all at once. No one will announce that on March 1st, 2050, we'll upload the human mind to a computer. I think it's going to be a process; we'll asymptotically get closer and closer and closer, and by then it'll probably be silicon consciousness. It won't quite be our consciousness; it'll be a silicon consciousness. The 2045 Initiative. I will know what kind of champagne you like, the movies you go to, so in one dimension I know "who you are" much better than a biographer, who might not know all those personal things about you. So I think as time goes by, more and more of our lives will be online, including our personality quirks and experiences. So forget AI; just buy the internet, and you'll get a fairly good approximation of your personality. Interview your friends and relatives; they'll know, on a scale of 1 to 10, whether you're sociable or quick to anger, and then you get a series of numbers. Brain net.
A series of numbers from 1 to 10 on how you'd react to certain situations. Then I'd put all this information into the robot. If you then put it into a social situation, it will have a very good approximation of who you are. It's called a mindfile. Now extrapolate that: we'll get asymptotically closer, and it won't be you, but it will be pretty much indistinguishable. The other singularity is when machines become smarter than us. Right now robots have the intelligence of a cockroach; they can barely navigate a room. That's level 1 consciousness: understanding space. But we'll eventually have robots as smart as a mouse, then a rabbit, then a dog or a cat, and finally a monkey. At that point, they can become dangerous. Monkeys have a sense of awareness, their own goals, their own agenda. And we should put a chip in their brains to shut them off if they have murderous thoughts. So before they become smarter than us, we should take safeguards. This will happen over decades, with decades of warning ahead of time. No one's going to announce that they've suddenly built a Boeing 747 in their garage; it's a process, it takes time. So in the same way you can't simply announce the creation of a 747, you can't announce the creation of a machine smarter than us. So is it going to happen? Probably yes. When is it going to happen? We don't know, but it's a process; it's not going to happen all at once. And can they have children that are smarter than them? Maybe, maybe not, because at the present time, no one has demonstrated that you can have a Turing machine that creates another Turing machine more advanced than the first generation. No one has demonstrated that yet. Von Neumann tried, but we still don't know whether Turing machines can self-replicate to create other Turing machines more sophisticated than the original. And besides, humans don't fit the definition of being Turing machines.
Technically, we are not Turing machines. Even though Turing machines can mimic neural networks, we are neural networks, and we belong on a different scale than digital computers.


The exponential growth in computer storage capacity and processing power has followed a pattern known as Moore's Law, which in 1975 predicted that information density would double every 2 years. But at around 100 gigabits per square inch, shrinking the magnetic grains further, or cramming them closer together, posed a new risk called the superparamagnetic effect. When a magnetic grain's volume is too small, its magnetization is easily disturbed by heat energy, which can cause bits to switch unintentionally, leading to data loss. Scientists resolved this limitation in a remarkably simple way: by changing the direction of recording from longitudinal to perpendicular, allowing areal density to approach 1 terabit per square inch. But recently the potential limit has been raised yet again, through heat-assisted magnetic recording. This uses an even more thermally stable recording medium whose magnetic resistance is momentarily reduced by heating up a particular spot with a laser, allowing data to be written. And while those drives are currently in the prototype stage, scientists already have the next potential trick up their sleeves: bit-patterned media, where bit locations are arranged in separate nano-sized structures, potentially allowing for areal densities of 20 terabits per square inch or more. So it's thanks to the combined efforts of generations of engineers, material scientists, and quantum physicists that this tool of incredible power and precision can spin in the palm of your hand.
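To see what those areal density figures mean for actual drive capacity, here is a back-of-envelope sketch. The usable recording area per platter surface (roughly 9 square inches for a 3.5-inch platter) is an assumption for illustration, not a figure from the transcript:

```python
# Rough capacity of one platter surface at a given areal density.
# The ~9 in^2 usable area for a 3.5-inch platter is an assumption.
def platter_capacity_tb(areal_density_tbit_per_in2, area_in2=9.0):
    terabits = areal_density_tbit_per_in2 * area_in2
    return terabits / 8            # terabits -> terabytes

print(platter_capacity_tb(1.0))    # ~1.1 TB/surface at 1 Tbit/in^2
print(platter_capacity_tb(20.0))   # ~22.5 TB/surface with bit-patterned media
```

So the jump from perpendicular recording to bit-patterned media is a roughly 20-fold jump in capacity per surface, before even counting multiple platters per drive.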

The most common type of memory is dynamic RAM, or DRAM. There, each memory cell consists of a tiny transistor and a capacitor that stores an electrical charge: a zero when there's no charge, or a one when charged. Such memory is called "dynamic" because it only holds charges briefly before they leak away, requiring periodic recharging to retain data. But even its low latency of 100 nanoseconds is too long for modern CPUs, so there's also a small, high-speed internal memory cache made from static RAM. That's usually made of 6 interlocked transistors, which don't need refreshing. SRAM is the fastest memory in a computer system, but it's also the most expensive and takes up 3 times more space than DRAM. But RAM and cache can only hold data as long as they're powered. For data to remain once the device is turned off, it must be transferred into a long-term storage device, which comes in 3 major types. In magnetic storage, which is the cheapest, data is stored as a magnetic pattern on a spinning disk coated with magnetic film. But because the disk must rotate to where the data is located in order for it to be read, the latency for such drives is 100,000 times slower than that of DRAM. On the other hand, optical-based storage like DVD and Blu-ray also uses spinning disks, but with a reflective coating. Bits are encoded as light and dark spots, using a dye that can be read by a laser. While optical storage media are cheap and removable, they have even slower latencies than magnetic storage, and lower capacity as well. Finally, the newest and fastest types of long-term storage are solid-state drives, like flash sticks. These have no moving parts, instead using floating-gate transistors that store bits by trapping or removing electrical charges within their specially designed internal structures. So how reliable are these billions of bits? We tend to think of computer memory as stable and permanent, but it actually degrades very quickly.
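The latency gap described above is easier to appreciate with the numbers side by side. The DRAM figure (100 ns) and the magnetic-disk multiplier (100,000× slower) come from the transcript; the SRAM figure is a typical ballpark added here for comparison, not a number from the source:

```python
# Memory-hierarchy latencies, in nanoseconds.
latency_ns = {
    "SRAM cache":    1,              # assumed typical figure
    "DRAM":          100,            # from the transcript
    "Magnetic disk": 100 * 100_000,  # 100,000x DRAM = 10 ms
}
for tier, ns in latency_ns.items():
    print(f"{tier:>13}: {ns:,} ns")
```

Ten million nanoseconds is ten milliseconds: in the time one disk read completes, a CPU waiting on cache could have done on the order of ten million accesses, which is why the whole hierarchy exists.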
The heat generated by a device and its environment will eventually demagnetize hard drives, degrade the dye in optical media, and cause charge leakage in floating gates. Solid-state drives also have an additional weakness: repeatedly writing to floating-gate transistors corrodes them, eventually rendering them useless. With data on most current storage media having less than a 10-year life expectancy, scientists are working to exploit the physical properties of materials down to the quantum level, in the hopes of making memory devices faster, smaller, and more durable. For now, immortality remains out of reach, for humans and computers alike.