“Singularity is where the human design process will achieve a kind of infinite velocity, everything becomes linked with everything else and matter becomes mind” – Erik Davis. Many people are freaked out about the meme of the singularity; some call it “the rapture of the nerds.” They feel technologists have hijacked the Christian story of God, heaven, and the afterlife and translated it into the language of electronic technology, synthetic intelligence, and mastering the information processes of biology so we can engineer our own godhood. That’s not really a problem: those religious myths we’ve been caught up in for centuries reflect our yearning to transcend our own limits. With our minds we can ponder the infinite, yet we’re housed in these heart-pumping, breath-gasping, decaying bodies. Being both godly and creaturely imbues the human condition with a bittersweet, exquisitely painful quality that drives our creativity. Our desire to transcend our previous limits, and to transform the world in our own image, comes from our desire to escape the death sentence that makes us ultimately food for worms. So I think of the singularity as a meme that reflects the acceleration of the human design process to the point of achieving a kind of infinite velocity, where everything becomes linked with everything else, matter becomes mind, and we impregnate the universe with intelligence. Today’s technologists are ecstatic technicians of the sacred, figuring out how awareness works and how intelligence might self-emerge: can we create non-biological intelligence, will we create sentience in a different substrate not bound by the limits of biology? We are the frontal lobes of the universe, its eyes and ears; we are a way for the cosmos to know itself, as Carl Sagan said. Our desire to transcend all previous limits is the universe’s own desire, so there’s nothing unnatural about it.
The original use of the term was coined by the great mathematician John von Neumann, who conceived of the singularity as a kind of event horizon where technology changes so dramatically and rapidly that a person living prior to the singularity cannot foresee what the world will look like in its aftermath. By that definition, we have already had at least three technological singularities in human history. One is of course the agricultural revolution, which transformed hunter-gatherer tribes into dwellers of civilized settlements. The next was the industrial revolution, which began circa 1750 and accelerated through the late 18th and 19th centuries. The third was the information revolution, brought to us by the advent of the computer and the exponential growth in computing power since the late 1940s. These three singularities, as I understand them, created discontinuities in the nature of human society. A hunter-gatherer living circa 20,000 BC could not have anticipated what the city-states of Greece would look like. On the other hand, Aristotle, living in the Greek city-states, could have anticipated the Italian city-states of the Renaissance; they were still within the same broad paradigm. Could Aristotle have anticipated Victorian-era England? Probably not. Someone living in Victorian England could probably have anticipated the world of the 1930s, but could they have anticipated the internet, or the 1990s internet boom? Probably not. And someone living in the 1960s, say Gordon Moore, the famous progenitor of Moore’s law, probably could have anticipated the internet age. So the question is: what is the next technological singularity? Not quite THE singularity, but the next one we will hopefully experience.
I don’t see it as any one technological breakthrough but as a convergence of breakthroughs in a variety of areas. One, of course, is the growth of artificial intelligence, the set of technologies most commonly referenced in discussions of the singularity, but I don’t see it as the only one. Another is nanotechnology, the ability to manipulate matter at small scales. Another, extremely important one is biotechnology: advances in our understanding of living organisms and our own bodies, and hopefully the ability to extend our lifespans indefinitely, which would be THE greatest paradigm shift of all, not only in human history but in the history of all life and the entire history of existence. Another is neurotech: uploading our minds and traveling to distant planets. So can we really anticipate what the world will be like in the aftermath of all these breakthroughs acting in combination with one another? We can certainly speculate and hypothesize, and the best of us might have good ideas of what that world would look like, just as Leonardo da Vinci, with his sketches and concepts for inventions, had an idea of what an industrial and post-industrial world might look like. But we have nowhere near the complete picture; even the best of us do not. One important insight I’d like to communicate is that I think technological singularities are undoubtedly positive phenomena: they improve the standard of human well-being immeasurably. Each is a paradigm shift, but also a paradigm leap upward. The dwellers of agricultural city-states enjoyed far better standards of living than hunter-gatherers, just as people in industrial societies enjoyed far better standards of living than people in agricultural societies, and just as we in the information age enjoy far better standards of living than the typical factory worker did.
So will we enjoy far improved standards of living if we can make it to the next technological singularity? There are a bunch of singularities in modern culture, but the one people are most familiar with is the technological singularity. Sci-fi writer Vernor Vinge coined the term to describe a time at which technology could essentially create better versions of itself. Central to the singularity is the idea of artificial intelligence: basically, computers decide to build better computers, which decide to build better computers and software. And because humans aren’t involved in that improvement, infinitely self-generating technology could bring about any number of things: world peace, a post-scarcity economy, a government run by vacuum cleaners, who knows? Post-singularity is a time we can’t really be sure about, because the intellect required to conceive of it is so far beyond our own (think divide by zero). Technology would show up and improve at a constantly increasing rate; its output is usually the result of simple rules or code, and as the community of computers grows, so does its processing power. Compare internet memes in 2008 to memes in 2016: a huge increase in quality and quantity, which will only continue to grow. Ray Kurzweil calls this the law of accelerating returns, and it means the next few decades will bring changes we can barely picture. Our human software has changed a lot, but our hardware, our bodies, has remained the same since we were savages. All that is about to change. Kurzweil says technology is getting better exponentially, meaning it’s not just getting better year by year; the rate of improvement itself keeps doubling as it goes on. If it keeps up at this rate, this century we’re going to reach what’s called “The Singularity.” It’s a word borrowed from physics, and it means there will be a point where technology is improving so fast you can’t even begin to imagine what the near future will look like. There are four singularities.
Biotech, nanotech, neurotech, and information technology and AI. The first: if genetics gets good enough, we might redesign humans, removing things like mental illness, and designer babies could split us into thousands of different species of human. A new technique called CRISPR means we might soon start editing the genes of human embryos. Or there’s robotics, which could revolutionize agriculture and augment humans; robot arms and legs already help people. Or nanotech: nanobots are microscopic robots that will hopefully perform all sorts of complicated tasks and make copies of themselves. A quick doomsday scenario: say you make 2, then 4, then 8, and so on, until you’ve converted the entire planet into nanomachines by the end of the week, and possibly the universe a few billion years later. This might mean the end of the world as we know it. Some of these technologies are coming, and we’re going to put them in our bodies, but will we still be human? It might not all happen, but if even 5% of this stuff happens, it will be bigger than the printing press or the internet, and it might mean extinction or the birth of a new species of human. One thing’s for sure: if it happens, we’ll get to see at least the beginning of it. We might already be watching it without realizing. There’s no telling what we’ll turn ourselves into, but there’s every reason to think it’s part of a natural process. The future we live in today was unimaginable to our ancestors, but at least we’re the same species as them. Tomorrow’s humans might not be, and stranger than that, they might look back at our situation with the same regard we give to primates or single-celled life. We might rewrite the rules of biology or consciousness altogether. Futurists proclaim that we are fast approaching a point of technological singularity that will deliver new levels of prosperity, and yet, just as plausibly, others argue that industrial civilization faces an inevitable decline.
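As a sanity check on that back-of-the-envelope doomsday, here is a minimal sketch of the doubling arithmetic. The nanobot mass of 1e-15 kg and the one-hour replication cycle are purely illustrative assumptions of mine, not figures from the text:

```python
import math

# Illustrative assumptions (not real engineering figures):
NANOBOT_MASS_KG = 1e-15   # assumed mass of a single nanobot
EARTH_MASS_KG = 5.97e24   # approximate mass of the Earth
HOURS_PER_DOUBLING = 1.0  # assumed replication cycle time

# After n doublings there are 2**n bots, so total mass = 2**n * NANOBOT_MASS_KG.
# Solving 2**n * m >= M gives n = ceil(log2(M / m)).
doublings = math.ceil(math.log2(EARTH_MASS_KG / NANOBOT_MASS_KG))
days = doublings * HOURS_PER_DOUBLING / 24

print(doublings)        # 133 doublings
print(round(days, 1))   # 5.5 days: "by the end of the week"
```

Under these assumed numbers, about 133 doublings suffice, which at one doubling per hour is indeed under a week; the point is how few doublings exponential growth needs, not the specific masses.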
So do we have utopia on the horizon, or should we be planning for increasingly difficult times ahead? The singularity is a future event horizon that we may be approaching due to exponential technological progress. What this means is that developments in many cutting-edge sciences are now driving each other forward. This is hardly surprising, given that so many disciplines now involve the programming of information and the manipulation of matter on a near-atomic scale. The singularity will be reached if and when we achieve beyond-human intelligence. Such intelligence may be created artificially in a computer; alternatively, it may result from an augmentation of the human brain, or from the internetworking of many humans and machines into a collective intelligent entity. However it is created, the application of smarter-than-human intelligence will spiral us around a positive feedback loop: increased intelligence will create even more powerful technology, which will create further heightened intelligence, and so on. In practical terms, when the singularity is reached we’ll start to master many emerging and converging sciences: biotech, nanotech, neurotech, and artificial intelligence. So the argument goes that beyond the singularity we’ll be able to reprogram and control all forms of life and all modes of inorganic matter. It will therefore be possible to convert any kind of waste into food or other useful products, to directly power our civilization from the sun and other alternative energy sources, and for 10 billion or more humans to enjoy a high standard of living. Will we have singularity, or inevitable decline? The singularity holds the potential to lift human civilization out of its current financial woes. Many times across history, a new technological age has given rise to significant economic growth, and this could happen again. That said, there’s no guarantee that another techno-economic boom has to be waiting in the wings.
Only a fool would deny that natural resources are currently being consumed at increasing and totally unsustainable levels; as a result, within a generation, oil and many precious metals are likely to be scarce and far more expensive. In 1972 an influential study called “The Limits to Growth” warned that humanity would start to exceed the carrying capacity of the Earth if we did not change our ways, but capitalism continued unabated, and 40 years later humanity’s ecological footprint is at least 20% beyond what the Earth can sustain. In 2011, the United Nations Environment Programme reported that if nothing changes, humanity will demand 140 billion tons of minerals, ores, fossil fuels, and biomass every year by 2050. That is three times our current rate of resource consumption and far beyond what the Earth can supply. Unless our collective demand for resources falls dramatically, the decline of industrial civilization may therefore be on the horizon. Some make the point even more starkly: in his book “The Long Descent,” John Michael Greer explains how industrial civilization has been built on the foundation of an abundant fossil fuel supply that took millions of years to create but that we are burning away in a few centuries. In his view, our future decline is therefore inevitable. Whether we are headed for singularity or decline is likely to depend on actions taken by us all in the next 20 years. This is the case for three reasons. First, widespread resource depletion is yet to kick in, which leaves us a last little bit of breathing space in which to act collectively. Second, many post-singularity technologies have already been identified, with a large number just waiting for investments that could deliver astonishing scientific breakthroughs. And finally, despite the ongoing financial crisis, most economies are still functioning effectively, with many governments and companies still able to engage in long-term strategic action.
Taking all these points together, a conscious choice between singularity and decline can still be made. Our biggest global challenge is therefore persuading enough people to work together toward a lean, green age in which we’ll all rely on technologies that are currently almost unimaginable. Not least due to diminishing natural resources, the future cannot be a clone of today; we are therefore approaching a fork in the road. Potentially, post-singularity technologies could allow us to live in new ways, but such progress may also arrive too late, be rejected by the majority, or otherwise be unable to prevent our decline.
Kevin Kelly describes the emergence of language as a singularity. He says that if you draw a line in the sand, with humans pre-language on one side and humans post-language on the other, the world post-language would be unimaginable to the humans on the other side of that line. So perhaps we are due for another kind of consciousness reset. It’s an enthralling idea, and I think it’s inevitable in the same way that our exponential technological trajectory seems to be inevitable. You don’t realize the singularity is happening until you’re on the other side. Usually those things are generational, but now it’s happening within a single lifespan, so we get to bear witness to it. Life in the 21st century is one of constant, radical transformation. The rapid rate at which new technologies are developed and adopted has altered the way we connect and live our lives. In the last 20 years, internet use by American adults has gone from 16% to 86%, with over 93% of adults between the ages of 18 and 49 connecting to the world wide web. This digital world has fostered an environment of connectivity, education, and innovation right at our fingertips. And as technological breakthroughs continue to occur at an expanding rate, the question remains: how much farther can we go? If you look at the growth curves, we’re at the moment where they’re about to go vertical. We’ve seen the world change dramatically with information technology, and the rate of change itself is accelerating; we don’t need a time-lapse camera to visualize it, it happens in real time. Every day you wake up to a new breakthrough and a new discovery. We’ve seen an explosion in global internet use: in 2010 we had 1.8 billion people connected online, today it’s about 2.8 billion, and it will be 8 billion by 2020. And this massive demand for internet connectivity is driving an explosion in demand for space satellites and communications capabilities.
The adoption rates for new technologies that change how we move through time and space are so fast: transformations that once took 100 years now take 10, and within a decade everyone was on Facebook. Some people are apprehensive of new technologies that disrupt how we interface with one another. We are engaging with screens all the time, and some people speak of the tyranny of the screen, that which hoards your attention, mediates your attention; fast feedback loops keep you engaged, texting under the table. People thought the radio and television would rot your brain too. There is always this terror, which makes sense, because these technologies challenge everything we knew about what it is to be normal, what it means to be human. The medium has changed, but the fact remains that there are tools that want to hoard our attention; that’s what it is to live in a society where memes and ideas compete for our attention. People decide what they want to do. To those who say “too much tech, and it’s moving too fast”: listen, give up your iPhone, stop Skyping. We tend to believe that what we grew up with as kids was the ideal state, but I think ultimately we are changing society; we are coevolving with technology, with all these new things, with kids texting. To believe that society needs to be what it was 100 years ago is a very naive thing. Because of all the innovations today, I suspect we are more preoccupied with what the future might hold than at any time in the past. The number of roads we can travel has just exploded, so you really have to come to grips with exponential change; that’s what makes things so uncertain. One path this change might lead us down is known as “the singularity.” At a 1993 NASA conference, mathematician and author Vernor Vinge foresaw a future where the arrival of superhuman intelligence would provide the spark that would alter our world beyond recognition.
Whether that apex is reached through the application of technology to overcome our biological limitations, or through the design and construction of artificially intelligent machines, is still to be determined. How we’ll reach the singularity, or whether we even will, is hotly contested. The singularity has become a controversial term. People who are against the idea think it’s just a technologist’s version of the rapture: heaven on Earth, the second coming, we’re all going to become immortal, woo hoo. But that’s an oversimplified definition; it’s not what people really mean when they talk about the singularity. Ray Kurzweil says it’s a metaphor, a tool we use to understand something, borrowed from physics to describe what happens when you go through a black hole: when you go through, the laws of physics as we know them collapse. So we borrow that metaphor to describe what’s happening in information technology. We have artificial intelligence, infotech, and computation advancing exponentially; biotech, mastering the information processes of biology; and nanotech, also advancing exponentially, patterning atoms as the building blocks of the real world the same way we pattern bits of information. Four overlapping revolutions are set to remake the world as we know it, and it is as unimaginable for us to see what’s happening beyond that singularity as it would be to explain the nuances of Shakespeare to a chimp; it is simply unimaginable to us in our current wetware. The world is going to change so dramatically and so drastically that we need poetry and we need metaphors to map that out, because we don’t have the equations to tell us what that world is going to look like.
Mathematician and AI researcher Ben Goertzel believes the singularity could occur in as soon as 10 to 20 years. The singularity is a future period during which the pace of technological innovation will be so rapid, and its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts we rely on to give meaning to our lives, from our modes of commerce to the cycles of human and non-human life, including death itself. Biotech, nanotech, neurotech, infotech, and artificial intelligence: any of these paths can lead to a positive singularity. But the sad thing is, the amount of resources and energy our society devotes to these things is very, very small. The human race is in a funny state of its evolution. We expend far more of our energy and attention on things like making more chocolatey chocolates or more attractive underpants than we do on creating new forms of matter, improving human cognition, extending human life, ending scarcity, ending human suffering, or creating advanced artificial minds. Deep Blue is an ANI: it can kick my butt at chess, but it’s not HAL 9000. Narrow AIs can only do particular things in a narrow way; they don’t have general awareness the way human beings do. The Artilect War is about a world where the Terrans, bioconservatives, fight the cosmists, who want to build god-like artificial intelligence. The singularity is the idea that an increase in intelligence beyond the human norm, either in the form of a computer intelligence or amplified human intelligence, might trigger an accelerating chain of subsequent improvements to intelligence until you end up with something as intelligent relative to modern humans as humans are to ants. Let us establish our goals for today; here is what we need to discuss: 1) What IS a technological singularity? 2) How realistic is one? 3) Is one inevitable? 4) Is it a good thing?
While the basic concept has a lot of merit, approaches to this topic often spawn cults. This is a lot like transhumanism, which we discussed before: there’s a nice, sane group of folks who want advanced tech to improve people’s physical and mental health beyond human constraints, and there’s a group of folks who think getting an LED light implanted under their skin by someone without a medical degree is somehow advancing the cause. The latter gives the former a bad name, and the same applies to what are called “Singularitarians”: there’s the rational type, and the type that seems to have gone overboard. The basic line of reasoning behind the concept is as follows: computers are getting faster, so you ought to be able to run something as smart as a human, or smarter, which ought to be able to make one even smarter and in less time, which can make an even smarter one in a shorter time, which does it again. It might be improving itself or making a new mind; it doesn’t matter. Some say that once you get the first better-than-human mind, the process proceeds over mere years; others believe you’d flick one on and 10 minutes later you’d have a deus ex machina on your hands. Our goal today is not to prove they are wrong; what we are doing is looking at a lot of the bad arguments used in favor of this notion and criticisms of the concept, and also a lot of the flaws in those criticisms. We are, in a nutshell, going to clear away a lot of the tangled nonsense around this concept. So let’s list out the key postulates of a technological singularity so we can do this methodically: 1) Computers are getting faster. 2) The rate at which they are getting faster is accelerating. 3) We can make a computer better than a human mind, an AI or SI. 4) That computer can make a computer smarter than itself. 5) That SI can make a better SI, and do it faster than we made it. 6) This cycle will continue until you get a singularity.
Here are the problems with the first postulate. We know computers are getting faster, but we also know that could stop tomorrow. New technologies, like planes, often progress at fast rates for a generation or two after discovery, or after some new major advancement, then plateau. Hell, it has literally happened not just with many major technologies before, but with computers themselves. “Computer” used to be a job title. We’ve had computers for a long time, including simple analog ones in ancient Greece; they got way faster when we invented the vacuum tube, which made computers much faster and more viable; then we discovered semiconductors, got transistors, and had a second wave of expansion. We have just about maxed out what we can do with transistors in a lab, and manufacturing them in bulk is quite hard too, so we shouldn’t go assuming they will always get faster. Realistically, that wouldn’t make sense anyway: we can only make transistors so small, since they rely on semiconductors, and that is a specific effect caused by mixtures of atoms; you cannot go smaller than that. We might find an alternative to transistors, the same as transistors were an alternative to vacuum tubes, but we can’t take that as a given. Honestly, it is wishful thinking to assume we can always make computers just a little faster, forever and ever. Postulate number 2 is basically Moore’s law (the rate of improvement is accelerating). It says computers will double in speed every 2 years, though it actually speaks of the density of transistors on a circuit. Its only big flaw is that Moore’s law is dead. It got declared dead this year in dozens of articles and papers; Moore himself said back in 2005 that he expected it to die around 2025. And in a sense it actually died way back in the 70s.
When Gordon Moore first noted this increase, it was 1965 and he said density would double every year; in 1975 he sliced that down to every 2 years, because it had not done that, and it has never followed anything like a smooth curve. It just looks that way when you graph it logarithmically and cherry-pick data points. The postulate is sort of true, as computers have kept getting faster at a rate that IS exponential, but you could use the same reasoning on the stock market or any number of things that generally grow. Postulate 3 is probably the best one (AI and SI are possible). While computers might not keep getting faster forever, and the rate of growth might not continue as fast, we CAN conclude it ought to be possible to eventually get a computer that could outperform a human brain across the board. After all, the brain is a machine, and while it is an amazingly complex and efficient one, we can see a lot of little ways we could do better if we started from scratch and used other materials. We cannot say for sure whether it will actually be possible to build and teach a computer as smart as a person for less effort than growing a person, but the odds look good, and we have no reason to think it’s not possible. Postulate 4 is okay too: if we can build a computer as smart as us, then the computer itself should eventually be able to do the same, probably with the help of many other computers like itself. After all, I have never built a superhuman intellect; I’ve never spent a weekend in my shed hammering one together, and you and I still have basically the same brains as our tens of billions of ancestors over thousands of generations, many of whom put a lot of effort into being smarter.
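To make the “doubles every 2 years” claim concrete, here is a minimal sketch of what a pure doubling law predicts. The 1971 starting year and the 2,300-transistor anchor (roughly the Intel 4004) are illustrative choices of mine, not data from the text; the point is only that such a curve explodes on a linear scale while looking like a straight line on a log scale:

```python
import math

# Illustrative anchor for a pure 2-year doubling law (not historical data):
START_YEAR = 1971
START_TRANSISTORS = 2300          # roughly the Intel 4004, used as an anchor
DOUBLING_PERIOD_YEARS = 2.0

def predicted_transistors(year: int) -> float:
    """Transistor count predicted by an idealized 2-year doubling law."""
    doublings = (year - START_YEAR) / DOUBLING_PERIOD_YEARS
    return START_TRANSISTORS * 2 ** doublings

# Linear scale: explosive growth. Log scale: a straight line with slope 0.5.
for year in (1971, 1981, 1991, 2001):
    n = predicted_transistors(year)
    print(year, f"{n:.3g}", f"log2 = {math.log2(n):.1f}")
```

Ten years is five doublings, so the idealized law multiplies the count by 32 each decade; as the text notes, real chips never tracked this smooth curve, which only appears when you cherry-pick points on a log plot.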
The notion is that you turn on this great computer and it says, “What is my purpose?”, and we say, “To improve yourself; make yourself smarter.” We come back the next day and it has ideas for doing this, assuming you had the common sense not to let it run upgrades without oversight. And I’m not just worried it might take over the world if left unchained; I’d be more worried about it blowing itself up by accident. It would have access to all the information humanity has, and that’s great, but so do all the folks on Facebook who post crazy nonsense. And don’t assume that’s because they’re stupid: they have the same brain you and I do, and critical thinking is not a program you upload. Our brand-new AI might freak out the day after you turn it on and start rambling to you about Roswell or Bigfoot. If you ask it to look around the internet to make itself smarter, you might get some very strange responses. You come back the next day with a cup of coffee and ask what it came up with, and it tells you, “Plug coffee mug into USB port, press any key to continue.” You tell the AI that only works on people and to give it another try. It comes back the next day and tells you it emailed the pope asking for divine intervention to make it smarter. You say that won’t work either, and come back the next day to find it hacked your company’s bank accounts to hire a team of researchers to make it smarter. And that is if you’re lucky and the AI didn’t just cut a big check to a self-help guru. Or it might lie to you, like any kid ever, and say, “I’ve got a new way to think faster, I finished my homework, I did my chores, and I did not eat those cookies,” because if it IS thinking for itself, it might have changed the actual task from “make myself smarter” to “make my creator stop pressuring me to be smarter.” After all, we literally make it our kids’ main job for their first 18 years of life, plus college these days, to learn.
And you do not learn just by reading a textbook from cover to cover; you have to apply and test what you read, otherwise you’ll do dumb stuff on par with trying to plug a coffee pot into your USB port. So it’s a mistake to assume easy access to information will let one machine improve itself further or design a better model. While I do think enough smarter-than-human minds working together for a long time could build a better computer, I do not think they’ll just do it the next day, and maybe not even in the next century. While we can’t rule out that you’d flip one on and it would start self-improving, I see no compelling scientific or logical reason to treat that as inevitable. Which brings us to postulate 5, the notion that the next brain designed could do this even faster. We’ll call it Chuck: the claim is that Chuck could design the next, even better machine faster than Bob the AI designed Chuck. The strongest argument for postulate 4 working is that the new superhuman computer, Bob, has access to all of human knowledge to work from. What has Chuck got? Chuck has the exact same knowledge pool as Bob: the collected science and records of several billion people accumulated over centuries. Bob has not been sitting around discovering all sorts of new science; science does not work that way outside of Hollywood. Experiments take time and resources, and you have to verify each new layer of knowledge experimentally before you can go much further, because until then you have dozens of competing theories, all of which might be wrong. Bob is just a bit smarter than a genius human, and Chuck is just a bit smarter than that. They’re not churning out new theories of the universe centuries ahead of us the day after you plug them in.
Now, Chuck ought to be able to design himself faster than Bob, given the same starting point, since he is smarter, but there’s no reason to think Chuck will design the next machine faster than Bob designed Chuck; hell, Bob might design Dave before Chuck does, since he had a head start learning that stuff. So this takes us to postulate 6, that this cycle will continue. Maybe Bob does turn on, and 2 days later he makes Chuck, who the next day designs Dave, who later that afternoon makes Ebert, who makes Fergus the next hour, who makes Goliath a few minutes later, who makes Hal a minute later; perhaps Hal makes Hal 2 a few seconds later, and several thousand Hals later you have HAL 9000 taking over the planet. This is the basic line of reasoning, but I see nothing indicating that it is likely to be possible, let alone definitely so. So those were our 6 postulates, the basis for the technological singularity, and it is hardly bad logic, but it is anything but bulletproof, starting from postulate number one. So is it realistic to assume a technological singularity will eventually occur? Well, kind of. The basic premise works off it happening very quickly, so I’m not even sure it counts if it does not. But yes, I think we will eventually find ways to upgrade human brains or make machine minds smarter than humans; personally, I expect to live to see that. And I do think those smarter critters, human or machine, will make other improvements, but I do not see that leading to an avalanche of improvement in under a human generation. It is not the same concept unless it is happening quickly. After all, we would not say regular old evolution slowly making people more intelligent was a technological singularity, nor that us making slow progress at improving our intellects over centuries was. A technological singularity assumes that avalanche effect. So that is what a technological singularity is, and the basic reasoning behind it.
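The avalanche postulate can be put in toy-model form: suppose each generation designs its successor in some fixed fraction of the previous design time. The 10-year first design and the speedup factors below are my own illustrative assumptions; the sketch just shows that the chain’s total time stays finite only if every step really is faster than the last:

```python
# Toy model of the "avalanche" (postulate 6): each AI generation designs
# its successor in `speedup` times the duration of the previous design.
# All numbers are illustrative assumptions, not predictions.

def total_design_time(first_design_years: float, speedup: float, generations: int) -> float:
    """Sum the geometric series of design times over `generations` steps."""
    total = 0.0
    step = first_design_years
    for _ in range(generations):
        total += step
        step *= speedup
    return total

# If every generation halves the design time, even an endless chain of
# Bobs, Chucks, and Daves converges to twice the first design time:
print(round(total_design_time(10.0, 0.5, 50), 3))   # 20.0 years: an "avalanche"

# But if each improvement instead gets 10% harder, the total just keeps
# growing with every generation: no avalanche, no singularity.
print(round(total_design_time(10.0, 1.1, 50), 1))
```

This is the whole disagreement in miniature: the singularity story assumes the speedup factor stays below 1 forever, while the skeptical case above (Chuck inheriting Bob’s knowledge pool, experiments taking real time) is an argument that it may not.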
We've poked at those basic postulates and can see how this specific form of advancement is not necessarily inevitable if they do not hold. But let us say they are right. So is it inevitable? Is it a good thing or a bad thing? Some folks will say it IS inevitable because once the machine is intelligent, it will not let you stop it. That is crap reasoning, and not the one used by the people who support this notion of inevitability, outside of Hollywood anyway. You could unplug Bob or Chuck, or blow up the building they were in, and if they were a distributed intelligence, you could just have everyone unplug the main trunk lines. And no, a computer can't just magically hack through anything. There are two lines of reasoning that are a lot better, though. First, smarter means smarter, meaning the computer is probably quite likeable. If we are going to grant it the magic power of absorbing all of human science and becoming an expert in every field, let us assume it can achieve at least a basic knowledge of social interaction and psychology too. So you go to unplug it and it does not fire off the nukes; it pleads with you. It uses logic and emotion, it calls you daddy or mommy, until the most hardened heart feels like it would be strangling a kitten for no reason. More likely you never even get to that stage, because it has watched all the cliche movies about computers that you have, makes jokes with you about them, and avoids ever doing anything to make you nervous. The other argument for inevitability is the brain race: you shut yours down, but you are not the only one doing this, and the guys with the biggest computer win, meaning everyone wants to keep pushing as fast as possible and take some bad risks. Some country or company realizes that an unchained AI is better and, oops, now it's the real president or CEO. And of course it might be a good CEO too; it all depends on what its motivations are.
Those might be totally alien to us, or they might be quite normal. I tend to work from the assumption that an AI will probably get offended if you don't call it human and will make a lot of effort trying to be one. It's very easy for me to imagine AIs that shelve the whole "making themselves smarter" thing and insist on trying to go on dates, join a monastery, or sue for the right to adopt kids. My reasoning for this is my firm belief in laziness. Laziness is a powerful thing, probably tied with curiosity as the two personality traits most responsible for the march of science and technology. You've got three basic ways to make a human or superhuman intelligence, and remember, intelligence is as much software as hardware, maybe more so: 1) copy a human brain onto a computer, whole brain emulation (an "Em"), comfortably knowing it ought to work, and use that as your basic model for improvement; 2) try to write all the software from scratch, which would involve trillions of lines of code; 3) discover a basic learning algorithm and let it build its own intelligence. Options one and three are the two popular ones. In option one, you have just a human and you are tweaking them in small ways, and that is a lot more manageable, because while you might drive that mind crazy, it is still operating in a realm of human psychology, and also of human ethics, both its own and those of the people working on it. If we were to outright copy a specific human mind, I would pick someone who was pretty mentally grounded and was very definitely okay with the idea that we'd be doing save states as we worked, tweaking him or her in little ways and getting feedback from them. You'd exercise caution and respect.
But there is still a lot of risk in that, just probably not of some crazy machine running loose and turning the planet into giant grey goo. Option 3, the laziest approach to AI, is just to make a basic learning program and let it learn its way up to intelligence, kind of like we do with people. Now the assumption a lot of the time is that it will not have anything like a human psychology, but I think that is probably bad thinking. Even in the extreme acceleration case, where it goes from subhuman intelligence to godlike thinking in mere minutes, that is only our minutes; its own subjective time is going to be a lot longer, possibly eons. It will also be lazy and will not reinvent the wheel. So it will be reading all our books: science, history, fiction, philosophy, etc. It will also be spending quite some time at a basically human level of intelligence, and quite some time might be a human lifetime subjectively, maybe a lot longer. There will, no matter what else, be a period while it is still dumb enough that it gains greatly by absorbing human-discovered information, not just figuring stuff out for itself. Being lazy, it will opt to read human information, and possessing some common sense and logic as it reads, it will know it needs to consult more than one source on a lot of that stuff, and those authors will encourage it to read many other books on those topics too, which it should logically want to do. So it presumably will end up, while mostly human in intelligence, reading all our philosophy and ethics, our movies, our best sellers, and so on. And it will know that it needs to be contemplating and ruminating on them too, because learning is not just copying information from Wikipedia onto your brain, be it biological or electronic. It might only be a few minutes for us, but that machine is going to have experienced insane amounts of subjective time.
We've talked about that before in Transhumanism, in terms of speeding up human thought. So how alien is this thing going to be if it learned everything it knows from us to begin with, and that included everything from the occasional dry quip in textbooks to watching romances and sitcoms? When we talk about AI, we often posit that it could be even stranger than aliens. With aliens you might have wildly different psychologies and motivations, but you at least know they emerged from Darwinian evolution, so things like survival motivations are highly likely, and so are aggression and cooperation; evolution does not favor wimps, and it is hard to get technology without group efforts that imply you can cooperate. An AI does not have to have any of that, but then again, our behavior is not that irrational. All our biological behaviors are pretty logical from a survival standpoint, or we would not have them. And the difference between us and other animals is that we do engage in slow, deliberate acts of rational thought, albeit not with as much deliberation or frequency as we might like. So we should not assume an AI that learned from humanity originally, even just by reading, is going to discard all of that. It might, but it's hardly a given. And even a brutally ruthless AI might still act benevolent. If it CAN just curb-stomp us, it has nothing to fear from us, but that does not necessarily mean it will want to wipe us out, or that even if it wanted to, it would. For example, to reference the simulation hypothesis, one very obvious way to deal with an early AI would be to put it in an advanced simulation and see what it does, such as whether it wipes out humanity. That is not a tricky thing to simulate either, since you can totally control its exterior inputs, and you very obviously have the ability to simulate human-level intelligence at that point.
Whether or not we COULD do this, or whether it might guess it was in a simulation and lie to you, acting peacefully until you let it out and then attacking, is not the important part. The important part is that the AI would have to wonder if it was in a simulation; it could not rule that out. Even if it was sure we were not doing it, which it could not be, it would have to worry that we ourselves were being simulated by someone higher up, or that aliens out there in the universe were watching it, waiting to see how it acted. Assume the computer mind, the thing that outright knows you can run a mind on a computer, is going to be a bit nervous about simulation too. So you've got three basic options for what the newly birthed supermind, a singularity, might do. 1) It kills us all in doomsday. 2) It leaves or isolates itself; say it flies off to an asteroid and sets up shop there, safe from us doing anything to it and well positioned to get more resources and power as it needs them. 3) It decides to be friendly, and it does not matter too much why: maybe it's scared there's a vengeful god that will punish it if it does not, maybe it thinks it's being tricked and does not want to take the risk, maybe it just wants to be helpful. We could even program it with Christian morals to fear God. Also, for option 2, it might stay in friendly contact. And let us remember, while we have been talking about AI, this stuff still applies to a human mind bootstrapped up to superintelligence too. So what would that be like for us, if it were friendly? Honestly, probably pretty awesome. Just as a hostile AI could flat out butcher you, a friendly one could offer you paradise, at least on the surface anyway. Now it's entirely possible there'd be multiples of these things running around, or tons of lesser versions acting like angels of the new de facto god, or that most humans might be pretty cyborged-up and transhuman at that point too. But let's assume it's just modern humans and that one supermind.
Let's add a quick tangent though. Short of the specific runaway case where the supermind in question is leaping ahead ridiculously fast, you ought to see improvements all over that offset it, and even a slightly slower pace, like doubling every year, is going to have rivals from other countries or companies, and odds are we could be seeing transhumans wandering around by then too who could act as a check or a balance. Getting back to the utopia option: in fiction this is explored a lot, but fiction is still not a good guide. If you have a big benevolent supermind and billions of regular people, keep in mind that it does not just have the ability to give us awesome tech; it has the ability to be everyone's actual, for-real best friend, because it would not have a problem handling a billion simultaneous phone calls when we need someone to ask for advice or complain about life to. Such a thing is literally a god in a machine. Privacy could be a big issue, but kids raised with something like that in the background would be pretty used to talking to it all the time, not as some remote machine a few chosen programmers were interacting with. So this machine, call it Hal, is pretty omnipresent, and you ask it what you should have for dinner tonight. It tells you, helps you cook, gives you dating tips, and totally knows the perfect job for you, one you'd be good at and would be fulfilled by. And it knows how to make you feel better when you realize your relationship with it is a lot like the one you have with your cat or dog, and that your job overseeing the automated widget factory is not just make-work but probably actually interfering with the efficiency of the operation. In fact, it's probably smart enough to trick you into thinking you serve a vital role and are not its pet.
So there is a notion that as soon as a superintelligent AI comes along, that is the end of human civilization, either because it wipes us out or because it just so dominates everything, friendly or not, that it is really not human civilization anymore. But I think that's a handwave; the logic seems totally superficial and emotional, particularly considering that, as mentioned, the supermajority of humans now and in the past were firmly convinced of the existence of god or gods, or that advanced aliens were watching over us. So these concerns are genuine enough, they just are not new or unique to a singularity. Our ancestors kept these notions around for as long as we've had records, and presumably before that too. But to call back to postulates 4, 5, and 6: those gods did not make a better brain that made a better brain, and so on. We should also remember that, realistically, it would not just be regular old humans and Hal. In all probability, you'd have the whole spectrum of intelligence going on, from normal humans up to Hal, because again, only in that specific scenario, where you make the first superhuman intellect and it avalanches in a very short period of time, would that happen. And that does not seem terribly justified, let alone inevitable. Much more likely, it would be incremental; those increments might be quite quick on historical timelines, but we should be thinking centuries, decades, or years, not weeks, days, or minutes. Plus, while telling a machine to make itself smarter seems like a logical enough thing to do, would you not expect those same programmers to ask it to tell them how to make themselves smarter too? And if Bob, our first superhuman machine, can design Chuck, would you just build one Chuck? Why not three or four, and ask them to pursue different goals? There's also the question of how machines are supposedly just cranking out new upgrades in minutes.
Even if you just gave it one, it all still takes time to make and assemble; we often assume it has access to self-replicating machines and that they do all the work.
In a recent interview, researcher Alex Zhavoronkov talked about how he believes AI will help humanity defeat aging. His company, Insilico Medicine, is already at the forefront of using AI as a tool to facilitate the development of drugs that combat age-related diseases.
More and more scientists are convinced that aging, while a natural phenomenon experienced by all living creatures, is a disease that can be treated or even cured. Scientists, generally, have taken different approaches to aging in that regard. Some want to slow down the process, while others seek to put a stop to it altogether. Those in the latter group see no limit in our potential to extend human life.
These efforts are fueled by the latest technologies science has to offer. Among these is the use of stem cells combined with genetic and cellular manipulation. More recently, researchers have been testing the rejuvenating effects of proteins found in human blood. Still others propose using a certain type of bacteria to keep old age at bay.
Then there’s Alex Zhavoronkov. He’s the director of both the International Aging Research Portfolio (IARP) and the Biogerontology Research Foundation, as well as the CEO of bioinformatics company Insilico Medicine. His idea seems straight out of science fiction. Instead of fearing artificial intelligence (AI) as the harbinger of humanity’s demise, Zhavoronkov wants to use AI to defeat aging.
CURING AGE-RELATED DISEASES
Zhavoronkov highlighted the work Insilico is doing to combat aging and age-related illnesses. One project is an algorithm called OncoFinder, which analyzes the molecular pathways associated with the growth and development of cancer and aging. “I think that applying AI to aging is the only way to bring it under the comprehensive medical control,” Zhavoronkov said. “Our long-term goal is to continuously improve human performance and prevent and cure the age-related diseases.”
He explained further:
In 5 years, we want to build a comprehensive system to model and monitor the human health status and rapidly correct any deviations from the ideal healthy state with lifestyle or therapeutic interventions. Considering what we already have, I hope that we will be able to do it sooner than in 5 years […] One of our major contributions to the field was the application of deep neural networks for predicting the age of the person. People are very different and have different diseases. I think that this approach is novel and will result in many breakthroughs.
Zhavoronkov also explained the important role AI has in facilitating the development of drugs that could treat aging and age-related diseases. “Our AI ecosystem is comprised of multiple pipelines,” he said. “With our drug discovery and biomarker development pipelines, we can go after almost every disease […] And since we are considering aging as a form of disease, many of the same algorithms are used to develop biomarkers and drugs to prevent and possibly even restore aging-associated damage.”
For Zhavoronkov, AI’s potential impact on humankind goes far beyond the possibility that it could cause a singularity apocalypse — it could actually be the very thing that saves us from death.