Before we get too deep into this, we need to look at a distinction between numerical and qualitative identity. Numerical identity is to say that two things are one and the same, that the things are one and not in fact two. Qualitative identity is to say that two things are qualitatively identical, that is, indiscernible. They can still be two different things; they just can't be told apart. With numerical identity you may be able to tell the earlier and later thing apart, but you're calling them the same thing. With qualitative identity you can't tell the two things apart, but you're still admitting that they're two different things. So numerical identity is usually about how something can be the same over time: something exists at an earlier point, time passes, it may have gone through some changes, but we would still say it's the same thing. Qualitative identity, on the other hand, says that two objects are indiscernible. We have two objects that may be in different places, but they look exactly the same. They may also exist at different times. The point is that we're talking about two separate objects, whereas with numerical identity we're talking about one object being identical to itself at a later time. Now, there are interesting positions that say the only kind of numerical identity we can get is qualitative identity: basically, that the only way something can be the same over time is if it's actually indiscernible from its previous self. This would be the kind of person who says that once you lose a single plank from the Ship of Theseus, it is no longer the same ship. What we're looking for is this: under what possible circumstances is a person who exists at one time identical with something that exists at another time? Basically, how are you identical to some previous version of you? A younger self, perhaps? How are you identical to an OLDER self?
And we say "something" and not "a person" in case we end up talking about something that may not be a person, like an AI. This also poses questions like: are you the same person who got blackout drunk last night, even if you don't remember it? Are you the same person if your brain is transplanted into someone else's body? Are you the same person as someone else whose brain is identical to yours? Are you the same person as you in a coma? Are you the same person as your Swampman? Are you a philosophical zombie? We're going to use the two most prominent approaches: the psychological approach, based in memories and cognitive states, and the somatic approach, based in your body. "The somatic approach to personal identity" is pretty simple and pretty common-sense. It's based on an idea called "animalism." Animalism has an argument that goes as follows: there appears to be a thinking animal in the same place as you; you seem to be the only thinking thing there; and you seem to be the only animal there; therefore you are identical to the thinking animal. Basically, what this is saying is that you are JUST your body, your brain, and everything that's going on with that. You're nothing more than that. So "you" in the present are identical to "you" in the past or future because you're in the same body; you're the same thinking animal as your past self, so you're all continuous, all identical. Here are some objections. 1) What if your brain or head were transplanted into another body? According to the somatic approach you would be a different person, even though your brain was transplanted and all your memories and thoughts moved on with it, because your body died, because the animal you were in died. Another objection: imagine Jean-Luc Picard walks into a transporter and gets transported somewhere else.
For those who aren't Star Trek fans: the transporter scans you as you are, turns you into data, and then pops you out of another transporter at another location. According to the somatic approach, every time Jean-Luc Picard steps into the transporter he stops existing, he dies, and another captain, a different person, shows up on the other side, because it's a different body, a different animal, a different creation that appears. So there have actually been hundreds if not thousands of Captain Jean-Luc Picards, or of anyone who's been through a Star Trek transporter. The same goes if there were a way to put your mind on a hard drive, which theoretically there should be, because all your mind is, is a bunch of neurons firing, a bunch of physical molecules. Even if your thoughts and memories were kept intact, you'd be dead and a new mind would have been created. You would no longer be identical with yourself at any past time, according to the somatic approach. These results are pretty counterintuitive, and that's one of the reasons people favor the psychological approach over the somatic approach, even though it has problems as well. We also have the original problem we started with when we were talking about identity: whenever identity is based in the physical world, we're going to have the problem of Theseus' Ship. Your cells are constantly dying and being replaced. The question is: at what point do you start being a different person? After you lose a couple of cells? After you lose a lot of your cells? Maybe more than half? If you upload your mind ganglion by ganglion, outsourcing each brain function to a specific digital device one at a time, at what point do you stop being "you"? You could theoretically upload your whole mind to the internet, as long as you do it slowly, one brain function at a time. Our memory and navigation have already been outsourced to our smartphones.
Our mathematical ability has been outsourced to calculators, as has our sense of time. Perhaps we could do the same with things like speech, emotions, or vision. At what point are you a different person? When have you outsourced too many brain functions to technology? At what point do you stop being human? After you lose a couple of neurons? Maybe more than half your neurons? And once you've lost all the cells in your body, if you still think you're the same person, consider the problem of Theseus' Ship: what would happen if someone took all of your old cells and reconstituted them, bringing them back to life? Which of these "yous" is really you? This is a question the animalist, the somatic approach, is going to have a tough time answering. So those are the two main approaches to dealing with persistence and the problem of identity, the psychological approach and the somatic approach, and we have offered pretty substantial objections to both; when you get down to it, both are fairly unintuitive. This leaves us primed for "no good sense" or "no stable idea" of personal identity. So what is personal identity? Look back to the Ship of Theseus paradox. The Greek hero Theseus has a ship. He's sailing along in the ocean, and at various points one or more of the planks in his ship rots and needs to be replaced. The question is: is this still the same ship? Most people would say after a single plank that it is. But it goes on and on over hundreds of years, all sorts of different planks rot, and at what point does this stop being Theseus' ship? After the last of the original planks is replaced? If you still think this IS Theseus' ship, what would happen if someone took all those rotten planks and built a rotten ship? Which one of these would now be Theseus' ship? This is the problem of identity for objects, things like ships. PERSONAL identity is the question of how a person is the same over time.
What makes a person the same person, just like what makes a ship the same ship?

PATTERN IDENTITY THEORY: The theory that “I” am the same individual as any other whose physical constitution forms the same or a similar pattern to mine. (Cf continuity identity theory).

CONTINUITY IDENTITY THEORY: The theory that “I” am the same person as various future and past selves with whom I am physically and temporally continuous. (Cf pattern identity theory).

"The Psychological Approach to Personal Identity." We've done the somatic approach; now for the psychological approach, which has two different versions: the memory criterion and the causal dependence criterion. We're going to look at each of these in turn. First, the memory criterion. This is the most intuitive version of personal identity, the one a lot of people think of: a person is identical to another person if one of them remembers the experiences of the other. So what does this look like? You in the present are identical to you in the past or future if you remember your past self, remember the things your past self did, and so on. So this is a pretty common-sense version of what personal identity is. There's also the causal dependence criterion, which, as we'll see in the objections, avoids some problems with the memory criterion. The causal dependence criterion says that one is psychologically connected to one's previous selves if and only if one's current mental states were caused by those previous mental states. To be identical to something via the causal dependence criterion, you have to be psychologically continuous with it, and one is psychologically continuous with one's previous selves if and only if there is an unbroken chain of psychological connections linking one to them. So you have to be connected, link by link, in a nice little causal dependence string, in order to be identical to them. Basically, your mental states from before have to have caused your current mental states in an important way. Let's take a look at what this looks like: you in the present are identical to you in the past if your current mental states, i.e. your thoughts, feelings, and beliefs, were caused by your past mental states. And you in the present are identical to you in the future if your future mental states will be caused by your current mental states.
It doesn't make sense for us to be identical with selves whose mental states played no causal role in producing ours; so, that is the causal dependence criterion. There are objections to both the memory criterion and the causal dependence criterion for personal identity. Start with the memory criterion: as we remember, a person is identical to another person if one remembers the experiences of the other. So what if we have the following example. Imagine a lawyer who was once a student. The lawyer remembers being that student in the past and not paying a library fine; thus the lawyer is identical to the student. Also imagine a law professor later in life. The professor remembers her law career, remembers being that lawyer; so the professor is identical to the lawyer. However, the professor does not remember skipping the library fine as a student. Thus the student is identical to the lawyer and the lawyer is identical to the professor, but the professor is not identical to the student. This is a problem because it breaks what's called the transitivity of identity: if student equals lawyer, and lawyer equals professor, then professor HAS to equal student. But according to the memory criterion it doesn't work out that way. If that wasn't convincing, imagine someone named "Blott" and someone named "Clott." Blott is someone in the present; Clott is someone in the past. Blott remembers events from Clott's life, let's say a specific event. But it is still possible that Blott's memories are not genuine; maybe Blott is really old and is making up memories, or doesn't realize that he's misremembering certain things.
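The transitivity failure above can be sketched in a few lines of code. This is only a toy model of the memory criterion; the names and the `remembers` mapping are illustrative, not from any philosophical source:

```python
# Toy model of the memory criterion, using the student/lawyer/professor example.
# remembers[x] holds the past selves whose experiences x remembers.
remembers = {
    "professor": {"lawyer"},   # remembers her law career
    "lawyer": {"student"},     # remembers skipping the library fine
    "student": set(),
}

def same_person_by_memory(a, b):
    """Memory criterion: a and b are the same person if either
    remembers the experiences of the other."""
    return b in remembers.get(a, set()) or a in remembers.get(b, set())

# Each adjacent pair comes out identical...
assert same_person_by_memory("professor", "lawyer")
assert same_person_by_memory("lawyer", "student")
# ...but identity must be transitive, and here it fails:
assert not same_person_by_memory("professor", "student")
```

The point of the sketch is just that the raw "remembers" relation is not transitive, so taking it directly as the identity relation breaks the transitivity rule of identity.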
In fact, Blott's body was somewhere else at the time Clott was having those experiences. They are the correct experiences from Clott's life, but there doesn't seem to be any causal history explaining how those experiences got from Clott having them to Blott remembering them. The memories aren't genuine, and the only way we can tell whether the memories ARE genuine is if Blott and Clott are actually the same person. But that begs the question; it's the very question we're trying to answer. Are they the same person? Well, their memories are the same, so they're the same person. But what if their memories aren't genuine? The only way we can tell if the memories are genuine is if they're already the same person. We've gone in a circle, and that's a problem. To deal with some of these problems we'll take a look at causal dependence. But first, here is a series of questions to make you question the memory approach even further. Did someone else walk my body home when I was drunk? Does someone else live in my body when I have a dreamless sleep, since I don't remember what happened for those eight hours? If I don't remember every time I rode the subway, does that mean someone else has been riding the subway in my place? And if at a certain point I forget riding the subway, does that mean I stopped being connected to that person, and stopped being that person? If I get amnesia, does the former me die and a new person come into existence? If I don't remember my childhood, did I never live it? Was I never that person? Did someone else experience my childhood? These are all important questions for the memory criterion. Now, like I said, we're going to take a look at the causal dependence criterion, which gets around SOME of these problems but will have some new ones of its own.
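One way to see how causal dependence repairs the transitivity problem is to model psychological continuity as an unbroken chain of direct connections, roughly its transitive closure. This is only a rough sketch under that reading; the names and the `connected_to` mapping are hypothetical:

```python
# Toy sketch: psychological continuity as an unbroken chain of direct
# psychological connections (the transitive closure of connectedness).
connected_to = {
    "professor": {"lawyer"},   # directly connected: remembers being the lawyer
    "lawyer": {"student"},     # directly connected: remembers being the student
    "student": set(),
}

def continuous_with(a, b):
    """a is psychologically continuous with b if some chain of direct
    connections leads from a back to b."""
    frontier, seen = {a}, set()
    while frontier:
        current = frontier.pop()
        if current == b:
            return True
        seen.add(current)
        frontier |= connected_to.get(current, set()) - seen
    return False

# The professor has no direct memory-link to the student, but the chain
# professor -> lawyer -> student holds, so continuity is preserved:
assert continuous_with("professor", "student")
```

Because continuity is defined over chains rather than single memory-links, the professor still counts as the student even after the direct memory has faded, which is exactly what the memory criterion on its own could not deliver.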
So, as we remember, causal dependence relies on psychological connectedness, meaning our psychological states were caused by our previous states, and on being psychologically continuous with those previous selves, always psychologically connected to them in a chain, and so on. Let's take a look at some objections. It's been said that you can lose a hemisphere of your brain and still stick around and still be "you": your mental states, according to the causal dependence criterion, were still caused by your previous mental states, so you seem to still be you. And in fact, this has been done. People have lost a hemisphere of their brain via hemispherectomy, a surgery done for inoperable brain tumors or epilepsy. They still stuck around despite losing half their brain; according to causal dependence, they've still been themselves. Likewise, if you were to transplant a hemisphere of your brain, you would still be you. Or if you transplanted just the cerebrum, the important part of your brain that does all the higher thinking, memories, and that kind of stuff, the majority of your brain, into someone else's head that maybe only has the basic body functions and nervous system, you would still be "you," because your memories, your mental states, were transplanted to someone else. Given all that, take a look at the following thought experiment. Imagine the left half of your brain gets transplanted into a guy named Lefty, who was a complete blank slate until he got that hemisphere of your brain. And the right hemisphere of your brain is transplanted into a guy named Righty, who was also a complete blank slate until he got that half of your brain.
As we've shown, you can survive with only half your brain, due to plasticity. And as long as the important parts get transplanted, you CAN be transplanted into someone else and maintain your identity. The problem is that your mind splits: you seem to be identical to both Righty and Lefty. They are both thinking your thoughts, and all their mental states were caused by your mental states. But you can't be identical with two things at once. Maybe Righty is hungry at one point and Lefty isn't; you can't be both hungry and not hungry at the same time, so we have a problem. So there seems to be a problem with the causal dependence criterion. If you don't like that one, try this: say a person walks into a magic room, somebody flips a switch, and all of a sudden a perfect copy of that person appears in a perfectly identical room right next to it. There's no way for these people to tell who they are. From the first-person perspective they both seem to be the person who just walked into the room; they both THINK they are that person. But only one of them is identical to the original. The other is NOT, even though their qualitative experiences are perfectly identical. There's no way for either person to figure out whether they walked into the box or were created moments ago. This seems to be a problem: there are two people thinking the same set of thoughts, even though one set was produced by the proper causal chain and the other was just produced by a machine. How can they be QUALITATIVELY identical but not NUMERICALLY identical? This is a little confusing. So, those were some objections to the psychological approach. Next we'll take a look at the somatic approach, which, instead of thinking about memory, focuses on the body.

Identity and the Transporter Paradox. One of the oldest and most pervasive themes in philosophy is: how can something change and yet still remain the same thing? The question requires paying close attention to the difference between qualitative identity and numerical identity. I can have two pencils that are, according to any measurement, identical: in size, in shape, in color, in functionality. But there's always some distinction that can be made between them; the only difference is that one is over here and the other is over there. A ↔ B. THAT'S qualitative identity: in any conceivable situation, you could replace one pencil with the other and nothing would change. Numerical identity, on the other hand, is something we take for granted in conversation and thought, but it's actually much, much weirder. Say I have a pencil and I use it to make a mark on a piece of paper. You'd probably say that it's the same pencil, but there are measurable differences between this thing and the thing I started with. It's a little bit duller, it has lower total mass, my fingerprints are on it in different places. If I were to compare it to its past self with certain tools, I'd be able to tell the difference in a way I might not be able to with a qualitatively identical pencil. But we still say it's the same pencil: A = A. I could do any number of other things to change it, like painting it or even breaking it in half, and we'd still probably agree that it persists, that Pencil A is Pencil A, that there's some relation it has to the original pencil which is somehow more intimate than the qualitative relationship that this one (Pencil B) has to it. *Breaks Pencil A in half.* After all, I wouldn't say that B is the original Pencil A, even though B is much more like the original A than the broken-in-half Pencil A is. Your first impulse is probably to think of some material connection.
After all, most of the atoms that were in the original pencil are here in this one (A). But what about things that are a little more dynamic than pencils, like software or minds? Human beings are living organisms, constantly absorbing nutrients from their environment and using those nutrients to rebuild themselves. If you're in good health, your body will have replaced almost every single cell in it in about seven years. So what is it that makes me ME? Am I the same "me" I was seven years ago? If I replaced 99% of the particles in my pencil, I'm not sure it's the same thing anymore. What makes me numerically identical with me from seven years ago? Am I still me? It's not like my identity changed the last time I went to the bathroom. What if I were to clone myself and transplant my brain into my clone? It seems weird to suggest that the new organism with all new parts wouldn't be ME anymore, or that the empty-headed body I left behind is somehow still me. Maybe that indicates where we should be focusing: the brain. After all, there's some aspect of past me's psychology that still persists in me, right? It seems like there should be some relationship between my mind and his mind that makes us the same person, as I used to be him. Although it would have to be something pretty general: I used to be really into Settlers of Catan, but I just prefer other games now, and it seems silly to suggest that just because we don't share a deep, abiding love for Catan we're not the same person. There's an important distinction here, that qualitative/numerical identity thing again. We sometimes say things like "Sally is a totally different person since she took up swing dancing." But we don't mean that Sally is dead and some alien creature is now possessing her body; she's still Sally, she's just different than what we're used to. So let's take some stabs at this psychological relationship that allows people to persist.
Maybe it's the continuity of consciousness: I have a continuous experience of existing and being the same person, so maybe THAT's what makes me persist over time. Unless I go under anesthesia, or fall asleep, or space out for a few seconds. Oops, it doesn't work; let's skip that one. Maybe it's something more like continuity of memory? I have memories of being myself living in 2010, so maybe what makes me numerically identical with that person is that I remember being him. But there are some problems. As anyone who knows me can tell you, my memory isn't all that stellar. If I can't remember some part of my past, that would imply it wasn't me who bought all these Steam games. Worse yet, at least for the enterprise of trying to establish criteria by which we judge someone numerically identical with their past selves, memory is a really finicky thing: it's very easy to convince eyewitnesses that they saw something they didn't, or any number of other things. Our memory is flawed. If I managed to convince someone else that they also bought this Steam game during the summer sale, would that mean we were somehow the same person at that time? You might say that doesn't count because it's not an accurate memory. But you can only say it's inaccurate if you're using some other criterion, some other answer to the very question you were trying to answer in the first place. So if it's not about sharing a memory with someone in the past, then we've got more work to do. OK, so what about some combination of memory and causality, a chain of memory? I might not specifically remember buying those games, but I do remember surfing Steam during the summer sale; I remember a past version of myself, and he remembers buying those games. That sounds pretty feasible, and we get to dodge that memory-duplication problem, because only one of us has an unbroken chain of memory that goes back to that event.
I might be able to convince you that you also bought those games, but there's no past version of "you" who remembers that. It's also got some interesting and potentially useful implications. Like, if you ever sever that chain of memory, you technically ARE a different person. If you accidentally get blackout drunk one night and then look at the pictures afterward, it definitely feels like that was someone else. One way we can test this theory of persistent personal identity is to invent weird but logically possible scenarios and see if we can break it. Star Trek is unfortunately not a documentary of the real world, but it does contain a neat little thought experiment we can use to apply pressure to that "chain of memory" idea. One of the most important technologies in Star Trek is a machine that scans people down to the subatomic level, disassembles them into their component atoms, and reassembles atoms somewhere else into an identical structure. In the sci-fi series Star Trek: The Next Generation it's called the transporter, and it's used as a teleportation device. If you were dedicated to some of those previous criteria of persistent personal identity (continuity of consciousness, memory, or having the same atoms), then being disassembled into your component atoms would mean you died. The transporter debunks four counterarguments for mind uploading: 1) continuity of matter (our cells are replaced every seven years anyway), 2) continuity of personality, 3) continuity of consciousness (sleep, drugs), 4) continuity of memory (being blackout drunk, surgery). But then again, on those criteria you were also dying every seven years, or every time you fell asleep. Other psychological criteria seem to have no problem with the transporter: the copy of you that shows up shares all the same psychology with the you that stepped into the machine. Well, let's tweak the thought experiment.
Let's say that instead of disintegrating you, the transporter just knocks you unconscious, then duplicates you, then drops both bodies in sick bay, without any indication of which is the original. When they wake up, there are now two people who share psychological continuity with your past self. But they're in different beds. They'll have different memories and different experiences from this point forward. They might develop different characters, or disagree with each other. Which Commander Riker is the same Riker? They're very clearly different people now. So which one is you? Which one counts as you persisting into the future? Which one should get your stuff, or sleep with your wife? This is also called the Swampman problem. We've got the two-pencil situation here: they may be qualitatively identical, but unless one can somehow equal two, they can't both be numerically identical to the person you were before you stepped into the transporter. Obviously this is a fantastic scenario; there's a lot of physics between you and an atomically perfect copy of yourself, and it could hardly happen except in some fictional universe where someone's invented Heisenberg compensators. But if it's logically possible for two people to be psychologically continuous with the same past person, then psychological continuity of any form isn't a sufficient criterion to establish personal identity. That defeats the fifth counterargument: 5) psychological continuity. So where does that leave us? There's a reason the problem of persistent identity has plagued Western philosophy since Heraclitus, who pointed out that you cannot step twice into the same river. It's really, really hard. We have an intuition that things like the pencils persist in time, but it gets a little uncomfortable when we put the screws to it. It's even more uncomfortable to realize that the things we're most intimately familiar with in the entire universe, ourselves, are subject to the same uncertainty.
Even people who believe in eternal supernatural entities like souls or gods are faced with the same problem. The transporter problem wasn't first imagined in the context of Star Trek, but 200 years earlier, in the context of the apocalypse and resurrection. There isn't really a wrong answer to these kinds of questions, just different kinds of answers with more or less intuitive implications. Some philosophers think it's really important to establish what makes us continue to be us, while other philosophers think the entire idea of persistent personal identity is some sort of cognitive or linguistic mistake. If you've got a transporter trip to Mars, or a nice shiny body for me to slip into, I'm not gonna lose a lot of sleep wondering if it's still technically me afterward.

A hemispherectomy is a surgical procedure in which half of a person's brain is removed. It's usually only ever done on very young patients, because their brains are still plastic enough that the remaining half will take on the functions of the half that was removed. It's usually done because a child or a baby is having seizures, and removing the part where the seizures occur is the only solution. But here's my question: if you can live with half a brain, what if I were to take two empty skulls, take both halves, and plop them into separate bodies? Which person would be "you"? I mean, you are you: you are conscious, you are aware of what is happening to you from the perspective of yourself. Think of it this way: if you just stare at something and feel what it feels like to be you, it feels a little bit like you're a thing inside a body looking out through the eyeballs, and nobody else on Earth will ever see the world from that position. This awareness of your own experiences, the awareness that you are having them, that you are having your own thoughts, makes up what we call "consciousness." But if I split your brain in two and put it into two bodies, would both of them be new people who were conscious? One of the best places to start is with things we agree are NOT conscious. For instance, Michael from Vsauce uses a very clever example: Cleverbot, a website where a computer program responds to your questions really cleverly, but only because it is programmed to do so. We wouldn't consider it conscious because it doesn't have a sense of itself. It doesn't "feel" anything, it doesn't have its own inner life; it's just a program that responds automatically to my inputs. I know that I am not Cleverbot. I know that I feel things and that I have a sense of myself. I have intentions. But how do I know that YOU do? For that matter, how do I know that everyone else I meet is like me?
How do I know they're not just smart little versions of Cleverbot who know exactly what to automatically say? This is the concept of the p-zombie, also known as "the problem of other minds." What I'm asking is philosophical, but it is a very famous and important question: is it possible for something to exist as a philosophical zombie, a thing that reacts, responds, and acts just like a normal human but doesn't actually "feel" anything? It doesn't "know" that it's having its own thoughts; it just automatically responds, like a robot, in the appropriate way. What's amazing and heavy about this question is that science doesn't have an answer, and it's not even clear that science will EVER have an answer, let alone an approach to finding one. About all we have is the psychology of disorders of consciousness, so let's begin with anosognosia. A common example of it in psychology classes is a patient who has lost the ability to move their left hand. When asked to raise their right hand, they'll do it; but when asked to raise their left hand, they'll say yes and not move it. And when they don't move their left hand, instead of reporting that they can't, they'll make an excuse like "I didn't feel like it." Anton-Babinski syndrome is even more dramatic. Patients with it are cortically blind and cannot see anything, but they deny being blind. If you ask them how many fingers you're holding up, they'll make a guess, and if they're wrong they'll make an excuse like "I didn't have my glasses." People with this are usually victims of stroke; there's some disconnect between what they're really experiencing and their conscious awareness of it. They don't know that they can't see, because the part of their brain that monitors visual input isn't telling the rest of the brain anything; it's not even telling the brain that there IS no visual input.
Which means that the parts of their brain responsible for answering questions or creating speech have to create a confabulated response.

Donald Davidson's "Swampman" problem. Imagine I'm walking in a swamp and get struck by a bolt of lightning. My entire body is burned to a crisp and dissolved into smithereens. But at the very same moment, a second bolt of lightning strikes nearby and causes a bunch of atoms and molecules to arrange themselves into the exact same configuration that MY body used to have, making a second me. Is that me? Would that be me? Or imagine that a surgeon came in and started removing cells from me and from you, swapping them exactly one at a time: my cells into your body and your cells into my body. At what point will I have officially become you?

Whether we think the ship of Theseus is the same ship or not, the key lies in identity. Philosophers describe identity as the relation that a thing bears only to itself. So whatever makes a thing uniquely what it is defines its identity, and if two things are identical, they are said to stand in an identity relation. Now, whether two things are the same might seem obvious, but it isn’t. The 17th-century philosopher Gottfried Wilhelm Leibniz came up with a principle that might help us solve the puzzle of who Batman is. He called it the “Indiscernibility of Identicals.” The idea is that if any two things are identical, then they must share all the same properties. If Leibniz is right, then the ship of Theseus became a new ship as soon as the first plank of wood was replaced. As soon as its parts were no longer all original, the ship suddenly acquired a new property, and with a new property came a new identity. 
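Leibniz’s principle can be written down formally. A minimal sketch in second-order logic, where F ranges over properties:

```latex
% Indiscernibility of Identicals (Leibniz's Law)
\forall x\,\forall y\,\bigl(\, x = y \;\rightarrow\; \forall F\,( Fx \leftrightarrow Fy )\,\bigr)
% Contrapositive: find a single property one thing has and the
% other lacks, and the two things cannot be numerically identical.
```

Read contrapositively, this is exactly the move made below: one mismatched property is enough to break identity.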
So likewise, Bruce Wayne and Batman can’t be identical, because they have different properties. So is there a limit to how much something can change and still be the same thing? Philosophers have struggled with the issue of personal identity for a long time, trying to find that essential property that makes you YOU, the thing that preserves your identity through time and all the changes that come with it. First there’s the body theory, the assumption most people have. It says that personal identity persists over time because you remain in the same body from birth to death. But it’s not like you consist of all the same identical stuff you had when you were born. You’ve replaced the outer layer of your skin cells hundreds of times so far, and your red blood cells only live for about four months before they’re cycled out. So, kind of like the Doctor or the ship of Theseus, you’re constantly being replaced by new physical versions of yourself. If you ARE your body, then how much of you can change before you become a new you? What if you gain weight, or grow a beard? The 20th-century English philosopher Bernard Williams proposed a thought experiment to make us consider where we think our personal identity resides. It goes like this: you and I have been kidnapped by a mad scientist. He tells us that tomorrow morning he will transfer all your mental content, beliefs, memories, personality, everything, into my brain, and move all my mental content into your brain. He also tells us that after the procedure is complete and the mental content has switched bodies, he’ll give one of the bodies a million dollars, and the other body will be tortured. He’s decided to let you pick which body gets the torture and which one gets the cash. What do you decide? Your answer should give you a clue about where you think your identity lies. John Locke didn’t like the idea that the most essential aspect is the body; for Locke, the thing that makes you YOU is the nonphysical stuff: 
your consciousness. But Locke recognized that we don’t maintain a single continuous consciousness over the course of our entire lives. We go to sleep every day, but when we wake up, our conscious selves remember who we were the day before. So Locke posited a “memory theory of personal identity.” He believed your identity persists over time because you retain memories of yourself at different points, and each of those memories is connected to the one before it. Now, we don’t remember every single moment. Do you remember what you ate for lunch last Tuesday? Probably not, but you can probably remember a time when you DID remember it, say, last Tuesday afternoon. And even if you can’t remember that version of yourself directly, you’re still connected to the Tuesday-at-lunch person through a chain of memory. And this process can take us a lot farther back than last Tuesday. Locke said that if you can remember back to your childhood, then you maintain a memory link to that person. Sure, your mom also remembers that day, but only you remember it from the inside. That’s your memory, and since it’s yours, you must be the same person who experienced it. The memory theory makes a lot of sense, but it’s got some problems of its own. First off, no one remembers being born. That’s not a bad thing; I imagine none of us would really want to recall that particular experience, or the couple of years we spent afterward pooping our pants. But if personal identity requires memory, then none of us became who we are until our first memory, which means we all lost at least a couple of years at the beginning. If you’re committed to this view, you also have to accept that people stop being the same person if they lose their memories. If a person begins to suffer from dementia, once he’s lost the ability to remember his past, does he stop being that person? So the memory theory presents problems for both the beginnings and ends of life. But there’s also the issue of false memories. Memory, after all, is notoriously tricky. 
We know that eyewitnesses are likely to recount the same event very differently, so how do we know the memories we have are accurate? And if they’re not, if things didn’t actually happen the way you remember them, then how did those faulty memories influence your identity? Did they make you a partially fictional person? So at first Locke’s theory seems to have advantages over the body theory, because consciousness and memory persist through your body’s physical changes. But after a little interrogation, you find that memory is pretty tenuous too. Hume argued that the idea of the self doesn’t persist over time, and that there is no “you” that is the same person from birth to death. He said the concept of the self is just an illusion. If having a certain identity means possessing the same set of properties, then how could anyone really maintain the same identity from one moment to the next? This would argue that mind uploading is pointless. But what does it mean for my understanding of myself if there is no single, constant me? I clearly don’t share all of the same properties as my childhood self, or even myself last week, so Hume would say it’s silly to pretend that it’s still me. But I feel like me, so what’s going on? Hume said that the so-called self is just a bundle of impressions consisting of a zillion different things: my body, my mind, my emotions, preferences, memory, even labels that are imposed on me by others. Hume says there’s no single underlying thing that holds it all together. We’re all just ever-changing bundles of impressions that our minds are fooled into thinking of as constant, because they’re packaged in these fleshy receptacles that look basically the same from one day to the next. Contemporary British philosopher Derek Parfit, probably after watching Star Trek, posed this thought experiment: imagine a machine that breaks you down atom by atom, copies all that information, and transmits it to Mars at the speed of light. 
Once the information gets to Mars, another machine uses it to recreate you atom by atom, using copies of the same organic stuff you were composed of here on Earth. The person who wakes up on Mars has all the same memories and personality as you did, and that person thinks it’s you. So here’s the question: is this space travel? Did you travel to Mars? Is the transported person really you? Or was a new being created that happened to correspond to you atom by atom, thought by thought? But now consider this: what if a new version of the machine is created so that instead of destroying your body, it can simply be scanned and all the information recreated on Mars, while the you here on Earth still exists? Now did you travel, or were you just replicated? If you’re here on Earth, are you also on Mars? Parfit agrees with Hume that there isn’t such a thing as personal identity over time, so in either case you didn’t travel through space; that’s just a new you that shows up on Mars, whether the old you was destroyed or still exists. And even if you hopped on a spaceship and flew to Mars the old-fashioned way, it’d still be a new you that arrives, because you would have experienced all sorts of changes during the trip. But the thing is, Parfit thought that Hume missed a really important point: even though there isn’t a singular you from birth to death, each of us has a “psychological connectedness” with ourselves over time. Think of your life as being like a piece of chainmail. The mesh that is your personal identity is made up of lots of separate chains, and those chains intersect at certain points to make up the chainmail. As you follow the timeline of one set of links, new links are being created that add to the chain, and as time passes, the links farther back in your past slowly start to drop off as they lose their psychological connection to you. 
When you stopped loving Dora the Explorer, that link dropped away, and when you discovered that you love philosophy, a new link was created. But some chains intersect with other chains and have links that persist for a long time, like the love you have for your parents. So sure, I’m not the same person I was in elementary school, and I won’t be the same person when I’m old. But parts of me survive the passage of time because they’re psychologically connected to my previous selves. And survival is what’s important to Parfit. As long as enough of the elements of you persist, you see yourself as relevantly the same, but not for a whole lifetime. Parfit would say that NONE of the you that existed at birth is still around: your physical matter is almost all different, you have no memory of that time, and your preferences have completely changed. Baby you has not survived. But some of last-year you probably has. This is called “Parfit’s theory of survival through psychological connectedness.”

First of all, there’s no definition of either consciousness or identity that is universally agreed on by philosophers, which is amusing, since both are things everybody knows what they are. I am me, you are you. We all know what identity is, and what consciousness is, but if you try to slap a proper definition on either, things get very complicated. I wonder if trying to describe identity in terms of concepts is like trying to describe smells in terms of colors or sounds. Some things you just can’t describe without resorting to a circular definition, a definition that assumes its conclusion in its premise. We all change with time. Most of our cells are replaced gradually, and the cells themselves routinely take in fuel and material to replace damaged components. There are some atoms in your body from when you were born, but odds are good they all left your body at some point and returned by coincidence. While a child differs from the man he grows up to be, they are somehow the same person. If you don’t have consciousness or identity any more than a rock does, then you won’t have opinions any more than a rock does. And if identity isn’t something broader than your exact arrangement of atoms at this moment, then there isn’t any you to have opinions, since the you who thought them up wasn’t you; it was someone else, someone who stopped listening to me when I began this sentence. These definitions are absurd in the sense that there’s no reason to contemplate them, since, if they were true, you’re not contemplating anything anyway; there’s no you to do the contemplating. I can doubt many things, but I can’t doubt my own existence, since if I don’t exist, there would be no one doing the doubting. That doesn’t prove that I am thinking or that I exist, but it does show there’s no point doubting our identity, since you’re incapable of doubt if you don’t exist. Thus: I think, therefore I am. 
We’ve been whacking our foreheads against this wall for centuries. It’s occupied the minds of the greatest philosophers, and to this day we’ve got no conclusive answer, so it isn’t one we’ll answer today. We know people change with time, both in their software and their hardware. It’s not just addition, either: you lose stuff, cells die, atoms switch out, memories get forgotten. We usually say that if the change is gradual and maintains continuity in the process, identity has been maintained. I plant a seed, it becomes a sapling and then a tree, and throughout that process identity is maintained, even though there are no atoms left over from the seedling. But if I chop it up and burn it, and it becomes fertilizer in a garden that grows vegetables I eat, identity would not be maintained. Denying that distinction would lead to madness. What if I duplicated you? Now we have a copy of you, complete with your memories. We know that in time you and they will diverge, and arguably they count as a separate person right from the first moment. Software matters most, maybe, but hardware is a big deal. The hardware is mostly identical if I make a clone, but it won’t receive the same sensory inputs and experiences anymore. If I duplicate my mind into a different body, say a professional basketball player’s, I won’t know how to play basketball, but I’ll be better at it than I am now. If I switched bodies with him, that’s how we’d view it: we switch bodies, not minds. The body is the inferior partner in the pair. We wouldn’t say, if Bob and Todd switched bodies, “Hey, look, there’s Bob with Todd’s brain”; we’d say Todd is in Bob’s body. If it stayed that way long enough, we’d say that body is now Todd too, a new identity. But there is no one with a better claim to that identity, and that’s probably important. Maybe Todd isn’t actually Todd anymore, but nobody has a better claim to be him. Maybe you are not who you were 10 years ago, but nobody has a better claim to that identity than you do. 
Most would say that copy wasn’t actually us, but it was a pretty good approximation. And if a friend died and was duplicated like that, complete with memories, we’d be pretty justified in treating the duplicate as we did the friend. Many folks would say it was the same person; that’s why concepts like mind uploading or decanting yourself into a clone body are popular routes to immortality in fiction. So I’m on the table getting ready for an upload or transfer, after which I die. In one reality, that’s exactly what happens: I have an untreatable illness, and they will print a new body, or stick me in an android body, or upload me to a virtual reality. If that happens, the new me wakes up, takes a sad glance at the dead me on the table, and decides it is me. It’s a perfect clone of my mind and body. Now, we handled a similar case with the simulation hypothesis. At this point a computer model pops in, a digital copy of me proclaiming itself the new me, and downloads itself into an android. That android severs ties with the digital self, and now we’ve got three of me standing around glaring at each other. Because I have contingency plans for an identity crisis, we have a plan for such a scenario: we each pick numbers and letters for a new name, and we divide my possessions and friends. It would be nice not to divide those up, but that could cause a lot of confusion, even if we made it very clear to everyone which was which. So what if we found a way to keep us linked together and share memories? This isn’t the same as a hive mind, where there is one single collective mind running everything and all of us are drones, but it raises the same problems, which we’ll get to in a moment. But what if they cut my brain out and stuck it in a brain-dead person who donated their body to science? That raises the issue of whether I’d want cosmetic surgery to look like my old self, which might get intensive too. If a woman had her mind put in a male body, is she a man or a woman? 
Should she get gender reassignment surgery, or should she spend some time as a man to experience that? We’d say that’s entirely her choice, but imagine it were us making the decision as an individual. In terms of what we would do, which would you choose? So, hive minds. These come in a lot of forms, from a single mind running two bodies like drones, to setups where everyone has simply made some sort of linkage to share some or all of their experiences, down to a loose internet-like connection chattering between them that allowed important material to be shared automatically. We’d see this as multiple bodies, one mind. But it could go the other way too. If we are the sum of our experiences more than anything else, then adding or subtracting those experiences changes the person.

It’s very like mind uploading, or replacing parts of the mind with computer parts; it could be done gradually or instantly. But you can make a very good case it’s not you anymore, and a very good one too, because suddenly jumping someone’s IQ up a few orders of magnitude is going to seriously change their personality. Arguing over whether a duplicate of you, down to the last memory, is you, is maybe somewhat semantic. The former shouldn’t result in any change of behavior except gradual divergence between you and that clone as you develop new experiences. But usually when we talk about extending life in the transhumanist sense, we’re talking about changing the mind too. You could make a good case that massively upgrading your mind, or merging it partially with others, is creating a new person and arguably killing the old one. Some might see major changes as tantamount to suicide; others might argue it’s no different from the metamorphosis from child to adult. I’m not the same person I was when I was a teenager, any more than a clone I made 10 years ago with complete memories would be the same person I am now. Yet there was no death there, and the me of 1 million AD, who might be a giant planet-sized computer orbiting with a few million other megaminds as a Matrioshka brain, is probably not the same person anymore. If things like identity, consciousness, and free will genuinely are illusions, things we trick ourselves into believing to maintain sanity, I sometimes wonder if there’s a maximum intelligence someone can have before they cannot trick themselves into believing in them anymore and simply shut off, not even regarding it as suicide. Adding to that problem, a mind that big and potent, like a Matrioshka brain, would have no problem simulating quadrillions of human minds simultaneously, and could run through all their lifetimes again in one second.

If we can imagine ourselves getting bored after a few thousand years of life, to the point that death seems just fine, imagine what that would be like for something like that. And your brain can’t store endless memories, but it probably can be modified to store millions of years’ worth more without a problem. You might need to archive a lot of them so they take less memory, but that itself shouldn’t be an issue. It ought to be so little an issue that, if we do figure out how to reasonably seamlessly integrate memories from other people into ourselves too, and run ourselves at higher subjective rates of experience, overclocking our brains, you’d expect there to be a huge market for experiences. We could probably build a virtual reality simulation of being the first person to set foot on the Moon, but is that the same as feeling the dread of being a quarter million miles from home in a thin tin can, or the awe of being the first person to leave your footprint there? So it isn’t hard at all to think folks might buy up all the memories they can from other people. What if a distant civilization uses memory as a currency? Memories can confer advantages and intelligence, so they might even replace cryptocurrency. Which raises the question: does having a memory of a crime make you guilty of it? This is explored in the sci-fi film Total Recall: if you have the memory of a vacation, people treat you as though you actually went on vacation, so won’t they do the same with a crime? Memories of pleasure might be a big commodity too, which raises the issue of a memory being possessed by multiple people. Identity can get a little blurry if a billion people remember winning the Olympics. If uploading my mind to an android makes the android me, it’s hard to claim that downloading the memory of someone winning the Olympics doesn’t make me that person too. It also means that, to save space, a given memory might be stored in just one place, or a few for redundancy. 
You might end up with an equivalent of cloud memory storage. Which leads to the idea that civilizations where people live a really long time and can move memories around and store lots of them might get incredibly touchy about the notion of identity theft, particularly in this literal way. I could well imagine it being taboo, or a crime, to voluntarily transfer memories, mix memories, delete memories, or run multiple copies of yourself. One person, one lifetime, one unique set of memories, even if that lifetime was essentially indefinite. They could get a lot more touchy about it too, once on the pathway where uniqueness is important and violating it is seen as the worst sort of crime: not just “you made a copy of me and took my memories for your own,” but “you are impersonating me.” There is, after all, only so much true uniqueness. Just as me tweaking a few things about you doesn’t change who you are, since that is constantly happening anyway, your identity is a wide spectrum. If I took a book and changed a few words, that would still be plagiarism. By the same idea, if I dye my hair blond, I am still me, so identity isn’t some discrete state with clearly defined borders. And there could actually be a limited number of identities available. There are infinite shades of color, for instance, and the lines between them are hazily defined, but there are only so many genuinely unique identities, and a civilization might decide there are only, say, a quadrillion reasonably unique personalities. Make a new one and someone says, “That is identity theft. I want her deleted; she’s impersonating me.” Same with genetic code: clones might come about statistically if the population is large enough. And they might actually kill her; after all, in their eyes nothing is being lost, there’s no new person being killed. I think that’s pretty dark. 
But I could envision how something like that could happen, because I can see myself being angry at someone who tries to mimic us when it goes beyond a bit of flattery. In such a situation, killing that copy might be seen as self-defense against identity theft. Even murder might not matter much to a transhumanist society: they could just dump you in a new body, and might just send the culprit a bill for it, plus the trauma and inconvenience of downloading the new mind. There is a maximum number of identities you could cram into a Dyson sphere or a Matrioshka brain, and it’s not even that many; the human brain only has so many neurons before you start getting copies of other brains. If it’s against the law to have clones, your baby might be killed by the government for coming out a copy of someone else. But again, that scenario is far beyond hypothetical. Isaac Arthur calls it “the uniqueness holocaust solution.”

THE NO CLONING THEOREM. To copy something, we need a procedure for transforming raw material into the semblance of the original thing. To copy a painting, you carefully put paint on a blank canvas to match the original exactly, but your painting isn’t exactly the same as the original: the red is a little too bright, a stroke is a little too heavy, there are a few too many atoms of carbon-14 in the new canvas. It’s a copy, but not an atomically identical one. Is a perfect copy, identical even at the subatomic level, even possible? That is, can you make a copy of my brain down to the neuron and beyond, so that even the position, momentum, and spin of every single sodium ion moving between neurons is exactly, indistinguishably the same as in the original? Physicists call this kind of perfect copying “cloning,” even though it definitely isn’t the same as cloning in biology, where two organisms share the same DNA but can grow and develop very differently. Cloning in physics means a much more perfect copy, where the relative positions, momenta, and energy levels of every particle, and all their bonds and interactions, are exactly the same in the copy as in the original, such that if you blindfolded yourself and randomly switched them, there’d literally be no way of telling which was the original and which was the copy. Unfortunately, the universe is a party pooper, and perfect cloning is impossible. I don’t simply mean that we don’t know how, or that we haven’t succeeded yet because it’s really hard to do in practice. No, I mean it has been mathematically proven that perfect cloning can’t be achieved even in principle. Here is the proof, using as little math as possible. Everything in the universe is made of elementary quantum particles and the forces by which they interact, so for the no-cloning proof we need to know what it means to clone a quantum particle. First, we need to know three important and fundamental properties shared by all quantum particles. 
Property 1: particles can be in several states at once. Like Schrödinger’s cat, stuck in a bunker with unstable gunpowder that has a 50% chance of exploding at any minute but maybe hasn’t yet, so that the gunpowder is in a superposition of “gunpowder has already exploded” and “gunpowder hasn’t exploded yet.” Or a photon going through two slits at once to interfere with itself and make a nice pattern on the wall. Or an electron in an atomic orbital, its wave function occupying many points in space all at once. In summary, in quantum mechanics, the whole is equal to the sum, that is, the superposition, of its different possible parts. Property 2: multiple particles, viewed together as one single object, like an atom, an entangled pair of photons, or the gunpowder together with Schrödinger’s cat, are the products of their components, or, since it’s quantum mechanics, a superposition of products of their components. So the situation inside Schrödinger’s box could be described as a superposition of “exploded/dead” and “not exploded/alive” for the cat. In summary, composite quantum objects are multiplied together. And finally, Property 3: any change to a particle that’s in a superposition of states affects all of the states independently. It’s kind of like how going two miles to the right and one mile up and then rotating your map 90 degrees is the same as first spinning each arrow 90 degrees and then adding them together. Or, if you have an electron in a superposition of “here” and “there,” then in one second it will be in a superposition of wherever “here” goes in one second and wherever “there” goes in one second. In summary, when you have a superposition, AKA a sum of several parts, any change or transformation of the sum of the parts is equal to the sum of the transformations of the parts, whether that transformation is a rotation, a movement, or even an entire hypothetical cloning process. This, as we’ll see, is what makes a perfect Xerox impossible. 
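The three properties above can be sketched numerically. This is a minimal illustration, not a physics simulation: states are just vectors, superposition is vector addition, composites use the tensor (Kronecker) product, and transformations are linear maps. The bit-flip matrix U is an arbitrary example transformation I chose for the demo.

```python
import numpy as np

# Basis states for the gunpowder: |exploded> and |not exploded>
exploded = np.array([1.0, 0.0])
not_exploded = np.array([0.0, 1.0])

# Property 1: a superposition is just vector addition (normalized)
gunpowder = (exploded + not_exploded) / np.sqrt(2)

# Property 2: composite systems combine via the tensor product (np.kron),
# here the entangled "exploded/dead + not exploded/alive" box state
cat_dead = np.array([1.0, 0.0])
cat_alive = np.array([0.0, 1.0])
box = (np.kron(exploded, cat_dead) + np.kron(not_exploded, cat_alive)) / np.sqrt(2)

# Property 3: physical transformations are linear, so transforming the
# superposition equals superposing the transformed parts
U = np.array([[0.0, 1.0],
              [1.0, 0.0]])  # an example transformation (a bit flip)
assert np.allclose(U @ gunpowder, (U @ exploded + U @ not_exploded) / np.sqrt(2))
```

The assertion at the end is exactly Property 3 in miniature: applying U to the sum gives the same vector as summing U applied to each part.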
So, to prove it, we’ll use the three properties that all fundamental particles in the universe obey: 1) individual particles can be in superpositions, which looks like adding; 2) groups or combinations of particles are products of their components, or sums of products of their components, which looks like multiplying; and 3) any transformation of a particle or group of particles is the same as the sum of the transformation applied to the parts, which looks like distributing. Now, in terms of the properties we just outlined, let’s talk about what it would mean to have a transporter, or a quantum cloning machine. First we’d need the person to be cloned, the materials to clone them out of, and a procedure to transform the materials into an exact copy of the original. Our machine shouldn’t have to know in advance what the thing to be cloned is; otherwise it’s not really a machine for cloning things so much as a machine for building a known thing. So if a cloning procedure were to exist, we should be able to apply cloning to any mind we want and end up with two copies of that mind. The problem occurs if the specimen we’re cloning is a superposition, like the gunpowder from Schrödinger’s cat’s box, in a superposition of exploded and not exploded. If we apply our hypothetical cloning to the whole gunpowder-in-the-box superposition, we get (exploded + not exploded) × (exploded + not exploded). But we should get the same result by applying cloning to each part of the superposition, separately cloning “exploded” and “not exploded” and then adding them together. And we don’t get the same thing, since (exploded × exploded) + (not exploded × not exploded) is not the same as (exploded + not exploded) × (exploded + not exploded). 
Basically, if both quantum mechanics and mind-cloning are true, then (a+b)² must be the same as a² + b², but they are not the same. This contradiction means either quantum mechanics is wrong, which would fly in the face of the most precise and accurate experimental tests in physics, or mind-cloning can’t exist. This is the mind-clone problem, and it’s an example of proof by contradiction. However, for those wanting to live in a sci-fi future of mind-clones, all is not lost. Even if perfect cloning isn’t possible, you can still make decent copies that are so close that the atomic differences are negligible, so the mind-clone problem isn’t really a problem at all. Rebuttal: it’s possible to clone a qubit with an average of 83% fidelity, according to the paper “Universal Optimal Cloning of Qubits and Quantum Registers” by V. Buzek and M. Hillery (31 December 1997). Even more exciting, the no-cloning theorem is only about cloning; teleportation IS still possible. That’s because teleportation consists of a subject, materials to make the teleported version out of, and a procedure to turn those materials into the subject, leaving behind an empty machine. But this brings up the problem of destructive uploading: scanning every atom in your body would collapse the wave function of your body and ruin the original. The equations of quantum mechanics show that teleporting a superposition, or sum, is indeed equal to the superposition, or sum, of the individually teleported parts. Once more, no-cloning doesn’t mean you can’t have two or more copies of the same thing in the universe; it just means it’s not possible to take an existing thing you don’t already know all the details about and make a perfect copy while leaving the original intact. Henry from MinutePhysics does a pretty good job of explaining why non-destructive uploading is impossible in his video about the no-cloning theorem. 
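Spelled out in tensor-product notation, with the cross terms made explicit, the (a+b)² versus a² + b² contradiction looks like this (normalization factors omitted for readability):

```latex
% Cloning the superposition directly:
(a + b) \otimes (a + b)
  \;=\; a \otimes a \;+\; a \otimes b \;+\; b \otimes a \;+\; b \otimes b
% What linearity (Property 3) forces the same machine to produce:
\mathrm{clone}(a) + \mathrm{clone}(b)
  \;=\; a \otimes a \;+\; b \otimes b
% The cross terms a \otimes b + b \otimes a cannot cancel,
% so no single linear process can satisfy both lines.
```

The two right-hand sides differ precisely by the cross terms, which is why no linear evolution, and hence no physical process, can clone an arbitrary unknown state.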
Overall, you can build a machine to make multiple versions of things as long as you know in advance exactly what it is you’re making, because copying something you don’t already know would require measuring it, which collapses the original at the quantum level. So, is it possible to learn every single detail about something? Well, the Heisenberg uncertainty principle means you can’t simultaneously measure all the details of any one object, but if you have a number of objects that you know are the same, you can measure each in a different way to get the full picture. So the irony is that in quantum mechanics you can’t perfectly clone a thing you have only one of, but if you already have a lot of copies of something, you can make more, one new copy for each original copy you destroy to get the information. Unfortunately, there’s only one of each of us in the universe, so no 100% perfect cloning in quantum mechanics means no 100% perfect cloning of humans either. While you can grow a child that’s genetically identical to you, we likely won’t ever be able to make a perfect clone of you that has all your memories, thoughts, and loves. How close we can get depends, of course, on whether or not consciousness relies on quantum processes in the brain. In addition, the Heisenberg uncertainty principle also throws a wrench into Tiplerian concepts like quantum archaeology, where a supercomputer might be able to bring back anyone who’s ever lived.

Here is the reason the transporter paradox debunks mind uploading. Were a copy of you to claim that it is conscious, how would you know? Your continuous stream of consciousness is your life; you are the only one who can experience it, and the only one who can know whether it exists and whether it is continuous. Transporters are scary because they cause breaks in that consciousness, producing a copy that lives out the life you had left, with no one the wiser, with no one ABLE to be wiser. While transporters aren't real, breaks in consciousness are. If you go in for surgery, when they put you under, you can't be sure it's you that wakes up. For that matter, your bed might be a suicide machine: every night you slip into unconsciousness, you die, and every morning is the first and only day of a new creature's conscious life. It's impossible to know.



If you’ve ever worked with a virtualized computer, or played a video game ROM from a long-defunct console on your new PC, you understand the concept already: a mind is simply software, and the brain, the hardware it runs on. Imagine a day when your neurons, the matter that forms your mind, are transferred to a machine and their counterparts in your skull are disabled.

Are you still you? Imagine a future of mind uploading, whole-brain emulation, and a full understanding of the connectome. Now, imagine neuroscientists even discover a way to resurrect the dead, to upload the minds of those who have gone before us: our ancestors, Socrates, Einstein?

In a paper published in PLOS ONE in early December, scientists detailed how they were able to elicit a pattern similar to the living condition of the brain by exposing dead brain tissue to chemical and electrical probes. Authors Nicolas Rouleau, Nirosha J. Murugan, Lucas W. E. Tessaro, Justin N. Costa, and Michael A. Persinger (the same Persinger of the God-helmet studies) wrote about this finding:

This was inferred by a reliable modulation of frequency-dependent microvolt fluctuations. These weak microvolt fluctuations were enhanced by receptor-specific agonists and their precursors[…] Together, these results suggest that portions of the post-mortem human brain may retain latent capacities to respond with potential life-like and virtual properties.

Imagine the infinite human future(s) of mind uploading, life after death, immortality. How does it begin? Would you still be you when we upload your mind? Let’s imagine the journey as we explore the future of what brain augmentation might look like.


If you put a tiny chip in your brain which is 1,000 times more powerful cognitively than your biological brain, will you still be you?

Let’s say you put a tiny chip in your brain to enhance your memory capacity, analytical thinking, creativity and so forth. What if the capability of that chip was 1,000 times greater than that of your biological brain?

Let’s say you replace a single neuron in your brain with one that functions thousands of times faster than its biological counterpart.

Are you still you? You’d probably argue that you are, and even a significant speed bump in a single neuron is likely to go largely unnoticed by your conscious mind.

Now, you replace a second neuron.

Are you still you? Again, yes. You still feel like yourself. You still have the continuity of experience that typically defines individuality. You probably still don’t notice a thing, and indeed, with only a couple of overachieving neurons, there wouldn’t be much to notice.

So, let’s ramp it up. You replace a million neurons in your brain with these new, speedy versions, gradually over the course of several months. Sounds like a bunch, right? Not really; you’ve still only replaced 0.001% of your brain’s natural neurons by most estimates.

Are you still you?

You may find you’re reading books a teensy bit faster now, and comprehending them more easily. An abstract math concept (say, the Monty Hall problem) that once confused you now begins to make some sense. You’re still very much human, though.
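Since the Monty Hall problem is invoked above as the benchmark of a confusing concept, a quick simulation shows why it stops being confusing once you run it: switching doors wins about two-thirds of the time. (This sketch is illustrative only; the function name is mine, not from the text.)

```python
import random

def monty_hall(trials=100_000, switch=True, seed=0):
    """Simulate the Monty Hall game and return the contestant's win fraction."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)      # door hiding the car
        pick = rng.randrange(3)     # contestant's first pick
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=True))   # ~0.667: switching wins two-thirds of the time
print(monty_hall(switch=False))  # ~0.333: staying wins only one-third
```

The intuition the simulation confirms: your first pick is right only 1/3 of the time, so switching wins exactly when your first pick was wrong, i.e. 2/3 of the time.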

You stubbed your toe this morning due to poor reflexes, resulting from a lack of sleep. You briefly felt lonely for a moment. That cute cashier turned you on as much as ever.

But why stop there? You’re feeling pretty good. You feel the tug of something greater calling you. Is it the curiosity, the siren call of improving one’s own intelligence? You embark on a neurological enhancement regimen of two billion fancy new neurons every month for a year.

After this time, you’ve got on the order of 24 billion artificial neurons in your head, or about a quarter of your brain.

Are you still you?

Your feelings and emotions are still intact, as the new neurons don’t somehow erase them; they just process them faster. Or they don’t, depending upon your preference. About half-way through this year, you began noticing profound perceptual changes.

You’ve developed a partially eidetic memory. Your head is awash in curiosity and wonder about the world, and you auto-didactically devour Wikipedia articles at a rapid clip. Within weeks you’ve attained a PhD-level knowledge of twenty subjects, effortlessly. You have a newfound appreciation for music – not just classical, but all genres. All art becomes not just a moving experience, but an experience embedded in a transcendental web of associations with other, far-removed concepts.

Synesthesia doesn’t begin to cover what you’re experiencing. But here’s the thing; it’s not overwhelming, not to your enhanced, composite brain and supercharged mind. Maybe you’ve subjected yourself to dimethyltryptamine or psilocybin before, and experienced a fraction of this type of perception. But this is very different. It feels so very soft and natural, like sobering up after a long night out.

You reason (extraordinarily quickly at this point), that since you don’t seem to have lost any of your internal experience, you should seek the limit or its limitlessness, and replace the rest of it. After all, at this point, everyone else is, too.

It’s getting harder to find work for someone who’s only a quarter-upgraded. Over the next three years you continually add new digital neurons as your biological ones age, change, and die out.

Are you still you? Following this, you are a genius by all traditional measures. Only the most advanced frontiers of mathematics and philosophy give you pause. Everything you’ve ever experienced, every thought that was ever recorded in your brain (biological or otherwise) is available for easy access in an instant.

You became proficient in every musical instrument, just for the hell of it. Oh sure, you still had to practice; approximately ten minutes for each instrument. You’re still a social creature, though, and as such, you still experience sadness, love, nostalgia, and all other human emotions. But as with a note played on a Stradivarius violin as opposed to a simple electronic function generator, your emotions now have such depth, so many overtones. Your previous unenhanced self could not have comprehended them. You are a god, an evolved human with the curiosity of a child. Though never religious, the feeling of a connectedness, a spiritual cosmism inhabits your complex mind. It is at once a bodylessness, an understanding of the universe, and again the acceptance of all ideas that are always open to revision.

Years pass. The same medical technology that allowed your neurons to be seamlessly replaced, aided and accelerated by a planet full of supersavants, has replaced much of your biological body as well. You’re virtually immortal. Only virtually, of course, because speeding toward Earth at a ludicrous velocity is a comet the size of Greenland. There is general displeasure that the Earth will be destroyed (and just after we got smart and finally cleaned her up!), but there’s a distinct lack of existential terror.

Everyone will be safe, because they are leaving. How does a civilization, even a very clever one, evacuate billions of people from a planet in the space of years? It builds some very large machines that circle the Sun, and it uploads everyone to these machines.

Uploads? People? Why sure, by now everyone has 100% electronic minds. These minds are simply software; in fact, they always were. Only now, they’re eminently accessible, and more importantly, duplicable.

Billions of bits of minds of people are beamed across the solar system to where the computers and their enormous solar panels float, awaiting their guests. Of course, just as with your neuronal replacements all those years ago, this is a gradual process. As neurons are transferred, their counterparts in your skull are disabled. The only difference you feel is a significant lag, sometimes on the order of minutes, due to the millions of miles of distance between one half of your consciousness and the other. Eventually, the transfer is complete, and you wake up in a place looking very familiar.
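The minutes-long lag described above is just light-speed delay over interplanetary distance. A back-of-the-envelope check (the 93-million-mile figure, roughly 1 AU, is my illustrative assumption, not a distance stated in the text):

```python
SPEED_OF_LIGHT_MPS = 299_792_458   # metres per second (exact, by definition)
MILE_IN_METRES = 1_609.344

def one_way_light_delay_minutes(miles):
    """One-way signal delay in minutes across a given distance in miles."""
    return miles * MILE_IN_METRES / SPEED_OF_LIGHT_MPS / 60

# Earth to a hypothetical solar-orbit habitat ~93 million miles away:
print(round(one_way_light_delay_minutes(93_000_000), 1))  # ≈ 8.3 minutes
```

So a half-transferred consciousness split across tens of millions of miles would indeed experience round-trip lags of several minutes or more.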

Virtual worlds, mimicking the Earth to nanometer resolutions, have already been prepared. In the real world, gargantuan fleets of robots, both nano- and megascopic, are ready to continue building new computers, and spacecraft, and new robots, as humankind prepares to seed the cosmos with intelligence. We haven’t achieved faster-than-light travel, but our immortal minds and limitless virtual realities make space and time irrelevant.

Are you still you?



In our current technological age of the 21st century, topics like robotics, AI, mind uploading, and indefinite life extension are no longer science fiction, but rather science fact, or at least live possibilities. The one most heavily debated at the moment is mind uploading.

Once we’re able to artificially replicate the human brain, and then begin uploading ourselves into said artificial brain, will we lose consciousness? Will we still be ourselves or will we simply create a copy? Is it a risk we’re willing to take?

I love life, so the prospect of indefinite life extension is attractive to me. Then again, seeing as I wish to live much longer than my biologically fixed clock dictates, it just doesn't cut it to have a copy of myself, rather than myself, live forever. I would never destroy my brain and let someone else be me for me.

If I’m to achieve indefinite life extension, then I want to do so with both my physical and functional continuity still in complete operation. Without one, the other is completely irrelevant.


Functional continuity is basically the stream of consciousness that defines you as unique. "Destroying" functional continuity wouldn't necessarily do anything lasting to you, nor would it remain destroyed.

When we’re going through REM sleep every night, our functional continuity fluctuates on and off, only to be completely restored the next morning. Yes, your consciousness before sleep was different from the consciousness you now acquire after sleep, but you remain yourself – you’re still self-aware.

The same applies when you’re under anesthesia during surgery. Only this time, your functional continuity is turned completely off. There is no streaming of consciousness. And yet, after the surgery, your functional continuity turns back on, unaffected insofar as you remain self-aware.


So what about physical continuity? Physical continuity is very important – much more important than functional continuity. Physical continuity – using as simple an understanding as possible – is essentially the brain and all of its synaptic operations.

To destroy physical continuity would be to destroy the brain, and thus everything, including the functional continuity that comes along with it. This is why physical continuity should be guarded far more carefully than functional continuity.

You can lose your functional continuity and still have the chance to regain it, so long as physical continuity remains intact. The contrary, however, would be the end of your whole self.

Thus bringing us to our current dilemma of mind uploading. How are we to achieve mind uploading without destroying physical continuity in the process?




To simply "download" everything within your brain and upload it into an artificial brain is to replicate physical continuity, not maintain it. Essentially, you'd be partaking in a really cool process of cloning. That's it. Think of Lieutenant Commander Data and his brother Lore from the Star Trek universe (ignoring, of course, the cloned self being a maniacal psychopath).

Which brings me to our current understanding of what is known as "brain lateralization": the two cerebral hemispheres of the brain, separated by a longitudinal fissure. In other words, the left and right brain. Structurally, each is a near mirror image of the other.

We've since discovered that, if one side of the brain is removed, the other side can still function. A great example is now-twenty-six-year-old Christina Santhouse, who suffered from Rasmussen's encephalitis, a neurological disease that causes seizures and the loss of motor skills. Once she began having over 100 seizures a day at such a young age, she and her family decided on a radical approach to this very serious problem: removing the side of the brain causing the disease.

The result? She's now a normal young woman who earned a scholarship to Misericordia University as a speech-language pathology major!


Why is this important? It paves the way toward understanding how to maintain physical continuity while uploading your mind into an artificial brain. Imagine going through a process of downloading your entire brain and its various synaptic operations, including consciousness and functional continuity, and then uploading it into an artificially designed right hemisphere of the brain.

Now let's say you have an operation that replaces the right hemisphere of your biological brain with the artificial replica, all while keeping your left hemisphere completely intact. Over time, the artificial right hemisphere would become the dominant hemisphere, especially once your biological left hemisphere dies. You would then have maintained not only functional continuity but physical continuity as well. You would achieve indefinite life extension via digital immortality.

This is the only way I can think of which will allow us to achieve both without losing one or the other in the process.

CONTINUITY IDENTITY THEORY: The theory that "I" am the same person as various future and past selves with whom I am physically and temporally continuous. (Cf. pattern identity theory.)