Many people, even children, are somewhat familiar with the concept of swarm bots, a concept they've probably seen explored in the movie Big Hero 6, where they're called Microbots.


The robot apocalypse is now closer than ever. Behold the wonder of the world's first ever robot swarm. In the journal Science, engineers at Harvard said they designed 1,024 tiny robots that can organize themselves into any pattern they're instructed to make, without any human interaction. It's hailed as a milestone in collective artificial intelligence. The robots, nicknamed Kilobots (a scary name), are small, just a few centimeters across, cheap to make, and very simple. They communicate with each other using infrared signals and follow the same basic principles that ants, fish, and even individual cells use to organize themselves. The Harvard engineers programmed each Kilobot to do three relatively simple things: 1) figure out where it is in relation to its fellow robots, 2) identify the edge of the group of robots, and 3) move along that edge until it finds a spot where it's allowed to stop. The Kilobots were given instructions to make a certain shape, and four seed robots were set as markers for where the formation was supposed to start. After that, the bots on the outer edge of the group just started to move, one by one, until each reached a coordinate that filled in the shape they were trying to make. Over a few hours, all 1,000-plus robots followed this pattern until the shape was complete. It's the first time a large group of robots has been shown to follow a collective algorithm, or shared set of instructions and rules. The engineers say this technology could one day be used to have robots construct buildings, treat disease inside the body, or even form a network of self-driving vehicles that coordinate without traffic.
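Those three rules are simple enough to sketch in code. Here's a minimal toy simulation, not the actual Kilobot firmware (the function name, grid setup, and deterministic pick are all made up for illustration): bots join the group one at a time at an empty cell on the group's edge that falls inside the target shape, which is roughly what the edge-following behavior accomplishes.

```python
def form_shape(target, seeds, n_bots):
    """Toy self-assembly: each arriving bot stops at an edge cell
    that lies inside the target shape (rules 1-3 abstracted)."""
    occupied = set(seeds)
    for _ in range(n_bots):
        # Rule 2: the group's edge is every empty cell touching the group.
        edge = {(x + dx, y + dy)
                for (x, y) in occupied
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))}
        edge -= occupied
        # Rule 3: keep moving along the edge until you find a spot
        # you're allowed to stop in (one that's inside the target shape).
        allowed = edge & target
        if not allowed:
            break  # shape is full, or unreachable from the seeds
        occupied.add(min(allowed))  # deterministic pick, for repeatability
    return occupied

# A 3x3 square, grown from a single seed bot in one corner:
square = {(x, y) for x in range(3) for y in range(3)}
result = form_shape(square, seeds={(0, 0)}, n_bots=8)
```

Because every new spot is adjacent to the existing group and inside the target, the formation grows inward from the seeds exactly as described above.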

Swarm behavior is when a group of animals (birds, fish, termites, ants) acts as one big thing; they can perfectly synchronize their movements or build huge mounds. Wouldn't it be great if we could get machines to act like that? The science of swarming behavior is inspiring scientists to build robots that, in the future, might be able to help with everything from building construction to search and rescue missions. The thing that makes swarming behavior so perfect for robotics is that in a swarm, no one member does anything that's too complicated; each animal is just following a few simple rules, like staying the same distance from all of its neighbors. This means you don't have to make the robots super fancy, and you don't need to program each one to tell it exactly what to do. Instead, you just give a bunch of robots the same basic rules, and because of how those rules play out in large numbers, the group will self-organize and figure out how to do whatever complicated thing we want it to do. This is already a reality in today's robotics. In 2014, researchers at Harvard made over 1,000 robots that could arrange themselves into almost any shape or pattern the scientists wanted. The scientists never told any individual robot where to go; instead, they just gave each one the same simple rules to follow, like measuring how far away you are from your neighbors, or finding an outer edge of the swarm and moving along that edge. By doing those things over and over, the robots figured out exactly where to go. The same Harvard engineers also took some inspiration from termites to build robots that could build pyramids, castles, and other structures out of foam blocks. In this case, they borrowed a strategy termites use known as stigmergy, a method of indirectly communicating with each other to reach a common goal.
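That "stay the same distance from your neighbors" rule really does self-organize when everyone follows it at once. Here's a quick one-dimensional sketch (the function name, rate, and starting positions are all hypothetical): each bot repeatedly nudges itself toward the point midway between its two neighbors, and an unevenly bunched line spreads itself out evenly with no central plan.

```python
def spacing_step(positions, rate=0.5):
    """One round of the local rule: move partway toward the midpoint
    of your two neighbors. The bots at each end stay put."""
    new = positions[:]
    for i in range(1, len(positions) - 1):
        midpoint = (positions[i - 1] + positions[i + 1]) / 2
        new[i] += rate * (midpoint - positions[i])
    return new

# Four bots bunched unevenly between two anchors at 0.0 and 3.0:
line = [0.0, 2.5, 0.4, 3.0]
for _ in range(200):
    line = spacing_step(line)
# The interior bots settle near 1.0 and 2.0: evenly spaced,
# even though no bot was ever told the final pattern.
```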
When humans work on huge construction projects, we need checklists, blueprints, and chains of command, and all of that requires communication. Termites, instead, build by paying attention to tiny clues left by fellow termites in their environment. When they make mud balls, they add in pheromones that tell other termites where to build, which lets them coordinate their actions. Researchers at Harvard used a similar idea to design robots that could place blocks based on what the structure looked like at that moment. So one robot could put its block somewhere that indicated where the robots behind it should put their own blocks down. Instead of just blindly building according to how they were programmed, they could adapt on the fly, even when the researchers tried to mess with them by moving blocks the robots had previously put down; each robot placed its block based on how the block placed before it was oriented. These robots are so far confined to labs, but the idea is to eventually have them work for us and solve real-world problems. Robots might be able to build things in dangerous places, like disaster areas, or even on Mars. There might be robot swarms all over the place one day.
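Stigmergy is easy to sketch because the half-built structure itself is the only "message" anyone leaves. In this toy version (a made-up rule, far simpler than Harvard's actual termite-inspired system), every robot follows the same check: add a block to the first column that's below its blueprint height and still climbable from the left.

```python
def place_one_block(heights, blueprint):
    """One robot's turn: read the structure (indirect communication)
    and add a block only where the shared rule allows."""
    for i, (built, goal) in enumerate(zip(heights, blueprint)):
        reachable_from = heights[i - 1] if i > 0 else 0  # ground at the left
        if built < goal and reachable_from >= built:  # climbable step
            heights[i] += 1
            return True
    return False  # nothing left to place: the blueprint is finished

# Many robots, one shared rule, zero direct robot-to-robot messages:
staircase = [1, 2, 3]   # blueprint: target height of each column
heights = [0, 0, 0]     # the structure everyone reads and modifies
while place_one_block(heights, staircase):
    pass
```

Each placement changes what the next robot sees, so the order of work coordinates itself, and if someone moved a block, the same rule would simply route the next robot to the new gap.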


Imagine tiny robots made of living heart cells swimming toward you, guided by laser light. Well, now that's a thing. Scientists have created tiny swimming robots shaped like jellyfish and stingrays. They're using these mechanical copycats to study propulsion, test biological materials, and find new vehicle designs. But what about swimming armies of tiny robots? When mechanical objects like this mimic biological systems, it's called biomimicry. Some scientists believe that because biomimicry builds on millions of years of evolution, it's a more effective and efficient approach to engineering. A team of scientists led by Kevin Parker at Harvard created a 1/10th-scale version of a ray: a tiny gold skeleton and a rubber body powered by 200,000 rat heart muscle cells. This biological hybrid machine can swim through an obstacle course in a salt solution with a little help from lasers. It's so creepy. The reason they used heart cells is the way they beat, contracting in response to stimuli. They started by placing the heart cells on top of the robot, but these weren't ordinary heart cells: they were modified to contain a light-detecting protein of the kind usually found in our eyes, making them activate in response to light. This meant the researchers could signal the cells to contract simply by flashing a light at them. When activated, the cells contract, bending the skeleton, and when they release, the flexible wings rebound to neutral, just like a live ray, or the beating of a heart. So now they could make the robot swim, but since the contraction of the cells happened on both sides of the mechanical ray at the same time, the robot was impossible to steer. So they further modified the cells to make the ones on the left and right wings responsive to different wavelengths of light. Flashing both wavelengths at the same time caused the ray to swim forward, but flashing one or the other activated only one wing and could turn the vehicle.
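That two-wavelength trick is, at heart, a very simple control scheme. Here's a sketch of the logic (the function, the numbers, and the turn direction are all hypothetical; the researchers tuned the real thing experimentally): both wings beating drives the ray straight, and one wing beating alone veers it toward the opposite side, like paddling a canoe on only one side.

```python
import math

def swim_step(x, y, heading, left_light, right_light,
              speed=1.0, turn=math.pi / 8):
    """Advance the ray by one beat. Each wing's cells contract
    only under their own wavelength of light."""
    if not (left_light or right_light):
        return x, y, heading          # no light, no contraction, no thrust
    if left_light and not right_light:
        heading -= turn               # left wing only: veer right
    elif right_light and not left_light:
        heading += turn               # right wing only: veer left
    # both lights on: heading unchanged, swim straight ahead
    return (x + speed * math.cos(heading),
            y + speed * math.sin(heading),
            heading)

# Both wavelengths flashing: the ray moves straight along its heading.
x, y, h = swim_step(0.0, 0.0, 0.0, left_light=True, right_light=True)
```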
After this, the team started building obstacle courses for the robot, like they were designing a biomimetic video game. Bio-robot hybrids are relatively new, and others have mimicked aquatic animals like sea turtles, jellyfish, and now stingrays. Obviously this is really cool, but why would we want to build robots out of biological materials in the first place? A Stanford professor of engineering says it all comes down to energy: traditional robotic systems run out of energy long before they can take over the world, but biological systems are really efficient at using energy. If you want to make smaller, faster, more powerful robots, improving energy use is a good way to do that. Biomimetic jellyfish or stingrays don't require batteries, gas, or solar panels. They use the same source of energy as the rest of the cells in our bodies: glucose, a form of sugar, which the scientists put in the water to feed the cells. At this point the robot only swims if it's in a glucose and salt solution at body temperature with particular wavelengths of light flashing at it, but it's still a step toward making microbots that sense, respond, and move on their own. In other words, intelligent robots made from live biological materials could be coming soon.


There's a special branch of science that seeks to understand nature by imitating it. It's called bionics: the science of designing mechanical systems based on living systems. We've gotten closer than ever to learning and duplicating some of nature's hardest tricks: robots that can fly, and robots that can feel. First of all, humans have been trying to build machines that use flapping wings to fly since 400 BC. These are called ornithopters, and even Leonardo da Vinci tried and failed to imitate the design that works so well for birds, bats, and insects. Nature is just better at designing these things than we are. Flapping wings are more efficient, more wind-tolerant, and more agile than fixed wings; they can react to unexpected obstacles and even stop mid-flight to hover in the air. But in the journal Science, graduate students at the Harvard School of Engineering and Applied Sciences said they had succeeded in making a bionic fly: a bug-sized robot that can hover in place and perform controlled maneuvers much like a housefly, published in 2013. Described by the researchers as the first of its kind, the robot gives researchers a new way to study flight dynamics. It's also a step toward imitating flight in nature. Imagine robotic swarms that can maneuver through obstacles and acclimate to changing weather conditions, or drones that look like birds and can hover in place for hours to gather surveillance. The US Army already has an eye on this technology because of its piezoelectric materials: solids, usually crystals, that gain an electric charge in response to physical pressure. It's how some lighters work: a tiny hammer hits a piezoelectric crystal and creates a current that ignites the gas. The robot's crystals respond to minute changes in charge and pressure with minute changes in motion, allowing it to flap its wings and navigate.

100 years ago, we barely knew how to make an airplane fly, but insects have been doing it for hundreds of millions of years. Researchers figure we could probably learn a thing or two from all that experience. If nature has already figured out how to do something, why do the work all over again? There's a whole field of research focusing on technology based on biology. It's called biomimicry, and it's particularly useful in microrobotics, because robots are just machines designed to accomplish a task, and often those tasks have already been done by a living thing, like a human or an animal. So in one of the fastest-growing areas of robotics science, engineers are actively studying nature to see how it could teach us to build better robots. Some robots are designed to fly like bugs, and there's more to flying than just flapping wings. A group of Swiss researchers modeled the AirBurr robot after insects. Its designers were trying to solve one of the biggest problems robots face when exploring unknown territory: how do they get around without crashing into things? To avoid collisions, robots have to tell where they're going and map the terrain as they go, but mapping systems tend to be complicated, fragile, and expensive, so if a robot does crash, it breaks, and it's kind of a big financial deal. The AirBurr is housed inside a big flexible frame designed to bump into walls and survive. If it does fall out of the air, four legs extend to get it upright so it can start flying again. In this way, it doesn't need complex systems to get around; it just bumps around but eventually makes its way. Robots like the AirBurr might one day help with search and rescue missions, flying through unknown, debris-filled places and bumping into things as they go. In the sci-fi show Black Mirror, the government solves the problem of colony collapse disorder and the declining bee population by creating robotic bee swarms that pollinate flowers in the bees' stead.
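The AirBurr's "just bump and keep going" strategy can be sketched as a toy grid search (the function name and map are invented for illustration; the real robot flies in 3D with far messier physics): no map, no planning, just turn whenever you hit a wall.

```python
def bump_navigate(grid, start, goal, max_steps=500):
    """Collision-tolerant navigation: fly straight until you bump
    something, survive the bump, turn, and keep going."""
    headings = [(1, 0), (0, 1), (-1, 0), (0, -1)]
    (x, y), d = start, 0
    for _ in range(max_steps):
        if (x, y) == goal:
            return True
        nx, ny = x + headings[d][0], y + headings[d][1]
        if grid[ny][nx] == '#':
            d = (d + 1) % 4      # bumped a wall: turn instead of breaking
        else:
            x, y = nx, ny        # open air: keep flying
    return False

# A walled room the robot knows nothing about until it hits the walls:
room = ["#####",
        "#...#",
        "#...#",
        "#...#",
        "#####"]
reached = bump_navigate(room, start=(1, 1), goal=(3, 3))
```

The payoff is the same as the AirBurr's: no expensive mapping sensors, just a body that treats collisions as information rather than failures.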


Let's get even smaller. Big robots are great, but big robots made up of smaller robots are even better. Robots are taking over: gutter-cleaning robots, lawn-mowing robots, and pretty soon robotic cars. What do these robots all have in common? They're all really good at certain tasks, but if they try to do anything else, they're pretty lousy. These robots lack versatility; there's no single robot that's good at everything. But what if you could have a big robot that's made of a bunch of tiny robots, and those tiny robots could change the shape of the larger robot to complete various tasks? This is a real thing, and it has a name: self-reconfigurable modular robots. Imagine the robot needs to screw in a lightbulb, so thousands of tiny robots join together to create legs, giving it a firm stance on the ground. More tiny robots form a long arm that reaches up to a shelf and picks up a fresh lightbulb. Then they reconfigure again, giving the arm 360 degrees of rotation, and the robot screws in the lightbulb. Once the task is done, the robots sever their connections and the big robot disintegrates. But for this to happen, a lot of different components need to come together. For example, communication: each module has to be able to communicate with all the other ones, as well as with the overall system that's issuing the commands. We're talking layer upon layer of artificial intelligence, from the overall system down to the individual components. And this modular design means that those modules have to join together somehow. Those connections might be made by magnetic contacts, or even by plugs and sockets. Whatever the method, it has to be both flexible and strong. There are already several examples in the real world. MIT students have built M-Blocks, cube robots that can join together to make more complex shapes. At Harvard University, you've got the Kilobots (don't worry, they're friendly to humans), which display swarm behavior to complete tasks.
In the future, we'll have these robots made up of smaller robots that could revolutionize everything from manufacturing to our experience in the home. But what if we got even smaller than that? Let's talk smart dust: not the dust on your bookshelves, but machines developed on the microscale to accomplish a certain task, like sensing the environment. The Michigan Micro Mote, developed at the University of Michigan, aims to do just that. The researchers hope to get a sensor down to one cubic millimeter in size. Maybe in the future we'll have sensors and robots so small that we're no longer talking about larger robots made of smaller robots; every single object we come into contact with is its own swarm. That would mean endlessly customizable environments, a concept called utility fog. Maybe this future will never come around, and maybe we'll run into miniaturization issues, but even if I never get my own personal robocouch, I know that these miniature machines are set to make a big impact.


We know that even common medications can have side effects. Exacerbating this is the fact that about half of people take their meds incorrectly, according to one survey by the WHO. Scientists are working on this problem in the form of nanoimplants. A team from the American Chemical Society built nanosheets that combine anti-inflammatory drugs with electrodes in a polymer film. They were then able to control the release of the drug with electrical pulses. It's still in the development stage, but they say this technique could be useful in treating diseases like epilepsy, where a medication already in your body could be released right away as a seizure happens. This study is one of many projects in the works to create programmable pharmaceuticals and nanomedicine. For example, swallowable microchips have already been approved by the FDA. A company called Proteus Digital Health created microchip-embedded pills with tiny sensors that react to digestive juices. The chip relays a signal to a patch on your skin, which can then be relayed to your doctors. So if something goes wrong, or you're not taking your meds correctly, they can be alerted automatically.

Ingestible technology is exactly what it sounds like: tiny sensors embedded in pills, made of metals that are safe to ingest, like copper and magnesium. The coating dissolves in stomach acid, which activates the metal sensor, starting its tracking of vitals like temperature and heart rate. It sends that information outside your body, via an adhesive patch worn on your skin, straight to your smartphone using Bluetooth. Because it's in your digestive tract, you pass it just like you would anything else. There are a few of these devices under development right now, but the closest to launching is a digital pill from Proteus Digital Health. Sanctioned by the FDA, the company's ingestible sensor marks the time of ingestion, then monitors how many steps you take, your rest periods, and your heart rate, and sends all that health data to your smartphone. Another device, called Proteus Discover, takes it even further. These sensors are packed inside each pill of a prescription, logging the time you take each dose along with how it's working inside your body. These devices can help manage medication intake and check for dangerous mixes, potentially preventing complications that stem from combining certain drugs. These ingestible devices are actually in use today, as of 2016. Both Proteus products focus on patient monitoring, with an emphasis on chronic patients, because some people don't always tell doctors the truth about their habits, and those situations lead to more serious and more expensive illnesses. These medication-based problems don't just affect the individual; they affect the whole country. The economic cost of medication-based problems, including costs for nursing homes, hospitals, and ambulance care, totals nearly 85 billion dollars annually. But digital pills aren't just about monitoring patients; some ingestible devices are also used for screening and preventative medicine. PillCam Colon is a miniaturized camera embedded in a disposable capsule, used to non-invasively check colon health.
The camera bot is about the size of a vitamin pill. You swallow it, and as it passes through your digestive system, doctors can get a close look at the colon. They can check the images for polyps or other early signs of colorectal cancer without having to do an invasive exam involving sedation or radiation. And it's an FDA-approved screening method for patients who can't undergo a regular colonoscopy. Even though these microsensors pass harmlessly through your body, the technology does raise some interesting ethical questions; that comes with the territory when you have a sensor or a camera inside your body transmitting images and information. But those are all questions that will be raised as the tech becomes more widespread.