Six Ways Bio-Inspired Design is Reshaping the Future

From harvesting energy to building networks, nature has been solving problems for billions of years longer than humans have
How exactly does one turn sunlight and water into usable energy? If it were possible to ask any living organism on Earth this question, you would do far better to skip the biologist or the chemist, or any other human being for that matter, and take the question directly to a leaf. That’s the goal of biomimicry: to take human problems and ask nature “how would you solve this?” And increasingly, such questions are changing everything, from energy to information technology to the way we build cities.

To see how a leaf works its magic, look no further than Dr. Daniel Nocera’s lab at MIT. Yesterday, Nocera’s team announced that it has created the first practical “artificial leaf”, a synthetic silicon device that splits water into oxygen and hydrogen for fuel cells using sunlight just as a natural leaf does. Nocera’s leaf isn’t a perfect mimic of photosynthesis–for instance, it requires materials like nickel and cobalt that must be extracted from the earth, and catalysts that spur reactions that otherwise wouldn’t happen on their own. But it’s indicative of a growing shift in how humans solve big problems by looking to nature for elegant solutions rather than bending the natural world to their wills.

With its 4.5-billion-year head start on mankind, the natural world has developed some clever mechanisms for solving big problems, and that natural cleverness isn’t just informing new ways to generate energy. It’s slowly but surely informing everything from the way emergency rooms are designed to how data networks communicate. It asks that electricity grids act like bees and businesses manage resources like coral reefs manage calories. Seriously.

“Biomimicry is a beautiful way of framing the design process to be cognizant of how nature does things,” says Dr. John Warner of the Warner Babcock Institute for Green Chemistry. “I think that over the centuries humans have become a little egotistical in trying to bend materials and things to our will.”

Warner and his colleagues are on the science side of biomimicry’s collaboration between biology and design. As a green chemist, he and his lab develop new, environmentally benign materials, often borrowing from natural processes along the way. In Warner’s world, gone are the heat, high pressures, and toxic additives native to much man-made chemistry, replaced with processes that hew more closely to the way nature creates materials.

On the other side of that equation are the engineers looking for new and better materials with which to design. And increasingly there’s a stronger dialogue between the two, driven partially by an increased environmental consciousness but more so by a pressing imperative to solve big, overarching problems at the macro scale.

Take Nocera’s leaf for instance: in light of an always-looming global energy (and environmental) crisis, a means to generate electricity from plentiful (and renewable) water and sunlight could solve a number of huge problems, both natural and man made. The answer is right there in the leaf, and has been for millennia–unlock that natural mechanism in a feasible, economically viable manner and you’ve got a beautiful solution to problems ranging from the environmental to the humanitarian to the geopolitical.

“When you think about the natural world, nature outperforms us in its diversity, in its complexity, but does so at ambient temperature, at low pressures, using water for the most part as a solvent,” Warner says. By helping humans to think more like a leaf (or an ant hill, or a 1,200-year-old oak, or a bacterial colony), biomimicry is tapping that multi-billion-year head start to bring the same kind of complexity and diversity to human invention.

Materials: Rewriting the Story of Stuff

“Biomimetic materials have the potential to rewrite our story of stuff,” says Tim McGee, Senior Biologist at the Biomimicry Group. “For most of the materials we use today we’re mining either ore or oil, we transport them, we heat them, we machine them and then they usually have products baked into them that are slightly toxic or not benign. That’s completely different than the way natural systems use materials.” Nature, McGee says, uses materials that are readily available nearby and does so in a way that when they’re no longer needed they can be broken down into their component parts and used again. It’s not a novel concept. New York-based Ecovative grows packaging materials, plastics (living polymers), and building insulation from things like mycelia (basically mushroom roots).

The industrial input: agricultural byproducts like buckwheat husks and cotton seed hulls–no harsh chemicals, no global supply chain of raw materials (pictured are Eben Bayer [left] and Gavin McIntyre of Ecovative with their mushroom-based material). By looking to ecosystems as a model, we could reorganize our entire supply chain of “stuff” by using biomimetic materials that are sourced locally and manipulated into essentially whatever we want them to be without harsh chemical processes. How? McGee sees huge potential in tweaking 3-D printing tech to be more bio-friendly. “Right now rapid 3-D printing uses these plastics and metals and other things we already know how to work with,” he says. “I think biomimicry could completely change that story by having those rapid prototyping materials be bio-inspired and really perform in a way that we’ve never seen materials perform.”

Building: Cities That Work Like Ecosystems

Nature provides a blueprint for smart, efficient systems that has been largely overlooked or ignored by those who organize our population centers. There is plenty to be considered in the way certain coastal oaks gird themselves against hurricane winds or in the way desert plants make the most efficient use of scarce rainfall, but those are piecemeal solutions to individual problems. McGee is more interested in the wholesale re-imagining of the modern burg via “generous cities” that don’t just feed off their environments, but instead give back to their surroundings. “Imagine a city where the water leaving the city is cleaner than that coming in, or a city that literally breathes carbon dioxide in to make products,” McGee says. “Or imagine if a city actually increases the biodiversity of a region or facilitates that happening in some way. All of that is possible, and people are working on it.” Look no further than Calera, a California company that is successfully sequestering carbon dioxide in concrete by emulating sea coral. Rather than heating limestone to create concrete (and lots of carbon dioxide), Calera is mixing mineral-rich seawater with power plant emissions in a process that causes the calcium in the water to bond with the carbon in the emissions to form cement. The emissions from the power plant are thus sequestered in the concrete that growing cities are built from. (Calera’s pilot plant is in Moss Landing, Calif.)

Economics: Moving Resources Like Coral Reef Calories

Economists of a certain stripe point proudly to free markets as the most efficient allocators of resources. A biologist studying how calories move through coral reefs or the complex energy cycles of African savanna ecosystems might tell you that waste is far less prevalent in natural systems that maximize nearly every bit of energy. By simply observing food webs it’s easy to see that complexity doesn’t always breed inefficiency, and that natural systems truly waste not and want not. McGee is particularly interested in this kind of systems-level bio-inspiration, because it has less to do with creating something new and more with re-thinking how things like businesses and larger economic networks are organized. “Drawing inspiration from natural systems can help us rethink or re-imagine our existing systems,” McGee says. “And I think that actually can have quite an impact pretty rapidly. It’s about how you organize things, so you don’t need materials or development time. You can put it into place pretty rapidly.”


Health: Battling Bacteria with Biomimicry

Medicine and biology are by nature already tightly intertwined, and there are numberless examples of medical researchers repurposing natural processes in really crafty ways to create everything from better glues for patching bones to proteins that can potentially treat blindness. But perhaps more exciting than bio-inspired treatments are some of the clever natural mechanisms being leveraged to keep pathogens and injuries at bay. For instance, Florida-based Sharklet Technologies realized that shark skin possesses a unique texture that doesn’t allow bacteria and other organisms to take hold. By duplicating this unique pattern on an adhesive synthetic sheet, Sharklet has created a bacteria-resistant surface that can be used in hospitals, restaurants, and other places where contamination has consequences. What’s more, because this technique doesn’t kill bacteria it will be far more difficult for them to evolve a resistance to it, sidestepping the core problem with most attempts at rendering bacteria harmless. After all, the root technology underwent a 400-million-year incubation period in the ocean, and bacteria haven’t figured out how to thwart it yet.

Energy: Nature Already has a Smart Grid


Devising a practical and efficient means of harnessing photosynthesis is quite possibly THE Holy Grail of energy research, but it’s not the only way biomimicry has the potential to change the global energy paradigm. Biomimicry could rewire the entire world for cheap and abundant energy by informing the design of smart grids and other energy infrastructure. One company is doing so not by looking to plants, but to insects like ants and bees. Toronto-based Regen Energy has been exploring “swarm logic” for several years now, developing software based on the working principles of an insect swarm–that is, that each individual node in the system doesn’t need a direct order from a leader to act in a way that maximizes benefit to the entire network. By mimicking swarm intelligence, the company has already developed a means to manage energy networks like the HVAC systems in large buildings to reduce peak electrical demand. And just a few weeks ago Regen announced that the Los Angeles Department of Water & Power is considering tapping swarm logic to help manage its DOE-funded Smart Grid EV integration project, bringing hive mentality to one of America’s largest public utilities.
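Regen’s actual software is proprietary, but the core swarm-logic idea (no node waits on a central command; each simply defers when it senses too many peers already running) can be sketched as a toy duty-cycle scheduler. The function names, slot model, and parameters below are illustrative, not Regen’s implementation:

```python
import random

def swarm_schedule(num_units, slots, max_concurrent, runs_needed, seed=0):
    """Decentralized duty-cycling: each unit independently picks its run
    slots, deferring whenever it senses too many peers already running.
    No central controller issues commands."""
    rng = random.Random(seed)
    load = [0] * slots                        # 'sensed' load per time slot
    schedule = []
    for unit in range(num_units):
        runs, slot = 0, rng.randrange(slots)  # each unit starts at a random phase
        chosen = []
        while runs < runs_needed:
            if load[slot] < max_concurrent:   # defer if the swarm is busy here
                load[slot] += 1
                chosen.append(slot)
                runs += 1
            slot = (slot + 1) % slots
        schedule.append(sorted(chosen))
    return schedule, max(load)

schedule, peak = swarm_schedule(num_units=12, slots=24, max_concurrent=3, runs_needed=4)
print(peak)  # peak concurrent demand never exceeds the local threshold of 3
```

Because every unit follows the same local rule, peak demand stays capped without any controller issuing orders, which is the essence of the swarm approach.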

Information Technology: Mimicking Natural Networks

Ants and bees aren’t just informing energy grids. “Some of the early successes in biomimicry already have come from millions of dollars saved by mimicking how an ant communicates information and translating that into how you send server packets over the Web or how you pick a route for your trucks to drive or something like that,” McGee says. There’s plenty more to learn; researchers at Pacific Northwest National Laboratory have developed a computer network security system based on the swarm intelligence ants use to defend their hills, and going all the way back to 2007 researchers inspired by honeybee communications built a system that lets networks optimize performance by taking advantage of idle servers during periods of high demand. But McGee thinks we’ve just scratched the surface of what biology can do for IT. “We’ve already seen an explosion in the relationship between understanding biology using information sciences and then developing ideas in information sciences based on biological insight,” he says. “I think there’s still a lot of room there to play with computer science and biology by learning from biological systems.”

The Air Force Studied Falcons to Develop a Bio-Mimicking Drone Defense



Next-generation drones, built to track and hunt other drones, might be designed using hunting principles used by one of nature’s most capable predators. A US Air Force-funded study by researchers from the University of Oxford shows how this works.

Contrary to previously held understanding, peregrine falcons don’t follow simple geometric rules during an aerial hunt for food. Instead, the raptors maneuver using the control strategy of proportional navigation, which is similar to the guidance system of a visually-directed missile.
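Proportional navigation itself is a standard guidance law: command a turn rate proportional to the rate at which the line of sight to the target rotates. A minimal two-dimensional sketch of that law (with made-up speeds and gains, not the study’s fitted parameters) looks like this:

```python
import math

def pronav_intercept(pursuer, p_speed, target, t_vel, N=3.0, dt=0.01, steps=5000):
    """2-D proportional navigation: the pursuer turns at N times the rate
    at which the line of sight (LOS) to the target rotates."""
    px, py = pursuer
    tx, ty = target
    heading = math.atan2(ty - py, tx - px)   # start aimed at the target
    los_prev = heading
    for step in range(steps):
        tx += t_vel[0] * dt                  # target flies its own course
        ty += t_vel[1] * dt
        los = math.atan2(ty - py, tx - px)   # current line-of-sight angle
        los_rate = (los - los_prev) / dt
        heading += N * los_rate * dt         # PN law: turn rate = N * LOS rate
        los_prev = los
        px += p_speed * math.cos(heading) * dt
        py += p_speed * math.sin(heading) * dt
        if math.hypot(tx - px, ty - py) < 1.0:
            return step * dt                 # time to intercept, in seconds
    return None

# A pursuer at the origin flying 40 m/s chases a target crossing at 15 m/s.
t = pronav_intercept((0, 0), 40.0, (100, 50), (-15, 0))
print(t is not None)  # True: intercept achieved
```

The appeal of the law, for falcons and missiles alike, is that it needs only the bearing to the target, not its position or speed.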


Nature is full of keenly adapted predators, and one of the best animal hunters out there is the falcon. These skilled predators have been able to adapt to nearly every habitat on Earth, and have long been used by humans to hunt game from the sky. Now, the United States military thinks there’s even more to learn from these raptors. The US Air Force recently funded a study by Oxford University zoologists that aimed to understand how peregrine falcons hunt for prey, and to model their predatory behavior into bio-mimicking drone defense technology.

“Renowned as nature’s fastest predators, peregrines are famous for their high-speed stooping and swooping attack behaviors,” the Oxford team wrote in the abstract of their study, published in the journal Proceedings of the National Academy of Sciences, which provides insight into how a peregrine falcon tracks its quarry.

To monitor how this works, the researchers fitted the falcons with miniature GPS receivers and video cameras. The GPS tracked how the falcons followed another bird or a bait towed through the air by a drone, recording the predator’s angle and method of attack.

“Falcons are classic aerial predators, synonymous with agility and speed. Our GPS tracks and on-board videos show how peregrine falcons intercept moving targets that don’t want to be caught,” lead researcher Graham Taylor of the Oxford Flight Group at the university’s zoology department, said in a press release. “Remarkably, it turns out that they do this in a similar way to most guided missiles.”


Taylor and his colleagues also noted a difference between how a guided missile works and how the falcons track prey. Unlike the missiles, the raptors are able to adjust the angle of their attack to compensate for their not-so-agile movement. It’s possible, therefore, to copy these mechanisms into drones designed to hunt other drones.

“Our next step is to apply this research to designing a new kind of visually guided drone, able to remove rogue drones safely from the vicinity of airports, prisons and other no-fly zones,” Taylor explained.

“It was very exciting to study these sleek, formidable aerial predators, and to watch them as they chased down our maneuvering lure towed behind a small remote-controlled airplane – then, through our computer modeling, to reveal the secret of their attack strategy,” co-author Caroline Brighton explained in the press release.

A peregrine falcon on the attack. By imitating the strategies of these powerful birds, the Air Force hopes to create bio-mimicking drone defense systems. (Image credit: Pixabay/Jocdoc)

As the world moves towards employing more drone technology and autonomous weapon systems in our cities and on the battlefield, presumably to lessen human casualties, taking a cue from how nature’s predators work could greatly improve next-generation designs. The US Air Force-funded Oxford study can make it easier to design drones that see their target and adjust accordingly.

“We think that the finer details of how peregrines operate could certainly find application in small drones designed to remove other drones from protected airspace,” Taylor told Bloomberg.


The future of bionics could yield soft robotics and smart trousers

The word “bionic” conjures up images of science fiction fantasies. But in fact bionic systems — the joining of engineering and robotics with biology (the human body) — are becoming a reality here and now.

Getting older and less steady on your feet? You need a bionic exoskeleton. Having difficulty climbing those stairs? Try a pair of bionic power trousers. The biggest challenge for making these bionic systems ubiquitous is the huge range of situations we want to use them in, and the great variation in human behaviours and human bodies. At the moment there is simply no one-size-fits-all solution.

So, the key to our bionic future is adaptability: we need to make bionic devices that adapt to our environments and to us. To do this we need to combine three important technologies: sensing, computation and actuation.

Sensing can be achieved by using sensors which directly record brain, nerve and muscle activity, and by using on-body devices such as accelerometers which indirectly measure the movement of our limbs. Computers then link this information with models of human behaviour — often tailored to personal information about how the user moves — and predict the movements that the user is about to initiate. In the final stage, the computer systems use these predictions to divert energy to a set of power actuators. This actuation step provides the needed assistance and support, continually adapting to our changing bodies and the changing environment.
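The three-stage pipeline described above (sense, predict, actuate) can be caricatured in a few lines. The smoothing constant, intent threshold, and assistance gain below are invented for illustration; a real device would fit them to the individual user:

```python
def assistive_loop(accel_samples, threshold=0.5, gain=2.0):
    """Toy sense -> predict -> actuate pipeline: smooth accelerometer input,
    predict movement intent when it crosses a per-user threshold, then
    command proportional actuator assistance."""
    smoothed, torques = 0.0, []
    alpha = 0.3                                       # exponential smoothing factor
    for a in accel_samples:
        smoothed = alpha * a + (1 - alpha) * smoothed  # stage 1: sensing
        intent = smoothed > threshold                  # stage 2: prediction (toy model)
        torque = gain * smoothed if intent else 0.0    # stage 3: actuation
        torques.append(torque)
    return torques

# Assistance switches on only once sustained movement is detected,
# and switches off again as the movement dies away.
out = assistive_loop([0.1, 0.2, 0.9, 1.0, 1.1, 0.2, 0.1])
print([round(t, 2) for t in out])
```

The closed loop is the point: because the prediction stage keeps re-reading the sensors, the assistance continually adapts as the user and the environment change.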

At present, most bionic assist devices are made from rigid materials such as metals and plastics, and are driven by conventional motors and gearboxes. These technologies are well established, but their hardness and rigidity can be a great disadvantage. In nature, soft materials such as muscles and skin predominate, and as humans we find comfort in softness, whether holding hands or sitting on a sofa.

Soft robotics for bionic bell-bottoms

New “soft robotic” technologies are emerging which have the potential to overcome the limitations of conventional rigid bionics. These systems, as their name suggests, employ soft and compliant materials that work more naturally with the human body. Instead of rigid metals and plastics, they use elastic materials, rubbers and gels. Instead of motors and gearboxes, they’re driven by smart materials that bend, twist and pull when stimulated, for example by electricity.


Bio-Inspired Tips to Create Better Teams

There’s an entire industry built around how to be a better leader and build strong, dynamic teams. But for the last few years, IDEO designers have been looking to the earth, seas, and sky for inspiration. Jane, a partner, chief creative officer, and founding member of IDEO’s human-centered design practice, believes that the natural world has much to teach us about cultivating the optimal conditions for creative teams. Together with design biologist Tim McGee, she has come up with a few bio-inspired tips:

1. Design a Fertile Habitat

Certain organisms create a habitat for diverse species through their own growth. For example, a single tree can provide unique perches and conditions that foster life adapted to those regions. A rainforest canopy supports an entire ecosystem of mammals and birds that live on insects, fruits, and seeds from the trees and other plants growing within their branches. Likewise, in human organizations, it’s important to create environmental conditions that cue and simultaneously support a diverse group of people and activities harmoniously. Smaller, quiet spaces are good for heads-down contemplation while open-plan studios encourage serendipitous meetings, collaboration, and teamwork. What kind of office habitat can you create to encourage creative teamwork and help different personalities thrive?

2. Create Simple Rules

When birds fly in formation, the group stays organized without top-down control. By following one simple rule—maintain distance—each individual bird keeps track of the bird to the front and the side of it so the entire flock is able to act in a coordinated way. Simple rules allow coordinated action, or swarm intelligence, to emerge from a community of individuals. In creative teams, it can be difficult to effectively coordinate action or achieve group consensus. Typically, in human communities, we default to top-down hierarchy: someone takes charge and makes all the decisions. But that structure runs the risk of disempowering others and dismissing good ideas. How can we coordinate a team’s activities and still maintain the motivation, energy, and agency of individual contributors? For example, during brainstorming, teams can be more productive by agreeing to defer judgment and have one conversation at a time. What guiding principle or simple rules would ensure team members preserve their autonomy while remaining coordinated with group progress?
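The maintain-distance rule is easy to see in a one-dimensional toy flock, where each bird watches only the bird directly ahead of it. The spacing and gain values here are arbitrary:

```python
def flock_step(positions, spacing=1.0, k=0.5):
    """One simple rule, no leader commands: each bird (after the first)
    nudges itself toward a fixed distance from the bird ahead."""
    new = [positions[0] + 0.1]               # the front bird just flies forward
    for i in range(1, len(positions)):
        gap = positions[i - 1] - positions[i]
        new.append(positions[i] + 0.1 + k * (gap - spacing))  # close or open the gap
    return new

birds = [5.0, 3.0, 2.5, 0.0]                 # a ragged starting line
for _ in range(60):
    birds = flock_step(birds)
gaps = [round(birds[i] - birds[i + 1], 2) for i in range(3)]
print(gaps)  # gaps settle near the 1.0 spacing the rule encodes
```

No bird is told where to go, yet the gaps converge toward the shared spacing. That is the kind of emergent coordination a single local rule can produce.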

3. Be Productive

Sea turtles spawn hundreds of offspring and leave them to fend for themselves. Subjected to a combination of pressures—predators, ocean currents, temperatures—most won’t make it to maturity. But those that do survive strengthen the gene pool and better adapt the species to its environment. Compare this with elephants (and humans), which have only a few offspring, but invest a tremendous amount of guidance and resources to make sure they succeed. Contrast that to idea generation amongst teams. Fearing failure and judgment, we humans tend to quickly converge on a promising solution and develop it to high fidelity. But the investment of time and energy in an idea that ultimately proves unsuccessful can be demoralizing. As with turtles, it’s more effective to explore a greater number of ideas at lower fidelity, knowing that many will ultimately not make it out into the world. If you iteratively prototype multiple ideas, teams will learn what works and what doesn’t with minimal investment.

How do you encourage your team to explore a broad range of ideas cheaply and with low fidelity so they don’t converge on an idea too quickly?





4. Expect Collaboration

Contrary to widespread belief, biologists are finding that successful organisms tend to collaborate more than compete. Birch trees and rhododendrons, for example, grow close by each other in the woods. The birch provides shade to the rhododendron, keeping it from drying out. The rhododendron, in turn, provides the birch with defensive molecules that protect it from being eaten by insects. This symbiotic relationship allows both to survive longer. At IDEO, a similar transfer of insight and skills keeps the organization healthy. Learning from ecosystems like the forest, IDEO forms cohesive groups that add up to more than the sum of competing parts. More than anything else, it is this deep collaboration that enables teams to thrive in challenging work environments. How can each team member’s skills build on those of others to allow growth?

Biologically inspired: How neural networks are finally maturing

Loosely modeled on the human brain, artificial neural networks are being used to solve increasingly sophisticated computing problems

More than two decades ago, neural networks were widely seen as the next generation of computing, one that would finally allow computers to think for themselves.

Now, the ideas around the technology, loosely based on the biological knowledge of how the mammalian brain learns, are finally starting to seep into mainstream computing, thanks to improvements in hardware and refinements in software models.

Computers still can’t think for themselves, of course, but the latest innovations in neural networks allow computers to sift through vast realms of data and draw basic conclusions without the help of human operators.

“Neural networks allow you to solve problems you don’t know how to solve,” said Leon Reznik, a professor of computer science at the Rochester Institute of Technology.

Slowly, neural networks are seeping into industry as well. Micron and IBM are building hardware that can be used to create more advanced neural networks.

On the software side, neural networks are slowly moving into production settings as well. Google has applied various neural network algorithms to improve its voice recognition application, Google Voice. For mobile devices, Google Voice translates human voice input to text, allowing users to dictate short messages, voice search queries and user commands even in the kind of noisy ambient conditions that would flummox traditional voice recognition software.

Neural networks could also be used to analyze vast amounts of data. In 2009, a group of researchers used neural network techniques to win the Netflix Grand Prize.

At the time, Netflix was holding a yearly contest to find the best way to recommend new movies based on its data set of approximately 100 million movie ratings from its users. The challenge was to come up with a better way to recommend new movie choices to users than Netflix’s own recommendation system. The winning entry was able to improve on Netflix’s internal software, offering a more accurate predictor of what movies users may want to see.

As originally conceived, neural networking differs from traditional computing in that, with conventional computing, the computer is given a specific algorithm, or program, to execute. With neural networking, the job of solving a specific problem is largely left in the hands of the computer itself, Reznik said.

To solve a problem such as finding a specific object against a backdrop, neural networks use a similar, though vastly simplified, approach to how a mammalian cerebral cortex operates. The brain processes sensory and other information using billions of interconnected neurons. Over time, the connections among the neurons change, by growing stronger or weaker in a feedback loop, as the person learns more about his or her environment.

An artificial neural network (ANN) also uses this approach of modifying the strength of connections among different layers of neurons, or nodes in the parlance of the ANN. ANNs, however, usually deploy a training algorithm of some form, which adjusts the nodes to extract the desired features from the source data. Much like humans do, a neural network can generalize, slowly building up the ability to recognize, for instance, different types of dogs from a single example image of a dog.

There are numerous efforts under way to try to replicate, at high fidelity, how the brain operates in hardware, such as the EU’s Human Brain Project. Researchers in the field of computer science, however, are borrowing ideas from biology to build systems that, over time, may learn in the same way brains do, even if their approach differs from that of biological organisms.

Although investigated since the 1940s, research into ANNs, which can be thought of as a form of artificial intelligence (AI), hit a peak of popularity in the late 1980s.

“There was a lot of great things done as part of the neural network resurgence in the late 1980s,” said Dharmendra Modha, an IBM Research senior manager who is involved in a company project to build a neuromorphic processor. Throughout the next decade, however, other forms of closely related AI started getting more attention, such as machine learning and expert systems, thanks to a more immediate applicability to industry usage.

Nonetheless, the state-of-the-art in neural networks continued to evolve, with the introduction of powerful new learning models that could be layered to sharpen performance in pattern recognition and other capabilities.

“We’ve come to the stage where much closer simulation of natural neural networks is possible with artificial means,” Reznik said. While we still don’t know entirely how the brain works, a lot of advances have been made in cognitive science, which, in turn, are influencing the models that computer scientists are using to build neural networks.

“That means that now our artificial computer models will be much closer to the way natural neural networks process information,” Reznik said.

The continuing march of Moore’s Law has also lent a helping hand. Over the past decade, the microprocessor fabrication process has provided the density needed to run large clusters of nodes even on a single slice of silicon, a density that would not have been possible even a decade ago.

“We’re now at a point where the silicon has matured and technology nodes have gotten dense enough where it can deliver unbelievable scale at really low power,” Modha said.

Reznik is leading a number of projects to harness today’s processors in a neural network-like fashion. He is investigating the possibility of using GPUs (graphics processing units), which, thanks to their large number of processing cores, are inherently adept at parallel computing. He is also investigating how neural networking could improve intrusion detection systems, which are used to detect everything from trespassers on a property to malicious hackers trying to break into a computer system.

Today’s intrusion detection systems work in one of two ways, Reznik explained. They either use signature detection, recognizing a pattern based on a pre-existing library of patterns, or they look for anomalies against a typically static backdrop, which can be difficult to do in scenarios with lots of activity. Neural networking could combine the two approaches to strengthen the ability of the system to detect unusual deviations from the norm, Reznik said.
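A toy version of that hybrid makes the idea concrete: check each event against a signature library first, then flag events whose observed frequency deviates sharply from a learned baseline. The event names, default rates, and thresholds here are invented; real systems (neural or otherwise) are far more sophisticated:

```python
def detect(events, signatures, baseline_rate, anomaly_factor=3.0):
    """Toy hybrid IDS: flag an event if it matches a known bad signature,
    OR if its observed frequency deviates sharply from the baseline."""
    alerts = []
    counts = {}
    for i, ev in enumerate(events):
        counts[ev] = counts.get(ev, 0) + 1
        rate = counts[ev] / (i + 1)              # observed frequency so far
        if ev in signatures:                     # path 1: signature detection
            alerts.append((i, ev, "signature"))
        elif rate > anomaly_factor * baseline_rate.get(ev, 0.01):
            alerts.append((i, ev, "anomaly"))    # path 2: anomaly detection
    return alerts

events = ["login", "odd_ping", "odd_ping", "port_scan", "login"]
alerts = detect(events, signatures={"port_scan"}, baseline_rate={"login": 0.5})
print(alerts)
```

A neural network could replace both the hand-built signature matcher and the simple frequency model, learning what “normal” looks like instead of having it specified.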

One hardware company investigating the possibilities of neural networking is Micron. The company has just released a prototype of a DDR memory module with a built-in processor, called Automata.

While not a replacement for standard CPUs, a set of Automata modules could be used to watch over a live stream of incoming data, seeking anomalies or patterns of interest. In addition to these spatial characteristics, they can also watch for changes over time, said Paul Dlugosch, director of Automata processor development in the architecture development group of Micron’s DRAM division.

“We were in some ways biologically inspired, but we made no attempt to achieve a high fidelity model of a neuron. We were focused on a practical implementation in a semiconductor device, and that dictated many of our design decisions,” Dlugosch said.

Nonetheless, because they can be run in parallel, multiple Automata modules, each serving as a node, could be run together in a cluster for doing neural network-like computations. The output of one module can be piped into another module, providing the multiple layers of nodes needed for neural networking. Programming the Automata can be done through a compiler Micron developed, which accepts either an extension of the regular-expression language or Micron’s own Automata Network Markup Language (ANML).

Another company investigating this area is IBM. In 2013, IBM announced it had developed a programming model for some cognitive processors it built as part of the U.S. Defense Advanced Research Projects Agency (DARPA) SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) program.

IBM’s programming model for these processors is based on reusable and stackable building blocks, called corelets. Each corelet is in fact a tiny neural network itself and can be combined with other corelets to build functionality. “One can compose complex algorithms and applications by combining boxes hierarchically,” Modha said.

“A corelet equals a core. You expose the 256 wires emerging out of the neurons and the 256 axons going into the core, but the inside of the core is not exposed. From the outside perspective, you only see these wires,” Modha said.

In early tests, IBM taught one chip how to play the primitive computer game Pong, to recognize digits, to do some olfactory processing, and to navigate a robot through a simple environment.

While it is doubtful that neural networks would ever replace standard CPUs, they may very well end up tackling certain types of jobs difficult for CPUs alone to handle.

“Instead of bringing sensory data to computation, we are bringing computation to sensors,” Modha said. “This is not trying to replace computers, but it is a complementary paradigm to further enhance civilization’s capability for automation.”

A Basic Introduction To Neural Networks

The simplest definition of a neural network, more properly referred to as an ‘artificial’ neural network (ANN), is provided by the inventor of one of the first neurocomputers, Dr. Robert Hecht-Nielsen. He defines a neural network as:

“…a computing system made up of a number of simple, highly interconnected processing elements, which process information by their dynamic state response to external inputs.” (In “Neural Network Primer: Part I” by Maureen Caudill, AI Expert, Feb. 1989)

ANNs are processing devices (algorithms or actual hardware) that are loosely modeled after the neuronal structure of the mammalian cerebral cortex, but on much smaller scales. A large ANN might have hundreds or thousands of processor units, whereas a mammalian brain has billions of neurons, with a corresponding increase in the magnitude of their overall interaction and emergent behavior. Although ANN researchers are generally not concerned with whether their networks accurately resemble biological systems, some are. For example, researchers have accurately simulated the function of the retina and modeled the eye rather well.

Although the mathematics involved with neural networking is not a trivial matter, a user can rather easily gain at least an operational understanding of their structure and function.

The Basics of Neural Networks

Neural networks are typically organized in layers. Layers are made up of a number of interconnected ‘nodes’, each of which contains an ‘activation function’. Patterns are presented to the network via the ‘input layer’, which communicates to one or more ‘hidden layers’ where the actual processing is done via a system of weighted ‘connections’. The hidden layers then link to an ‘output layer’ where the answer is output.

[Figure: Neural Network Schematic]
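A minimal forward pass through such a layered network can be sketched in a few lines of Python (the weights and biases here are arbitrary illustrative values, not a trained network):

```python
import math

def sigmoid(x):
    """A common activation function, squashing any input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """One fully connected layer: each node takes the weighted sum of
    its inputs plus a bias, then applies the activation function."""
    return [sigmoid(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

# 2 inputs -> 3 hidden nodes -> 1 output node
hidden_w = [[0.5, -0.6], [0.1, 0.8], [-0.3, 0.2]]
hidden_b = [0.0, 0.1, -0.1]
output_w = [[0.7, -0.4, 0.9]]
output_b = [0.05]

x = [1.0, 0.5]               # the pattern presented to the input layer
h = layer(x, hidden_w, hidden_b)   # hidden layer does the processing
y = layer(h, output_w, output_b)   # output layer produces the answer
print(y)
```

The output is a single number between 0 and 1; with trained weights it would be interpreted as the network's answer for the presented pattern.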

Most ANNs contain some form of ‘learning rule’ which modifies the weights of the connections according to the input patterns that it is presented with. In a sense, ANNs learn by example as do their biological counterparts; a child learns to recognize dogs from examples of dogs.

Although there are many different kinds of learning rules used by neural networks, this discussion is concerned with only one: the delta rule. The delta rule is often utilized by the most common class of ANNs, called ‘backpropagational neural networks’ (BPNNs). Backpropagation is an abbreviation for the backward propagation of error.

With the delta rule, as with other types of backpropagation, ‘learning’ is a supervised process that occurs with each cycle or ‘epoch’ (i.e. each time the network is presented with a new input pattern) through a forward activation flow of outputs, and the backwards error propagation of weight adjustments. More simply, when a neural network is initially presented with a pattern it makes a random ‘guess’ as to what it might be. It then sees how far its answer was from the actual one and makes an appropriate adjustment to its connection weights. More graphically, the process looks something like this:

[Figure: A single-node example of the delta rule]

Note also that within each hidden-layer node is a sigmoidal activation function, which polarizes network activity and helps it to stabilize.
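The single-node case can be sketched in Python: a sigmoid node's weights are nudged after each example by the learning rate times the error times the derivative of the activation. (The initial weights, learning rate and training pairs below are arbitrary illustrative choices.)

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# A single sigmoid node trained with the delta rule to output ~1 for
# input [1, 1] and ~0 for input [0, 1].
weights = [0.2, -0.4]
rate = 0.5
samples = [([1.0, 1.0], 1.0), ([0.0, 1.0], 0.0)]

for epoch in range(2000):
    for x, target in samples:
        net = sum(w * xi for w, xi in zip(weights, x))
        out = sigmoid(net)                      # forward activation flow
        # delta rule: change = rate * error * f'(net) * input
        delta = rate * (target - out) * out * (1.0 - out)
        weights = [w + delta * xi for w, xi in zip(weights, x)]

for x, target in samples:
    out = sigmoid(sum(w * xi for w, xi in zip(weights, x)))
    print(round(out, 2), "target", target)
```

Early in training the node's outputs are effectively random guesses; each weight adjustment shrinks the gap between guess and target, which is the "see how far its answer was from the actual one" step described above.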

Backpropagation performs a gradient descent within the solution’s vector space towards a ‘global minimum’ along the steepest vector of the error surface. The global minimum is the theoretical solution with the lowest possible error. The error surface itself is a hyperparaboloid, but is seldom as ‘smooth’ as depicted in the graphic below. Indeed, in most problems the solution space is quite irregular, with numerous ‘pits’ and ‘hills’ that may cause the network to settle into a ‘local minimum’, which is not the best overall solution.

[Figure: How the delta rule finds the correct answer]

Since the nature of the error space cannot be known a priori, neural network analysis often requires a large number of individual runs to determine the best solution. Most learning rules have built-in mathematical terms to assist in this process, controlling the ‘speed’ (beta coefficient) and the ‘momentum’ of the learning. The speed of learning is actually the rate of convergence between the current solution and the global minimum. Momentum helps the network to overcome obstacles (local minima) in the error surface and settle down at or near the global minimum.
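The two terms can be sketched as a standard gradient-descent update with momentum (the weights, gradients and hyperparameter values below are arbitrary illustrative numbers):

```python
# Sketch of the "speed" (learning rate) and "momentum" terms in a
# gradient-descent weight update; in a real network `grads` would come
# from the backpropagated error.
def update(weights, grads, velocities, rate=0.1, momentum=0.9):
    new_w, new_v = [], []
    for w, g, v in zip(weights, grads, velocities):
        v = momentum * v - rate * g   # momentum carries the past direction
        new_v.append(v)
        new_w.append(w + v)           # step, possibly through a local dip
    return new_w, new_v

w, v = [0.5, -0.2], [0.0, 0.0]
w, v = update(w, [0.1, -0.3], v)
print([round(x, 2) for x in w])  # [0.49, -0.17]
```

Because the velocity term accumulates across steps, a weight that has been moving consistently in one direction keeps some of that motion even when the local gradient briefly reverses, which is exactly how momentum helps roll out of shallow local minima.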

Once a neural network is ‘trained’ to a satisfactory level, it may be used as an analytical tool on other data. To do this, the user no longer specifies any training runs and instead allows the network to work in forward-propagation mode only. New inputs are presented to the input layer, where they filter into and are processed by the middle layers as though training were taking place; however, at this point the output is retained and no backpropagation occurs. The output of a forward-propagation run is the predicted model for the data, which can then be used for further analysis and interpretation.

It is also possible to over-train a neural network, which means that the network has been trained to respond exactly to only one type of input, much like rote memorization. If this should happen, learning can no longer occur and the network is referred to as having been “grandmothered” in neural network jargon. In real-world applications this situation is not very useful, since one would need a separate grandmothered network for each new kind of input.

How Neural Networks Differ from Conventional Computing

To better understand artificial neural computing, it is important to first know how a conventional ‘serial’ computer and its software process information. A serial computer has a central processor that can address an array of memory locations where data and instructions are stored. Computations are made by the processor reading an instruction, as well as any data the instruction requires, from memory addresses; the instruction is then executed and the results are saved in a specified memory location as required. In a serial system (and a standard parallel one as well) the computational steps are deterministic, sequential and logical, and the state of a given variable can be tracked from one operation to another.

In comparison, ANNs are not sequential or necessarily deterministic. There are no complex central processors; rather, there are many simple ones which generally do nothing more than take the weighted sum of their inputs from other processors. ANNs do not execute programmed instructions; they respond in parallel (either simulated or actual) to the pattern of inputs presented to them. There are also no separate memory addresses for storing data. Instead, information is contained in the overall activation ‘state’ of the network. ‘Knowledge’ is thus represented by the network itself, which is quite literally more than the sum of its individual components.

What Applications Should Neural Networks Be Used For?

Neural networks are universal approximators, and they work best if the system you are using them to model has a high tolerance to error. One would therefore not be advised to use a neural network to balance one’s cheque book! However they work very well for:

  • capturing associations or discovering regularities within a set of patterns;
  • problems where the volume, number of variables or diversity of the data is very great;
  • problems where the relationships between variables are vaguely understood; or
  • problems where the relationships are difficult to describe adequately with conventional approaches.


There are many advantages and limitations to neural network analysis and to discuss this subject properly we would have to look at each individual type of network, which isn’t necessary for this general discussion. In reference to backpropagational networks however, there are some specific issues potential users should be aware of.

  • Backpropagational neural networks (and many other types of networks) are in a sense the ultimate ‘black boxes’. Apart from defining the general architecture of a network and perhaps initially seeding it with random numbers, the user has no role other than to feed it input, watch it train, and await the output. In fact, it has been said that with backpropagation, “you almost don’t know what you’re doing”. Some freely available software packages (NevProp, bp, Mactivation) do allow the user to sample the network’s ‘progress’ at regular time intervals, but the learning itself progresses on its own. The final product of this activity is a trained network that provides no equations or coefficients defining a relationship (as in regression) beyond its own internal mathematics. The network ‘is’ the final equation of the relationship.
  • Backpropagational networks also tend to be slower to train than other types of networks, and sometimes require thousands of epochs. If run on a truly parallel computer system this is not really a problem, but if the BPNN is being simulated on a standard serial machine (i.e. a single SPARC, Mac or PC), training can take some time. This is because the machine’s CPU must compute the function of each node and connection separately, which can be problematic in very large networks with a large amount of data. However, the speed of most current machines is such that this is typically not much of an issue.

Advantages Over Conventional Techniques

Depending on the nature of the application and the strength of the internal data patterns, you can generally expect a network to train quite well. This applies to problems where the relationships may be quite dynamic or non-linear. ANNs provide an analytical alternative to conventional techniques, which are often limited by strict assumptions of normality, linearity, variable independence, etc. Because an ANN can capture many kinds of relationships, it allows the user to quickly and relatively easily model phenomena which otherwise may have been very difficult or impossible to explain.

Biologically Inspired Vision Systems

Neuroscientists at MIT have developed a computer model that mimics the human vision system to accurately detect and recognize objects in a busy street scene, such as cars and motorcycles.

Recognizing objects in a scene, such as the car in the street scene shown here, can be a challenge for computers. A model of how the brain processes visual information offers a successful approach.

Such biologically inspired vision systems could soon be used in surveillance systems, or in smart sensors that can warn drivers of pedestrians and other obstacles. It may also help in the development of so-called visual search engines, says Thomas Serre, a neuroscientist at the Center for Biological and Computational Learning at MIT’s McGovern Institute for Brain Research, who was involved in the project.

Researchers have been interested for years in trying to copy biological vision systems, simply because they are so good, says David Hogg, a computer vision expert at Leeds University in the UK. “This is a very successful example of [mimicking biological vision],” he says.

Teaching a computer to classify objects has proved much harder than was originally anticipated, says Serre, who carried out the work with Tomaso Poggio, codirector of the center. On the one hand, to recognize a particular type of object, such as a car, a computer needs a template or computational representation specific to that particular object. Such a template enables the computer to distinguish a car from objects in other classes–noncars. Yet this representation must be sufficiently flexible to include all types of cars–no matter how varied in appearance–at different angles, positions, and poses, and under different lighting conditions.

“You want to be able to recognize an object anywhere in the field of vision, irrespective of where it is and irrespective of its size,” says Serre. Yet if you analyze images just by their patterns of light and dark pixels, then two portrait images of different people can end up looking more similar than two images of the same person taken from different angles.

The most effective method for getting around such problems is to train a learning algorithm on a set of images and allow it to extract the features they have in common; two wheels aligned with the road could signal a car, for example. Serre and Poggio believe that the human vision system uses a similar approach, but one that depends on a hierarchy of successive layers in the visual cortex. The first layers of the cortex detect an object’s simpler features, such as edges, and higher layers integrate that information to form our perception of the object as a whole.

To test their theory, Serre and Poggio worked with Stanley Bileschi, also at MIT, and Lior Wolf, a member of the computer science department at Tel Aviv University in Israel, to create a computer model comprising 10 million computational units, each designed to behave like clusters of neurons in the visual cortex. Just as in the cortex, the clusters are organized into layers.

When the model first learns to “see,” some of the cell-like units extract rudimentary features from the scene, such as oriented edges, by analyzing very small groups of pixels. “These neurons are typically like pinholes that look at a small portion of the visual field,” says Serre. More-complex units are able to take in a larger portion of the image and recognize features regardless of their size or position. For example, if the simple units detect vertical and horizontal edges, a more complex unit could use that information to detect a corner.

With each successive layer, increasingly complex features are extracted from the image. So are relationships between features, such as the distance between two parts of an object or the different angles at which the two parts are oriented. This information allows the system to recognize the same object at different angles.
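The simple-to-complex layering can be sketched with a toy one-dimensional example (an illustration of the idea only, not the CBCL model's actual code): simple units compute an oriented-edge response at each position, and a complex unit pools the maximum over positions, which is what buys tolerance to where the feature appears.

```python
# Toy sketch of layered feature extraction: simple units respond to a
# local edge; a complex unit takes the max over positions (pooling).

def edge_response(patch, kernel):
    """Weighted sum of a small patch -- a 'pinhole' view of the image."""
    return sum(k * p for k, p in zip(kernel, patch))

# 1-D "image" containing a step edge; each patch is two adjacent pixels
image = [0, 0, 1, 1, 0]
edge_kernel = [-1, 1]            # responds to a dark-to-light transition

# Simple units: one response per position in the image
simple = [edge_response(image[i:i + 2], edge_kernel)
          for i in range(len(image) - 1)]

# Complex unit: pools over ALL positions, so the edge is detected
# regardless of where in the field of vision it occurs
complex_unit = max(simple)

print(simple)        # [0, 1, 0, -1]
print(complex_unit)  # 1
```

Stacking further layers of the same pattern (local combination, then pooling) yields responses to progressively larger and more complex features, which is the hierarchy the text describes.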

“It was a surprise to us when we applied this model to real-world visual tasks and it competed well with the best systems,” says Serre. Indeed, in some tests their model successfully recognized objects more than 95 percent of the time, on average. The more images the system is trained on, the more accurately it performs.

“Maybe we shouldn’t be surprised,” says David Lowe, a computer vision and object recognition expert at the University of British Columbia in Vancouver. “Human vision is vastly better at recognition than any of our current computer systems, so any hints of how to proceed from biology are likely to be very useful.”

At the moment, the system is designed to analyze only still images. But this is very much in line with the way the human vision system works, says Serre. The inputs to the visual cortex are shared by a system that deals with shapes and textures while a separate system deals with movement, he says. The team is now working on incorporating a parallel system to cope with video.

Aviation Industry Dons ‘Shark Skins’ to Save Fuel

The aviation industry believes the ocean’s oldest predator, the shark, could hold the key to cutting energy consumption.

In its never-ending quest to develop more aerodynamic, more fuel-efficient aircraft, the aviation industry is looking to the shark. Germany’s biggest airline, Lufthansa, announced earlier this month that two of its Airbus A340-300 jets would take part in trials starting this summer to test the properties of shark skin in flight. For the two-year trials, eight 10-by-10-centimetre (4-by-4-inch) patches of a new type of coating are being painted on to the fuselage and wing edges of the aircraft.
A new state-of-the-art varnish, developed by the Fraunhofer Institute for Manufacturing Technology and Advanced Materials (IFAM) in Bremen, attempts to mimic the skins of fast-swimming sharks. The skin of sharks is covered in tiny riblets that reduce turbulent vortices and the drag they cause, thereby diminishing surface resistance when moving at speed.
The phenomenon of the streamlined shark skin has been known for about 30 years and has fascinated research scientists in a wide range of fields, from military applications to aerospace and aeronautics and from naval construction to wind technology. More recently, its use in sports such as swimming and athletics has brought the special properties of shark skin to much wider attention.
High-tech swimsuits were developed that enabled athletes to move ever faster through water, breaking one swimming record after another until the suits were eventually banned from competition as unfair. In the past, says Volkmar Stenzel, the project’s head at the Fraunhofer Institute, sheets of plastic imitation shark skin were glued to the aircraft’s exterior.
“But the foil had major disadvantages: it was rather heavy and the added weight cancelled out the amount of fuel that could be saved,” Stenzel said. “Also, it was difficult to stick the foil to curved surfaces without creasing and wrinkling,” he said. Another problem was that aircraft have to be stripped of their paint and recoated every five years “and that was just not possible with these foils,” the expert explained.
Thus, in collaboration with European aircraft maker Airbus and the DLR German Aerospace Center, scientists at the Fraunhofer Institute have developed a new technique to emboss the structures of shark skin into aircraft paints. The idea is to make surfaces more aerodynamic and reduce fuel consumption by about one percent and lower operating costs. The trials on Lufthansa jets represent the last phase before possible industrial application, said Denis Darracq, head of research and flight physics technology at Airbus.
“The expected results have been achieved in terms of performance. It’s now a matter of measuring operational efficiency and durability,” Darracq said. “An airline must not have to clean its aircraft after every flight. The paint needs to last for several years,” he said. The engineer estimated that if an aircraft was covered by between 40-70 percent in the new paint, it can cut fuel consumption by around one percent for very little outlay. And with high fuel prices and customers becoming increasingly sensitive to the environmental impact of flying, that would represent an “enormous benefit” for an airline, Darracq argued.
Nature is also the inspiration for another state-of-the-art technology that is already being used by the industry and may have wider applications.
The leaf of the lotus plant has a unique microstructure consisting of tiny bumps topped with tiny hairs that make the leaf highly water repellent. Special surface coatings have been developed to mimic this effect and they are already used in the interior of the A380 to make it easier to clean. But Airbus is also looking into whether such coatings can be used on the exterior of aircraft as well.
“De-icing is a real problem for planes and represents a substantial cost factor. If there were surfaces where water cannot collect, they wouldn’t freeze over and that would represent a big step forward,” said Darracq. Airlines’ growing interest could therefore help accelerate research in surface technologies “and these may be ready for industrial application in a number of years,” the engineer said.

Robot Swarms

Flying and ground-based robots, which could potentially help search and rescue organisations, are under development at Monash University’s Swarm Robotics lab.

Swarm robotics makes use of principles observed in insect colonies, flocks of birds and physics to co-ordinate the behaviour of groups of robots.
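Such principles are often illustrated with simple cohesion-and-separation flocking rules. The sketch below is a generic, assumed version of such rules, not the Monash lab's software: each robot steers toward the group centroid while pushing away from any neighbour that gets too close.

```python
# Generic flocking-style sketch (assumed rules): cohesion pulls each
# robot toward the group centroid; separation pushes it away from any
# neighbour closer than a threshold.

def step(positions, cohesion=0.1, separation=1.0):
    n = len(positions)
    cx = sum(p[0] for p in positions) / n
    cy = sum(p[1] for p in positions) / n
    new = []
    for (x, y) in positions:
        dx, dy = cohesion * (cx - x), cohesion * (cy - y)
        for (ox, oy) in positions:
            if (ox, oy) != (x, y) and abs(ox - x) + abs(oy - y) < separation:
                dx += 0.5 * (x - ox)   # push away from a crowding neighbour
                dy += 0.5 * (y - oy)
        new.append((x + dx, y + dy))
    return new

swarm = [(0.0, 0.0), (4.0, 0.0), (0.0, 4.0)]
for _ in range(10):
    swarm = step(swarm)
print(swarm)  # positions have drifted toward the group centroid
```

In a real swarm the centroid would be estimated from nearby neighbours over wireless links rather than computed globally; the point of rules like these is that simple, local interactions yield coordinated group behaviour.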

The Robotics lab will collaborate with the university’s Wireless Sensors and Robot Networks (WSRN) laboratory to produce swarm robotics technologies. These technologies enable groups of flying and ground-moving robots to co-ordinate their behaviours — using wireless communication technology — and transmit information about their environment back to a base station.

WSRN co-director Dr Jan Carlo Barca said the technologies could be used to search for objects, people and pollution.

“We have chosen to focus on search and rescue in disaster sites, as this will enable us to assist rescue workers in saving human lives,” he said in a statement.

For example, the robots could aid rescue workers tasked with locating people in environments where global positioning systems (GPS) do not work. This may be in regions where smoke obstructs the view from satellites, in partially collapsed buildings, or in cities where structures impede the view from the sky.

Barca predicted that over the next 20 years swarm robotics will evolve in such a way that humans will be able to “feel present” at a remote location via robots, and experience a phenomenon known as ‘multi-presence’.

“If there were multiple robots then you could be made to feel that you are at the locations of all the robots simultaneously, hence multi-presence,” he said.

“One simple example is when a guard in a control room looks at many screens that display live footage captured from multiple security cameras.”

Barca hopes that by 2014 the robotics technology will be advanced enough to carry out tasks that can aid in search and rescue efforts.

Herd Behavior: Bio-Inspired Webservices

As computer systems, services and networks evolve, their complexity is increasing. Tools and metaphors that were used to manage a small collection of servers do not scale to support this growing complexity. By looking at the animal kingdom, the most intriguing autonomic system, we can create an abstraction that provides greater clarity on system behavior and management, allowing us to design and implement decision-making processes into our systems.

Managing computer systems is becoming an increasingly complex and exciting task. Modern systems are expected to display a range of behaviors, in contrast to the traditional paradigm where they were only required to stay up and remain mostly underutilized. Today’s systems should scale, defend themselves, be self-aware, save power and, of course, keep costs down. Furthermore, this is to be achieved while using as little of the system administrator’s time and resources as possible. In essence, the objective is to enable the systems to make the same decisions we would otherwise make on their behalf.
The traditional approach of writing a configuration is no longer sufficient, as a configuration only describes a desired end state. Systems which are part of a modern service do not have an end state; instead, they have a set of behavioral traits at their core. We need a way to design and describe these traits at a high level, without technical details.

The most common approaches to solving the challenges of implementing behavior have been to introduce automation by orchestrating existing tools to work together, to develop in-house tools, or to acquire commercial products. However, what these approaches do not specifically address is how the team of system administrators, along with support staff and management, can build a high-level understanding of what kind of behavior is actually desired and needed, in order to reach a group consensus on the goals and properly identify key concepts. As a result, most activity is in research and the commercial sector, with little coming from the sysadmin community itself.
A problem with the current research approaches is their tendency to have a singular focus. They often emphasize only security, fault tolerance or resource management, but seldom the system as a whole, which is the natural view of the system administrator. This adds another level of complexity and creates a barrier between the software and the system administrator who is supposed to be implementing the solution. The importance of an approach which is comprehensive and addresses more than a single subject is apparent.
By simplifying complicated decision-making processes, one can lower the threshold for system administrators to implement and use a complex solution. By allowing system administrators to partake in the design process early, they are more likely to recognize how their existing tools and systems could play a role in a new, behaving version of their site.
Biology is a fascinating system of inherently autonomic processes, and biological methods have previously been used as metaphors in other computing fields. Fred Cohen used the biological term “virus” to explain a computer phenomenon, creating an analogy that took a hugely complicated scenario and made it easily understandable through terms already known. Today, terms like worms and viruses are common labels for computer threats and can be explained to people who do not have a computing background. If we could translate more of the challenges of system administration into the world of biology, we might find further autonomic processes and mechanisms which have proven themselves, and attempt to re-use them in the systems administration field.
This blog post explores using biology as the starting point for the design, modeling and implementation of a cloud-based webservice. The proposed process draws on a common understanding of nature and the animal kingdom to quickly identify core concepts and behavioral traits.

Based on this understanding, a more formal diagram is constructed, which leads to the final stage of realization. A case scenario demonstrates the process in three different topics: security, reliability and resource management, allowing a holistic perspective of the entire system.
Even though biology can provide a wealth of ideas and inspiration, a process is needed which enables one to translate these concepts into actual implementations. The overall aim of the process is to create a simple and effective path from high-level description and understanding to actual implementation of a complex distributed system. The path is divided into three steps: analogy, model and implementation.
Not all participants need follow the path through until the end. For example, system architects, support staff and sales personnel may just require an insight into the components involved and how they interact.
Identify the ecosystem and analogies
This phase starts by looking at the servers, or computer systems, and asking the question: “If your server was an animal, how would you describe it?” The question is meant to provoke out-of-the-box thinking and to look for simple, non-technical ways to describe the server (or group of servers) in question. The focus should be on using one’s imagination and knowledge of the animal kingdom, not on constant reality checks, even though some technical connection will probably be present during the discussions.
One way to get started is to look at specific situations which arise, or at the ‘surroundings’ of the server, in order to find similarities. For example, a server which needs to suspend its service during backup may be said to sleep, or even hibernate like a bear. An email server may eat small prey by the mouthful, yet spit out (bounce) the ones who have spikes and sting. A cluster of webservers could be a herd of herbivores, grazing on client connections as food for their survival. Mixing technical terms and animal behavior like this connects the analogy with the real world.
The deliverable from this phase is a narrative of the server in question in a series of situations which address the core intended purpose of the server. All involved parties should now have an abstract understanding of how the systems function and work together as well as a shared terminology from which to proceed.
Model the important components
The next phase concerns itself with mapping out the components which would be needed in order to re-create the narrative. A component may be any program, server, service or messaging framework. This part has a more technical focus, but one should try to remain on an architectural level.
Often, the modeling of the surroundings is the most difficult, but it helps highlight how they should function and interact. For example, if both chemical signals (long life, slow propagation) and auditory signals (short life, fast propagation) are part of the narrative, how are they realized? Direct broadcast for auditory and a message queue for chemical? The BRIC (Block-like Representation of Interactive Components) modeling framework allows for simple representations of distributed processing and provides the necessary notation. The resulting model will contain all components and messages which need to be passed, along with the resulting decisions and behavior of the components.

BRIC is a high-level language for the design of multiagent systems based on a modular approach. A BRIC diagram has some inheritance from Petri nets, a graph system for modeling execution in parallel systems, and was originally intended for the field of artificial intelligence.
A BRIC system consists of a set of components linked to each other by communication links. Components receive messages through input terminals and send messages through output terminals. Triggers are points in the flow where decisions are made. A component can be anything from a single program to a collection of programs seen as one, depending on the level of detail in the diagram. A system is often represented in several BRIC diagrams of varying detail. For instance, a component in one diagram may be broken down into its sub-components in another, in order to show its inner workings. This flexibility allows for varying abstraction, and for using black boxes where convenient.
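The component/terminal/trigger vocabulary can be illustrated with a toy structure in Python (an assumed sketch, not a formal BRIC implementation; the component names and messages are made up):

```python
# Toy sketch of BRIC-style components: each component has a behaviour
# function (the "trigger" that decides what to emit) and output links
# that pipe messages to other components' input terminals.

class Component:
    def __init__(self, name, behaviour):
        self.name = name
        self.behaviour = behaviour   # trigger: decides what, if anything, to emit
        self.links = []              # output terminal -> downstream components

    def receive(self, message):
        out = self.behaviour(message)
        if out is not None:          # only forward when the trigger fires
            for target in self.links:
                target.receive(out)

log = []
# A sensor component that raises an alert only when it sees an attack,
# linked to a herd component that reacts by evading.
sensor = Component("sensor", lambda m: f"alert:{m}" if m == "attack" else None)
herd = Component("herd", lambda m: log.append(("evade", m)))
sensor.links.append(herd)

sensor.receive("graze")   # trigger does not fire; nothing propagates
sensor.receive("attack")  # alert flows along the link to the herd
print(log)  # [('evade', 'alert:attack')]
```

Swapping a component's behaviour function, or breaking one component into several linked sub-components, mirrors how a single BRIC box can be expanded into a more detailed diagram.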
What is left now is to build the system based on its blueprints. In fact, many trusted components and services should be re-used as much as possible in order to utilize established competence and knowledge. Scripts may be created in order to provide communication between services which have no established means of communication.

For example, if a member of a herd of webservers gets attacked by a predator, the remaining herd would want to initiate an evasive maneuver to protect itself based on an alert from the victim.
A static system is straightforward to explain based on its components, but a dynamic system like a scaling service is better explained by its behavioral processes, since its current state may not convey the whole picture.

Using the process shown here, one can create a path from the analogy to the actual implementation which focuses on the environment as well as behavioral strategies. The terms and narratives which have been developed can now be reused in discussions about the system throughout its lifespan.
We can now explain how the method has been used to realize a scaling cluster of webservers that is also able to deal with security attacks and counter the stability issues which arise from long-running processes.
For each of the three behaviors the modeling process is repeated, yet they are all tied together in the implementation as part of the larger idea.
The environment
Imagine that the webservers were a herd of herbivores, living together in a close community but in an otherwise unfriendly environment. Names like zebras, wildebeest and antelopes were used frequently. The herd’s main source of food is client connections. Eating therefore means successfully handling client HTTP requests, which are distributed among them by a load balancer. The number of client connections varies over time, much like any other food resource. It is also a central component for the survival and proper functioning of the webserver.
The servers are virtual machines, living in a cloud environment.
The group is able to communicate, sending signals to each other both synchronously and asynchronously. The herd is not alone: predators are constantly on the lookout for a target and may at any point in time attack (compromise) a member of the herd. The webservers may also catch diseases such as memory leaks, which tend to make elderly webservers weak and unable to participate in the herd as much as their younger members.
Regardless of these trials, the herd will always strive to survive as a group and be as prosperous as possible given the resources available, relying on its repertoire of behavioral mechanisms to regulate itself relative to its environment. A prosperous herd will of course honor the business model of the website: it makes sure that there are enough servers relative to the number of incoming requests, keeps the site safe from security break-ins, and counters any software flaws which cause instability.
Circle of life – Reliability
Once this common understanding was established, individuals asked themselves how these animals tackle their environment. The most elementary part of biology is the circle of life. In this case, it means that members of the herd die and new ones are born at regular intervals. This allows the herd to exchange old and tired members for new ones, and is similar to the old-fashioned but still common practice of rebooting a server because it grows slower over time or crashes.

At regular intervals a new virtual machine is created; it starts up and spends a brief period as an adolescent while it synchronizes with the policy server to receive its final configuration parameters. This can be compared with receiving tutelage from a parent or other herd members. If the server fails to comply with the policy by not keeping its promises, it is expelled from the herd, i.e. shut down again.
For instance, the controller would not send a message to a nonexistent virtual machine, telling it to boot up. Instead, it sends a message to the underlying cloud API, telling it to create a new instance. The circle of life will play an important role in the following scenarios.
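The circle-of-life interval can be sketched as follows. This is a simplified illustration under stated assumptions: the `Cloud` class and `policy_server` callable stand in for the real cloud API and policy service, whose interfaces are not specified in the text.

```python
from collections import deque

# Sketch of the circle-of-life rotation (the Cloud class and policy_server
# are hypothetical stand-ins). The controller never messages a nonexistent
# server; it asks the cloud API for a new instance, and the newborn
# synchronizes with the policy server before joining the herd as an adult.

class Cloud:
    """Stand-in for a cloud provider API."""
    def __init__(self):
        self.next_id = 0
    def create_instance(self):
        self.next_id += 1
        return f"vm-{self.next_id}"

def rotate(herd, cloud, policy):
    """One circle-of-life interval: birth, adolescence, death of the oldest."""
    newborn = cloud.create_instance()        # birth via the cloud API
    config = policy(newborn)                 # adolescence: fetch final config
    if config is None:
        return None                          # failed policy check: expelled
    herd.append((newborn, config))           # joins the herd as an adult
    oldest, _ = herd.popleft()               # the oldest member dies
    return oldest

policy_server = lambda vm: {"role": "webserver"}   # assumed policy service
herd = deque([("vm-old-1", {}), ("vm-old-2", {})])
cloud = Cloud()

died = rotate(herd, cloud, policy_server)
print(died, [name for name, _ in herd])  # vm-old-1 ['vm-old-2', 'vm-1']
```

Returning `None` from the policy check models expulsion: the newborn never joins the herd and no old member dies that interval.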
Predator attacks – Security
For a herd, total protection from predators is difficult to achieve. This is also true for servers. Although engaging in a fight with an attacking predator has sometimes been observed as a group behavior, the most common approach is simply to run away. Even though some members of the herd may end up being taken, the group as a whole survives and new members will eventually be born. This strategy seemed very intriguing, and a way was sought to model it for servers.
The most important aspect is for the rest of the group to notice that a predator is near or has struck. Some species have lookouts, but in this case the only moment a compromise could be known for sure was once a server noticed it had been compromised. When this happens in the animal kingdom, the victim may send a strong signal to the others, typically a yell or scream, allowing them to flee.
Likewise, the server immediately sends out a warning to the others, identifying the IP address of the predator. Since the servers cannot really run, they imitate fleeing by blocking traffic from both the predator and the victim, leaving them both behind. It is thus the sacrifice of the single server which enables the others to save themselves.
The affected server will also directly send a message to the controller, telling it to shut itself down. This way, a compromised system becomes worthless, as the attacker won’t be able to use it for further attacks nor get to stay there for long. The rest of the group, however, will now have blocked the attacker and are safe from it.
As the predator attacks, the virtual machine sends a warning message to the herd's message queue. Once the message has been received by the queue, the virtual machine tells the controller to kill it. A queue was used instead of a broadcast system because broadcast messages might be lost, and the attacked server would have no time to retransmit them.
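The alarm protocol above can be sketched as follows. An in-memory queue stands in for Amazon SQS, and the function names are illustrative assumptions; the ordering matters, since the scream must reach the queue before the victim asks to be killed.

```python
import queue

# Sketch of the predator-alarm flow (an in-memory queue stands in for SQS;
# all function names are illustrative).
herd_queue = queue.Queue()
terminated = []                      # instances killed via the controller

def controller_kill(server):
    terminated.append(server["name"])

def on_compromise(victim, attacker_ip):
    # 1. Scream: warn the herd with the predator's address FIRST, so the
    #    warning cannot be lost if the victim dies immediately afterwards.
    herd_queue.put({"attacker": attacker_ip, "victim": victim["ip"]})
    # 2. Sacrifice: ask the controller to shut the victim down.
    controller_kill(victim)

def on_alarm(server, alarm):
    # The rest of the herd "flees" by blocking traffic from both the
    # predator and the fallen victim.
    server["blocked"].update([alarm["attacker"], alarm["victim"]])

victim = {"name": "web-1", "ip": "10.0.0.1", "blocked": set()}
survivor = {"name": "web-2", "ip": "10.0.0.2", "blocked": set()}

on_compromise(victim, attacker_ip="198.51.100.7")
on_alarm(survivor, herd_queue.get())

print(terminated, sorted(survivor["blocked"]))
# ['web-1'] ['10.0.0.1', '198.51.100.7']
```

Blocking the victim as well as the attacker is the "leaving them both behind" step: a compromised peer can no longer be trusted.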
Herd size – Scaling
The size of a herd is the result of a complex process involving many factors, the most important being the availability of resources, disease and the presence of natural enemies.
Most herds find equilibria, where the size of the herd is optimized relative to its environment. A herd can grow by reproducing more often and increasing the size of its litters. We wanted litter size to be the predominant scaling method, thus connecting the herd size to the already established circle of life. If there is an abundance of resources, meaning the servers cannot eat everything themselves, the chance of survival is higher for newborn servers and more adolescent servers grow up. The herd grows because there are more newcomers than deaths from old age.
This process stops as soon as the new servers participate in eating the resources and there is no abundant food left. In practice, a new equilibrium has then been found.
Should the resources become scarce, more servers die of famine than are born, making the herd shrink to a new equilibrium. This process takes longer, however, as the servers are capable of using their energy reserves for a while. For modern services this is usually what one wants: scale up quickly as a reaction to flash-mobs, but scale down carefully in case there is just a local dip in the number of connections.
Starvation and death of virtual machines only occur up to a point: if the herd has reached its minimum level, no further servers die. Thus the species ensures its survival, although this is technically cheating by biological standards.
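The herd-size rule can be sketched as a single controller decision. The thresholds, bounds and step sizes below are illustrative assumptions, not values from the implementation: a surplus of food (high saturation) grows the herd, scarcity shrinks it, and the administrator's minimum and maximum are always honored.

```python
# Sketch of the herd-size (scaling) rule. Thresholds and bounds are
# illustrative assumptions, not values from the implementation.

MIN_HERD, MAX_HERD = 2, 10         # administrator-imposed bounds
SURPLUS, SCARCITY = 0.8, 0.3       # average per-server saturation thresholds

def next_herd_size(size, avg_saturation):
    """Controller decision: grow on abundance of food, shrink on scarcity."""
    if avg_saturation > SURPLUS:       # more food than the herd can eat: grow
        size += 1
    elif avg_saturation < SCARCITY:    # scarce food: members starve
        size -= 1
    # The herd "cheats" biology: it never shrinks below its minimum
    # nor grows past the administrator's maximum.
    return max(MIN_HERD, min(MAX_HERD, size))

print(next_herd_size(5, 0.9))    # 6   (abundance -> growth)
print(next_herd_size(5, 0.2))    # 4   (scarcity -> starvation)
print(next_herd_size(2, 0.1))    # 2   (minimum level reached)
print(next_herd_size(10, 0.95))  # 10  (maximum level reached)
print(next_herd_size(5, 0.5))    # 5   (equilibrium: no change)
```

With saturation between the two thresholds the population is unchanged, which is the "no change in load, no change in population" equilibrium.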
A similar limitation applies to the herd's maximum size. Both limits are decided by the system administrator based on technical and/or economic constraints. The virtual machines start the scaling process by repeatedly reporting their level of saturation; it is the controller's decision whether scaling, i.e. an increase in litter size, is needed. If there is no change in load, i.e. food, there is no change in the population.
Implementation of the models – a scaling web service
The real-life scenario was to recreate the load on a popular Norwegian marketplace website, the dominant marketplace for both businesses and personal use, covering everything from personal items to real-estate, automobiles and brokering services. The site provided a 24-hour profile of its web traffic, which was compressed into a 3-hour profile to allow more rapid testing. The actual connections were generated using the tool httperf[10] and a specialized script.

The Amazon Elastic Compute Cloud (EC2) served as the platform, whilst the Amazon Simple Queue Service (SQS) was used as the messaging system between servers. The tool MLN (Manage Large Networks) was used as the cloud controller. Each individual server collected information about its own performance and sent it to the main controller via SQS at regular intervals. An average load across the servers is calculated and used as the basis for the scaling mechanism in the herd-size scenario. The average is calculated over two different windows, one long and one short. The short-window value is used in the scale-up mechanism, allowing fast adaptation to sudden spikes in traffic. The long-window value is used in the scale-down mechanism, making it less sensitive to short periods of less traffic.
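The dual-window averaging can be sketched as follows; the window lengths and sample loads are illustrative assumptions. A traffic spike dominates the short average almost immediately while barely moving the long average, which is what makes scale-up fast and scale-down cautious.

```python
from collections import deque

# Sketch of the dual-window load averaging (window lengths are assumed).
# The short average reacts quickly to spikes (used for scale-up); the long
# average smooths out local dips (used for scale-down).

class LoadTracker:
    def __init__(self, short_len=3, long_len=10):
        self.short = deque(maxlen=short_len)   # keeps only the newest samples
        self.long = deque(maxlen=long_len)

    def report(self, load):
        """Called when a server reports its saturation via the queue."""
        self.short.append(load)
        self.long.append(load)

    def averages(self):
        s = sum(self.short) / len(self.short)
        l = sum(self.long) / len(self.long)
        return s, l

tracker = LoadTracker()
# seven quiet intervals followed by a three-interval traffic spike
for load in [0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.2, 0.9, 0.9, 0.9]:
    tracker.report(load)

short_avg, long_avg = tracker.averages()
print(round(short_avg, 2), round(long_avg, 2))  # 0.9 0.41
```

After the spike the short average already sits at the new level, while the long average still remembers the quiet period, so only the scale-up path would fire.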
By comparing with the recreated load profile, it is possible to see how the number of servers correlates with the traffic. Watching the average herd load, the herd-size equilibrium appears to be stable. The high load in the beginning is due to the adjustment from a single server to the ideal amount; this also illustrates the herd's ability to react to a flash-mob. The load increases rapidly, but as the herd grows, we see that the load becomes stable.
The attacks are marked by circles at 60 and 70 minutes and demonstrate how the attacked servers shut down. An attack is detected using SNORT, a lightweight intrusion detection system, and the priority (i.e. the severity) of the attack is used to decide the response. Once an attack was identified, a special script was responsible for sending the alarm to the Amazon SQS queue and asking the controller to kill the server. The virtual machine is killed through the cloud API by the controller, which means that once the message is sent, there is no way for an attacker to stop the virtual machine from being shut down from the inside.
The loss of a server due to an attack causes an increase in load on the remaining servers. This means more available food (i.e. load), allowing the herd to grow and replace the lost server. The servers are also continuously rotated, which produces the zig-zag behavior of the herd: the oldest living herd members die at intervals. If the overall load is low, meaning there are few client connections relative to the number of servers, more servers die at each interval, resulting in a smaller herd. Death was graceful, allowing the webservers to finish their requests and the load balancer to remove them from the pool in order to avoid client errors.
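The graceful death described above can be sketched as a three-step shutdown; all names here are illustrative, since the text does not specify the load balancer's interface. The order is what matters: leave the pool first, drain in-flight requests, and only then shut down.

```python
# Sketch of a graceful death (all names are illustrative): the dying
# server is first removed from the load-balancer pool so no new requests
# arrive, then finishes its in-flight requests before shutting down.

def graceful_death(server, pool, shutdown_log):
    pool.discard(server["name"])            # 1. no new client connections
    while server["in_flight"]:              # 2. finish outstanding requests
        request = server["in_flight"].pop()
        server["completed"].append(request)
    shutdown_log.append(server["name"])     # 3. only now shut down

pool = {"web-1", "web-2", "web-3"}
dying = {"name": "web-3", "in_flight": ["req-7", "req-9"], "completed": []}
shutdown_log = []

graceful_death(dying, pool, shutdown_log)
print(sorted(pool), dying["completed"], shutdown_log)
# ['web-1', 'web-2'] ['req-9', 'req-7'] ['web-3']
```

Skipping step 1 or reversing the order would route new requests to a dying server and produce exactly the client errors the rotation is designed to avoid.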
The overall goal of this research is to provide a way for complex decisions to be made understandable and implementable, whether the starting point is a particular reactive behavior followed by a search for species which display something similar, or an inspirational behavior in the animal kingdom which is then mimicked by systems. The final running system is able to handle important events without human interaction. This is important not only for efficiency, but also for reducing the complexity of managing modern systems. A benefit of the approach is that it ends up with a modular design, where new features can be added in the future. Furthermore, the behaviors can address many different aspects of management, as shown here, making the approach particularly useful for the multi-faceted field of system administration. The fact that many of these biological patterns are relatively simple and transparent on their own is perceived as a benefit and necessary for adoption by system administrators; it is when these traits are combined that more complex behaviors are displayed.
Biology as an analogy
Using biology as a design process has several positive traits. The number of species on our planet is estimated at between 5 and 30 million, each with different characteristics and adaptations to its environment. Biology is taught in school from a very young age and appears in the news and on television. By comparison, computer science is much more limited in its reach to the general public.
Everyone has a computer, but not many understand the principles of a datacenter, virtualization or different security mechanisms. Using biology, these terms can be explained in a way that a person without computer education can understand. The use of analogies is a well-known method for teaching complex scientific subjects: starting from the familiar, something everyone knows, it is much easier to relate, and by relating to the information the listener becomes a larger part of the learning process. Using biology to explain complex data systems or mechanisms might seem to require knowledge of two very different and very complex fields of study (biology and system administration). However, the examples demonstrate that no expert knowledge of biology is needed. On the contrary, the researchers experienced a greater curiosity towards biology during this process, and it became increasingly easy to find new analogies and discuss variations based on new knowledge of certain species.
Although biology has proved to be an interesting approach to managing virtual machines, it has some limitations. For example, animals that are not able to adapt to changes in their environment become extinct. Some of these phenomena clearly break the analogy and may seem like bad business models. However, such extreme situations can help bring the focus of attention back to managing systems after all: a natural disaster can wipe out a datacenter just as easily as a herd. So instead of brushing this off as a broken analogy, why not ask how the species survives it? The obvious answer is redundancy: there are several herds of the same species, and the loss of one herd is not the end. Likewise, spreading servers across several locations helps alleviate the single-location problem. The analogy lives on.

There are numerous interesting variations on how scaling and the circle of life can be modeled. For example, the fact that every server lives for an equal amount of time is a rigid implementation. One variation would be the idea of a metabolism turning food into energy, whereby the members of the herd build up energy reserves if they have been well fed over time. Such a reserve would help them survive for longer should the resources disappear, allowing the cluster to shrink more slowly than it grows and possibly handle short-lived scarcity of resources better.
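This metabolism variation could be sketched as follows; the reserve cap and feeding rule are assumptions made purely for illustration. A well-fed member banks energy, burns it during famine, and only starves once the reserve is exhausted, so the herd shrinks more slowly than it grows.

```python
# Sketch of the metabolism variation (reserve cap and rule are assumptions):
# a well-fed server builds up an energy reserve, burns it during scarcity,
# and only dies of starvation once the reserve is empty.

class Member:
    def __init__(self, max_reserve=3):
        self.reserve = 0
        self.max_reserve = max_reserve
        self.alive = True

    def tick(self, fed):
        if fed:                          # enough food this interval
            self.reserve = min(self.max_reserve, self.reserve + 1)
        elif self.reserve > 0:           # famine: live off reserves
            self.reserve -= 1
        else:                            # reserves gone: starvation
            self.alive = False

m = Member()
for fed in [True, True, False, False]:   # two fed intervals, two of famine
    m.tick(fed)
print(m.alive, m.reserve)  # True 0

m.tick(False)              # one more famine interval with nothing banked
print(m.alive)             # False
```

Under this rule a brief famine equal in length to the preceding period of plenty costs no lives, which is exactly the short-lived-scarcity behavior described above.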
Another very interesting aspect is the use of evolution-like development to allow increasing not only the number of animals in the herd, but also an individual's overall size, and thereby its food consumption. Cloud providers already offer the building blocks for this behavior. During times of abundance, not only does the herd increase in size; the young animals are bigger and stronger, requiring more resources than their predecessors. Conversely, during famines, the animals with the best chances of survival are the smallest, requiring few resources.
An intriguing similarity between system administration and biology is found in the nature vs. nurture debate. As a new server is booted up, how much of its future behavior is pre-programmed (cloning and templates) and how much is left for the environment to influence (configuration management)? If all the software is pre-installed in a template which is cloned, one gets a simple and predictable species. Conversely, if one rather uses basic templates which download and install the latest packages and configure themselves for the job assigned to them, one gets a form of adolescence, where the server is still growing before participating fully in its environment. Using this analogy, the core elements of this discussion can be explained without too many technical details.
The “convergence” vs “congruence” debate is an analog of the “nature” vs “nurture” debate. One strategy for system management is to create homogeneous services that are congruent to a specific template. However, if we use a biological metaphor, seeing the cluster of webservers as a herd in an ecosystem looking for ways to adapt to circumstances, we are able to use those metaphors to support the adaptation and evolution of systems in a complex ecosystem. These strategies allow us to explore the behavior of large numbers of systems in new ways.
Other than changing their behavior or traits, animals are of course left with the option of moving to a different environment, if they are unable to adapt quickly enough to rapid changes. This phenomenon of relocating species is particularly relevant in the recent debates about climate change.
For servers, especially in cloud computing, we can identify the same process. The physical server presents a clear boundary to the efficiency of its virtual machines. If a herd, residing on one server, is unable to prosper, it will have to re-locate – or migrate – to a different host or cloud provider.
The same would happen if two herds were to compete for computing resources, representing a typical challenge in a multi-tenant computing environment.
Self-management in system administration is and will continue to be an important field of study, due to ever increasing system complexity and cost of administration.
By using biology it was possible to not only simplify complex technical solutions, but also use biology as a method to design new types of automated management processes.
These processes have been translated into working examples and tested to demonstrate their effectiveness in a real-life scenario. An animal's ability to adapt to changes in its environment and to protect itself and the herd is part of what makes animals so inspiring. By using biology it is possible to obtain ideas and discover strengths and weaknesses.
So far the use of biology has been an untapped resource in system administration for the creation of novel automated management processes.
Future work will be to further implement some of the models which have been investigated. Of special interest are different variations of scaling and evolution, but also other evasive maneuvers, such as playing dead and autotomy.
An investigation into how existing configuration management tools could support these types of behavioral traits would also be very interesting, as would developing the design process further by comparing it to more formalized processes and alternative ways of modeling.

Herd behavior – Bio-inspired systems

Herd behavior describes how individuals in a group can act together without planned direction. The term pertains to the behavior of animals in herds, flocks and schools, and to human conduct during activities such as stock market bubbles and crashes, street demonstrations, riots and general strikes,[1] sporting events, religious gatherings, episodes of mob violence and everyday decision-making, judgment and opinion-forming.

Raafat, Chater and Frith proposed an integrated approach to herding, describing two key issues: the mechanisms of transmission of thoughts or behavior between individuals, and the patterns of connections between them.[2] They suggested that bringing together diverse theoretical approaches of herding behavior illuminates the applicability of the concept to many domains, ranging from cognitive neuroscience to economics.

Herd behavior in animals

A group of animals fleeing from a predator shows the nature of herd behavior. In 1971, in the oft cited article “Geometry For The Selfish Herd,” evolutionary biologist W. D. Hamilton asserted that each individual group member reduces the danger to itself by moving as close as possible to the center of the fleeing group. Thus the herd appears as a unit in moving together, but its function emerges from the uncoordinated behavior of self-serving individuals.[4]

Symmetry-breaking in herding behavior

Asymmetric aggregation of animals under panic conditions has been observed in many species, including humans, mice, and ants. Theoretical models have demonstrated symmetry-breaking similar to observations in scientific studies. For example, when panicked individuals are confined to a room with two equal and equidistant exits, a majority will favor one exit while the minority will favor the other.

Possible mechanisms

  • Hamilton’s Selfish herd theory.
  • Byproduct of the communication skill of social animals, or runaway positive feedback.
  • Neighbor copying.

Escape panic characteristics

  • Individuals attempt to move faster than normal.
  • Interactions between individuals become physical.
  • Exits become arched and clogged.
  • Escape is slowed by fallen individuals serving as obstacles.
  • Individuals display a tendency towards mass or copied behavior.
  • Alternative or less used exits are overlooked.[4]

Herd behavior in human societies

The philosophers Søren Kierkegaard and Friedrich Nietzsche were among the first to criticize what they referred to as “the crowd” (Kierkegaard) and “herd morality” and the “herd instinct” (Nietzsche) in human society. Modern psychological and economic research has identified herd behavior in humans to explain the phenomena of large numbers of people acting in the same way at the same time. The British surgeon Wilfred Trotter popularized the “herd behavior” phrase in his book, Instincts of the Herd in Peace and War (1914). In The Theory of the Leisure Class, Thorstein Veblen explained economic behavior in terms of social influences such as “emulation,” where some members of a group mimic other members of higher status. In “The Metropolis and Mental Life” (1903), early sociologist Georg Simmel referred to the “impulse to sociability in man”, and sought to describe “the forms of association by which a mere sum of separate individuals are made into a ‘society’”. Other social scientists explored behaviors related to herding, such as Freud (crowd psychology), Carl Jung (collective unconscious), and Gustave Le Bon (the popular mind). Swarm theory observed in non-human societies is a related concept and is being explored as it occurs in human society.

Stock market bubbles

Large stock market trends often begin and end with periods of frenzied buying (bubbles) or selling (crashes). Many observers cite these episodes as clear examples of herding behavior that is irrational and driven by emotion—greed in the bubbles, fear in the crashes. Individual investors join the crowd of others in a rush to get in or out of the market.

Some followers of the technical analysis school of investing see the herding behavior of investors as an example of extreme market sentiment.[7] The academic study of behavioral finance has identified herding in the collective irrationality of investors, particularly the work of Robert Shiller,[8] and Nobel laureates Vernon L. Smith, Amos Tversky, and Daniel Kahneman.

Hey and Morone (2004) analyzed a model of herd behavior in a market context. Their work is related to at least two important strands of literature. The first of these strands is that on herd behavior in a non-market context. The seminal references are Banerjee (1992) and Bikhchandani, Hirshleifer and Welch (1992), both of which showed that herd behavior may result from private information not publicly shared. More specifically, both of these papers showed that individuals, acting sequentially on the basis of private information and public knowledge about the behavior of others, may end up choosing the socially undesirable option. The second of the strands of literature motivating this paper is that of information aggregation in market contexts. A very early reference is the classic paper by Grossman and Stiglitz (1976) that showed that uninformed traders in a market context can become informed through the price in such a way that private information is aggregated correctly and efficiently. A summary of the progress of this strand of literature can be found in the paper by Plott (2000). Hey and Morone (2004) showed that it is possible to observe herd-type behavior in a market context. Their result is even more interesting since it refers to a market with a well-defined fundamental value. Even if herd behavior might only be observed rarely, this has important consequences for a whole range of real markets – most particularly foreign exchange markets.

One such herdish incident was the price volatility that surrounded the 2007 Uranium bubble, which started with flooding of the Cigar Lake Mine in Saskatchewan, during the year 2006.[9][10][11]

Behavior in crowds

Crowds that gather on behalf of a grievance can involve herding behavior that turns violent, particularly when confronted by an opposing ethnic or racial group. The Los Angeles riots of 1992, New York Draft Riots and Tulsa Race Riot are notorious in U.S. history, but those episodes are dwarfed by the scale of violence and death during the Partition of India. Population exchanges between India and Pakistan brought millions of migrating Hindus and Muslims into proximity; the ensuing violence produced an estimated death toll of between 200,000 and one million. The idea of a “group mind” or “mob behavior” was put forward by the French social psychologists Gabriel Tarde and Gustave Le Bon.

Sporting events can also produce violent episodes of herd behavior. The most violent single riot in history may be the sixth-century Nika riots in Constantinople, precipitated by partisan factions attending the chariot races.[citation needed] The football hooliganism of the 1980s was a well-publicized, latter-day example of sports violence.

During times of mass panic, herd-type behavior can lead to the formation of mobs or large groups of people with destructive intentions. In addition, during such instances, as in natural disasters, behavior such as mass evacuation and clearing the shelves of food and supplies is common.

Several historians also believe that Adolf Hitler used herd behavior and crowd psychology to his advantage, by placing a group of German officers disguised as civilians within a crowd attending one of his speeches. These officers would cheer and clap loudly for Hitler, and the rest of the crowd followed their example, making it appear that the entire crowd completely agreed with Hitler and his views. These speeches would then be broadcast, increasing the effect.

Everyday decision-making

“Benign” herding behaviors may occur frequently in everyday decisions based on learning from the information of others, as when a person on the street decides which of two restaurants to dine in. Suppose that both look appealing, but both are empty because it is early evening; so at random, this person chooses restaurant A. Soon a couple walks down the same street in search of a place to eat. They see that restaurant A has customers while B is empty, and choose A on the assumption that having customers makes it the better choice. And so on with other passersby into the evening, with restaurant A doing more business that night than B. This phenomenon is also referred to as an information cascade.