Betere Dingen

22-11-2007 17:57 | Kauffman network

Click 'read more' to see my little boolean network in an inline frame with an option to comment on it. Or click here to open it in a new window.

Order for Free

This is a simple simulation of a boolean network inspired by the networks described by Stuart Kauffman (At Home in the Universe, chapter 4) to illustrate his ideas on the emergence of order from chaos.

The behavior of the nodes is determined by random 'truth tables'. When the network is generated (press F5 for a new network), the order of the input nodes is randomized, and so is the output state associated with each possible input state. To save load time, only truth tables for up to 8 connections are generated, but this is more than enough. Here is an example truth table for a node with 2 input nodes:

input 1:   input 2:   output:
   0          0          0
   0          1          1
   1          0          1
   1          1          0

The table presented above corresponds to a logical XOR gate: the node is only activated when exactly one of its input nodes is active, not when both are. Using such tables, every combination of input-node states is coupled to a randomly determined output state. There is one exception: the first row of each truth table always sets the output to 0. This means that if there is no input, there can never be output.
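The table-generation scheme described above can be sketched in a few lines of Python (a sketch of the idea, not the page's actual script, which runs in the browser):

```python
import itertools
import random

def random_truth_table(k):
    """Build a random truth table for a node with k boolean inputs.

    Every combination of input states maps to a random output bit,
    except the all-zero input, which is forced to 0 so that a node
    with no active inputs can never become active.
    """
    table = {}
    for inputs in itertools.product((0, 1), repeat=k):
        table[inputs] = random.randint(0, 1)
    table[(0,) * k] = 0  # no input -> no output
    return table

# The XOR table shown above is one possible random outcome for k = 2:
xor_table = {(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0}
```

With k inputs the table has 2^k rows, which is why capping the number of connections at 8 keeps generation cheap: 2^8 = 256 rows at most.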

Stuart Kauffman uses his simulations to argue that life would emerge from the chaos of the universe, and that it would emerge in a certain way. The core of his reasoning is that autocatalytic chemical systems are very likely to exist. Nodes in the simulation above can be seen as chemicals, and connections as possible reactions. The output is a new chemical substance that is then available for new reactions. Any cycles of such autocatalytic chains of reactions can form the basis of life, and they are the stuff evolution can act upon.

In this simulation you can see that a given network has attractor states: with the right number of connections per node, any random activation pattern will result in roughly the same cyclic behavior, given enough time. This is only possible because the network itself never changes. The same holds for the rules of physics and chemistry, so that within a fixed set of chemical rules the same kinds of autocatalytic reactions are always possible.
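Because the update rule is deterministic and the state space is finite, any trajectory must eventually revisit a state, at which point it repeats forever. Finding that attractor can be sketched like this (a generic sketch; `update` stands for any synchronous boolean-network step function):

```python
def find_attractor(update, state, max_steps=10000):
    """Iterate a deterministic update rule until the network revisits
    a state, i.e. falls onto its attractor cycle.

    `update` maps a state (a tuple of 0/1 values) to the next state.
    Returns (transient_length, cycle_length).
    """
    seen = {state: 0}
    for step in range(1, max_steps + 1):
        state = update(state)
        if state in seen:
            return seen[state], step - seen[state]
        seen[state] = step
    raise RuntimeError("no cycle found within max_steps")

# A toy 3-node 'rotate' rule falls straight onto a length-3 cycle:
rotate = lambda s: s[1:] + s[:1]
transient, cycle = find_attractor(rotate, (1, 0, 0))  # -> (0, 3)
```

Different random starting patterns can have different transients but, in an ordered network, tend to end up on the same few cycles.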

Any complex network may also be seen as a metaphor for the brain. The brain, however, is not constructed as a random system, so the initial configuration of the network may give us a tendency to behave in a certain way. Of course, the brain is also vastly larger than this small simulation, and many more aspects determine the activity of a neuron besides the level of excitation of its neighbors. Nevertheless, these kinds of simulations can be an inspiration (perhaps even a tool) in thinking about brains and what they do. Or, for that matter, about what they do not do: in this simulation you can see that as you increase the connectivity, the networks eventually display less order. Similarly, a brain will not function any better if you just add some wires to it. Adding random neurons to your brain does not make you smarter.

The Edge of Chaos

Networks with more connectivity have different kinds of attractor states than networks with less connectivity. Fractional connectivity can be entered in the form, so you can toy with this. The precise fraction is not used; it is approximated by the script. If, for example, 1.7 input nodes per node is specified, the algorithm tries to give 70 percent of the nodes two input nodes and the remaining 30 percent one input node. On average the networks generated will then indeed use 1.7 input nodes per node. Varying the number of connections like this allows you to look for the connectivity that produces a network with stability without rigidity. In this simulation the maximum is 8 and the minimum is 0.
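The approximation described above amounts to a weighted coin flip per node. A minimal sketch of the idea (the function name and exact bounds are assumptions, not the site's actual code):

```python
import random

def assign_input_counts(n_nodes, k, k_min=0, k_max=8):
    """Approximate a fractional mean connectivity k.

    With k = 1.7, each node gets 2 inputs with probability 0.7 and
    1 input with probability 0.3, so the average works out to 1.7.
    k is clamped to the simulation's range [k_min, k_max].
    """
    k = max(k_min, min(k_max, k))
    low = int(k)       # e.g. 1 for k = 1.7
    frac = k - low     # e.g. 0.7, the chance of getting low + 1 inputs
    return [low + 1 if random.random() < frac else low
            for _ in range(n_nodes)]
```

Over many nodes the mean connectivity converges on the requested fraction, which is all the script needs.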

Stuart Kauffman has done a lot of math with these kinds of networks, and so have others. B. Derrida and Y. Pomeau, cited below, have written a short article on some of this math; the rest I copied directly from At Home in the Universe and I will not bother proving it here.

Networks with 1 input node or fewer per node exhibit behavior that is stable to the point of boredom. Networks with many connections per node have very many attractor states, all with incredibly long cycles. The behavior of such networks is unpredictable in practice, and that can never be the basis of homeostasis or self-organization. Random boolean networks appear to reach an optimum of stability, without plunging into chaos, when each node has about 2 input nodes. With this value it doesn't matter much what the initial state is: the network will usually settle into the same cyclic behavior, which still involves a lot of nodes and has a cycle length roughly equal to the square root of the number of nodes.

For fully random boolean networks, the number of network states that make up a cycle is the square root of 2 raised to the number of nodes, i.e. sqrt(2^N). However, the networks generated here are not completely random: a node only uses its eight neighbors for input. This means that reciprocal relationships between nodes are far more likely than in truly random boolean networks, and a cycle in this network should be shorter. What interests me is the question whether such laws apply to other kinds of networks as well. The small and simple simulation on this page can have more states than I could write down in a lifetime, and yet it displays stunning order. Any human brain is incomprehensibly complex compared with the network above, but it still manages to organize itself. By what laws does the brain do this?
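To get a feel for how enormously these two scalings differ (these are back-of-the-envelope numbers, not measurements from the simulation):

```python
import math

# Cycle-length scaling for random boolean networks with N nodes:
#   fully random wiring:   ~ sqrt(2^N)  (explodes with network size)
#   about 2 inputs/node:   ~ sqrt(N)    (stays tiny)
for n in (16, 64, 100):
    chaotic = math.sqrt(2 ** n)
    ordered = math.sqrt(n)
    print(f"N={n:4d}  sqrt(2^N)={chaotic:.3g}  sqrt(N)={ordered:.3g}")
```

Already at 64 nodes, sqrt(2^N) is over four billion states per cycle, while sqrt(N) is eight: the difference between chaos and order in one formula.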

Here you can see some example runs using the same network with increasing connectivity. What strikes me most is that roughly the same nodes seem to be at the 'center' of activity at all levels of connectivity, even though the truth tables are probably very different.

If you don't want to toy around, here is a YouTube movie of an earlier version, demonstrating the effect.
(It was allowed to generate output with no input in that version.)

Interesting links:
