Many scientists have turned away from mass media because it tends to oversimplify and misrepresent science. Cooperating with journalists is frowned upon in some labs and may even be a bad career move. This attitude towards mass media of course makes good scientific sources scarce for journalists, which leads to further misrepresentation and oversimplification of good science. Cornelia Dean is a science journalist who has noticed that scientists have a peculiar way of communicating with journalists. Communicating results is an inherent part of scientific discourse, and is done in ways suited to science. There are even differences between fields; the worth assigned to conference papers, for example, varies. However, all scientists (should) have one thing in common: they meticulously try to say true things. It is generally this meticulousness that is the 'problem'. It is the strength of science, but there is no room for it in mass media. There should be some room for truth in mass media though, so some middle ground must be available.
One of the problems I see is that scientists are expected to be just as media-savvy as politicians and other public figures, and to be able to spout cool one-liners summarizing their research. This is never going to be the case. Scientists don't have a political agenda and do not expect trick questions, nor do they get media training. Cornelia Dean has some sound advice that enables scientists to partially alleviate this problem. Most of it comes down to being well prepared, which is something scientists should be good at. So there is hope. ;)
The subtitle of this book is "a scientist's guide to talking to the public". Most of the advice is, however, aimed at talking to journalists. Journalists also have some peculiar ways of communicating. If they quote you, they assume that any errors will be assigned to them as authors of the piece, though most people will instead assign the error to the person quoted. That is of course the worst thing that can happen to a scientist. Nevertheless, journalists will usually not allow you to check a piece for factual errors, and that is a major turn-off for most scientists. Perhaps something can change on the side of journalists as well?
This book is rather short, and the last few chapters deal with the situation in the USA specifically. It is a quick read (I read it on train rides to and from Belgium) and I found it helpful, but others may feel it contains nothing but common sense.
22-11-2007 17:57

Kauffman network
Click 'read more' to see my little boolean network in an inline frame with an option to comment on it. Or click here to open it in a new window.
Order for Free
This is a simple simulation of a boolean network inspired by the networks described by Stuart Kauffman (At Home in the Universe, chapter 4) to illustrate his ideas on the emergence of order from chaos.
The behavior of the nodes is determined by random 'truthtables'. When the network is generated (press F5 for a new network), the order of the input nodes is randomized, and the output state for each possible input state is also randomized. To save load time, only truthtables for up to 8 connections are generated, but this is more than enough. Here is an example truthtable for a node with 2 input nodes:

input 1 | input 2 | output
   0    |    0    |   0
   0    |    1    |   1
   1    |    0    |   1
   1    |    1    |   0
The table presented above corresponds to a logical XOR gate: the node is only activated when either input node is active, but not when they are both active. All combinations of states of the input nodes are coupled to randomly determined output states using such tables. There is one exception: the first row of each truthtable always sets the output to 0. This means that if there is no input, there can never be output.
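The scheme above can be sketched in a few lines of Python. This is my own sketch of the idea, not the page's actual script (which runs in the browser), and the function names are mine:

```python
import random

def random_truth_table(k):
    """Random truth table for a node with k input nodes: one output
    bit per possible input state, with the all-zeros row forced to 0
    so that no input can never produce output."""
    table = [random.randint(0, 1) for _ in range(2 ** k)]
    table[0] = 0  # first row: all inputs off -> output stays off
    return table

def evaluate(table, inputs):
    """Look up the output bit for a tuple of input bits, e.g. (1, 0)."""
    index = 0
    for bit in inputs:
        index = (index << 1) | bit
    return table[index]

# The XOR example from the table above, written out row by row:
xor_table = [0, 1, 1, 0]  # rows 00, 01, 10, 11
```

Note that XOR already happens to satisfy the all-zeros rule, so forcing row 0 to 0 leaves it unchanged.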
Stuart Kauffman uses his simulations to convince people that life would emerge from the chaos of the universe and that it would emerge in a certain way. The centre of his reasoning is that autocatalytic chemical systems are very likely to exist. Nodes in the simulation above can be seen as chemicals and connections as possible reactions. The output is a new chemical substance that is then available for new reactions. If you see cycles of such autocatalytic chains or loops of reactions, these can form the basis of life; it is the stuff evolution can act upon.
In this simulation you can see that a given network has attractor states: with the right number of connections per node, any random activation pattern will result in roughly the same cyclic behavior given enough time. This is only possible because the network itself never changes. The same holds for the rules of physics and chemistry, so that within a given set of chemical rules the same kinds of autocatalytic reactions are always possible.
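A minimal way to see this settling behavior in code is to run a small network until a state repeats: since the network is deterministic and finite, a repeat must occur, and it closes the attractor cycle. This is my own Python sketch, not the page's script; the network size and wiring below are illustrative:

```python
import random

random.seed(1)

N, K = 12, 2  # a tiny network: 12 nodes, 2 inputs each (illustrative sizes)

# Wiring: each node reads two randomly chosen nodes.
inputs = [[random.randrange(N) for _ in range(K)] for _ in range(N)]

# Truth tables with the all-zeros row forced to 0, as described above.
tables = []
for _ in range(N):
    t = [random.randint(0, 1) for _ in range(2 ** K)]
    t[0] = 0
    tables.append(t)

def step(state):
    """One synchronous tick: every node looks up its next bit from the
    current bits of its input nodes."""
    new = []
    for node in range(N):
        index = 0
        for src in inputs[node]:
            index = (index << 1) | state[src]
        new.append(tables[node][index])
    return tuple(new)

def find_attractor(state):
    """Run until a state repeats. Returns (transient length, cycle length):
    the number of steps before the cycle is entered, and its period."""
    seen = {state: 0}
    t = 0
    while True:
        state = step(state)
        t += 1
        if state in seen:
            return seen[state], t - seen[state]
        seen[state] = t

start = tuple(random.randint(0, 1) for _ in range(N))
transient, cycle = find_attractor(start)
```

Starting from different random states of the same network and comparing the cycles found is exactly the experiment the page invites you to do by hand.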
Any complex network may also be seen as a metaphor for the brain. The brain is, however, not constructed as a random system, so the initial configuration of the network may give us a tendency to behave in a certain way. Of course, the brain is also vastly larger than this small simulation, and many more aspects determine the activity of a neuron besides simply the excitation level of its neighbors. Nevertheless, these kinds of simulations can be an inspiration (perhaps even a tool) in thinking about brains and what they do. Or, for that matter, about what they do not do: in this simulation you can see that as you increase the connectivity, the networks eventually display less order. Similarly, a brain will not function any better if you just add some wires to it. Adding random neurons to your brain does not make you smarter.
The Edge of Chaos
Networks with more connectivity have different kinds of attractor states than networks with less connectivity. Fractions of connectivity can be entered in the form, so you can toy with this. The precise fraction is not used but is approximated by the script: if a number of input nodes of 1.7 is specified, for example, the algorithm tries to have 70 percent of the nodes use two input nodes while the remaining 30 percent use one input node. On average the generated networks will then indeed use 1.7 input nodes per node. Varying the number of connections like this lets you look for the connectivity that produces a network with stability without rigidity. In this simulation the maximum is 8 and the minimum is 0.
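The approximation could look something like the following. This is a guess at the idea in Python, not the page's actual algorithm:

```python
def input_counts(n_nodes, k):
    """Approximate a fractional mean connectivity k by mixing two
    integer counts: with k = 1.7 and 10 nodes, roughly 7 nodes get
    2 inputs and the other 3 get 1 input.  (The page clamps k to
    the range 0..8.)"""
    lo = int(k)
    n_hi = round((k - lo) * n_nodes)  # nodes that get lo + 1 inputs
    return [lo + 1] * n_hi + [lo] * (n_nodes - n_hi)

counts = input_counts(10, 1.7)
# mean connectivity over the network: (7*2 + 3*1) / 10 = 1.7
mean_k = sum(counts) / len(counts)
```
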
Stuart Kauffman has done a lot of math with these kinds of networks, and so have others. B. Derrida and Y. Pomeau, cited below, have written a short article on some of this math; the rest I copied directly from At Home in the Universe and I will not bother with proving it here.
Networks with 1 input node or fewer per node exhibit behavior that is stable to the point of boredom. Networks with many connections per node have very many attractor states, all with incredibly long cycles. The behavior of such networks is unpredictable in practice, and that can never be the basis of homeostasis or self-organization. Random boolean networks appear to exhibit an optimum of stability without plunging into chaos when each node has about 2 input nodes. With this value it doesn't matter much what the initial state is: the network will usually settle into the same cyclic behavior, one that still involves many nodes and has a cycle length roughly equal to the square root of the number of nodes.
For fully random boolean networks, the number of network states that comprise a cycle is roughly the square root of 2 raised to the power of the number of nodes. However, the networks generated here are not completely random: a node only uses its eight neighbors for input. This means that reciprocal relationships between nodes are far more likely than in truly random boolean networks, and a cycle in this network should be shorter. What interests me is whether such laws apply to other kinds of networks as well. The small, simple simulation on this page can have more states than I could write down in a lifetime, and yet it displays stunning order. Any human brain is incomprehensibly complex compared with the network above, but it still manages to organize itself. By what laws does the brain do this?
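To put numbers on the contrast between the two scaling laws above, take a hypothetical 100-node network (much smaller than the grid on this page, but the contrast holds):

```python
import math

n = 100  # hypothetical node count

states = 2 ** n  # total possible network states: about 1.3e30

# Fully random network: cycle length scales like sqrt(2**n).
random_cycle = math.isqrt(2 ** n)  # 2**50, about 1.1e15 steps

# K = 2 network at the ordered regime: cycle length scales like sqrt(n).
k2_cycle = math.isqrt(n)  # just 10 steps
```

A random network would wander through a million-billion-step cycle, while the K = 2 network settles into a cycle of about ten states: that collapse is the "order for free".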
Here you can see some example runs using the same network with increasing connectivity. What strikes me most is that roughly the same nodes seem to be at the 'center' of activity at all levels of connectivity, even though the truthtables are probably very different.
If you don't want to toy around, here is a YouTube movie of an earlier version, demonstrating the effect. (It was allowed to generate output with no input in that version.)