A simpler busy beaver triplication. Starting from a busy beaver, the machine is triplicated while the head (or heads) of the busy beaver(s) compute, in the random version of the graph rewriting algorithm.
Busy beavers in the random church. During its activity, one computer is multiplied into 3 computers which work at the same time. This feat is achieved without any higher control, and no conflicts ever appear. All the computers work asynchronously (here mimicked by randomness) and eventually they come to read and write on the same tape.
Molecular computers. It is possible to design molecules which compute by chemical reactions. Mind that each molecule computes by itself: this is not about solutions where chemical reactions give the result of a simple computation by means of the number of molecules of a species. EACH molecule computes here.
The shuffle trick. Both the FOE and FO fanout nodes are essential for self-multiplication. The reason for the existence of two fanout nodes lies in the shuffle trick.
Fredkin gate. The Fredkin gate, a universal gate for reversible computing, has three boolean inputs and three boolean outputs. Here you see an example of a computation of a Fredkin gate, where the outputs are big circles coloured green, purple and red, and the interesting inputs are boolean terms, as translated from lambda calculus to chemlambda, with the termination nodes coloured green and red. It is interesting to see how the small green and red circles arrive near the big green and red ones, and how the Fredkin gate is generally destroyed after it achieves its goal.
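The boolean behaviour of the gate (independent of the chemlambda encoding) can be sketched in a few lines of Python; the function name `fredkin` is mine:

```python
def fredkin(c, a, b):
    """Fredkin (controlled-swap) gate: if the control bit c is 1,
    the two data bits a and b are swapped; otherwise they pass through."""
    return (c, b, a) if c else (c, a, b)

# The gate is reversible: applying it twice is the identity.
# It is also conservative: the number of 1s is preserved.
for c in (0, 1):
    for a in (0, 1):
        for b in (0, 1):
            assert fredkin(*fredkin(c, a, b)) == (c, a, b)
            assert sum(fredkin(c, a, b)) == c + a + b
```

Reversibility (the gate is its own inverse) is exactly what makes it suitable for reversible computing.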
The working factorial. By a modification of this example, I now have a working factorial computation. (I.e. it works every time.) Recall that I took a lambda term for the factorial from the lambda calculus tutorial by Mayer Goldberg from little-lisper.org, par. 57, page 14. Then I modified it and got a molecule which computes the factorial in about 20% of the cases. Now, in this working factorial example, I made two supplementary modifications. The first consists in starting from a lambda term which uses the multiplication in the form L mnf.m(nf) instead of the one used in the tutorial. Secondly, the result of the computation (i.e. the "value" of the factorial) is applied to a SUCC (successor), and the result is then applied to c0, which generates the correct result.
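Both modifications can be checked directly on Church numerals; here is a Python sketch (the names `MUL`, `SUCC`, `c0`, `to_int` are my own choices, not taken from the demo scripts):

```python
# Church numerals: the numeral n applies a function f exactly n times to x.
c0   = lambda f: lambda x: x                      # zero
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))   # successor
MUL  = lambda m: lambda n: lambda f: m(n(f))      # multiplication, L mnf.m(nf)

def to_int(n):
    """Decode a Church numeral into a Python integer."""
    return n(lambda k: k + 1)(0)

c2 = SUCC(SUCC(c0))
c3 = SUCC(c2)
assert to_int(MUL(c2)(c3)) == 6

# The second modification: applying a numeral N to SUCC and then to c0
# rebuilds N by repeated successors, normalizing the result.
assert to_int(c3(SUCC)(c0)) == 3
```

The point of the `N(SUCC)(c0)` trick is that the raw result of the reduction is forced into the standard numeral form.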
The factorial and the little lisper. There are several demos which explore the computation of the factorial, with surprising lessons to learn. The first one is a computation of a function with two inputs and two outputs. Then there is an example of a successful computation of the factorial of 3, based on a molecule which is a modification of the molecule obtained from the lambda term proposed in the lambda calculus tutorial by Mayer Goldberg from little-lisper.org, par. 57, page 14. But that molecule computes successfully in only 20% of the cases (recall that I use a random reduction algorithm), as shown in this example. It is nevertheless partly successful, unlike the molecule which corresponds exactly to the lambda term from the little lisper tutorial (not that there is something wrong with that term, but in chemlambda it does not work as expected). The question is why it does not work well, when in the case of the Ackermann function the term proposed in the tutorial works superbly. The answer seems to be that in chemlambda the molecules which are not needed do not disappear; instead, as in nature, they continue to interact with the useful parts. There is no problem with the recurrence based on a function with two inputs and two outputs, even in a random environment: values are not passed from here to there; instead there is a signal transduction phenomenon which builds graphs here and there. The problem is the interaction between trashed parts of the molecule and useful ones. In order to prove this I replaced the factorial function molecule (the one successful in 20% of the cases) by a relative of it, and this time the computation is always successful (or have I been lucky every time?). In this new example no trashing is used.
Pentapod. This creature gets the order to grow 5 limbs. Quadpod. This one gets the order to grow 4 limbs. They do just that! But when the creature gets the order to grow 3 limbs, it doesn't. Instead it grows a tubular body with only 1 limb: the Unipod. I don't understand exactly why.
Molecules are not flowcharts. In this demo I take the molecule for the number 4 and draw only the edges which correspond to the syntactic tree of the lambda term which represents it in the Church encoding. Then I add a fanout node FO and the molecule duplicates. But it is visible that none of the nodes (atoms) are gates, nor are the molecules flowcharts, as is usual in other approaches.
Meaningless. The 28 quine from the example below, this time with the same treatment as in the demo about flowcharts, namely only the edges which connect A (application) and L (lambda abstraction) nodes are highlighted. It is meaningless to look for syntactic trees in the 28 quine, because this is not how chemlambda works, but it is funny and, again, weirdly biological.
Bigger quine. This creature is a 28 quine, described for the first time here. That means it is a molecule with 28 atoms which would remain the same after each chemical reduction step, in a deterministic reduction algorithm. In this demo you see it in a random environment.
Ackermann(3,2). This is a new computation of the Ackermann function. See below for an older example and explanations. Not for the faint of heart, because there are thousands of nodes coming and going. That is why the edges are darkened and the nodes are smaller. When you look at it without knowing what it will eventually give, i.e. Ack(3,2)=29, you can see only the global, overall structure of a cell or organelle busy with a mysterious, complex activity. But there is nothing global happening there; all graph rewrites performed by the algorithm (random reduction) are purely local, that is, they are completely indifferent to any global structure. There is no director of this elaborate cast. I guess that this is what makes it life-like.
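For reference, the value Ack(3,2)=29 mentioned above can be checked against the standard recursive definition; this is a plain Python sketch, unrelated to the graph rewriting itself:

```python
def ack(m, n):
    """Ackermann-Péter function: total and computable,
    but not primitive recursive."""
    if m == 0:
        return n + 1
    if n == 0:
        return ack(m - 1, 1)
    return ack(m - 1, ack(m, n - 1))

assert ack(3, 2) == 29   # the value the chemlambda demo eventually gives
```

The explosive growth of the recursion tree is the conventional counterpart of the thousands of nodes coming and going in the demo.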
Rings. Duplication of a ring molecule. The duplication happens differently from the FO tree one, mainly because the ring molecule does not correspond to a lambda term. The algorithm is nevertheless the same in all examples.
Sparks. A 20-node creature which I previously qualified as a quine (see the definition of a chemlambda quine) struggles to survive in a random environment (random reduction method). In the random environment the molecule is no longer a quine (it no longer preserves its shape); instead it becomes bigger or smaller. I hypothesize that quines are good models of living creatures. In the deterministic reduction method such a creature would live forever, because its inner metabolism is perfectly equilibrated (it keeps its shape after a reduction step). In the random reduction method the quine has a life.
The predecessor. This is a molecule constructed from the lambda term of the predecessor function. It reduces to a number (as seen in chemlambda via the Church encoding). More interestingly, if you examine the reduction process more closely (here the random reduction method), you'll discover inside the molecule a smaller one which I called a "walker". It keeps its shape and "walks" on a trail of nodes. That led to the discovery of chemlambda quines. See the series of posts about that in my open notebook.
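The classical lambda term for the predecessor on Church numerals can be sketched in Python; this is the standard textbook encoding, not necessarily the exact term the molecule was built from:

```python
# Church numerals and the classical predecessor term
# PRED = L n f x. n (L g h. h (g f)) (L u. x) (L u. u)
c0   = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
PRED = lambda n: lambda f: lambda x: \
    n(lambda g: lambda h: h(g(f)))(lambda u: x)(lambda u: u)

def to_int(n):
    """Decode a Church numeral into a Python integer."""
    return n(lambda k: k + 1)(0)

c3 = SUCC(SUCC(SUCC(c0)))
assert to_int(PRED(c3)) == 2
assert to_int(PRED(c0)) == 0   # the predecessor of zero stays zero
```

The term threads a pair-like structure through the numeral, applying f one time fewer than n does; in chemlambda this threading is what becomes visible as the "walker" moving along a trail of nodes.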
Ackermann(2,2). The Ackermann function is a famous computable but not primitive recursive function. We know that chemlambda is Turing universal (proved for the first time here), so it is nice to see it can compute the Ackermann function. I took Ackermann(2,2) as an example because numbers are represented via their Church encoding, therefore big numbers are really big graphs in any unary representation. It is fascinating to see how Ackermann(2,2) is computed in chemlambda because it does so many things in parallel. Another interesting try would be Ackermann(3,2); see this post from my open notebook about it. With the programs I have, the visualization (not the computation) is a bit punishing for the computer, because there are moments when hundreds of nodes and links appear and disappear, but probably a better programmer could easily tweak the scripts to behave better. The take home message is this: you can compute without any evaluation, without any variable passing and without any overall control. This supports the hypothesis that chemlambda computes in the way living organisms compute.
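The point that everything proceeds by application alone, with no evaluation step and no variable passing, can be illustrated with a Church-encoded Ackermann function; this is a standard encoding sketched in Python (the name `ACK` and the term `L m. m (L g n. n g (g c1)) SUCC` are a textbook choice, not necessarily the molecule used in the demo):

```python
# Church numerals
c0   = lambda f: lambda x: x
SUCC = lambda n: lambda f: lambda x: f(n(f)(x))
c1 = SUCC(c0)
c2 = SUCC(c1)

# Ackermann on Church numerals: ACK = L m. m (L g n. n g (g c1)) SUCC
ACK = lambda m: m(lambda g: lambda n: n(g)(g(c1)))(SUCC)

def to_int(n):
    """Decode a Church numeral into a Python integer."""
    return n(lambda k: k + 1)(0)

# Ack(2,2) = 7, obtained purely by function application
assert to_int(ACK(c2)(c2)) == 7
```

Note there is no explicit recursion and no case analysis in `ACK`: the numeral m itself iterates the higher-order step, which is close in spirit to how the chemlambda molecule computes.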
YI multiplied. This is a complex reduction in chemlambda. It is a proof of concept that chemlambda can easily handle self-multiplication and computation (reductions) at the same time. In this example we start from the fact that one can write the Y combinator as an expression in the S, K, I combinators: Y = S (K (S I I)) (S (S (K S) K) (K (S I I))). In a previous experiment, with the older scripts (see this video), I transformed that expression into a molecule and then let it reduce by itself. It worked great. Now, in this example, I use the said expression of Y and apply it to the identity combinator, then I add a FO node on top which will effect the self-reproduction of the big expression. What will happen? We have a bunch of reductions (from the long expression to YI) in parallel with the self-reproduction of the whole thing. You will see that everything works well, and that self-reproduction happens in parallel with reduction, even though all this is done in a model of computation which uses a random reduction strategy.
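The SKI expression of Y above can be checked with a tiny symbolic rewriter, a sketch of my own (leftmost-outermost reduction, with an application (f a) represented as the Python pair (f, a)): applied to a free variable g, the expression reduces to a term with head g, which is the fixed-point property Y g = g (Y g).

```python
def step(t):
    """One leftmost-outermost SKI reduction step; returns (term, changed)."""
    if isinstance(t, tuple):
        f, a = t
        if f == 'I':                                 # I x -> x
            return a, True
        if isinstance(f, tuple):
            g, b = f
            if g == 'K':                             # K x y -> x
                return b, True
            if isinstance(g, tuple) and g[0] == 'S':  # S x y z -> x z (y z)
                return ((g[1], a), (b, a)), True
        nf, ch = step(f)                             # otherwise reduce inside,
        if ch:                                       # leftmost subterm first
            return (nf, a), True
        na, ch = step(a)
        if ch:
            return (f, na), True
    return t, False

def app(*ts):
    """Left-associated application: app(x, y, z) = ((x, y), z)."""
    t = ts[0]
    for u in ts[1:]:
        t = (t, u)
    return t

SII = app('S', 'I', 'I')
Y = app('S', app('K', SII), app('S', app('S', app('K', 'S'), 'K'), app('K', SII)))

t = app(Y, 'g')
for _ in range(20):                  # reduce until the head becomes g
    if isinstance(t, tuple) and t[0] == 'g':
        break
    t, _ = step(t)
assert isinstance(t, tuple) and t[0] == 'g'   # Y g has reduced to g (...)
```

The loop stops at the head variable because Y g has no normal form: the argument of g keeps unfolding to g (g (...)), which is exactly why the molecule keeps producing reductions while it self-reproduces.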
 (journal)(arxiv) - M. Buliga, Graphic lambda calculus. Complex Systems 22, 4 (2013), 311-360
 (doi)(arxiv) - M. Buliga, Chemical concrete machine (2013).