Origins and Science

arakish

@ Sheldon

Bwaaaaaaa... haaaaaaaa... haaaaaaa... Good one... Bwaaaaaaa... haaaaaaaa... haaaaaaa...

***tree slowly stumbling due to laughing so hard back into forest***

rmfr

Cognostic

If a tree laughs in the forest and no one is around, does it smell like Tin Man's dirty socks?

Tin-Man

@Cog Re: "If a tree laughs in the forest and no one is around, does it smell like Tin Man's dirty socks?"

Only if the tree farts while laughing.

Cognostic

I was just making sure. For a moment I thought it was me.

xenoview

@luke
I'm a little surprised you haven't tried to say god created life.

Randomhero1982

Let's be fair though, it has been a pleasure to have been front and centre to experience theistic thinking in all its glory!

Scientific miracles, earth being an isolated system, morality being built in....

I feel truly privileged!

We need David Attenborough to narrate their postings: "And here approaches the lesser spotted fuckwit, in all its splendour! Watch as it blindly steps into the rapids and over the waterfall, like an absolute twat!"

Tin-Man

**Note** (Moved from middle of thread to here for better viewing pleasure.)

Re: Luke - "...yes I do, as do I believe is the human body, and the cell,"

Huh?... The Earth is a closed system? The human body is a closed system??? A cell is a closed system?... Uhhhh.... Wow... *frownie face*... Guess my junior high school science teacher lied to me all those years ago. Damn... *puzzled look on face*... So, wait... Where does all that energy from the sun go now? And all that material from meteors and such? Did somebody build a giant shield around the Earth or something? Oh, and what happens to all that food I eat and the fluids I drink? And what are those tiny drops of fluid that form on my exterior when I get hot? And if my body truly is a closed system, then I REALLY want to know what that brown mushy stinky stuff is that comes out of my ass a couple of times each day... *worried look*... Am I defective?... Somebody?... Anybody?... Please help... Getting scared...

LogicFTW

As a tin man, you should be scared. Very scared.

I do not wish to alarm you, but you should not have brown mushy stinky stuff coming out of your metal ass a couple of times every day.

If it were rust-colored I would also be scared for you; you know well what happens when you rust out. Some girl with red shoes comes along with a literal straw man lacking a brain (but he can talk!) and rudely interrupts your nice nap, takes you on a long dangerous walk with flying monkeys and stuff, and eventually leads you to a wizard who is really just a man behind a curtain making ridiculous demands, when apparently he has hearts lying around to give at a moment's notice!

Tin-Man

@Logic

Well, gee. Thanks. So much for my being able to go to sleep tonight... *biting fingernails nervously*... I have GOT to remember to make an appointment with my mechanic tomorrow... *writing note on calendar*...

Randomhero1982

@Tin-Man

Let's hope you don't require another WD40 suppository again!

Tin-Man

@Random Re: "Let's hope you don't require another WD40 suppository again!"

Well, if I do, it is going to be at the local self-serve car wash this time. NO WAY I want to have to clean my bathroom like what happened with that last incident.... *shaking head resolutely*...

Edit to add: Hmmm... Maybe I should start saving plenty of quarters...

Calilasseia

Re: Luke - "Put through the atheistic funnel this makes no sense. There was a first life, which was created from nothing, or rather, the chemicals that composed the primordial soup. Any way you put it scientifically, there must be some reverse of the second law of thermodynamics/miracle in order for the current reality that we live in to exist."

This is, not to put too fine a point on it, garbage.

What part of "thousands of chemical reactions take place spontaneously whilst obeying the second law of thermodynamics" did you fail to learn in basic chemistry classes?

Re: Luke - "I pointed out how it does."

No you didn't, you merely blindly asserted this. And in the process, demonstrated that you know fuck all about how the second law of thermodynamics actually operates. Time for this:

Creationist Canards About The Laws Of Thermodynamics, Versus The Actual Science

The individual responsible for our current understanding of the laws of thermodynamics was one Rudolf Clausius. When Clausius formulated his relations, he explicitly stated that the Second Law of Thermodynamics applies to different classes of system, but in different ways. He listed the three classes of system as follows:

[1] Isolated systems are systems that engage in no exchange of matter or energy with their surroundings. Such systems are therefore reliant upon the internal energy that they already possess. However, isolated systems constitute an idealisation that is almost never achieved in practice, and are mostly useful as a starting point for developing thermodynamic theory prior to extending it to the other classes of system.

[2] Closed systems are systems that engage in exchange of energy with the surroundings, but no exchange of matter. A good example of a closed system would be a solar panel, which does not exchange matter with its surroundings, but which, when illuminated, is a net recipient of energy in the form of visible light, which it then converts to electricity, which we can use.

[3] Open systems are systems that engage in exchange of both energy and matter with the surroundings. Living organisms plainly fall into this latter category.

When Rudolf Clausius erected his original statement of the Second Law of Thermodynamics, he stated it thus:

In an isolated system, a process can only occur if it increases the total entropy of the system.

The trouble with the 2LT is that while it applies to all of these classes of system, the exact manner in which it applies differs between them. Clausius' original statement of the 2LT for an isolated system does not apply to the other classes of system in anything like the same manner. Trouble is, creationists alight upon the statement about entropy increasing, which Clausius erected to describe isolated systems, and assume it applies to all systems in the same manner, when Clausius himself plainly stated that it doesn't.

In a non-isolated system, if there is an energy input, that energy input can be harnessed to perform useful work, such as locally decreasing the entropy of entities within the system in exchange for a greater increase in entropy beyond them. As long as there exists inhomogeneity within the universe, i.e., there exist regions of differing conditions with respect to material content, energy flux, etc., any net recipient of energy from an outside source can harness that energy to perform useful work, including work that results in a temporary local decrease of entropy. The Earth constitutes such a system, because it is engaging in both matter and energy transfer with its surroundings, and is in fact a large net recipient of energy from those surroundings. See that yellow thing in the sky? It's called The Sun. It's a vast nuclear fusion reactor 866,000 miles across that is irradiating the Earth with massive amounts of energy as I type this. Energy that can be harnessed to perform useful work such as constructing living organisms.
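For anyone who wants rough numbers rather than rhetoric, here's a back-of-the-envelope sketch. The power figure and temperatures are standard textbook values, and the script is purely illustrative, not drawn from any of the papers cited below:

```python
# Back-of-the-envelope entropy budget for sunlight striking the Earth.
# Assumed round figures (standard textbook values, not from the cited papers):
#   solar power intercepted by the Earth ~ 1.7e17 W,
#   sunlight emitted at T_sun ~ 5800 K, re-radiated at T_earth ~ 290 K.

P = 1.7e17        # W, solar power absorbed by the Earth
T_sun = 5800.0    # K, effective temperature of the solar photosphere
T_earth = 290.0   # K, rough mean surface temperature of the Earth

# Entropy carried in with sunlight and carried out with thermal re-radiation,
# using dS/dt = P / T for each heat flow (units: J/K per second).
dS_in = P / T_sun
dS_out = P / T_earth

net = dS_out - dS_in  # net entropy production; positive, as the 2LT requires
print(f"entropy in : {dS_in:.3e} J/K per second")
print(f"entropy out: {dS_out:.3e} J/K per second")
print(f"net production: {net:.3e} J/K per second")
```

The net figure comes out around 5 x 10^14 J/K per second: an enormous entropy budget out of which local decreases, such as assembling living organisms, can be paid for many times over.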

Incidentally, as a tangential diversion, the classical formulation has again required revision to take account of more recent developments with respect to observed phenomena, which is why we now have a scientific discipline called Quantum Thermodynamics ... a discipline that was contributed to by, among others, Stephen Hawking, when he published his landmark paper on the radiative nature of black holes that brings them into equilibrium with the Second Law of Thermodynamics. I don't recall him ruling out evolution as a result of this.

Another common fallacy is the wholly non-rigorous association of entropy with "disorder", however this is defined. This has been known to be non-rigorous by physicists for decades, because there exist numerous documented instances of systems whose entropy increases when they spontaneously self-assemble into ordered structures as a result of the effect of electrostatic forces. Lipid bilayers are an important example of this, which are found throughout the biosphere.

The following scientific paper is apposite here:

Gentle Force Of Entropy Bridges Disciplines by David Kestenbaum, Science, 279: 1849 (20th March 1998)

Normally, entropy is a force of disorder rather than organization. But physicists have recently explored the ways in which an increase in entropy in one part of a system can force another part into greater order. The findings have rekindled speculation that living cells might take advantage of this little-known trick of physics.

Phospholipids being an excellent example thereof. In fact, any chemical system in which there exists the capacity for electrostatic forces to apply to either aggregating or reacting molecules can exhibit this phenomenon. Which is why scientists have long since abandoned the notion that "entropy" equals "disorder", which requires a thorough statistical mechanical treatment in terms of microstates in any case.

This is applied to the physics and physical chemistry of lipid bilayers in the following paper:

Electrostatic Repulsion Of Positively Charged Vesicles And Negatively Charged Objects by Helim Aranda-Espinoza, Yi Chen, Nily Dan, T. C. Lubensky, Philip Nelson, Laurence Ramos and D. A. Weitz, Science, 285: 394-397 (16th July 1999)

in which the authors calculated that the entropy of the lipid bilayer system increased when it arranged itself spontaneously into an ordered structure in accordance with the laws of electrostatics.

Entropy, as rigorously defined, has units of Joules per Kelvin, and is therefore a function of energy versus thermodynamic temperature. The simple fact of the matter is that if the thermodynamic temperature increases, then the total entropy of a given system decreases if no additional energy was input into the system in order to provide the increase in thermodynamic temperature. Star formation is an excellent example of this, because the thermodynamic temperature at the core of a gas cloud increases as the cloud coalesces under gravity. All that is required to increase the core temperature to the point where nuclear fusion is initiated is sufficient mass. No external energy is added to the system. Consequently, the entropy at the core decreases due to the influence of gravity driving up the thermodynamic temperature. Yet the highly compressed gas in the core is hardly "ordered".

More to the point, there are scientific papers in existence establishing that evolution is perfectly consistent with the 2LT. Among the important papers are:

Entropy And Evolution by Daniel F. Styer, American Journal of Physics, 76(11): 1031-1033 (November 2008) DOI: 10.1119/1.2973046

Natural Selection As A Physical Principle by Alfred J. Lotka, Proceedings of the National Academy of Sciences of the USA, 8: 151-154 (1922) [full paper downloadable from here]

Evolution Of Biological Complexity by Christoph Adami, Charles Ofria and Travis C. Collier, Proceedings of the National Academy of Sciences of the USA, 97(9): 4463-4468 (25th April 2000) [Full paper downloadable from here]

Order From Disorder: The Thermodynamics Of Complexity In Biology by Eric D. Schneider and James J. Kay, in Michael P. Murphy, Luke A.J. O'Neill (ed), What is Life: The Next Fifty Years. Reflections on the Future of Biology, Cambridge University Press, pp. 161-172 [Full paper downloadable from here]

Natural Selection For Least Action by Ville R. I. Kaila and Arto Annila, Proceedings of the Royal Society of London Part A, 464: 3055-3070 (22nd July 2008) [Full paper downloadable from here]

Evolution And The Second Law Of Thermodynamics by Emory F. Bunn, arXiv.org, 0903.4603v1 (26th March 2009) [Download full paper from here]

Let's take a look at some of these, shall we?

First of all, we have this:

In a paper presented concurrently with this, the principle of natural selection, or of the survival of the fittest (persistence of stable forms), is employed as an instrument for drawing certain conclusions regarding the energetics of a system in evolution.

Aside from such interest as attaches to the conclusions reached, the method itself of the argument presents a feature that deserves special note. The principle of natural selection reveals itself as capable of yielding information which the first and second laws of thermodynamics are not competent to furnish.

The two fundamental laws of thermodynamics are, of course, insufficient to determine the course of events in a physical system. They tell us that certain things cannot happen, but they do not tell us what does happen.

In the freedom which is thus left, certain writers have seen the opportunity for the interference of life and consciousness in the history of a physical system. So W. Ostwald [2] observes that "the organism utilizes, in manifold ways, the freedom of choice among reaction velocities, through the influence of catalytic substances, to satisfy advantageously its energy requirements." Sir Oliver Lodge, also, has drawn attention to the guidance [3] exercised by life and mind upon physical events, within the limits imposed by the requirements of available [4] energy. H. Guilleminot [5] sees the influence of life upon physical systems in the substitution of guidance by choice in place of fortuitous happenings, where Carnot's principle leaves the course of events indeterminate. As to this, it may be objected that the attribute of fortuitousness is not an objective quality of a given event. It is the expression of our subjective ignorance, our lack of complete information, or else our deliberate ignoring of some of the factors that actually do determine the course of events. Admitting, however, broadly, the directing influence of life upon the world's events, within the limits imposed by the Mayer-Joule and the Carnot-Clausius principles, it would be an error to suppose that the faculty of guidance which the established laws of thermodynamics thus leave open, is a peculiar prerogative of living organisms. If these laws do not fully define the course of events, this does not necessarily mean that this course, in nature, is actually indeterminate, and requires, or even allows, some extra-physical influence to decide happenings. It merely means that the laws, as formulated, take account of certain factors only, leaving others out of consideration; and that the data thus furnished are insufficient to yield an unambiguous answer to our enquiry regarding the course of events in a physical system. Whether life is present or not, something more than the first and second laws of thermodynamics is required to predict the course of events. And, whether life is present or not, something definite does happen, the course of events is determinate, though not in terms of the first and second laws alone. The "freedom" of which living organisms avail themselves under the laws of thermodynamics is not a freedom in fact, but a spurious freedom [6] arising out of an incomplete statement of the physical laws applicable to the case. The strength of Carnot's principle is also its weakness: it holds true independently of the particular mechanism or configuration of the energy transformer (engine) to which it is applied; but, for that very reason it is also incompetent to yield essential information regarding the influence of mechanism upon the course of events. In the ideal case of a reversible heat engine the efficiency is independent of the mechanism. Real phenomena are irreversible; and, in particular, trigger action, [7] which plays so important a role in life processes, is a typically irreversible process, the release of available energy from a "false" equilibrium. Here mechanism is all-important. To deal with problems presented in these cases requires new methods, [8] requires the introduction, into the argument, of new principles. And a principle competent to extend our systematic knowledge in this field seems to be found in the principle of natural selection, the principle of the survival of the fittest, or, to speak in terms freed from biological implications, the principle of the persistence of stable forms.

For the battle array of organic evolution is presented to our view as an assembly of armies of energy transformers: accumulators (plants) and engines (animals); armies composed of multitudes of similar units, the individual organisms. The similarity of the units invites statistical treatment, the development of a statistical mechanics of which the units shall be, not simple material particles in ordinary reversible collision of the type familiar in the kinetic theory, collisions in which action and reaction were equal; the units in the new statistical mechanics will be energy transformers subject to irreversible collisions of peculiar type: collisions in which trigger action is a dominant feature.

So, even as far back as 1922, scientists were arguing that evolution is not in violation of the Second Law of Thermodynamics. Interesting revelation, yes?

Lotka continues with this:

In systems evolving toward a true equilibrium (such as thermally and mechanically isolated systems, or the isothermal systems of physical chemistry), the first and second laws of thermodynamics suffice to determine, at any rate, the end state; this is, for example, independent of the amount of any purely catalytic substance that may be present. The first and the second law here themselves function as the laws of selection and evolution, as has been recognized by Perrin [9] and others, and exemplified in some detail by the writer, for the case of a monomolecular reversible reaction. [10] But in systems receiving a steady supply of available energy (such as the earth illuminated by the sun), and evolving, not toward a true equilibrium, but (probably) toward a stationary state, the laws of thermodynamics are no longer sufficient to determine the end state; a catalyst, in general, does affect the final steady state. Here selection may operate not only among components taking part in transformations, but also upon catalysts, in particular upon auto-catalytic or auto-catakinetic constituents of the system. Such auto-catakinetic constituents are the living organisms, [11] and to them, therefore, the principles here discussed apply.

Now this, as I've just stated, was written as far back as 1922, which means that scientists have been aware that thermodynamic laws and evolution are not in conflict for eighty-seven years.

Moving on, let's look at the more recent papers. Let's look first at the abstract of the Adami et al paper:

To make a case for or against a trend in the evolution of complexity in biological evolution, complexity needs to be both rigorously defined and measurable. A recent information-theoretic (but intuitively evident) definition identifies genomic complexity with the amount of information a sequence stores about its environment. We investigate the evolution of genomic complexity in populations of digital organisms and monitor in detail the evolutionary transitions that increase complexity. We show that, because natural selection forces genomes to behave as a natural "Maxwell Demon," within a fixed environment, genomic complexity is forced to increase.

Oh look. A point I've been arguing for a long time here, namely that a rigorous definition of complexity is needed in order to be able to make precise categorical statements about complexity. I also note with interest that the authors of this paper perform detailed experiments via simulation in order to establish that complexity can arise from simple systems (the behaviour of the Verhulst Equation, to cite one example, frequently establishes this, and indeed the investigation of the Verhulst Equation and similar dynamical systems is now the subject of its own branch of applied mathematics).

The authors open their paper thus:

Darwinian evolution is a simple yet powerful process that requires only a population of reproducing organisms in which each offspring has the potential for a heritable variation from its parent. This principle governs evolution in the natural world, and has gracefully produced organisms of vast complexity. Still, whether or not complexity increases through evolution has become a contentious issue. Gould (1), for example, argues that any recognizable trend can be explained by the "drunkard's walk" model, where "progress" is due simply to a fixed boundary condition. McShea (2) investigates trends in the evolution of certain types of structural and functional complexity, and finds some evidence of a trend but nothing conclusive. In fact, he concludes that "something may be increasing. But is it complexity?" Bennett (3), on the other hand, resolves the issue by fiat, defining complexity as "that which increases when self-organizing systems organize themselves." Of course, to address this issue, complexity needs to be both defined and measurable.

In this paper, we skirt the issue of structural and functional complexity by examining genomic complexity. It is tempting to believe that genomic complexity is mirrored in functional complexity and vice versa. Such an hypothesis, however, hinges upon both the aforementioned ambiguous definition of complexity and the obvious difficulty of matching genes with function. Several developments allow us to bring a new perspective to this old problem. On the one hand, genomic complexity can be defined in a consistent information-theoretic manner [the "physical" complexity (4)], which appears to encompass intuitive notions of complexity used in the analysis of genomic structure and organization (5). On the other hand, it has been shown that evolution can be observed in an artificial medium (6, 7), providing a unique glimpse at universal aspects of the evolutionary process in a computational world. In this system, the symbolic sequences subject to evolution are computer programs that have the ability to self-replicate via the execution of their own code. In this respect, they are computational analogs of catalytically active RNA sequences that serve as the templates of their own reproduction. In populations of such sequences that adapt to their world (inside of a computer's memory), noisy self-replication coupled with finite resources and an information-rich environment leads to a growth in sequence length as the digital organisms incorporate more and more information about their environment into their genome. Evolution in an information-poor landscape, on the contrary, leads to selection for replication only, and a shrinking genome size as in the experiments of Spiegelman and colleagues (8). These populations allow us to observe the growth of physical complexity explicitly, and also to distinguish distinct evolutionary pressures acting on the genome and analyze them in a mathematical framework.

Moving on, the authors directly address a favourite canard of creationists (though they do not state explicitly that they are doing this), namely that information somehow constitutes a "non-physical" entity. Here's what the authors have to say on this subject:

Information Theory and Complexity. Using information theory to understand evolution and the information content of the sequences it gives rise to is not a new undertaking. Unfortunately, many of the earlier attempts (e.g., refs. 12–14) confuse the picture more than clarifying it, often clouded by misguided notions of the concept of information (15). An (at times amusing) attempt to make sense of these misunderstandings is ref. 16. Perhaps a key aspect of information theory is that information cannot exist in a vacuum; that is, information is physical (17). This statement implies that information must have an instantiation (be it ink on paper, bits in a computer’s memory, or even the neurons in a brain). Furthermore, it also implies that information must be about something. Lines on a piece of paper, for example, are not inherently information until it is discovered that they correspond to something, such as (in the case of a map) to the relative location of local streets and buildings. Consequently, any arrangement of symbols might be viewed as potential information (also known as entropy in information theory), but acquires the status of information only when its correspondence, or correlation, to other physical objects is revealed.

Nice. In brief, the authors clearly state that information requires a physical substrate to reside upon, and a mechanism for the residence of that information upon the requisite physical substrate, in such a manner that said information constitutes a mapping from the arrangement of the physical substrate upon which it resides, to whatever other physical system is being represented by that mapping. I remember one creationist claiming that because the mass of a floppy disc doesn't change when one writes data to it, this somehow "proves" that information is not a physical entity: apparently said creationist didn't pay attention in the requisite basic physics classes, or else he would have learned that the information stored on a floppy disc is stored by materially altering the physical state of the medium, courtesy of inducing changes in the magnetic orientation of the ferric oxide particles in the disc medium. In other words, a physical process was required to generate that information and store it on the disc. I am indebted to the above authors for casting this basic principle in the appropriate (and succinct) general form.

The authors move on with this:

In biological systems the instantiation of information is DNA, but what is this information about? To some extent, it is the blueprint of an organism and thus information about its own structure. More specifically, it is a blueprint of how to build an organism that can best survive in its native environment, and pass on that information to its progeny. This view corresponds essentially to Dawkins' view of selfish genes that "use" their environment (including the organism itself), for their own replication (18). Thus, those parts of the genome that do correspond to something (the non-neutral fraction, that is) correspond in fact to the environment the genome lives in. Deutsch (19) referred to this view by saying that "genes embody knowledge about their niches." This environment is extremely complex itself, and consists of the ribosomes the messages are translated in, other chemicals and the abundance of nutrients inside and outside the cell, and the environment of the organism proper (e.g., the oxygen abundance in the air as well as ambient temperatures), among many others. An organism's DNA thus is not only a "book" about the organism, but is also a book about the environment it lives in, including the species it co-evolves with. It is well known that not all of the symbols in an organism's DNA correspond to something. These sections, sometimes referred to as "junk-DNA," usually consist of portions of the code that are unexpressed or untranslated (i.e., excised from the mRNA). More modern views concede that unexpressed and untranslated regions in the genome can have a multitude of uses, such as for example satellite DNA near the centromere, or the polyC polymerase intron excised from Tetrahymena rRNA. In the absence of a complete map of the function of each and every base pair in the genome, how can we then decide which stretch of code is "about something" (and thus contributes to the complexity of the code) or else is entropy (i.e., random code without function)?

A true test for whether a sequence is information uses the success (fitness) of its bearer in its environment, which implies that a sequence's information content is conditional on the environment it is to be interpreted within (4). Accordingly, Mycoplasma mycoides, for example (which causes pneumonia-like respiratory illnesses), has a complexity of somewhat less than one million base pairs in our nasal passages, but close to zero complexity most everywhere else, because it cannot survive in any other environment—meaning its genome does not correspond to anything there. A genetic locus that codes for information essential to an organism's survival will be fixed in an adapting population because all mutations of the locus result in the organism's inability to promulgate the tainted genome, whereas inconsequential (neutral) sites will be randomized by the constant mutational load. Examining an ensemble of sequences large enough to obtain statistically significant substitution probabilities would thus be sufficient to separate information from entropy in genetic codes. The neutral sections that contribute only to the entropy turn out to be exceedingly important for evolution to proceed, as has been pointed out, for example, by Maynard Smith (20).

In Shannon’s information theory (22), the quantity entropy (H) represents the expected number of bits required to specify the state of a physical object given a distribution of probabilities; that is, it measures how much information can potentially be stored in it. In a genome, for a site i that can take on four nucleotides with probabilities

{p(C)(i), p(G)(i), p(A)(i), p(T)(i)}, [1]

the entropy of this site is

H(i) = -Σ(j∈{C,G,A,T}) p(j)(i) log p(j)(i) [2]

The maximal entropy per site (if we agree to take our logarithms to base 4: i.e., the size of the alphabet) is 1, which occurs if all of the probabilities are equal to 1/4. If the entropy is measured in bits (take logarithms to base 2), the maximal entropy per site is two bits, which naturally is also the maximal amount of information that can be stored in a site, as entropy is just potential information. A site stores maximal information if, in DNA, it is perfectly conserved across an equilibrated ensemble. Then, we assign the probability p = 1 to one of the bases and zero to all others, rendering H(i) = 0 for that site according to Eq. 2. The amount of information per site is thus (see, e.g., ref. 23)

I(i) = H(max) - H(i) [3]

In the following, we measure the complexity of an organism’s sequence by applying Eq. 3 to each site and summing over the sites. Thus, for an organism of l base pairs the complexity is

C = l - Σ(i) H(i) [4]

It should be clear that this value can only be an approximation to the true physical complexity of an organism’s genome. In reality, sites are not independent and the probability to find a certain base at one position may be conditional on the probability to find another base at another position. Such correlations between sites are called epistatic, and they can render the entropy per molecule significantly different from the sum of the per-site entropies (4). This entropy per molecule, which takes into account all epistatic correlations between sites, is defined as

H = -Σ(g) p(g|E) log p(g|E) [5]

and involves an average over the logarithm of the conditional probabilities p(g|E) to find genotype g given the current environment E. In every finite population, estimating p(g|E) using the actual frequencies of the genotypes in the population (if those could be obtained) results in corrections to Eq. 5 larger than the quantity itself (24), rendering the estimate useless. Another avenue for estimating the entropy per molecule is the creation of mutational clones at several positions at the same time (7, 25) to measure epistatic effects. The latter approach is feasible within experiments with simple ecosystems of digital organisms that we introduce in the following section, which reveal significant epistatic effects. The technical details of the complexity calculation including these effects are relegated to the Appendix.

Quite a substantial mathematical background, I think everyone will agree. I'll let everyone have fun reading the rest of the details off-post, as they are substantial, and further elaboration here will not be necessary in the light of my providing a link to the full paper.
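For those who want to play with Eqs. 2-4 themselves, here is a minimal sketch. The five-sequence "population" is invented purely for illustration, and logarithms are taken to base 4 so that the maximal entropy per site is 1, as in the paper:

```python
import math

# Toy sketch of Eqs. 2-4 from Adami et al.: per-site entropy and the
# complexity C = l - sum_i H(i). The aligned sequences are made up.
population = [
    "ACGTAC",
    "ACGTCC",
    "ACGAAC",
    "ACGTGC",
    "ACGACC",
]

def site_entropy(column):
    """Shannon entropy of one site, with logarithms taken to base 4 (Eq. 2)."""
    h = 0.0
    for base in "ACGT":
        p = column.count(base) / len(column)
        if p > 0:
            h -= p * math.log(p, 4)
    return h

length = len(population[0])
entropies = [site_entropy([seq[i] for seq in population]) for i in range(length)]

complexity = length - sum(entropies)  # Eq. 4
for i, h in enumerate(entropies):
    print(f"site {i}: H = {h:.3f}, information I = {1 - h:.3f}")  # Eq. 3
print(f"complexity C = {complexity:.3f} (out of {length} sites)")
```

Perfectly conserved sites come out with H = 0 and information I = 1, while variable sites contribute entropy and drag the complexity below the sequence length, which is exactly the book-keeping the authors describe.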

Moving on to the Kaila and Annila paper, here's the abstract:

The second law of thermodynamics is a powerful imperative that has acquired several expressions during the past centuries. Connections between two of its most prominent forms, i.e. the evolutionary principle by natural selection and the principle of least action, are examined. Although no fundamentally new findings are provided, it is illuminating to see how the two principles rationalizing natural motions reconcile to one law. The second law, when written as a differential equation of motion, describes evolution along the steepest descents in energy and, when it is given in its integral form, the motion is pictured to take place along the shortest paths in energy. In general, evolution is a non-Euclidean energy density landscape in flattening motion.

Ah, this dovetails nicely with Thomas D. Schneider's presentation of a form of the Second Law of Thermodynamics applicable to biological systems that I've covered in past posts. This can be read in more detail here. Note that Thomas D. Schneider is not connected with Eric D. Schneider whose paper is cited above.

Here's how Kaila and Annila introduce their work:

1. Introduction

The principle of least action (de Maupertuis 1744, 1746; Euler 1744; Lagrange 1788) and the evolutionary principle by natural selection (Darwin 1859) account for many motions in nature. The calculus of variation, i.e. ‘take the shortest path’, explains diverse physical phenomena (Feynman & Hibbs 1965; Landau & Lifshitz 1975; Taylor & Wheeler 2000; Hanc & Taylor 2004). Likewise, the theory of evolution by natural selection, i.e. ‘take the fittest unit’, rationalizes various biological courses. Although the two old principles both describe natural motions, they seem to be far apart from each other, not least because still today the formalism of physics and the language of biology differ from each other. However, it is reasonable to suspect that the two principles are in fact one and the same, since for a long time science has failed to recognize any demarcation line between the animate and the inanimate.

In order to reconcile the two principles to one law, the recent formulation of the second law of thermodynamics as an equation of motion (Sharma & Annila 2007) is used. Evolution, when stated in terms of statistical physics, is a probable motion. The natural process directs along the steepest descents of an energy landscape by equalizing differences in energy via various transport and transformation processes, e.g. diffusion, heat flows, electric currents and chemical reactions (Kondepudi & Prigogine 1998). These flows of energy, as they channel down along various paths, propel evolution. In a large and complicated system, the flows are viewed to explore diverse evolutionary paths, e.g. by random variation, and those that lead to a faster entropy increase, equivalent to a more rapid decrease in the free energy, become, in terms of physics, naturally selected (Sharma & Annila 2007). The abstract formalism has been applied to rationalize diverse evolutionary courses as energy transfer processes (Grönholm & Annila 2007; Jaakkola et al. 2008a,b; Karnani & Annila in press).

The theory of evolution by natural selection, when formulated in terms of chemical thermodynamics, is easy to connect with the principle of least action, which also is well established in terms of energy (Maslov 1991). In accordance with Hamilton’s principle (Hamilton 1834, 1835), the equivalence of the differential equation of evolution and the integral equation of dissipative motion is provided here, starting from the second law of thermodynamics (Boltzmann 1905; Stöltzner 2003). In this way, the similarity of the fitness criterion (‘take the steepest gradient in energy’) and the ubiquitous imperative (‘take the shortest path in energy’) becomes evident. The two formulations are equivalent ways of picturing the energy landscape in flattening motion. Thus, there are no fundamentally new results. However, as once pointed out by Feynman (1948), there is a pleasure in recognizing old things from a new point of view.

I advise readers to exercise some caution before diving into this paper in full, as it involves extensive mathematics from the calculus of variations, and a good level of familiarity with Lagrangian and Hamiltonian mechanics is a pre-requisite for understanding the paper in full.
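For readers without that background, the flavour of "take the steepest descent in energy" can still be had from a toy sketch. The quadratic landscape below is invented purely for illustration, and plain gradient descent is only an analogy for the flattening-landscape picture, emphatically not Kaila and Annila's variational formalism:

```python
import numpy as np

# Toy illustration of "take the steepest descent in energy": gradient
# descent on a made-up 2-D energy landscape. An analogy only; the paper's
# actual machinery is the calculus of variations, not this.

def energy(x, y):
    return (x - 1.0) ** 2 + 2.0 * (y + 0.5) ** 2

def gradient(x, y):
    return np.array([2.0 * (x - 1.0), 4.0 * (y + 0.5)])

point = np.array([-2.0, 2.0])   # arbitrary starting state
step = 0.1                      # step size, small enough to converge

for _ in range(50):
    point = point - step * gradient(*point)  # follow the steepest descent

print(f"final state: {point}, energy: {energy(*point):.6f}")
```

The trajectory runs downhill along the locally steepest gradient until the landscape is "flattened" at the minimum, which is the intuition the two principles share.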

In the meantime, let's take a look at the Schneider & Kay paper. Here's their introduction:

Introduction

In the middle of the nineteenth century, two major scientific theories emerged about the evolution of natural systems over time. Thermodynamics, as refined by Boltzmann, viewed nature as decaying toward a certain death of random disorder in accordance with the second law of thermodynamics. This equilibrium seeking, pessimistic view of the evolution of natural systems is contrasted with the paradigm associated with Darwin, of increasing complexity, specialization, and organization of biological systems through time. The phenomenology of many natural systems shows that much of the world is inhabited by nonequilibrium coherent structures, such as convection cells, autocatalytic chemical reactions and life itself. Living systems exhibit a march away from disorder and equilibrium, into highly organized structures that exist some distance from equilibrium.

This dilemma motivated Erwin Schrödinger, and in his seminal book What is Life? (Schrödinger, 1944), he attempted to draw together the fundamental processes of biology and the sciences of physics and chemistry. He noted that life was comprised of two fundamental processes: one "order from order" and the other "order from disorder". He observed that the gene generated order from order in a species, that is, the progeny inherited the traits of the parent. Over a decade later Watson and Crick (1953) provided biology with a research agenda that has led to some of the most important findings of the last fifty years.

However, Schrödinger's equally important but less understood observation was his order from disorder premise. This was an effort to link biology with the fundamental theorems of thermodynamics (Schneider, 1987). He noted that living systems seem to defy the second law of thermodynamics which insists that, within closed systems, the entropy of a system should be maximized. Living systems, however, are the antithesis of such disorder. They display marvelous levels of order created from disorder. For instance, plants are highly ordered structures, which are synthesized from disordered atoms and molecules found in atmospheric gases and soils.

Schrödinger solved this dilemma by turning to nonequilibrium thermodynamics. He recognized that living systems exist in a world of energy and material fluxes. An organism stays alive in its highly organized state by taking high quality energy from outside itself and processing it to produce, within itself, a more organized state. Life is a far from equilibrium system that maintains its local level of organization at the expense of the larger global entropy budget. He proposed that the study of living systems from a nonequilibrium perspective would reconcile biological self-organization and thermodynamics. Furthermore he expected that such a study would yield new principles of physics.

This paper examines the order from disorder research program proposed by Schrödinger and expands on his thermodynamic view of life. We explain that the second law of thermodynamics is not an impediment to the understanding of life but rather is necessary for a complete description of living processes. We expand thermodynamics into the causality of the living process and show that the second law underlies processes of self-organization and determines the direction of many of the processes observed in the development of living systems.

Finally, I'll wind up by introducing Emory F. Bunn's paper, which is a particular killer for creationist canards, because it involves direct mathematical derivation of the thermodynamic relationships involved in evolutionary processes, and a direct quantitative analysis demonstrating that evolution is perfectly consistent with the Second Law of Thermodynamics. Here's the abstract:

Skeptics of biological evolution often claim that evolution requires a decrease in entropy, giving rise to a conflict with the second law of thermodynamics. This argument is fallacious because it neglects the large increase in entropy provided by sunlight striking the Earth. A recent article provided a quantitative assessment of the entropies involved and showed explicitly that there is no conflict. That article rests on an unjustified assumption about the amount of entropy reduction involved in evolution. I present a refinement of the argument that does not rely on this assumption.

Here's the opening gambit:

I. INTRODUCTION.

Daniel Styer recently addressed the claim that evolution requires a decrease in entropy and therefore is in conflict with the second law of thermodynamics. [1] He correctly explained that this claim rests on misunderstandings about the nature of entropy and the second law. The second law states that the total entropy of a closed system must never decrease. However, the Earth is not a closed system and is constantly absorbing sunlight, resulting in an enormous increase in entropy, which can counteract the decrease presumed to be required for evolution. This argument is known to those who defend evolution in evolution-creationism debates, [2] but it is usually described in a general, qualitative way. Reference 1 filled this gap with a quantitative argument.

In the following I present a more robust quantitative argument. We begin by identifying the appropriate closed system to which to apply the second law. We find that the second law requires that the rate of entropy increase due to the Earth’s absorption of sunlight, (dS/dt)(sun), must be sufficient to account for the rate of entropy decrease required for the evolution of life, (dS/dt)(life) (a negative quantity). As long as

(dS/dt)(sun) + (dS/dt)(life) ≥ 0 (1)

there is no conflict between evolution and the second law.

Styer estimated both (dS/dt)(sun) and (dS/dt)(life) to show that the inequality (1) is satisfied, but his argument rests on an unjustified and probably incorrect assumption about (dS/dt)(life) [1]. I will present a modified version of the argument which does not depend on this assumption and which shows that the entropy decrease required for evolution is orders of magnitude too small to conflict with the second law of thermodynamics.

Once again, I'll let you all have fun reading the paper in full. :)
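In the spirit of Bunn's estimate, here is a rough numerical check of inequality (1). The solar figures are standard textbook values, and the biosphere term uses a deliberately generous assumed total entropy decrease of 10^44 k for all of evolution; that number is an illustrative assumption in the spirit of the paper's generous bounds, not Bunn's own computation:

```python
# Rough numerical check of inequality (1). Loudly flagged assumptions:
# solar figures are standard textbook values; the biosphere entropy
# decrease (~1e44 * k_B) is an absurdly generous illustrative bound.
k_B = 1.38e-23                   # J/K, Boltzmann constant
P = 1.7e17                       # W, solar power absorbed by the Earth
T_sun, T_earth = 5800.0, 290.0   # K

dS_sun = P / T_earth - P / T_sun          # J/K per second, entropy production

delta_S_life = -1e44 * k_B                # J/K, assumed total decrease for evolution
t_evolution = 4e9 * 365.25 * 24 * 3600    # s, roughly four billion years
dS_life = delta_S_life / t_evolution      # J/K per second (negative)

print(f"(dS/dt)(sun)  = {dS_sun:.3e} J/K/s")
print(f"(dS/dt)(life) = {dS_life:.3e} J/K/s")
print(f"sum >= 0? {dS_sun + dS_life >= 0}")
```

Even with that absurdly generous biosphere term, the solar entropy production wins by roughly ten orders of magnitude, which is the sense in which the required entropy decrease is "orders of magnitude too small" to trouble the second law.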

In the meantime, with respect to chemical reactions, one of the first major lessons chemistry students learn is that the nature of a chemical reaction is determined in large part by a quantity called enthalpy, which in an elementary treatment can be defined as follows:

ΔH = (energy required to break chemical bonds in the reactants) - (energy released when bonds are formed in the products)

Let that first energy term be denoted by E(b) (for breaking bonds), and the second energy term by E(f) (for forming new bonds). Thus,

ΔH = E(b) - E(f)

Enthalpy is therefore a measure of the energy consumption of a chemical reaction.

If E(b) > E(f), then the enthalpy is positive, and the reaction consumes energy, which must be supplied by an external source. However, if E(b) < E(f), then the enthalpy is negative, which means that the reaction liberates energy into the surroundings, frequently in the form of heat. Thousands of chemical reactions with negative enthalpies are known and documented in the literature. Indeed, a good many of the organic reactions implicated in the origin of life are themselves negative-enthalpy reactions. Therefore the energy conditions are such that the reactions in question are driven forward and occur spontaneously, the moment the reactants come into contact with each other. No fucking magic needed.
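To see the bond-counting arithmetic in action, here is a minimal sketch for a familiar exothermic reaction, the combustion of methane, using rounded mean bond enthalpies from standard tables (approximate textbook values, not drawn from any paper cited above):

```python
# Elementary bond-enthalpy estimate of dH = E(b) - E(f) for
# CH4 + 2 O2 -> CO2 + 2 H2O, using rounded mean bond enthalpies (kJ/mol).
# Mean-bond-enthalpy figures are approximations, so the result is an estimate.
bond_enthalpy = {"C-H": 413, "O=O": 495, "C=O": 799, "O-H": 467}  # kJ/mol

E_b = 4 * bond_enthalpy["C-H"] + 2 * bond_enthalpy["O=O"]   # bonds broken
E_f = 2 * bond_enthalpy["C=O"] + 4 * bond_enthalpy["O-H"]   # bonds formed

delta_H = E_b - E_f
print(f"E(b) = {E_b} kJ/mol, E(f) = {E_f} kJ/mol")
print(f"dH = {delta_H} kJ/mol")   # negative: the reaction liberates energy
```

The estimate comes out strongly negative (around -824 kJ/mol with these rounded figures), which is why the reaction proceeds spontaneously once initiated, exactly as described above.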

Indeed, scientific papers exist documenting not only the fact that molecules of interest in origin of life research are found in interstellar space via spectroscopic analysis of dust and gas clouds, but that experiments have been performed in the laboratory, replicating the conditions in those clouds, and demonstrating that the synthesis pathways work.

So if you think magic is needed to produce the requisite organic molecules, then you need to re-take your basic chemistry classes. That's before we factor catalysis into the equation, which lowers the activation energy barrier for numerous reactions of interest.

Re: Luke - "After all, the coming together of many things was required for the first organism to live"

Guess what? I've already covered this in detail here.

Re: Luke - "the instructions must already have been there to tell the organisms body, no matter how small, how to make energy of whatever source it has."

What part of "energy exchanges take place all the time without intelligent direction" do you not understand? Such as that big yellow thing you see in the sky sometimes? Not to mention all those chemical reactions whose enthalpies are negative? Plus, in the case of many positive enthalpy reactions, modest heating is all that is needed to drive them in the requisite direction. That big yellow thing is there to embarrass you again.

Oh, and since you're peddling creationist canards about "information", I've already addressed those in a previous post.

Re: Luke - "Life itself is the opposite of entropy"

You keep peddling this blind assertion. Read the above papers and weep.

I think that covers all of those bases. Isn't a proper scientific education wonderful?

Nyarlathotep

@Calilasseia

From past experience, I'm guessing the caliber of response you will receive (if any) will be along the lines of: nuh-uh.

rat spit

God Damn, Cali:

https://youtu.be/V83JR2IoI8k

I was going to say a few words about the process of DNA to mRNA to ribosomes to proteins, etc. The point being, the only thing intelligent about DNA is that we humans have discerned the genetic code, figured out which amino acid each triplet encodes, and are on the brink of understanding protein folding (with the help of AI like DeepMind's "AlphaFold", which recently trained for about a day, I think, in a protein folding prediction event and took first place. The same lab's "AlphaZero" trained in chess for 48 hours, I think, and became the strongest chess entity on the planet.)
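That whole pipeline fits in a few lines, by the way. The DNA fragment and the deliberately partial codon table below are invented purely for illustration:

```python
# Minimal sketch of the DNA -> mRNA -> protein idea: transcribe a made-up
# DNA template strand, then translate it with a (deliberately partial)
# codon table. The codon assignments shown are real; the table is tiny.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GGC": "Gly",
    "AAA": "Lys", "UGG": "Trp", "UAA": "STOP",
}

dna = "TACAAACCGTTTACCATT"  # template strand, invented for this example

# Transcription: the mRNA is complementary to the template strand (T -> U).
complement = {"A": "U", "T": "A", "C": "G", "G": "C"}
mrna = "".join(complement[base] for base in dna)

# Translation: the ribosome reads the mRNA three bases (one codon) at a time.
protein = []
for i in range(0, len(mrna), 3):
    amino_acid = CODON_TABLE.get(mrna[i:i + 3], "???")
    if amino_acid == "STOP":
        break
    protein.append(amino_acid)

print(f"mRNA:    {mrna}")
print(f"protein: {'-'.join(protein)}")
```

No intelligence anywhere in the loop: just complementary base pairing and a lookup table that chemistry implements for free.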

Sheldon

I think we're done, folks, as lukew0480 has just claimed the earth and the human body are closed systems that no thermal energy can enter from outside.

Fnarrr...I wonder if he gets all confused on a sunny day as to why he's getting hotter? Hell, maybe he thinks the sun is inside the earth's atmosphere, I mean creatards are pretty dumb?

Randomhero1982

"How does photosynthesis work?"

"Magicccccccc!!!!!" *clapping hands*
