A question for theists...

Nyarlathotep's picture
Jo - Life and the universe

Jo - Life and the universe existing by some massively improbable happenstance...

As far as I know, this probability is unknown. So, since you seem to know it and I don't, why don't you tell me how improbable is it (what is the probability)? And how did you come up with that answer (can you show your work)?
-----------------------------------------

Jo - ...seems like a denial of the obvious to me.

It doesn't seem like it is "denial of the obvious", since I have to ask you for it.

Sheldon's picture
Nyarl, I'd bet a years salary

Nyarl, I'd bet a year's salary that Jo is simply parroting a religious polemic he doesn't understand, and that when he claims something is improbable, he's using unevidenced rhetoric under the mistaken impression that such things are gut instincts, rather than conclusions driven by scientific evidence and mathematics.

You have to marvel at any intellect that labels the existence of something as highly improbable when it already exists, and all in order to tack on a bronze age creation myth which has zero objective evidence to support it, and at its core makes superstitious claims for magic that have no explanatory powers whatsoever. Then claims this addition makes its existence more probable.

Occam must be turning in his grave.

Delaware's picture
@ Nyarlathotep

@ Nyarlathotep

"Jo - Life and the universe existing by some massively improbable happenstance."
It was more of a philosophical statement than a mathematical statement.
It seems more probable to me that God did it than it just happened on its own.

I am interested in what you think of Penrose on a related subject.
I think he stated the probability of the universe existing as 1 to the 10 to the 123 power.
https://evolutionnews.org/2010/04/roger_penrose_on_cosmic_finetu/

Hoyle said something similar.
"Life cannot have had a random beginning … The trouble is that there are about two thousand enzymes, and the chance of obtaining them all in a random trial is only one part in 1040,000, an outrageously small probability that could not be faced even if the whole universe consisted of organic soup". Fred Hoyle and N. Chandra Wickramasinghe, Evolution from Space (London: J.M. Dent & Sons, 1981)

"Once we see, however, that the probability of life originating at random is so utterly minuscule as to make it absurd, it becomes sensible to think that the favorable properties of physics on which life depends are in every respect deliberate … . It is therefore almost inevitable that our own measure of intelligence must reflect … higher intelligences … even to the limit of God … such a theory is so obvious that one wonders why it is not widely accepted as being self-evident. The reasons are psychological rather than scientific". Fred Hoyle and N. Chandra Wickramasinghe, Evolution from Space (London: J.M. Dent & Sons, 1981), pp. 141, 144, 130

Please go easy on me, I have trouble with that CAPTCHA thing at the end. :-)

Nyarlathotep's picture
Jo - I think he stated the

Jo - I think he stated the probability of the universe existing as 1 to the 10 to the 123 power.

That is equivalent to a 100% chance. I'll assume you mean 10^(-123); which would represent a very small probability.

I'm guessing he assumed the independence of physical parameters (sometimes sloppily referred to as constants), and used the naive definition of probability (that all outcomes are equally likely)? If I were you, I wouldn't be hitching my wagon to that.

What you cited from Hoyle is a non-peer-reviewed popular publication written in opposition to the big bang theory. An interesting side note: Hoyle is the person who coined the term "big bang", and meant it as a pejorative term (he was making fun of the "big bang model", as opposed to the "steady state model" Hoyle was closely associated with). Again, not something I'd be hitching my wagon to.

Sheldon's picture
Jo "Life and the universe

Jo "Life and the universe existing by some massively improbable happenstance. It seems more probable to me that God did it than it just happened on its own."

How does adding an unevidenced deity, using unfathomable magic from a bronze age superstition, improve those odds exactly?

"Probability theory, a branch of mathematics concerned with the analysis of random phenomena. The outcome of a random event cannot be determined before it occurs, but it may be any one of several possible outcomes. The actual outcome is considered to be determined by chance."

Delaware's picture
@ Sheldon

@ Sheldon

If you flip a coin 1 million times and it comes up heads every time, I think it is much more likely that you somehow rigged the outcome than that it happened by chance.
I would believe that you somehow rigged it unless proved otherwise.
I think that is the logical, rational, and wise conclusion.

It is possible to flip a coin and get heads 1 million times in a row.
How long would it take to get 1 million in a row?
How much more complicated is the universe and life than 1 million heads?

Cognostic's picture
@Jo. You are really bad at

@Jo. You are really bad at coming up with analogies. You should probably avoid it in the future.

1. Is it possible to flip a coin a million times and get heads every single time? "YES"

2. How long would it take? (No one knows.)

3. How much more complicated is the universe than one million heads? (How would you determine this? It really does not matter.)

It does not matter how much more complicated the cosmos is. It all happened at least once, regardless of the complexity, and that is a fact. Here we are. Complexity had nothing at all to do with anything. By the way, complexity is not a property of design. Simplicity of function is.

Sheldon's picture
@Nyarl

@Nyarl

Jo has clearly read some religious apologist claiming the universe's existence by natural processes is wildly improbable, and is parroting it here.

He doesn't seem to understand that the existence of the universe and of natural phenomena are two things we can objectively evidence, whereas an unevidenced deity using unevidenced and inexplicable magic are two things for which no objective evidence can be demonstrated.

I think it's fair to say he doesn't understand the science that religious apologetics is misrepresenting here. However, he might look at every major news network and ask himself why they are not emblazoned with the news that God's existence has been confirmed by physicists and logic.

Nyarlathotep's picture
Right, even the form a

Right, even the form a calculation (of the probability of the universe existing in its current state) would take isn't clear. Sometimes it is presented along the lines of: what is the probability of an astronomer finding themselves in a universe that is capable of supporting astronomers? It seems the answer to that question is 1 (100%). Now I'd be lying if I said that didn't raise my skeptic alarm a little, but it seems at least as good as any other method I've seen suggested.

/e Oftentimes, the hard part is phrasing the question the right way, with everything falling into place quickly when you hit on the right question. That does not seem to have happened yet.

Sheldon's picture
Jo "From there I deduce,

Jo "From there I deduce, that we only know of one source of information and that is intelligence."

Actually we know nature is a source of information, as information is ubiquitous in nature. So it's not a deduction, it's pure assumption by religious apologists. We know information exists, and we know natural phenomena exist; one assumes you don't disagree?

Now in violation of Occam's razor, you are adding an unevidenced deity using inexplicable magic, and solely because this satisfies your a priori religious belief.

How are the odds on this universe's existence improving exactly, when Occam's razor flatly refutes your assumption that adds things you can neither evidence nor explain?

Delaware's picture
@ Sheldon

@ Sheldon

"Jo "From there I deduce, that we only know of one source of information and that is intelligence.""
I don't think that is a quote from me.
When did I say that?

Delaware's picture
@ Sheldon

@ Sheldon

I have not seen your answer to this quote you attribute to me.

Please note that I did not take this opportunity to accuse you of lying or misrepresenting.
Would you do the same for me?

Calilasseia's picture
Referring to Jo's post above

Referring to Jo's post above ...

[1] The "evolution news" website isn't a proper scientific website. It's a religious apologetics website pushing creationism. It's not a reliable source of information, because it treats science as a branch of apologetics, a duplicitous practice that should be shunned by anyone possessing a proper regard for the rules of discourse.

[2] Others have already pointed out the dubious provenance of Hoyle's statements, given that he was writing in opposition to the big bang model, despite a large body of observational data pointing to it.

[3] I've been seeing spurious "probability" calculations from pedlars of creationist apologetics for over a decade, and they are precisely that - spurious. In the case of the origin of life, the spurious "probability" calculations from creationists invariably involve the commission of two blatant fallacies - the serial trials fallacy and the "one true sequence" fallacy. I shall now deal with both of these at length.

The Serial Trials Fallacy

Typically, what happens in the world of creationist apologetics, is that a probability calculation is constructed, usually on the basis of assumptions that are either left unstated altogether (conveniently preventing independent verification of their validity), or if they are stated, they usually fail to survive intense critical scrutiny. However, even if we allow these assumptions to remain unchallenged, the appearance of the Serial Trials fallacy means that destruction of the validity of the spurious probability calculation is easy even without resorting to the effort of destroying those other assumptions.

Basically, the Serial Trials Fallacy consists of assuming that only one participant in an interacting system is performing the necessary task at any one time. While this may be true for a lone experimenter engaged in a coin tossing exercise, this is assuredly NOT true of any system involving chemical reactions, which involves untold billions of atoms or molecules at any given moment. This of course has import for abiogenesis as well, against which bad probability calculations and the Serial Trials Fallacy are routinely deployed. I shall concentrate here on abiogenetic scenarios, but what follows applies equally to nuclear DNA replication, and to any absurd arguments, based upon bad probability calculations and the Serial Trials Fallacy, that mutations cannot occur in a given amount of time.

The idea is simply this. If you only have one participant in the system in question, and the probability of the desired outcome is small, then it will take a long time for that outcome to appear among the other outcomes. But, if you have billions of participants in the system in question, all acting simultaneously, then even a low-probability outcome will occur a lot more quickly.

For example, if I perform trials that consist of ten coin tosses in a row per trial, and this takes about 20 seconds, then I'm going to take a long time to arrive at 10 heads in a row, because the probability is indeed 1/(2^10) = 1/1024. In accordance with a basic law of probability, namely that if the probability of the event is P, the average number of serial trials required will be 1/P, I shall need to conduct about 1,024 serial trials to obtain 10 heads in a row (averaged over the long term of course), and at 1 trial every 20 seconds, that is a little under six hours of tossing coins without any breaks for sleep, food or other necessary biological functions. If, however, I co-opt 1,024 people to perform these trials in parallel, then on average one of them should arrive at 10 heads in the very first round. If I manage by some logistical wonder to co-opt the entire population of China to toss coins in this fashion, then with a billion people tossing the coins, we should expect 1,000,000,000/1024, which gives us about 976,562 Chinese coin tossers who should see 10 heads in a row out of the total 1,000,000,000 Chinese.
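To put quick numbers on the above (a throwaway Python sketch of the arithmetic, nothing more; the 20 seconds per trial is the same assumption as above):

# A quick sketch of the serial-vs-parallel arithmetic above.
# A "trial" is 10 coin tosses; success means 10 heads in a row.
p_success = 0.5 ** 10                 # 1/1024
trials_needed = 1 / p_success         # ~1024 serial trials on average
seconds_per_trial = 20
serial_hours = trials_needed * seconds_per_trial / 3600
print(f"Average serial trials needed: {trials_needed:.0f}")
print(f"One lone coin tosser, non-stop: about {serial_hours:.1f} hours")

# Parallel case: a billion participants each perform ONE trial simultaneously.
participants = 1_000_000_000
expected_successes = participants * p_success
print(f"Expected successes in a single parallel round: about {expected_successes:,.0f}")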

Now given that the number of molecules in any given reaction even in relatively dilute solutions is large (a 1 molar solution contains 6.023 × 10^23 particles of interest per litre of solution, be they atoms, molecules or whatever), we have scope for some serious participating numbers in terms of parallel trials. Even if we assume, for the sake of argument in a typical prebiotic scenario, that only the top 100 metres of ocean depth is available for parallel trials of this kind (a restriction that may prove to be too severe once the requisite experimental data are in from various places around the world, and one that totally ignores processes around volcanic black smokers in deep ocean waters that could also fuel abiogenetic reactions), and we further assume that the concentration of substances of interest is only of the order of millimoles per litre, then that still leaves us with the following calculation:

[1] Mean radius of Earth = 6,371,000 m, and 100 m down, that radius is 6,370,900 m

[2] Volume of sea water of interest is therefore 4/3π(R^3-r^3)

which equals 5.1005 × 10^16 cubic metres

1 litre of solution at 1 mmol per litre will contain 6.023 × 10^20 reacting particles of interest, which means that 1 cubic metre of solution will contain 6.023 × 10^23 particles, and therefore the number of particles in the 100 metre layer of ocean around the world will be about 3.07 × 10^40 particles. So already we're well into the territory where our number of parallel trials will make life a little bit easier. At this juncture, if we have this many interacting particles, then any reaction outcome that is computed to have a probability of greater than 1/(3.07 × 10^40) is to be expected in the very first round of reactions.
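For those who want to check the arithmetic, here it is as a short Python sketch (same assumptions as above: a 100 metre deep ocean shell, solution at 1 mmol per litre):

import math

# Back-of-envelope count of reacting particles in the top 100 m of ocean,
# treating the whole shell as a solution at 1 mmol per litre.
AVOGADRO = 6.023e23                      # particles per mole (value used above)
R_EARTH = 6_371_000.0                    # mean radius of Earth, metres
R_INNER = R_EARTH - 100.0                # 100 metres down

shell_volume_m3 = (4.0 / 3.0) * math.pi * (R_EARTH**3 - R_INNER**3)
particles_per_litre = AVOGADRO * 1e-3    # 1 mmol per litre
particles_per_m3 = particles_per_litre * 1000.0   # 1000 litres per cubic metre
total_particles = particles_per_m3 * shell_volume_m3

print(f"Shell volume: {shell_volume_m3:.4e} m^3")               # ~5.10e16
print(f"Particles per cubic metre: {particles_per_m3:.3e}")     # ~6.02e23
print(f"Total particles in the shell: {total_particles:.3e}")   # ~3.07e40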

Now, of course, this assumes that the reactions in question are, to use that much abused word beloved of reality denialists, "random" (though their usage of this word tends to be woefully non-rigorous at the best of times). However, chemical reactions are not "random" by any stretch of the imagination (we wouldn't be able to do chemistry if they were!), which means that once we factor this in alongside the fact that a parallel trial involving massive numbers of reacting molecules is taking place, the spurious nature of these probabilistic arguments against evolution rapidly becomes apparent.

The same parallel trials of course take place in reproducing populations of organisms. The notion falsely propagated by reality denialists is that we have to wait for one particular organism to develop one particular mutation, and that this is somehow "improbable". Whereas what we really have to wait for is any one organism among untold millions, or even billions, to develop that mutation, for evolution to have something to work with. If that mutation is considered to have a probability of 1/10^9 per replication, then on average we only have to wait for about 10^9 DNA replications in germ cells to take place before that mutation appears. If our working population already comprises a billion individuals (last time I checked, the world human population had exceeded 6.6 billion), then the appearance of that mutation somewhere in the population within a few generations is all but inevitable.
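To be precise about "all but inevitable": the probability that at least one of N independent replications carries a mutation of per-replication probability p is 1 - (1 - p)^N. A quick sketch:

# Probability that at least one of N independent DNA replications carries a
# mutation whose per-replication probability is p.
def p_at_least_one(p, n):
    return 1.0 - (1.0 - p) ** n

p = 1e-9
for n in (1_000_000_000, 6_600_000_000):
    print(f"N = {n:,}: P(at least one) = {p_at_least_one(p, n):.4f}")
# N = 1,000,000,000: about 0.63;  N = 6,600,000,000: about 0.999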

Now it's time to move on to ...

The "One True Sequence" Fallacy

This fallacy asserts that one, and ONLY one, DNA sequence can code for a protein that performs a specific task. This is usually erected alongside assorted bogus "probability" calculations that purport to demonstrate that evolutionary processes cannot achieve what they plainly do achieve in the real world, but those other probability fallacies will be the subject of other posts. Here I want to destroy the myth that one, and ONLY one, sequence can ever work in a given situation.

Insulin provides an excellent example for my purposes, because insulin is critical to the health and well being of just about every vertebrate organism on the planet. When a vertebrate organism is unable to produce insulin (the well-known condition of diabetes mellitus), then the ability to regulate blood sugar is seriously disrupted, and in the case of Type 1 diabetes mellitus, in which the beta-cells of the Islets of Langerhans in the pancreas are destroyed by an autoimmune reaction, the result is likely to be fatal in the medium to long term due to diabetic nephropathy resulting in renal failure.

Consequently, the insulin molecule is critical to healthy functioning of vertebrate animals. The gene that codes for insulin is well known, and has been mapped in a multiplicity of organisms, including organisms whose entire genomes have been sequenced, ranging from the pufferfish Tetraodon nigroviridis through to Homo sapiens. There is demonstrable variability in insulin molecules (and the genes coding for them) across the entire panoply of vertebrate taxa. Bovine insulin, for example, is not identical to human insulin. I refer everyone to the following gene sequences, all of which have been obtained from publicly searchable online gene databases:

[1] Human insulin gene on Chromosome 11, which is as follows:

atg gcc ctg tgg atg cgc ctc ctg ccc ctg ctg gcg ctg ctg gcc ctc tgg gga cct gac
cca gcc gca gcc ttt gtg aac caa cac ctg tgc ggc tca cac ctg gtg gaa gct ctc tac
cta gtg tgc ggg gaa cga ggc ttc ttc tac aca ccc aag acc cgc cgg gag gca gag gac
ctg cag gtg ggg cag gtg gag ctg ggc ggg ggc cct ggt gca ggc agc ctg cag ccc ttg
gcc ctg gag ggg tcc ctg cag aag cgt ggc att gtg gaa caa tgc tgt acc agc atc tgc
tcc ctc tac cag ctg gag aac tac tgc aac tag

which codes for the following protein sequence (using the standard single letter mnemonics for individual amino acids). I colour coded the sequence in the original presentation of this data, but since this forum doesn't support generation of multi-coloured text, I'll split the relevant orthologous sequence segments onto their own individual numbered lines:

1: MALWMRLLPLLALLALWGPDPAAA
2: FVNQHLCGSHLVEALYLVCGERGFFYTPKT
3: RR
4: EAEDLQVGQVELGGGPGAGSLQPLALEGSLQ
5: KR
6: GIVEQCCTSICSLY
7: QLENYCN

Now, I refer everyone to this data, which is the coding sequence for insulin in the Lowland Gorilla (the differences from the human sequence, a handful of codons, were highlighted in boldface in the original presentation of this data, which this forum does not preserve):

atg gcc ctg tgg atg cgc ctc ctg ccc ctg ctg gcg ctg ctg gcc ctc tgg gga cct gac
cca gcc gcg gcc ttt gtg aac caa cac ctg tgc ggc tcc cac ctg gtg gaa gct ctc tac
cta gtg tgc ggg gaa cga ggc ttc ttc tac aca ccc aag acc cgc cgg gag gca gag gac
ctg cag gtg ggg cag gtg gag ctg ggc ggg ggc cct ggt gca ggc agc ctg cag ccc ttg
gcc ctg gag ggg tcc ctg cag aag cgt ggc atc gtg gaa cag tgc tgt acc agc atc tgc
tcc ctc tac cag ctg gag aac tac tgc aac tag

this codes for the protein sequence:

1: MALWMRLLPLLALLALWGPDPAAA
2: FVNQHLCGSHLVEALYLVCGERGFFYTPKT
3: RR
4: EAEDLQVGQVELGGGPGAGSLQPLALEGSLQ
5: KR
6: GIVEQCCTSICSLY
7: QLENYCN

which so happens to be the same precursor protein. However, Gorillas are closely related to humans. Let's move a little further away, to the domestic cow, Bos taurus (whose sequence is found here):

atg gcc ctg tgg aca cgc ctg cgg ccc ctg ctg gcc ctg ctg gcg ctc tgg ccc ccc ccc
ccg gcc cgc gcc ttc gtc aac cag cat ctg tgt ggc tcc cac ctg gtg gag gcg ctg tac
ctg gtg tgc gga gag cgc ggc ttc ttc tac acg ccc aag gcc cgc cgg gag gtg gag ggc
ccg cag gtg ggg gcg ctg gag ctg gcc gga ggc ccg ggc gcg ggc ggc ctg gag ggg ccc
ccg cag aag cgt ggc atc gtg gag cag tgc tgt gcc agc gtc tgc tcg ctc tac cag ctg
gag aac tac tgt aac tag

Already this is a shorter sequence - 318 nucleotides (106 codons) instead of 333 (111 codons) - so we KNOW we're going to get a different insulin molecule with this species ... which is as follows:

1: MALWTRLRPLLALLALWPPPPARA
2: FVNQHLCGSHLVEALYLVCGERGFFYTPK
3: AR
4: REVEGPQVGALELAGGPGAGGLEGPPQKRGI
5: VE
6: QCCASVCSLY
7: QLENYCN

clearly a different protein, but one which still functions as an insulin precursor and results in a mature insulin molecule in cows, one which differs in exact sequence from that in humans. Indeed, prior to the advent of transgenic bacteria, into which human insulin genes had been transplanted for the purpose of harnessing those bacteria to produce human insulin for medical use, bovine insulin harvested from the pancreases of slaughtered beef cows was used to treat diabetes mellitus in humans. Now, of course, with the advent of transgenically manufactured true human insulin, from a sterile source, bovine insulin is no longer needed, much to the relief of those who are aware of the risk from BSE.
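If anyone wants to check these translations for themselves, here's a minimal Python sketch using the standard genetic code; the sequences are the human and bovine coding sequences quoted above, pasted in verbatim:

# Translate the human and bovine insulin coding sequences quoted above using
# the standard genetic code, and confirm they give different precursor proteins.
BASES = "tcag"
CODONS = [a + b + c for a in BASES for b in BASES for c in BASES]
AMINO_ACIDS = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = dict(zip(CODONS, AMINO_ACIDS))   # '*' marks a stop codon

def translate(cds):
    cds = "".join(cds.split())                 # strip the spaces and newlines
    protein = []
    for i in range(0, len(cds) - 2, 3):
        aa = CODON_TABLE[cds[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

HUMAN = """
atg gcc ctg tgg atg cgc ctc ctg ccc ctg ctg gcg ctg ctg gcc ctc tgg gga cct gac
cca gcc gca gcc ttt gtg aac caa cac ctg tgc ggc tca cac ctg gtg gaa gct ctc tac
cta gtg tgc ggg gaa cga ggc ttc ttc tac aca ccc aag acc cgc cgg gag gca gag gac
ctg cag gtg ggg cag gtg gag ctg ggc ggg ggc cct ggt gca ggc agc ctg cag ccc ttg
gcc ctg gag ggg tcc ctg cag aag cgt ggc att gtg gaa caa tgc tgt acc agc atc tgc
tcc ctc tac cag ctg gag aac tac tgc aac tag
"""

BOVINE = """
atg gcc ctg tgg aca cgc ctg cgg ccc ctg ctg gcc ctg ctg gcg ctc tgg ccc ccc ccc
ccg gcc cgc gcc ttc gtc aac cag cat ctg tgt ggc tcc cac ctg gtg gag gcg ctg tac
ctg gtg tgc gga gag cgc ggc ttc ttc tac acg ccc aag gcc cgc cgg gag gtg gag ggc
ccg cag gtg ggg gcg ctg gag ctg gcc gga ggc ccg ggc gcg ggc ggc ctg gag ggg ccc
ccg cag aag cgt ggc atc gtg gag cag tgc tgt gcc agc gtc tgc tcg ctc tac cag ctg
gag aac tac tgt aac tag
"""

human_protein, bovine_protein = translate(HUMAN), translate(BOVINE)
print("Human  precursor:", human_protein)     # 110 residues, MALWMRLL...QLENYCN
print("Bovine precursor:", bovine_protein)    # 105 residues, MALWTRLR...QLENYCN
print("Identical?", human_protein == bovine_protein)   # False: two working insulins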

Moving on again, we have a different coding sequence from the tropical zebrafish, Danio rerio (sequence to be found here), which is as follows:

atg gca gtg tgg ctt cag gct ggt gct ctg ttg gtc ctg ttg gtc gtg tcc agt gta agc
act aac cca ggc aca ccg cag cac ctg tgt gga tct cat ctg gtc gat gcc ctt tat ctg
gtc tgt ggc cca aca ggc ttc ttc tac aac ccc aag aga gac gtt gag ccc ctt ctg ggt
ttc ctt cct cct aaa tct gcc cag gaa act gag gtg gct gac ttt gca ttt aaa gat cat
gcc gag ctg ata agg aag aga ggc att gta gag cag tgc tgc cac aaa ccc tgc agc atc
ttt gag ctg cag aac tac tgt aac tga

And this sequence codes for the following protein:

1: MAVWLQAGALLVLLVVSSVSTNPG
2: TPQHLCGSHLVDALYLVCGPTFTGFFYNP
3: KR
4: DVEPLLGFLPPKSAQETEVADFAFKDHAELI
5: RK
6: RGIVEQCCHKPCSI
7: FELQNYCN

so again we have a different insulin precursor protein that is ultimately converted into a different insulin molecule within the Zebra Fish.

I could go on and extract more sequences, but I think the point has already been established, namely that there is a multiplicity of possible insulin molecules in existence, and consequently, the idea that there can only be ONE sequence for a functional protein, even one as critically important to life as insulin, is DEAD FLAT WRONG. Now, if this is true for a protein as crucial to the functioning of vertebrate life as insulin, you can be sure that the same applies to other proteins, including various enzymes, and therefore, whenever the "One True Sequence" fallacy rears its ugly head in various places, the above provides the refutation thereof.

Quite simply, what is important is not one particular sequence, but any sequence that confers function. If there exist a large number of sequences that confer function, then this on its own destroys creationist apologetics involving the "one true sequence" fallacy, even before we delve into more involved scientific rebuttals thereof.

In short, spurious creationist "probability" calculations are rendered null and void, by [1] large numbers of simultaneously participating entities, and [2] large numbers of functionally viable sequences for a given function.

So, next time you see a spurious probability calculation appearing that purports to "disprove evolution", or claims to render a natural origin of life untenable, in order to insert a mythological magic man into the picture, look out for these salient features, namely:

[1] Base assumptions that are either not stated at all (thus conveniently preventing independent verification), or base assumptions that fail to withstand critical scrutiny;

[2] The Serial Trials Fallacy above, and;

[3] The "One True Sequence" Fallacy above.

Next, it's apposite to destroy once and for all the fatuous "fine tuning" excrement that creationists masturbate over so much, courtesy of the following two scientific papers:

Stars In Other Universes: Stellar Structure With Different Fundamental Constants by Fred C. Adams, Journal of Cosmology and Astroparticle Physics, Issue 08, 1-29 (August 2008) [Full paper downloadable from here]

A Universe Without Weak Interactions by Roni Harnik, Graham D. Kribs and Gilad Perez, Physical Review D, 74(3): 035006 (2006) [Full paper downloadable from here]

Let's take a look at these papers, shall we?

First, the Adams paper ...

Abstract. Motivated by the possible existence of other universes, with possible variations in the laws of physics, this paper explores the parameter space of fundamental constants that allows for the existence of stars. To make this problem tractable, we develop a semi-analytical stellar structure model that allows for physical understanding of these stars with unconventional parameters, as well as a means to survey the relevant parameter space. In this work, the most important quantities that determine stellar properties—and are allowed to vary—are the gravitational constant G, the fine structure constant α, and a composite parameter C that determines nuclear reaction rates. Working within this model, we delineate the portion of parameter space that allows for the existence of stars. Our main finding is that a sizable fraction of the parameter space (roughly one fourth) provides the values necessary for stellar objects to operate through sustained nuclear fusion. As a result, the set of parameters necessary to support stars are not particularly rare. In addition, we briefly consider the possibility that unconventional stars (e.g., black holes, dark matter stars) play the role filled by stars in our universe and constrain the allowed parameter space.

In more detail, we have:

1. Introduction

The current picture of inflationary cosmology allows for, and even predicts, the existence of an infinite number of space-time regions sometimes called pocket universes [1, 2, 3]. In many scenarios, these separate universes could potentially have different versions of the laws of physics, e.g., different values for the fundamental constants of nature. Motivated by this possibility, this paper considers the question of whether or not these hypothetical universes can support stars, i.e., long-lived hydrostatically supported stellar bodies that generate energy through (generalized) nuclear processes. Toward this end, this paper develops a simplified stellar model that allows for an exploration of stellar structure with different values of the fundamental parameters that determine stellar properties. We then use this model to delineate the parameter space that allows for the existence of stars.

A little later on, Adams states thus:

Unlike many previous efforts, this paper constrains only the particular constants of nature that determine the characteristics of stars. Furthermore, as shown below, stellar structure depends on relatively few constants, some of them composite, rather than on large numbers of more fundamental parameters. More specifically, the most important quantities that directly determine stellar structure are the gravitational constant G, the fine structure constant α, and a composite parameter C that determines nuclear reaction rates. This latter parameter thus depends in a complicated manner on the strong and weak nuclear forces, as well as the particle masses. We thus perform our analysis in terms of this (α,G, C) parameter space.

Then we start to get into the meat of the paper:

As is well known, and as we re-derive below, both the minimum stellar mass and the maximum stellar mass have the same dependence on fundamental constants that carry dimensions [11]. More specifically, both the minimum and maximum mass can be written in terms of the fundamental stellar mass scale M0 defined according to

M0 = α(G)^(−3/2) m(p) = (ħc/G)^(3/2) m(p)^(−2) ≅ 3.7 × 10^33 g ≅ 1.85 M(sun), (1)

where α(G) is the gravitational fine structure constant,

α(G) = Gm(p)^2/ħc ≅ 6 × 10^-39 (2)

where m(p) is the mass of the proton. As expected, the mass scale can be written as a dimensionless quantity (α(G)^−3/2) times the proton mass; the approximate value of the exponent (-3/2) in this relation is derived below. The mass scale M0 determines the allowed range of masses in any universe.

In conventional star formation, our Galaxy (and others) produces stars with masses in the approximate range 0.08 ≤ M(*)/M(sun) ≤ 100, which corresponds to the range 0.04 ≤ M(*)/M0 ≤ 50. One of the key questions of star formation theory is to understand, in detail, how and why galaxies produce a particular spectrum of stellar masses (the stellar initial mass function, or IMF) over this range [12]. Given the relative rarity of high mass stars, the vast majority of the stellar population lies within a factor of ~ 10 of the fundamental mass scale M0. For completeness we note that the star formation process does not involve thermonuclear fusion, so that the mass scale of the hydrogen burning limit (at 0.08 M(sun)) does not enter into the process. As a result, many objects with somewhat smaller masses – brown dwarfs – are also produced. One of the objectives of this paper is to understand how the range of possible stellar masses changes with differing values of the fundamental constants of nature.
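As a quick sanity check on equations (1) and (2) (my own back-of-envelope sketch, using rounded textbook values for the constants, not anything taken from the paper):

# Numerical check of equations (1) and (2): the gravitational fine structure
# constant alpha_G and the stellar mass scale M0 = alpha_G**(-3/2) * m_p.
G = 6.674e-11         # m^3 kg^-1 s^-2
HBAR = 1.055e-34      # J s
C = 2.998e8           # m s^-1
M_PROTON = 1.673e-27  # kg
M_SUN = 1.989e30      # kg

alpha_G = G * M_PROTON**2 / (HBAR * C)
M0 = alpha_G ** (-1.5) * M_PROTON

print(f"alpha_G ~ {alpha_G:.1e}")                                 # ~5.9e-39
print(f"M0 ~ {M0 * 1000:.1e} g ~ {M0 / M_SUN:.2f} solar masses")  # ~3.7e33 g, ~1.85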

The author then moves on to this:

2. Stellar Structure Models

In general, the construction of stellar structure models requires the specification and solution of four coupled differential equations, i.e., force balance (hydrostatic equilibrium), conservation of mass, heat transport, and energy generation. This set of equations is augmented by an equation of state, the form of the stellar opacity, and the nuclear reaction rates. In this section we construct a polytropic model of stellar structure. The goal is to make the model detailed enough to capture the essential physics and simple enough to allow (mostly) analytic results, which in turn show how different values of the fundamental constants affect the results. Throughout this treatment, we will begin with standard results from stellar structure theory [11, 13, 14] and generalize to allow for different stellar input parameters.

2.1. Hydrostatic Equilibrium Structures

In this case, we will use a polytropic equation of state and thereby replace the force balance and mass conservation equations with the Lane-Emden equation. The equation of state thus takes the form

P=Kρ^Γ where Γ = 1+1/n (3)

where the second equation defines the polytropic index n. Note that low mass stars and degenerate stars have polytropic index n = 3/2, whereas high mass stars, with substantial radiation pressure in their interiors, have index n → 3. As a result, the index is slowly varying over the range of possible stellar masses. Following standard methods [15, 11, 13, 14], we define

ξ ≡ r/R, ρ = ρ(c)f^n, and

R^2 = KΓ/((Γ-1)4πGρ(c)^(2-Γ)) (4)

so that the dimensionless equation for the hydrostatic structure of the star becomes

d/dξ(ξ^2 df/dξ) + ξ^2f^n = 0 (5)

Here, the parameter ρ(c) is the central density (in physical units) so that f(ξ)^n is the dimensionless density distribution. For a given polytropic index n (or a given Γ), equation (5) thus specifies the density profile up to the constants ρ(c) and R. Note that once the density is determined, the pressure is specified via the equation of state (3). Further, in the stellar regime, the star obeys the ideal gas law so that temperature is given by T = P/(Rρ), with R = k/⟨m⟩ (where ⟨m⟩ is the mean mass of the particles making up the stellar material); the function f(ξ) thus represents the dimensionless temperature profile of the star. Integration of equation (5) outwards, subject to the boundary conditions f = 1 and df/dξ = 0 at ξ = 0, then determines the position of the outer boundary of the star, i.e., the value ξ(*) where f(ξ(*)) = 0. As a result, the stellar radius is given by:

R(*) = Rξ(*) (6)

The physical structure of the star is thus specified up to the constants ρ(c) and R. These parameters are not independent for a given stellar mass; instead, they are related via the constraint

M(*) = 4πR^3ρ(c) ∫[0 to ξ(*)] ξ^2 f(ξ)^n dξ ≡ 4πR^3ρ(c)μ(0) (7)

where the final equality defines the dimensionless quantity μ(0), which is of order unity, and depends only on the polytropic index n.

Quite a lot of work there just in the first couple of pages, I think you'll agree.
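For anyone who wants to see equation (5) doing real work, here's a minimal numerical integration of it (my own quick sketch, not taken from the paper); it recovers the standard surface values ξ(*) ≈ 3.65 for n = 3/2 and ξ(*) ≈ 6.90 for n = 3:

# Minimal RK4 integration of the Lane-Emden equation (5):
#   d/dxi (xi^2 df/dxi) + xi^2 f^n = 0,  f(0) = 1, f'(0) = 0,
# written as a first-order system: f' = g, g' = -f^n - 2g/xi.
def lane_emden_surface(n, h=1e-4):
    xi = 1e-6                              # start just off the centre to avoid 1/xi
    f, g = 1.0 - xi**2 / 6.0, -xi / 3.0    # series expansion near the centre
    def deriv(xi, f, g):
        return g, -(max(f, 0.0) ** n) - 2.0 * g / xi
    while f > 0.0:
        k1f, k1g = deriv(xi, f, g)
        k2f, k2g = deriv(xi + h/2, f + h/2*k1f, g + h/2*k1g)
        k3f, k3g = deriv(xi + h/2, f + h/2*k2f, g + h/2*k2g)
        k4f, k4g = deriv(xi + h, f + h*k3f, g + h*k3g)
        f += h/6 * (k1f + 2*k2f + 2*k3f + k4f)
        g += h/6 * (k1g + 2*k2g + 2*k3g + k4g)
        xi += h
    return xi                              # first zero of f: the stellar surface

for n in (1.5, 3.0):
    print(f"n = {n}: xi_surface ~ {lane_emden_surface(n):.3f}")
# expected: ~3.654 for n = 3/2 (low mass stars), ~6.897 for n = 3 (high mass stars)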

I'll skip the section on nuclear reactions, primarily because I'm not a trained nuclear physicist, and some of the material presented is beyond my scope to comment upon in depth, but anyone who is a trained nuclear physicist, is hereby invited to comment in some detail upon this. :)

Moving on, we have:

3. Constraints on the Existence of Stars

Using the stellar structure model developed in the previous section, we now explore the range of possible stellar masses in universes with varying values of the stellar parameters. First, we find the minimum stellar mass required for a star to overcome quantum mechanical degeneracy pressure (§3.1) and then find the maximum stellar mass as limited by radiation pressure (§3.2). These two limits are then combined to find the allowed range of stellar masses, which can vanish when the required nuclear burning temperatures become too high (§3.3). Another constraint on stellar parameters arises from the requirement that stable nuclear burning configurations exist (§3.4). We delineate (in §3.5) the range of parameters for which these two considerations provide the limiting constraints on stellar masses and then find the region of parameter space that allows the existence of stars. Finally, we consider the constraints implied by the Eddington luminosity (§3.6) and show that they are comparable to those considered in the previous subsections.

Moving on past a lot of calculations, we have this:

Figure 5 shows the resulting allowed region of parameter space for the existence of stars. Here we are working in the (α,G) plane, where we scale the parameters by their values in our universe, and the results are presented on a logarithmic scale. For a given nuclear burning constant C, Figure 5 shows the portion of the plane that allows for stars to successfully achieve sustained nuclear reactions. Curves are given for three values of C: the value for p-p burning in our universe (solid curve), 100 times larger than this value (dashed curve), and 100 times smaller (dotted curve). The region of the diagram that allows for the existence of stars is the area below the curves.

Figure 5 provides an assessment of how “fine-tuned” the stellar parameters must be in order to support the existence of stars. First we note that our universe, with its location in this parameter space marked by the open triangle, does not lie near the boundary between universes with stars and those without. Specifically, the values of α, G, and/or C can change by more than two orders of magnitude in any direction (and by larger factors in some directions) and still allow for stars to function. This finding can be stated another way: Within the parameter space shown, which spans 10 orders of magnitude in both α and G, about one fourth of the space supports the existence of stars.

Much of the rest of the paper consists of technical discussions on the effects of various other parameters, such as the Eddington luminosity (which determines the maximum rate of energy liberation of the star, and sets a lower bound upon stellar lifetime), along with some in-depth discussion on unconventional stellar objects in other universes, and the likely physics affecting these. We can move directly on to the conclusion, viz:

5. Conclusion

In this paper, we have developed a simple stellar structure model (§2) to explore the possibility that stars can exist in universes with different values for the fundamental parameters that determine stellar properties. This paper focuses on the parameter space given by the variables (G, α, C), i.e., the gravitational constant, the fine structure constant, and a composite parameter that determines nuclear fusion rates. The main result of this work is a determination of the region of this parameter space for which bona fide stars can exist (§3). Roughly one fourth of this parameter space allows for the existence of “ordinary” stars (see Figure 5). In this sense, we conclude that universes with stars are not especially rare (contrary to previous claims), even if the fundamental constants can vary substantially in other regions of space-time (e.g., other pocket universes in the multiverse). Another way to view this result is to note that the variables (G, α, C) can vary by orders of magnitude from their measured values and still allow for the existence of stars.

That on its own drives a tank battalion through the idea that the universe is "fine-tuned". However, it's even better than that, viz:

For universes where no nuclear reactions are possible, we have shown that unconventional stellar objects can fill the role played by stars in our universe, i.e., the role of generating energy (§4). For example, if the gravitational constant G and the fine structure constant α are smaller than their usual values, black holes can provide viable energy sources (Figure 6). In fact, all universes can support the existence of stars, provided that the definition of a star is interpreted broadly. For example, degenerate stellar objects, such as white dwarfs and neutron stars, are supported by degeneracy pressure, which requires only that quantum mechanics is operational. Although such stars do not experience thermonuclear fusion, they often have energy sources, including dark matter capture and annihilation, residual cooling, pycnonuclear reactions, and proton decay. Dark matter particles can also (in principle) form degenerate stellar objects (see §4).

Of course, there are some caveats with respect to this, but in the main, these are of a fairly technical nature, and do not adversely affect the above conclusions. In short, vary the so-called "fine-tuned" constants over a wide range of values, and stars of the sort we observe in our universe remain possible across roughly a quarter of that parameter space.

But it gets even better than this. Now it's time for the Harnik et al paper:

Abstract

A universe without weak interactions is constructed that undergoes big-bang nucleosynthesis, matter domination, structure formation, and star formation. The stars in this universe are able to burn for billions of years, synthesize elements up to iron, and undergo supernova explosions, dispersing heavy elements into the interstellar medium. These definitive claims are supported by a detailed analysis where this hypothetical "Weakless Universe" is matched to our Universe by simultaneously adjusting Standard Model and cosmological parameters. For instance, chemistry and nuclear physics are essentially unchanged. The apparent habitability of the Weakless Universe suggests that the anthropic principle does not determine the scale of electroweak breaking, or even require that it be smaller than the Planck scale, so long as technically natural parameters may be suitably adjusted. Whether the multi-parameter adjustment is realized or probable is dependent on the ultraviolet completion, such as the string landscape. Considering a similar analysis for the cosmological constant, however, we argue that no adjustments of other parameters are able to allow the cosmological constant to raise up even remotely close to the Planck scale while obtaining macroscopic structure. The fine-tuning problems associated with the electroweak breaking scale and the cosmological constant therefore appear to be qualitatively different from the perspective of obtaining a habitable universe.

As part of the preamble, I point everyone to this:

We do not engage in discussion of the likelihood of doing simultaneous tunings of parameters nor the outcome of statistical ensembles of parameters. These questions are left up to the ultraviolet completion, such as the string landscape, which is outside of the scope of effective field theory. Instead, we are interested in "running the universe forward" from a time after inflation and baryogenesis through billions of years of evolution. We will exploit the knowledge of our Universe as far as possible, adjusting Standard Model and cosmological parameters so that the relevant micro- and macro-physical outcomes match as closely as possible. We emphasize that this is really a practical matter, not one of principle, since any significant relaxation of the "follow our Universe" program would be faced with horrendously complicated calculations. Put another way, there is probably a wide range of habitable universes with parameters and structures that look nothing like our Universe. For us, it is enough to find one habitable Weakless Universe about which we can make the most concrete statements, hence matching to our Universe as closely as possible.

We define a habitable universe as one having big-bang nucleosynthesis, large-scale structure, star formation, stellar burning through fusion for long lifetimes (billions of years) and plausible means to generate and disperse heavy elements into the interstellar medium. As a consequence, we will demand ordinary chemistry and basic nuclear physics be largely unchanged from our Universe so that matching to our Universe is as straightforward as possible. We are not aware of insurmountable obstacles extending our analysis to planet formation, habitable planets, and the generation of carbon-based life. Nevertheless, these questions are beyond the scope of this paper, and we do not consider them further.

Finally, we should emphasize from the outset that this paper represents a purely theoretical exercise. There are no (even in principle) experimental predictions. The purpose of our paper is to provide a specific, concrete counter-example to anthropic selection of a small electroweak breaking scale.

This paper involves a fairly in-depth knowledge of particle physics, and the physics of various intricate quantum actions, so I'll spare everyone the spectacle of my trying to comment on a field about which I do not possess sufficient knowledge (a lesson that some supernaturalists would do well to learn). However, I shall point everyone to one paragraph of interest:

At BBN [Big Bang Nucleosynthesis] the visible matter can be described by two parameters, the ratio of the visible baryon abundance to photons η(b) and the ratio of protons to neutron abundance. We take

η(b) ≅ 4 × 10^(-12) ≅ 10^(-2) η(b, our Universe) (7)

where we emphasize that this corresponds to just the baryon asymmetry in protons and neutrons, not hyperons. This is taken to be about two orders of magnitude smaller than in our Universe. This judicious parameter adjustment allows the Weakless Universe to have a hydrogen-to-helium ratio the same as our Universe without strong sensitivity to the ratio of the proton to neutron abundance. Hence, the first galaxies and stars are formed of roughly the same material as in our Universe. Moreover, the lower helium abundance that results from the lower baryon asymmetry occurs simultaneous with a substantially increased abundance of deuterium. The much increased deuterium abundance allows stars in the Weakless Universe to ignite through proton-deuterium fusion, explained in detail in Sec. 12.

Later on, after a large amount of technical discussions relating to particle generation and relative abundances thereof, we have this:

7 Chemistry

In the post-BBN phase of the Universe, the main players in our Universe are electromagnetism and gravity. Both of these forces are unchanged in the Weakless Universe. The elemental abundances of the Weakless Universe have also been matched to our Universe (and are chemically indistinguishable, aside from the irrelevant tiny abundance of lithium). Chemistry in the Weakless Universe is virtually indistinguishable from that of our Universe. The only differences are the higher fraction of deuterium as hydrogen and the absence of atomic parity-violating interactions.

Maintaining this similarity between the Universes relies on having only one stable charged lepton: the electron. The presence of muons or taus (with masses as observed in our Universe) would allow for various exotic chemical properties and nuclear reaction rates. For instance, the Coulomb barrier would be far smaller for atoms with orbiting muons or taus, allowing dense-packed molecules and fusion at extremely low temperatures. Though this could be an interesting universe [4] it does not match our Universe and so we choose to remove muons and taus from the Weakless Universe.

Other more benign effects occur if the heavier quarks (c; b; t) were present in the Weakless Universe. Given that individual quark number is conserved, the lightest baryons carrying a heavy quark are stable. This means in addition to protons, neutrons, and Λ[0] hyperons, there would be several new stable baryons including Λ[+](c) , Λ[0](b) and Λ[+](t). [5] If these exotic stable baryons were in significant abundance in the Weakless Universe, there would be numerous anomalously heavy isotopes of hydrogen (and heavier elements). These are not obviously an impediment to successful BBN or star formation, but it would change the details of stellar nucleosynthesis reactions in ways that we are not able to easily calculate. Again, following our program of matching to our Universe as closely as possible, we eliminate this problem by insisting that the Weakless Universe is devoid of these heavy quarks.

After an in-depth discussion of such topics as matter domination, density perturbations, dark matter candidates, the stability of light and heavy elements (along with exotic isotopes of the former), star formation, stellar nucleosynthesis, stellar lifetimes, supernovae, and the population of the interstellar medium with heavy elements, we reach this:

15 A Natural Value of the Cosmological Constant?

We have shown that even with electroweak breaking at the Planck scale, a habitable universe can result so long as we are able to adjust technically natural parameters. It would be interesting to perform the same procedure for the cosmological constant (CC). Here our goal is far more modest than in our previous discussion: we simply wish to examine whether large scale structure and complex macroscopic systems can result if the cosmological constant is pushed to the Planck scale while we freely adjust other parameters. Performing a thorough analysis of this question is beyond the scope of this work. We will instead simply sketch some of the issues by examining simplified toy models. We find an upper bound on the CC from two qualitative requirements: that density perturbations grow, and that complex macroscopic systems consist of a large number of particles.

In order for structure to be formed in our Universe, a period of matter domination is vital to allow for linear growth of density perturbations. Matter domination is cut off by CC domination, which is just the Weinberg bound on the anthropic size of the cosmological constant. Naively we could raise δρ/ρ up to order one, so that the bound on the cosmological constant relaxes to

ρ(Λ) ≅ T(eq)^4. (37)

But even this modest gain (about 10 orders of magnitude out of 120) is much too optimistic. For Universes qualitatively similar to ours, Refs. [4, 6] found other astrophysical constraints limit the size of density perturbations (and place constraints on other parameters, such as the baryon density) such that the largest relaxation of the CC is closer to about 3 orders of magnitude out of 120. Hence, even varying multiple cosmological parameters simultaneously, this appears to be as far as one can go without radically changing the Standard Model itself.

However, as the authors themselves state earlier, the cosmological constant is an unnatural parameter of the relevant effective field theories, and is therefore possibly itself a derived parameter, arising from as yet uninvestigated more fundamental natural parameters.

On to the conclusion:

16 Discussion

In this paper we have constructed a Universe without weak interactions that undergoes BBN, matter domination, structure formation, star formation, long periods of stellar burning, stellar nucleosynthesis up to iron, star destruction by supernovae, and dispersal of heavy elements into the interstellar medium. These properties of the Weakless Universe were shown by a detailed analysis that matched to our Universe as closely as possible by arbitrarily adjusting Standard Model and cosmological parameters. The Weakless Universe therefore provides a simple explicit counter-example to anthropic selection of a small electroweak breaking scale, so long as we are allowed to simultaneously adjust technically natural parameters relative to our observed Universe. As an aside, we are unaware of any obstruction to obtain a "partial Weakless Universe" in which v < v(bar) < M(Pl) while allowing analogous adjustments of technically natural parameters.

This hypothetical universe is a counter-example to anthropic selection of the electroweak scale in the context of an effective field theory, where we are free to imagine arbitrary adjustments in technically natural parameters. An ultraviolet completion, however, may or may not permit these parameter adjustments, and as a result the Weakless Universe may or may not be "accessible". This requires detailed knowledge of the ensemble of universes that are predicted. String theory indeed appears to contain a huge number of vacua, a "landscape" [27, 28, 29, 30], in which some parameters adjust from one vacuum to another. Furthermore, only a specific set of parameters vary in the field theory landscapes considered in [31]. In its most celebrated form, the string landscape provides a potential anthropic rationale for the size of the cosmological constant [3]. However, reliable model-independent correlations between the size of the CC and other parameters is lacking, and so we have no way to know yet whether the variation of parameters discussed here is realized on the string landscape.

So, the authors provide a demonstration that it is possible for one of the four fundamental forces of the universe to be omitted, namely the weak nuclear force, and still produce a habitable universe. I think that more or less wraps it up for "fine tuning", don't you?

Let's make this short and sweet, so that even a pedlar of apologetics can understand this. It's possible to vary so-called "fine-tuned" constants over a wide range, and still produce working stars such as the ones we observe today, with practically identical nucleosynthesis of chemical elements in place, and it is ALSO possible to REMOVE THE WEAK NUCLEAR FORCE ALTOGETHER FROM THE UNIVERSE, and still produce a habitable universe differing only from our own in subtle details.

Game Over for "fine tuning".

Old man shouts at clouds's picture
@Cali

@Cali

To introduce a coarse note ....I fucking love this and read every word....

Thanks again Cali

Delaware's picture
@ Calilasseia

@ Calilasseia

I did a web search for Penrose odds of universe existing.
The first site I saw that had the 10 to 10 to 123 (or whatever it is) was the one I referenced.
That is all; I was not saying it was a proper scientific website.

Doesn't the four digit DNA look like code, or language?

Nyarlathotep's picture
Jo - How much more

Jo - How much more complicated is the universe and life than 1 million heads?

What do you mean by complicated? For starters: how complicated is a result of 1 million heads (presumably out of 1 million fair coin tosses)?

At the very least can you tell us the dimensions of this property: "complicated"? For example: the dimensions of speed are distance * time^(-1).

Delaware's picture
@ Nyarlathotep

@ Nyarlathotep

Another good point. Am I ever going to pass this course? :-)

Maybe a better word would be unlikely, or greater odds.

Nyarlathotep's picture
Jo - Maybe a better word

Jo - Maybe a better word would be unlikely, or greater odds.

Well that is a lot better. I'll go further and suggest using the word probability; and will attempt to rephrase your question using that:
How much more probable is getting 1 million heads on 1 million fair coin tosses, when compared to the probability of the universe and life existing? Calculating the first part is easy: it's (1/2)^(1,000,000), which is about 10^(-301,000). No one knows how to calculate the 2nd part, but the suspicious calculation you cited was 10^(-123).

So if we accept that calculation about the universe (which I don't), then the universe and life existing is roughly 10^(300,900) times more probable than your coin toss scenario. But remember you had it the other way around, making your suggestion the largest error I've seen in a while (but I've seen worse). But at least you have the courage to allow your beliefs to be examined, instead of protecting them with undefined terminology, which is what typically happens.
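If you want to check that arithmetic yourself, here it is in logarithms (a quick Python sketch; the 10^(-123) figure is the one you cited, not one I endorse):

import math

# Compare the two probabilities via their base-10 logarithms, since the raw
# numbers are far too small to write out.
log10_coins = 1_000_000 * math.log10(0.5)   # (1/2)^1,000,000
log10_universe = -123                       # the cited (and dubious) figure

print(f"log10 P(1 million heads)    = {log10_coins:,.0f}")    # about -301,030
print(f"log10 P(universe, as cited) = {log10_universe}")
print(f"Ratio: about 10^{log10_universe - log10_coins:,.0f} in favour of the universe")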

Calilasseia's picture
Doesn't the four digit DNA

Doesn't the four digit DNA look like code, or language?

What it looks like and what it is may be two entirely separate entities. Indeed, I've already dealt with canards about "information" and spurious assertions about "codes" in this previous post. Which includes as a bonus a nice selection of scientific papers covering the evolvability of the "genetic code". NOTE: that post was written before I learned how to use the board tags for text formatting, so, to make your life a little easier, I'll now reproduce that post here, with the formatting reinstated as I originally intended, along with some minor abridgements. Hold on to your hat for a very undulating roller coaster ride ...

Creationist Canards About "Information" And "Codes"

Information is nothing more than the observational data available with respect to the current state of a system of interest. That is IT. Two rigorous mathematical treatments of information, namely Shannon's treatment and the treatment by Kolmogorov & Chaitin, are predicated on this fundamental notion. Indeed, when Claude Shannon wrote his seminal 1948 paper on information transmission, he explicitly removed ascribed meaning from that treatment, because ascribed meaning was wholly irrelevant to the analysis of the behaviour of information in a real world system. Allow me to present a nice example from the world of computer programming. Be aware that multiple conventions exist with respect to the writing of hexadecimal numbers: in the world of Intel x86 programming, hexadecimal numbers are represented using a 'H' postfix, as in 2A00H, whilst in the world of Motorola CPU programming, the convention is a '$' sign prefix, e.g., $2A00. Both will appear below in the exposition that follows.

Now, with that preamble over, allow me to present to you a string of data (written as hexadecimal bytes):

81 16 00 2A FF 00

Now, to an 8086 processor, this string of bytes codes for a single 8086 machine language instruction, namely:

ADC [2A00H], 00FFH

which adds the immediate value 00FFH (255 decimal), together with the current state of the carry flag (ADC stands for "add with carry"), to whatever value is currently stored at the 16-bit memory location addressed by DS:2A00H.

Note that 8086 processors, and their later relations, use segmented memory addressing in what's known as "real mode". The actual memory address referenced by an 80x86 processor in 'real mode', is the address given by adding the offset (here 2A00H), to 16 times the contents of a segment register (this was how these processors accessed a 1 MB address space in the earliest days of the processor family). Four such segment registers exist - DS, the data segment register, CS, the code segment register, SS, the stack segment register, and ES, the extra segment register. For the majority of data access instructions, DS is implied as the default segment register, unless the base address is of the form [BP+disp], in which case the default segment register is SS. A complication that makes complete description of the above instruction all the more tedious.
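As an aside, the segment:offset arithmetic is trivial to write down (a quick sketch of my own; the DS value of 1000H below is purely an illustrative assumption):

# Real-mode 80x86 address arithmetic: physical address = segment * 16 + offset.
def real_mode_address(segment, offset):
    return ((segment << 4) + offset) & 0xFFFFF   # 20-bit address bus on the 8086

# If DS happened to hold 1000H, the ADC above would touch physical address 12A00H.
print(f"{real_mode_address(0x1000, 0x2A00):05X}")   # -> 12A00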

However, on an older, 8-bit 6502 processor, the above sequence codes for multiple instructions, namely the following sequence:

STA ($16,X)
BRK
ROL A
??? (incomplete)

The first of these instructions takes the operand $16, adds to it the contents of the X register (8-bit addition only), and uses that computed address (call this N) as an index into the first 256 bytes of memory (page zero). The contents of address N and address N+1 together then constitute a 16-bit pointer into another location in memory, and the contents of the accumulator A are stored at the location that pointer addresses (STA stands for STore Accumulator). The second instruction, BRK, is a breakpoint instruction, and performs a complex sequence of operations. First, it takes the current value of the program counter (PC), which is now pointing at the BRK instruction, adds 2 to that value, and pushes it onto the stack (2 bytes are therefore pushed). It then pushes the contents of the processor status register P. Then, it loads the contents of the memory locations $FFFE and $FFFF (the top 2 locations in the 6502 address space) into the program counter and continues execution from there. The top end of memory in a 6502 system typically consists of ROM, and the hard-coded value stored in locations $FFFE/$FFFF is typically a vector to a breakpoint debugging routine in ROM, but that's an implementation dependent feature, and the exact contents of $FFFE/$FFFF vary accordingly from system to system. Decoded linearly, the next byte, $2A, is ROL A, which rotates the contents of the accumulator one bit position left through the carry flag, and the final two bytes, $FF $00, are the start of an instruction that isn't even part of the documented 6502 instruction set, and that in any case needs a further operand byte to complete.

To make matters even more interesting, the bytes also have meaning to a Motorola 6809 processor, viz:

CMPA #$16
NEG <$2A
STU $00??

The first instruction is "compare accumulator A with the value $16 (22 decimal)". This performs an implicit subtraction of the operand $16 from the current contents of accumulator A, sets the condition codes (CC) according to whether the result is positive, zero or negative (and also sets other bits allowing more intricate comparisons to be made), but discards the actual result of the subtraction. The next instruction, NEG <$2A, uses direct page addressing: it takes the contents of the memory location whose address is formed from the direct page register (DP) as the high byte and $2A as the low byte, and negates it (so that a value of +8 becomes -8 and vice versa, assuming 2's complement storage). The final instruction is incomplete, hence the ?? in the operand, because the $FF opcode is STU with extended addressing, which stores the 16-bit U stack pointer at a memory address specified by the two bytes following the opcode. Only the first of those bytes ($00, the high byte of the address) is present in our 6-byte stream, so whatever byte follows our stream will become the low byte of the address operand for this STU instruction.

Now, that's ONE stream of bytes, which has THREE different meanings for three different processors. Therefore ascribing meaning to the byte stream as part of the process of analysing transmission of the data is erroneous. Meaning only becomes important once the data has been transmitted and received, and the receiver decides to put that data to use. If we have three different computers receiving this 6-byte stream from appropriate sources, then the Shannon information content of each byte stream is identical, but our three different computers will ascribe totally different meanings to the byte stream, if the bytes are regarded as part of a program instruction sequence. An 8086-based computer will regard the byte stream as a single ADC instruction, the 6502-based computer will regard it as a STA, BRK, ROL sequence followed by the start of an undefined instruction, and the 6809-based computer will regard it as a CMPA, NEG, STU sequence (and the latter will demand one more byte to be transmitted in order to complete the last instruction).
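
To make that last point concrete, here's a minimal sketch (Python, purely illustrative) that measures the empirical Shannon entropy of the byte stream; note that nothing in the calculation consults what the bytes mean to any of the three processors:

from collections import Counter
from math import log2

stream = bytes.fromhex("8116002AFF00")

def entropy_bits_per_byte(data):
    # Empirical Shannon entropy of the byte frequency distribution.
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * log2(c / n) for c in counts.values())

# The same value results whichever of the three machines receives the stream.
print(entropy_bits_per_byte(stream))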

Consequently, ascribed meaning is wholly irrelevant to the rigorous treatment of information. Creationists routinely introduce the error of assuming a priori that "information" and "ascribed meaning" are synonymous, which the above example refutes wholesale (along with thousands of others that could be posted if I had the time). Of course, creationists conflate information with ascribed meaning deliberately, because they seek to expound the view that information is a magic entity, and therefore requires an invisible magic man in order to come into existence. This is complete rot, as the Shannon and Kolmogorov/Chaitin analyses of information demonstrate readily, not to mention Turing's large body of work with respect to information. All that matters, at bottom, is that the entities and interactions applicable to a given system of interest produce different results when applied to different states of that system. Information sensu stricto, namely the observational data available with respect to the current state of a system, only becomes "meaningful" when different states lead to different outcomes during appropriate interactions applicable to the system, and the only "meaning" that matters, at bottom, is what outcomes result from those different system states, which in the case of the computer data above, differs from system to system.
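
The Kolmogorov/Chaitin treatment admits a similarly meaning-free illustration. True algorithmic complexity is uncomputable, but the length of a compressed description is a commonly used stand-in, as this sketch (Python, purely illustrative) shows:

import random
import zlib

# Crude stand-in for Kolmogorov/Chaitin complexity: the length of a compressed
# description. Highly regular data compresses well; random data barely at all.
# Neither measurement asks what the bytes "mean".
regular = b"\x2A" * 1000
scrambled = random.Random(0).randbytes(1000)

print(len(zlib.compress(regular)))    # a short description suffices
print(len(zlib.compress(scrambled)))  # roughly incompressible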

Plus, Marshall erects the bogus argument that DNA is a "code". DNA is simply an organic molecule that is capable of existing in a large number of states, each of which results in a different outcome with respect to the chemical interactions that the molecule takes part in. Because it can exist in a large number of states, because those states are all associated with specific, systematic interactions (such as the production of a particular protein after transcription), and because those states are coupled to those systematic and well-defined interactions in a largely one-to-one manner (for the time being, I'll leave to one side complications such as selenocysteine, which were afterthoughts grafted onto the original system), they can be treated in an information-theoretic manner as if they constituted a "code", because doing so simplifies our understanding of those systematic interactions, and facilitates further detailed analysis of that system. That, again, is IT. The idea that DNA constitutes a "code" intrinsically is merely a baseless creationist assertion resulting from deliberate apologetic misrepresentation of the code analogy. That misrepresentation is itself then subject to rampant discursive misuse, because the argument erected consists of:

[1] DNA is a code (unsupported baseless assertion);

[2] All codes are produced by "intelligence" (deliberately omitting the fact that the only "intelligence" we have evidence of that produces codes is human intelligence);

[3] Therefore an "intelligence" produced DNA (the inference being that this "intelligence" is supernatural, which doesn't even arise as a corollary from [2] when one factors in the omitted detail, that the only "intelligence" we have evidence for as a code producer is human, and therefore natural, intelligence).

This argument is fatuous as it stands, even without factoring in extra scientific knowledge that has been acquired in relatively recent times, but when we do factor in this knowledge, it becomes absurd to the Nth degree. That scientific knowledge consists of at least twenty-three (as of 2011, when I last updated the list - there ARE more now) scientific papers demonstrating that the "genetic code" is itself an EVOLVABLE ENTITY. Those papers are:

[1] A Co-Evolution Theory Of The Genetic Code by J. Tze-Fei Wong, Proceedings of the National Academy of Sciences of the USA, 72(5): 1909-1912 (May 1975)

[2] A Mechanism For The Association Of Amino Acids With Their Codons And The Origin Of The Genetic Code by Shelley D. Copley, Eric Smith & Harold J. Morowitz, Proceedings of the National Academy of Sciences of the USA, 102(12): 4442-4447 (22nd March 2005)

[3] An Expanded Genetic Code With A Functional Quadruplet Codon by J. Christopher Anderson, Ning Wu, Stephen W. Santoro, Vishva Lakshman, David S. King & Peter G. Schultz, Proceedings of the National Academy of Sciences of the USA, 101(20): 7566-7571 (18th May 2004)

[4] Collective Evolution And The Genetic Code by Kalin Vetsigian, Carl Woese and Nigel Goldenfeld, Proceedings of the National Academy of Sciences of the USA, 103(28): 10696-10701 (11th July 2006)

[5] Emergence Of A Code In The Polymerization Of Amino Acids Along RNA Templates by Jean Lehmann, Michael Cibils & Albert Libchaber, PLoS One, 4(6): e5773 (3rd June 2009) DOI:10.1371/journal.pone.0005773

[6] Encoding Multiple Unnatural Amino Acids Via Evolution Of A Quadruplet Decoding Ribosome by Heinz Neumann, Kaihang Wang, Lloyd Davis, Maria Garcia-Alai & Jason W. Chin, Nature, 464: 441-444 (18th March 2010)

[7] Evolution And Multilevel Optimisation Of The Genetic Code by Tobias Bollenbach, Kalin Vetsigian & Roy Kishony, Genome Research (Cold Spring Harbor Laboratory Press), 17: 401-404 (2007)

[8] Evolution Of Amino Acid Frequencies In Proteins Over Deep Time: Inferred Order Of Introduction Of Amino Acids Into The Genetic Code by Dawn J. Brooks, Jacques R. Fresco, Arthur M. Lesk & Mona Singh, Molecular Biology and Evolution, 19(10): 1645-1655 (2002)

[9] Evolution Of The Aminoacyl-tRNA Synthetases And The Origin Of The Genetic Code by R. Wetzel, Journal of Molecular Evolution, 40: 545-550 (1995)

[10] Evolution Of The Genetic Code: Partial Optimization Of A Random Code For Robustness To Translation Error In A Rugged Fitness Landscape by Artem S. Novozhilov, Yuri I. Wolf & Eugene V. Koonin, Biology Direct, 2: 24 (23rd October 2007) DOI:10.1186/1745-6150-2-24

[11] Exceptional Error Minimization In Putative Primordial Genetic Codes by Artem S. Novozhilov & Eugene V. Koonin, Biology Direct, 4(1): 44 (2009)

[12] Expanding The Genetic Code Of Escherichia coli by Lei Wang, Ansgar Brock, Brad Herberich & Peter G. Schultz, Science, 292: 498-500 (20th April 2001)

[13] Experimental Rugged Fitness Landscape In Protein Sequence Space by Yuuki Hayashi, Takuyo Aita, Hitoshi Toyota, Yuzuru Husimi, Itaru Urabe & Tetsuya Yomo, PLoS One, 1(1): e96 (2006) DOI:10.1371/journal.pone.0000096

[14] Importance Of Compartment Formation For A Self-Encoding System by Tomoaki Matsuura, Muneyoshi Yamaguchi, Elizabeth P. Ko-Mitamura, Yasufumi Shima, Itaru Urabe & Tetsuya Yomo, Proceedings of the National Academy of Sciences of the USA, 99(11): 7514-7517 (28th May 2002)

[15] On The Origin Of The Genetic Code: Signatures Of Its Primordial Complementarity In tRNAs And Aminoacyl-tRNA Synthetases by S. N. Rodin & A. S. Rodin, Heredity, 100: 341-355 (5th March 2008)

[16] Origin And Evolution Of The Genetic Code: The Universal Enigma by Eugene V. Koonin & Artem S. Novozhilov, IUBMB Life, 61(2): 99-111 (February 2009) (Also available at arXiv)

[17] Protein Evolution With An Expanded Genetic Code by Chang C. Liu, Antha V. Mack, Meng-Lin Tsao, Jeremy H. Mills, Hyun Soo Lee, Hyeryun Choe, Michael Farzan, Peter G. Schultz & Vaughn V. Smider, Proceedings of the National Academy of Sciences of the USA, 105(46): 17688-17693 (18th November 2008)

[18] Protein Stability Promotes Evolvability by Jesse D. Bloom, Sy T. Labthavikul, Christopher R. Otey & Frances H. Arnold, Proceedings of the National Academy of Sciences of the USA, 103(15): 5869-5874 (11th April 2006)

[19] Reassigning Cysteine In The Genetic Code Of Escherichia coli by Volker Döring and Philippe Marlière, Genetics, 150: 543-551 (October 1998)

[20] Recent Evidence For Evolution Of The Genetic Code by Syozo Osawa, Thomas H. Jukes, Kimitsuna Watanabe & Akira Muto, Microbiological Reviews, 56(1): 229-264 (March 1992)

[21] Rewiring The Keyboard: Evolvability Of The Genetic Code by Robin D. Knight, Stephen J. Freeland & Laura F. Landweber, Nature Reviews Genetics, 2: 41-58 (January 2001)

[22] Thawing The Frozen Accident by C. W. Carter Jr., Heredity, 100: 339-340 (13th February 2008)

[23] A Simple Model Based On Mutation And Selection Explains Trends In Codon And Amino-Acid Usage And GC Composition Within And Across Genomes by Robin D. Knight, Stephen J. Freeland & Laura F. Landweber, Genome Biology, 2(4): research0010.1–0010.13 (22nd March 2001)

This collection of papers is incomplete, as more have been published in the relevant journals since I compiled this list.

So, since we have peer reviewed scientific papers demonstrating that the "genetic code" is itself an evolvable entity, and indeed, since scientists have published experimental work investigating the behaviour of alternative genetic codes arising from this research, the idea that an invisible magic man was needed for this is recrudescently nonsensical.

However, one essential concept needs covering in more detail here, namely, the use of analogy to aid understanding. Scientists generate analogies as a means of summarising interactions and entities that would, if expounded in detail, result in truly frightening levels of verbosity. Analogies are constructed for two purposes - disseminating understanding of a system of interest, and brevity: they are conceptual tools we press into service to make sense of intricate systems of entities and interactions. Those analogies are NOT the systems in question, an elementary concept that is frequently discarded in a duplicitous manner by pedlars of creationist apologetics, who frequently deploy improper conflations not to enlighten, but to obfuscate in pursuit of an agenda. That concept is summarised succinctly as "the map is not the terrain" (a phrase donated to me by an acquaintance with a particularly keen eye for such matters).

Indeed, thanks to Turing and his successors in the relevant fields, we can see that all systems of interaction that can be determined by observation to obey well-defined rules can be modelled by a suitably constructed Turing machine - this is, indeed, what every simulation program in existence does. A simulation models the behaviour of a system of interest, by applying the well-defined rules determined to be in operation therein, and generating appropriate output, so that [1] the correlation with observational reality can be checked, and [2] we can investigate, within a "sandbox" of sorts, what is likely to happen if that system is taken into regions of operation that would be impractical or dangerous to take the real system into. I'm reminded at this juncture that Turing's seminal discovery can be summarised as follows: "every process in the universe can be reduced to a meaningless string of symbols", just as Gödel's Incompleteness Theorem can be summarised as "every idea in the universe can be reduced to a meaningless string of symbols", which is what he did to number theory in order to demonstrate said incompleteness. :)

It should come as no surprise that chemistry, a discipline whose entities obey well-defined rules of interaction, to the point where chemists have been able to investigate syntheses and reactions by the million, is itself amenable to such modelling, and as a corollary, amenable to the construction of numerous analogies to facilitate understanding of those interactions. DNA, as an organic molecule, falls within this remit admirably. Not least because particular subunits have their own well-defined interactions, which, as a corollary, are modellable and subject to representation by analogy. The so-called "genetic code" is simply another one of those analogies, and, courtesy of the above papers (along with MANY others), has itself been demonstrated to be an entity subject to evolutionary processes. Once again, it's testable natural processes all the way down, resulting in system state changes in collections of relevant entities. That is IT. We don't need to introduce superfluous mythological entities to understand any of this, we simply need to expend diligent effort learning from antecedent biochemists.
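
To see just how mundane the code analogy is in information-handling terms, here's a minimal sketch (Python, purely illustrative, using a deliberately tiny fragment of the standard codon table rather than the full 64-entry affair): the "code" is nothing more than a lookup mapping molecular states (codons) to outcomes (amino acids).

# Tiny fragment of the standard codon table, used purely as a lookup.
CODON_TABLE = {
    "AUG": "Met", "UUU": "Phe", "GAA": "Glu", "UGG": "Trp",
    "UAA": None,  # stop codon
}

def translate(mrna):
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE.get(mrna[i:i + 3])
        if residue is None:   # stop codon, or codon absent from this toy table
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGAAUGGUAA"))   # ['Met', 'Phe', 'Glu', 'Trp']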

Indeed, every time I've seen a creationist try to erect a fake "gotcha" by pointing to some gap in scientific knowledge, it transpires that either [1] the gap ends up being filled quickly by relevant research, or [2] it wasn't a gap in the first place, because extant research had already answered the relevant questions. Furthermore, that research sometimes answers questions that creationists didn't even know existed when the research was being conducted, but that research, in an all too familiar mendacious manner, then becomes co-opted into the apologetic fabrications that creationists mistakenly think enjoy the same imprimatur as real scientific research.

There are, of course, other fallacies relevant to cover here, but space is limited, and it will be apposite to return to those at a later date. But for now, the key concepts to be remembered are:

[1] Information is NOT a magic entity. It is simply the data extant with respect to the current state of a system of interest.

[2] Ascribed meaning is also not a magic entity. It is simply the set of subsequent interactions that are set in motion, when the current state of a system of interest is modified by other systems of interest.

[3] That pithy phrase, "the map is not the terrain", is suitably illuminative with respect to the above.

[4] All of the above come into play the moment any well-defined rules of interaction exist describing the behaviour of a system of interest.

Indeed, that is, at bottom, what science does - it examines systems of interest, determines what entities and interactions are present therein, and what well-defined rules describe the behaviour thereof. And that brings me to the other concept to take note of here - science is a DEscriptive enterprise, NOT a PREscriptive enterprise. Science works as well as it does, because it pays attention to observational data, including when said data tells us we need to revise our view of a system of interest, and as a consequence, DEscribes what happens, instead of attempting to PREscribe what happens. That distinction is important, because it results in the emergence of a vast canyon separating science and religion.

Religion purports to declare by decree that the universe and its contents operate in a given manner, regardless of how frequently observational reality laughs at the pretension inherent therein, whilst science lets the data determine what is being said. Another massive difference is that religion attempts to pretend that its blind assertions constitute The Truth™, unswerving and unbending forever, regardless of how often reality says otherwise, whilst science simply says "this is our best current model, and so far, it works well enough to allow us to do the following when we use it", and remains open to changing that model when the data tells us change is needed. The power and flexibility arising from letting the data do the talking is one of science's greatest gifts, and one we should be openly celebrating. Not least because it also tells us, in no uncertain terms, which ideas are wrong.

And that's the beauty of a genuine scientific hypothesis. When constructed, a genuine scientific hypothesis is prepared to be wrong. It's prepared to be given short shrift by the data, once the experiments are conducted. A genuine scientific hypothesis results in predictions about the behaviour of the system of interest, which can be searched for, and if not found, send the authors back to the drawing board. On the other hand, if the data says that said hypothesis is in accord with observation, we've learned something special. Something that can never be learned from mythological assertion, because mythological assertion is upheld by fabricating excuses to hand-wave away inconvenient falsifying data, in a desperate attempt to preserve the so-called "sacred" status of the assertion. The bad news for those who think this is the way forward is that nothing is sacred. EVERY assertion is, by definition, a free-fire zone for every bearer of discursive miniguns to open fire at. If you don't want your precious assertions subject to such attention, don't parade them in public.

I think this covers relevant important bases.

Tin-Man's picture
@Cali

@Cali

Hot damn! That was fuckin' awesome to read. Honestly, you lost me with a small bit of the computer code stuff, but I at least understood the gist of it. But the rest was simply fantastic! Great having you around here... *flourishing bow*...

Calilasseia's picture
You can tell I was an

You can tell I was an assembly language programmer in the late 1980s and early 1990s, can't you? :)

Tin-Man's picture
@Cali Re: "You can tell I

@Cali Re: "You can tell I was an assembly language programmer in the late 1980s and early 1990s, can't you? :)"

...*chuckle*... I'm afraid I'll have to take your word for it. I was a whiz with BASIC language in high school during mid-eighties. Matter of fact, me and a buddy of mine pretty much taught the programming class, because the assigned teacher was more or less tossed into the job without a life jacket. (As I recall, it was only the first or second year the computer course had been available. TRS-80's, if that tells you anything... *grin*...) After high school, though, when the Commodores and Apples started taking over, I ended up losing interest in all of it for some reason I have never really been able to pinpoint.... *shrugging shoulders*... Funny sometimes how life throws little curveballs like that... *chuckle*...

Old man shouts at clouds's picture
@ Cali

@ Cali

Thankyou, I love learning this stuff from you.

Calilasseia's picture
Ok, in that earlier post, I

Ok, in that earlier post, I listed some papers covering the evolution and evolvability of the genetic code. Let's take a look at some of these papers in more detail, shall we? First, the PNAS paper by Wong:

ABSTRACT: The theory is proposed that the structure of the genetic code was determined by the sequence of evolutionary emergence of new amino acids within the primordial biochemical system.

In more detail, the author opens with the following:

The genetic code for protein molecules is a triplet code, consisting of the 64 triplets of the four bases adenine, guanine, cytosine and uracil (1, 2). The cracking of the code was a monumental achievement, but it posed in turn what Monod (3) regards as one of the challenges of biology, namely the "riddle of the code's origin." Crick (4) has discussed two different theories which have been proposed regarding this origin. The Stereochemical Theory postulates that each amino acid became linked to its triplet codons on account of stereochemical reasons, whereas the Frozen Accident Theory postulates that the linkage arose purely by chance. Since neither theory has given a systematic solution to the riddle, the present purpose is to explore a third hypothesis, which postulates that:

The structure of the codon system is primarily an imprint of the prebiotic pathways of amino-acid formation, which remain recognizable in the enzymic pathways of amino-acid biosynthesis. Consequently the evolution of the genetic code can be elucidated on the basis of the precursor-product relationships between amino acids in their biosynthesis. The codon domains of most pairs of precursor-product amino acids should be contiguous, i.e., separated by only the minimum separation of a single base change.

This theory, which may be called a Co-evolution Theory, is readily tested. If many pairs of amino acids which bear a nearest (in terms of the number of enzymic steps) precursor product relationship to each other in a biosynthetic pathway fail to occupy contiguous codon domains, the theory would be untenable. The known precursor-product conversions between amino acids are (5-7):

Glu -> Gln
Glu -> Pro
Glu -> Arg
Asp -> Asn
Asp -> Thr
Asp -> Lys
Gln -> His
Thr -> Ile
Thr -> Met
Ser -> Trp
Ser -> Cys
Val -> Leu
Phe -> Tyr

Of these, only the relationships of Asp to Lys and Thr to Met require some comment. Lys can be synthesized either from Asp via the diaminopimelate pathway (8), or from Glu via the α-aminoadipate pathway (9). Since the former pathway operates in prokaryotes and the latter in eukaryotes, an Asp-Lys pairing has greater prebiotic significance than a Glu-Lys pairing. The biosynthesis of Met can proceed best from Asp, but Thr is nearer to Met in terms of the number of enzymic steps involved (homoserine, which might represent a more primitive form of Thr, is even nearer still to Met). Although Ser and Cys can enter into the Met-biosynthetic pathway subsequent to the entry of Thr, neither Ser nor Cys is a straightforward precursor of Met. Ser is not the only possible contributor of a one-carbon group to Met, and Cys is not the only possible contributor of sulfur (10). α-Transaminations, because of their relative nonspecificity, are not regarded as useful criteria for the tracing of precursor-product relationships. Aside from the above precursor-product relationships, Glu, Asp, and Ala are known to be interconvertible via the tricarboxylate cycle, and Ala, Ser, and Gly via the metabolism of pyruvate, glycerate, and glyoxylate (6).

Evolutionary map of the genetic code

When the codons for various precursor-product amino acids (Table 1) are examined, many of the codon domains of product amino acids are found to be contiguous with those of their respective precursors. The only noncontiguities are those of the Glu-Pro, Glu-Arg, Asp-Thr, and Asp-Lys pairs. If the prebiotic derivations of Gln from Glu, and Asn from Asp, had not occurred at the earliest stages of codon distribution, CAA and CAG could be expected to form part of the early Glu codons, and AAU and AAC part of the early Asp codons. This simple secondary postulate regarding the dicarboxylic amino acids and their amides suffices to remove all noncontiguities between precursors and products. It becomes possible to construct in Fig. 1 a map of the genetic code in which the codon domains of every precursor-product pair of amino acids (connected by single-headed arrows), as well as those of other interconvertible pairs (connected by double-headed arrows) are separated by only a single base change. This confirms the prediction by the Co-evolution Theory that codon distribution is closely related to amino-acid biosynthesis. Furthermore, since the theory suggests that the enzymic pathways of amino-acid biosynthesis largely stemmed from the prebiotic pathways of amino-acid formation, the pathways of this map are regarded as co-evolutionary pathways through which new amino acids were generated within the primordial system, and through which the triplet codons became distributed to finally the 20 amino acids.

MY INSERTED NOTE: pay particular attention to two figures, Fig. 1 and Table 1, from the paper.

Tests for randomness

The correlation between codon distribution and amino-acid biosynthesis indicated in Fig. 1 could arise not only from coevolution, but also in principle from chance. However, the unlikelihood of the latter explanation can be demonstrated in two different ways. First, consider the widespread contiguities between the codons of precursor and product amino acids. For any precursor codon triplets, there will be a other triplets in the genetic code which are contiguous with the group, and b other triplets which are noncontiguous. If a product of this amino acid has n codons, the random probability P that as many as x of these n codons turn out to be contiguous with some precursor codon is determined by the hypergeometric distribution (see paper for the actual formula) ...

The calculated values of P for eight precursor-product pairs are shown in Table 2. Using the method of Fisher (11), the eight corresponding -2 ln P values can be summed to give a χ² value of 45.01 with 16 degrees of freedom; this indicates an aggregate probability of less than 0.0002 that these eight sets of contiguities could have become so numerous by chance. Amongst the eight amino-acid pairs, either Phe-Tyr or Val-Leu may represent sibling products of a common biosynthetic pathway rather than true precursor and product. Their deletion from calculation leaves a χ² value of 27.10 with 12 degrees of freedom, which still points to an aggregate probability of only 0.0075. The potential Glu-Pro, Glu-Arg, Asp-Thr, Asp-Lys, Thr-Met, Ala-Ser-Gly and Glu-Asp-Ala contiguities, plausible but less certain, have not been included in these calculations; their inclusion would lower the aggregate probability even further. Also, there are other ways to perform the statistical analysis, e.g., by taking a pair of codons such as UGU and UGC as one rather than two units in the hypergeometric distribution, but the nonrandom character of the precursor-product contiguities is far too striking to be fundamentally circumventable by statistical methodology.

Secondly, Gln, Pro, and Arg are biosynthetic siblings of the Glu family, and Asn, Thr, and Lys are siblings of the Asp family. Likewise, Cys and Trp are siblings of the Ser family, and Ile and Met are siblings of the Thr family. Of the seven pairs of amino acids in Table 1 that share the first two bases, Ile-Met, Asn-Lys, and Cys-Trp are siblings. His-Gln are precursor-product, and Asp-Glu are either siblings or precursor-product. Only Phe-Leu and Ser-Arg are unrelated pairs. There are 190 possible amino-acid pairs amongst the 20 amino acids, and the four families of siblings generate a total of eight sibling pairs. Accordingly the probability of randomly finding as many as three out of any seven amino-acid pairs to be sibling pairs is only 0.00161 on the basis of Eq. 1 (a = 8, b = 182, n = 7, x = 3). If Ile-Met are not regarded as siblings, this probability would be raised to 0.0224, but then there are also grounds to consider Asp-Glu as siblings of the tricarboxylate cycle, whereupon it would be reverted to 0.00161. In any case the enrichment of siblings amongst amino-acid pairs sharing the same first two bases appears strongly nonrandom, and provides further evidence against a chance origin of the correlation between amino-acid biosynthesis and codon distribution.

The rest of the paper can be read in full by downloading the PDF from here.
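
For anyone who wants to see the flavour of those randomness tests without wading through the algebra, here's a minimal sketch (Python, mine rather than the paper's) of the hypergeometric tail probability being described; fed with the a = 8, b = 182, n = 7, x = 3 values quoted above for the sibling-pair analysis, it should return roughly 0.0016:

from math import comb

def tail_probability(a, b, n, x):
    # Probability of drawing at least x "successes" in n draws, without
    # replacement, from a population of a successes and b failures
    # (the hypergeometric tail used in the randomness tests quoted above).
    total = comb(a + b, n)
    return sum(comb(a, k) * comb(b, n - k) for k in range(x, min(a, n) + 1)) / total

print(tail_probability(8, 182, 7, 3))   # approximately 0.0016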

Moving on, let's look at the Copley et al paper, which can be downloaded from here. This opens as follows:

The genetic code has certain regularities that have resisted mechanistic interpretation. These include strong correlations between the first base of codons and the precursor from which the encoded amino acid is synthesized and between the second base of codons and the hydrophobicity of the encoded amino acid. These regularities are even more striking in a projection of the modern code onto a simpler code consisting of doublet codons encoding a set of simple amino acids. These regularities can be explained if, before the emergence of macromolecules, simple amino acids were synthesized in covalent complexes of dinucleotides with α-keto acids originating from the reductive tricarboxylic acid cycle or reductive acetate pathway. The bases and phosphates of the dinucleotide are proposed to have enhanced the rates of synthetic reactions leading to amino acids in a small-molecule reaction network that preceded the RNA translation apparatus but created an association between amino acids and the first two bases of their codons that was retained when translation emerged later in evolution.

The authors continue thus:

The genetic code has many regularities (1), of which only a subset have explanations in terms of tRNA function (2) or robustness against deleterious effects of mutation (3, 4) or errors in translation (3, 5). There is a strong correlation between the first bases of codons and the biosynthetic pathways of the amino acids they encode (1, 6). Codons beginning with C, A, and U encode amino acids synthesized from α-ketoglutarate (α-KG), oxaloacetate (OAA), and pyruvate, respectively. These correlations are especially striking in light of the structural diversity of amino acids whose codons share a first base. For example, codons for Glu and Pro both begin with C, and those for Cys and Leu begin with U. Codons beginning with G encode amino acids that can be formed by direct reductive amination of a simple α-keto acid. These include glycine, alanine, aspartate, and glutamate, which can be formed by reductive amination of glyoxalate, pyruvate, OAA, and α-KG, respectively. There is also a long-recognized relationship between the hydrophobicity of the amino acid and the second base of its codon (1). Codons having U as the second base are associated with the most hydrophobic amino acids, and those having A as the second base are associated with the most hydrophilic amino acids.

We suggest that both correlations can be explained if, before the emergence of macromolecules, simple amino acids were synthesized from α-keto acid precursors covalently attached to dinucleotides that catalyzed the reactions required to synthesize specific amino acids (see Fig. 1). This is a significant departure from previous theories attempting to explain the regularities in the genetic code (3). The ‘‘stereochemical’’ hypothesis suggests that binding interactions between amino acids and their codons or anticodons dictated the structure of the genetic code (7–10). The ‘‘coevolution’’ hypothesis (6) suggests that the original genetic code specified a small number of simple amino acids, and that, as more complex amino acids were synthesized from these precursors, some codons that initially encoded a precursor were ceded to its more complex products. Finally, the genetic code has been proposed to be simply a ‘‘frozen accident’’ (11).

Recent analysis suggests that the reductive tricarboxylic acid cycle could serve as a network-autocatalytic self-sufficient source for simple α-keto acids, including glyoxalate, pyruvate, OAA, and α-KG, as well as the carbon backbones of sugars and nucleobases (12). α-Keto acids can also be generated from the reductive acetyl CoA pathway (13). Most simple amino acids can be reached from an α-keto acid precursor by a small number of relatively simple chemical transformations, and the synthetic pathway that will be followed is determined within the first three steps. We propose that the positions of functional groups in a dinucleotide–α-keto acid complex determine what reactions can be effectively catalyzed for a given α-keto acid. An example of a series of reactions leading from α-KG to five amino acids, each attached to the first two bases of its codon, is shown in Fig. 2, which can be regarded as a ‘‘decision tree’’ in which the nature of the bases in the dinucleotide determines which types of reactions occur. The pathways proposed follow closely those in extant organisms (14), differing primarily in the timing of the reductive amination leading to the final amino acid. The motivation for this approach is that modern biosynthetic pathways likely emerged by gradual acquisition of enzymes capable of catalyzing reactions that had previously occurred in the absence of macromolecular catalysts. Thus, modern pathways are ‘‘metabolic fossils’’ that provide insight into prebiotic synthetic pathways, although some refinements and permutations are expected to have occurred.

Once again, I'll let everyone read the full paper at leisure, as it's a fairly large and complex one. :)

Moving on, we have the Vetsigian et al paper, which is downloadable from here. This paper opens as follows:

A dynamical theory for the evolution of the genetic code is presented, which accounts for its universality and optimality. The central concept is that a variety of collective, but non-Darwinian, mechanisms likely to be present in early communal life generically lead to refinement and selection of innovation-sharing protocols, such as the genetic code. Our proposal is illustrated by using a simplified computer model and placed within the context of a sequence of transitions that early life may have made, before the emergence of vertical descent.

The authors continue with:

The genetic code could well be optimized to a greater extent than anything else in biology and yet is generally regarded as the biological element least capable of evolving.

There would seem to be four reasons for this paradoxical situation, all of which reflect the reductionist molecular perspective that so shaped biological thought throughout the 20th century. First, the basic explanation of gene expression appears to lie in its evolution, and not primarily in the specific structural or stereochemical considerations that are sufficient to account for gene replication. Second, the problem’s motto, ‘‘genetic code,’’ is a misnomer that makes the codon table the defining issue of gene expression. A satisfactory level of understanding of the gene should provide a unifying account of replication and expression as two sides of the same coin. The genetic code is merely the linkage between these two facets. Thus, and thirdly, the assumption that the code and the decoding mechanism are separate problems, individually solvable, is a reductionist fallacy that serves to deny the fundamental biological nature of the problem. Finally, the evolutionary dynamic that gave rise to translation is undoubtedly non-Darwinian, to most an unthinkable notion that we now need to entertain seriously. These four considerations structure the approach we take in this article.

To this point in time, biologists have seen the universality of the code as either a manifestation of the Doctrine of Common Descent or simply as a ‘‘frozen accident.’’ Viewing universality as following from common descent renders unthinkable the notion explored here that a universal code may be a necessary precondition for common ancestry, indeed even for life as we know it. We will argue in this article [a maturation of the earlier concept of the progenote (1)] that the very fact of the code’s evolvability, together with the details of its internal structure, provides strong clues to the nature of early life, and in particular its essential communal character (2).

Beyond the code’s universality we have very few clues to guide us in trying to understand its evolution and that of the underlying decoding mechanism. The principal ones again are properties of the code itself; specifically, the obvious structure of the codon table. The table possesses (at least) two types of order: synonym order and relatedness order. The first is the relatedness of codons assigned to the same amino acid; the second is the relatedness of codons assigned to related amino acids. Relatedness among the amino acids is context-dependent and in the context of the codon table could a priori reflect almost anything about the amino acids: their various properties, either individually or in combination; the several macromolecular contexts in which they are found, such as protein structure, the translation mechanism, and the evolution of translation; or the pretranslational context of the so-called RNA world. Although we do not know what defines amino acid ‘‘similarity’’ in the case of the code, we do know one particular amino acid measure that seems to express it quite remarkably in the coding context. That measure is amino acid polar requirement (3–5). Although the relatedness order of the code is marginally evident from simple inspection of the codon table (3, 4, 6–8), it is pronounced when the amino acids are represented by their respective polar requirements (4).

A major advance was provided by computer simulation studies (9–14) of the relatedness ordering of the amino acids over the codon table, which showed that the code is indeed relationally ordered and moreover is optimized to near the maximum extent possible. Compared with randomly generated codes, the canonical code is ‘‘one in a million’’ when the relatedness measure is the polar requirement. No other amino acid measure is known to possess this characteristic (14) (in our opinion, the significance of this observation has not been adequately recognized or pursued). These precisely defined relatedness constraints in the codon table were unexpected and still cry out for explanation.

As far as interpretation goes, the optimal aspect of the genetic code is surely a reflection of the last aspect of the coding problem that needs to be brought into consideration: namely, the precision or biological specificity with which translation functions. Precision, along with every aspect of the genetic code, needs to be understood as part of an evolutionary process. We would contend that at early stages in cellular evolution, ambiguous translation was tolerated (there being no alternative) and was an important and essential part of the evolutionary dynamic (see below). What we imply by ambiguity here is inherent in the concept of group codon assignments, where a group of related codons is assigned as a whole to a corresponding group of related amino acids (3). From this flows the concept of a ‘‘statistical protein,’’ wherein a given gene can be translated not into a unique protein but instead into a family of related protein sequences. Note that we do not say that these are an approximation to a perfect translation of the gene, thereby implying that these sequences are in some sense erroneous. Early life did not require a refined level of tolerance, and so there was no need for a perfect translation. Ambiguity is therefore not the same thing as ‘‘error.’’

I'll break off from here, because this paper is very heavy with respect to mathematical content, and some of the relevant expressions are extremely difficult to render in board tags. However, this paper should prove interesting to read. :)

Next, we have the Lehmann et al paper, which can be downloaded from here. This opens as follows:

Abstract: The origin of the genetic code in the context of an RNA world is a major problem in the field of biophysical chemistry. In this paper, we describe how the polymerization of amino acids along RNA templates can be affected by the properties of both molecules. Considering a system without enzymes, in which the tRNAs (the translation adaptors) are not loaded selectively with amino acids, we show that an elementary translation governed by a Michaelis-Menten type of kinetics can follow different polymerization regimes: random polymerization, homopolymerization and coded polymerization. The regime under which the system is running is set by the relative concentrations of the amino acids and the kinetic constants involved. We point out that the coding regime can naturally occur under prebiotic conditions. It generates partially coded proteins through a mechanism which is remarkably robust against non-specific interactions (mismatches) between the adaptors and the RNA template. Features of the genetic code support the existence of this early translation system.

The authors continue with:

Introduction

A major issue about the origin of the genetic system is to understand how coding rules were generated before the appearance of a family of coded enzymes, the aminoacyl-tRNA synthetases. Each of these ~20 different enzymes has a binding pocket specific for one of the 20 encoded amino acids, and also displays an affinity for a particular tRNA, the adaptor for translation [Fig. 1(a)]. These adaptors are characterized by their anticodons, a triplet of base located on a loop. The synthetases establish the code by attaching specific amino acids onto the 3' ends of their corresponding tRNAs, a two-step process called aminoacylation [1]. The first step (activation) involves an ATP, and leads to the formation of a highly reactive intermediate, aa–AMP (aa= amino acid). The second step consists of the transfer of the amino acid from AMP onto the 3' end of the tRNA. Those tRNAs can subsequently participate in the translation of RNA templates, during which codons about to be translated are tested by the anticodons of incoming tRNAs. When anticodon-codon complementarity occurs, an amino acid is added onto the nascent protein through the formation of a new peptide bond [2].

How could a translation system operate in the absence of the synthetases? Recent works have shown that particular RNA stemloops of ~25 bases can self-catalyze the covalent binding of amino acids onto their own 3' ends [3,4]. These RNAs however require aa–AMP as a substrate because they cannot manage the activation step in their present form. In addition, they show little specificity for the amino acids, raising the question of how a code could be generated by them. Some answers will likely be provided by the activation step if possible to implement on these small RNAs. This issue is not examined in the present paper.

Based on an earlier investigation [5], the present analysis shows that the translation process itself can contribute to the establishment of coding rules. Consider an elementary translation system constituted by RNA templates made up of two types of codons {I, II}, tRNAs with anticodons complementary to these codons, and two types of amino acids {1, 2}. Suppose that the tRNAs are not selectively loaded with amino acids (i.e. the rates of loading only depend on the relative concentrations of the amino acids). Our analysis shows that it is possible to observe a coded polymerization. We calculate the probability of codon I being translated by amino acid 1 and the probability of codon II being translated by amino acid 2, the coding regime occurring when both probabilities are simultaneously higher than 0.5. These probabilities are functions of the anticodon-codon association and dissociation rate constants, the amino acids concentrations and their respective kinetic constants of peptide bond formation. One general configuration allows a coding regime to occur: the amino acid with the slow kinetics (i.e. the ‘‘slow’’ amino acid) is more concentrated in solution than the ‘‘fast’’ amino acid. Given two appropriate codons, the competition for the translation of the codon dissociating quickly from its cognate tRNA (i.e. the ‘‘weak’’ codon) is won by the fast amino acid. As for the ‘‘strong’’ codon, for which the amino acid kinetics are equal or higher than the anticodon-codon dissociation rate constant, the higher concentration of the slow amino acid makes it a better competitor in that case. Although other types of polymerization are possible, we show that this coding regime is favored under prebiotic conditions. It is furthermore remarkably robust against anticodon-codon mismatches. We conclude our analysis by showing that this model can naturally be implemented by a system of four codons and four amino acids thought to be a plausible original genetic code.

Next, we have the Brooks et al paper, which can be downloaded in full from here. The authors begin with:

To understand more fully how amino acid composition of proteins has changed over the course of evolution, a method has been developed for estimating the composition of proteins in an ancestral genome. Estimates are based upon the composition of conserved residues in descendant sequences and empirical knowledge of the relative probability of conservation of various amino acids. Simulations are used to model and correct for errors in the estimates. The method was used to infer the amino acid composition of a large protein set in the Last Universal Ancestor (LUA) of all extant species. Relative to the modern protein set, LUA proteins were found to be generally richer in those amino acids that are believed to have been most abundant in the prebiotic environment and poorer in those amino acids that are believed to have been unavailable or scarce. It is proposed that the inferred amino acid composition of proteins in the LUA probably reflects historical events in the establishment of the genetic code.

I'll move quickly on, and cover in slightly more detail the Novozhilov et al (2007) paper, which opens as follows:

Abstract

Background: The standard genetic code table has a distinctly non-random structure, with similar amino acids often encoded by codon series that differ by a single nucleotide substitution, typically, in the third or the first position of the codon. It has been repeatedly argued that this structure of the code results from selective optimization for robustness to translation errors such that translational misreading has the minimal adverse effect. Indeed, it has been shown in several studies that the standard code is more robust than a substantial majority of random codes. However, it remains unclear how much evolution the standard code underwent, what is the level of optimization, and what is the likely starting point.

Results: We explored possible evolutionary trajectories of the genetic code within a limited domain of the vast space of possible codes. Only those codes were analyzed for robustness to translation error that possess the same block structure and the same degree of degeneracy as the standard code. This choice of a small part of the vast space of possible codes is based on the notion that the block structure of the standard code is a consequence of the structure of the complex between the cognate tRNA and the codon in mRNA where the third base of the codon plays a minimum role as a specificity determinant. Within this part of the fitness landscape, a simple evolutionary algorithm, with elementary evolutionary steps comprising swaps of four-codon or two-codon series, was employed to investigate the optimization of codes for the maximum attainable robustness. The properties of the standard code were compared to the properties of four sets of codes, namely, purely random codes, random codes that are more robust than the standard code, and two sets of codes that resulted from optimization of the first two sets. The comparison of these sets of codes with the standard code and its locally optimized version showed that, on average, optimization of random codes yielded evolutionary trajectories that converged at the same level of robustness to translation errors as the optimization path of the standard code; however, the standard code required considerably fewer steps to reach that level than an average random code. When evolution starts from random codes whose fitness is comparable to that of the standard code, they typically reach much higher level of optimization than the standard code, i.e., the standard code is much closer to its local minimum (fitness peak) than most of the random codes with similar levels of robustness. Thus, the standard genetic code appears to be a point on an evolutionary trajectory from a random point (code) about half the way to the summit of the local peak. The fitness landscape of code evolution appears to be extremely rugged, containing numerous peaks with a broad distribution of heights, and the standard code is relatively unremarkable, being located on the slope of a moderate-height peak.

Conclusion: The standard code appears to be the result of partial optimization of a random code for robustness to errors of translation. The reason the code is not fully optimized could be the trade-off between the beneficial effect of increasing robustness to translation errors and the deleterious effect of codon series reassignment that becomes increasingly severe with growing complexity of the evolving system. Thus, evolution of the code can be represented as a combination of adaptation and frozen accident.

Again, this paper involves some heavy mathematics, and a rather involved computer simulation, so I'll jump straight to the discussion and conclusion:

Discussion and Conclusion

In this work, we examined possible evolutionary paths of the genetic code within a restricted domain of the vast parameter space that is, in principle, available for a mapping of 20 amino acids over 64 nucleotide triplets. Specifically, we examined only those codes that possess the same block structure and the same degree of degeneracy as the standard code. It should be noticed, however, that this choice of a small part of the overall, vast code space for further analysis is far from being arbitrary. Indeed, the block structure of the standard code appears to be a direct consequence of the structure of the complex between the cognate tRNA and the codon in mRNA where the third base of the codon plays a minimum role as a specificity determinant. Within this limited – and, presumably, elevated – part of the fitness landscape, we implemented a very simple evolutionary algorithm by taking as an elementary evolutionary step a swap of four-codon or two-codon series. Of course, one has to realize that the model of code's evolution considered here is not necessarily realistic and, technically, should be viewed as a "toy" model. It is conceivable that codon series swaps were not permissible at the stage in the code's evolution when all 20 amino acids have been already recruited. Nevertheless, we believe that the idealized scheme examined here allows for meaningful comparison between the standard code and various classes of random codes.

The evolution of the standard code was compared to the evolution of four sets of codes, namely, purely random codes (r), random codes with robustness greater than that of the standard code (R), and two sets of codes that resulted from optimization of the first two sets (o and O, respectively). With the above caveats, the comparison of these sets of codes with the standard code and its locally optimized version yielded several salient observations that held for both measures of amino acid replacements (the PRS and the Gilis matrix) that we employed.

1. The code fitness landscape is extremely rugged such that almost any random initial point (code) tends to its own local optimum (fitness peak).

2. The standard genetic code shows a level of optimization for robustness to errors of translation that can be achieved easily and exceeded by minimization procedure starting from almost any random code.

3. On average, optimization of random codes yielded evolutionary trajectories that converged at the same level of robustness as the optimization path of the standard code; however, the standard code required considerably fewer steps to reach that level than an average random code.

4. When evolutionary trajectories start from random codes whose fitness is comparable to the fitness of the standard code, they typically reach much higher level of optimization than that achieved by optimization of the standard code as an initial condition, and the same holds true for the minimization percentage. Thus, the standard code is much closer to its local minimum (fitness peak) than most of the random codes with similar levels of robustness (Fig. 9).

5. Principal component analysis of the between amino acids distance vectors indicates that the standard code is very different from the sets r (all random codes) and O (highly optimized codes produced by error cost minimization for random codes that are better than the standard code), and more similar to the codes from o (optimized random codes) and R (the robust subset of random codes). More importantly, the optimized code produced by minimization of the standard code is much closer to the set of optimized random codes (o) than to any other of the analyzed sets of codes.

6. In this fitness landscape, it takes only 15–30 evolutionary steps (codon series swaps) for a typical code to reach the nearest local peak. Notably, the average number of steps that are required for a random code to reach the peak minus the number of steps necessary for the standard code to reach its own peak takes a random code to the same level of robustness as that of the standard code.

Putting all these observations together, we conclude that, in the fitness landscape explored here, the standard genetic code appears to be a point on an evolutionary trajectory from a random point (code) about half the way to the summit of the local peak. Moreover, this peak appears to be rather mediocre, with a huge number of taller peaks existing in the landscape. Of course, it is not known how the code actually evolved but it does seem likely that swapping of codon series was one of the processes involved, at least, at a relatively late stage of code's evolution, when all 20 amino acids have already been recruited. If so, perhaps, the most remarkable thing that we learned, from these modeling exercises, about the standard genetic code is that the null hypothesis on code evolution, namely, that it is a partially optimized random code, could not be rejected. Why did the code's evolution stop where is stopped, i.e., in the middle of the slope of a local fitness peak (Fig. 9), rather than taking it all the way to the summit, especially, as the number of steps required to get there is relatively small? It appears reasonable to view the evolution of the code as a balance of two forces, the positive selection for increasing robustness to errors of translation and the negative selection against any change, i.e., the drive to "freeze an accident". Indeed, codon series swapping is, obviously, a "macromutation" that simultaneously affects all proteins in an organism and would have a deleterious effect that would become increasingly more severe as the complexity of the evolving system increases. This is why, in all likelihood, no such events occurred during advanced stages of life's evolution, i.e., after the cellular organization was established. Conceivably, such an advanced stage in the evolution of life forms was reached before the code reached its local fitness peak, in support of a scenario of code evolution that combines selection for translational robustness with Crick's frozen accident.
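
As a crude cartoon of the kind of procedure being described - and emphatically NOT the authors' actual model, fitness function, or code representation - the following sketch (Python) hill-climbs a random assignment by swapping positions and accepting only those swaps that reduce a toy "error cost", which is the basic shape of the optimisation explored in the paper on the real code:

import random

def cost(code):
    # Toy error cost: penalise large jumps between neighbouring positions,
    # a crude stand-in for sensitivity to single-position misreadings.
    return sum(abs(a - b) for a, b in zip(code, code[1:]))

def hill_climb(code, steps=2000, seed=0):
    rng = random.Random(seed)
    code = list(code)
    for _ in range(steps):
        i, j = rng.sample(range(len(code)), 2)
        trial = code[:]
        trial[i], trial[j] = trial[j], trial[i]   # swap two positions
        if cost(trial) < cost(code):              # accept improvements only
            code = trial
    return code

random_code = random.Random(1).sample(range(20), 20)   # a random starting "code"
optimised = hill_climb(random_code)
print(cost(random_code), "->", cost(optimised))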

Needless to say, the rest of the papers in my above list are freely downloadable via Google Scholar, and also contain much of interest to the serious student of this topic. :)
