A question for theists...
As far as I know, this probability is unknown. So, since you seem to know it and I don't, why don't you tell me how improbable is it (what is the probability)? And how did you come up with that answer (can you show your work)?
-----------------------------------------
It doesn't seem like it is "denial of the obvious", since I have to ask you for it.
Nyarl, I'd bet a year's salary that Jo is simply parroting a religious polemic he doesn't understand, and that when he claims something is improbable, he's using unevidenced rhetoric in the mistaken impression that such things are gut instincts, rather than being driven by scientific evidence and mathematics.
You have to marvel at any intellect that labels the existence of something as highly improbable when it already exists, and all in order to tack on a bronze age creation myth which has zero objective evidence to support it, and at its core makes superstitious claims for magic that have no explanatory powers whatsoever. Then claims this addition makes its existence more probable.
Occam must be turning in his grave.
@ Nyarlathotep
"Jo - Life and the universe existing by some massively improbable happenstance."
It was more of a philosophical statement than a mathematical statement.
It seems more probable to me that God did it than it just happened on its own.
I am interested in what you think of Penrose on a related subject.
I think he stated the probability of the universe existing as 1 to the 10 to the 123 power.
https://evolutionnews.org/2010/04/roger_penrose_on_cosmic_finetu/
Hoyle said something similar.
"Life cannot have had a random beginning … The trouble is that there are about two thousand enzymes, and the chance of obtaining them all in a random trial is only one part in 10^40,000, an outrageously small probability that could not be faced even if the whole universe consisted of organic soup". Fred Hoyle and N. Chandra Wickramasinghe, Evolution from Space (London: J.M. Dent & Sons, 1981)
"Once we see, however, that the probability of life originating at random is so utterly minuscule as to make it absurd, it becomes sensible to think that the favorable properties of physics on which life depends are in every respect deliberate … . It is therefore almost inevitable that our own measure of intelligence must reflect … higher intelligences … even to the limit of God … such a theory is so obvious that one wonders why it is not widely accepted as being self-evident. The reasons are psychological rather than scientific". Fred Hoyle and N. Chandra Wickramasinghe, Evolution from Space (London: J.M. Dent & Sons, 1981), pp. 141, 144, 130
Please go easy on me, I have trouble with that CAPTCHA thing at the end. :-)
That is equivalent to a 100% chance. I'll assume you mean 10^(-123); which would represent a very small probability.
I'm guessing he assumed the independence of physical parameters (sometimes sloppily referred to as constants), and used the naive definition of probability (that all outcomes are equally likely)? If I were you, I wouldn't be hitching my wagon to that.
What you cited from Hoyle is a non-peer reviewed popular publication in opposition to the big bang theory. An interesting side note: Hoyle is the person who coined the term "big bang"; and meant it as a pejorative term (he was making fun of the "big bang model", as opposed to the "steady state model" Hoyle was closely associated with). Again, not something I'd be hitching my wagon to.
How does adding an unevidenced deity, using unfathomable magic from a bronze age superstition decrease those odds exactly?
"Probability theory, a branch of mathematics concerned with the analysis of random phenomena. The outcome of a random event cannot be determined before it occurs, but it may be any one of several possible outcomes. The actual outcome is considered to be determined by chance."
@ Sheldon
If you flip a coin 1 million times and every time it comes up heads, I think it is much more likely that you somehow rigged the outcome than that it happened by chance.
I would believe that you somehow rigged it unless proved otherwise.
I think that is the logical, rational, and wise conclusion.
It is possible to flip a coin and get heads 1 million times in a row.
How long would it take to get 1 million in a row?
How much more complicated is the universe and life than 1 million heads?
@Jo. You are really bad at coming up with analogies. You should probably avoid it in the future.
1. Is it possible to flip a coin a million times and get heads every single time? "YES"
2. How long would it take? (No one knows.)
3. How much more complicated is the universe than one million heads? (How would you determine this? It really does not matter.)
It does not matter how much more complicated the cosmos is. It all happened at least once, regardless of the complexity, and that is a fact. Here we are. Complexity had nothing at all to do with anything. By the way, complexity is not a property of design. Simplicity of function is.
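On the second question, there actually is a known answer for the coin part: for a fair coin, the expected number of flips before you first see a run of k consecutive heads is 2^(k+1) - 2, so a run of one million heads would take on the order of 2^1,000,001 flips. A quick sketch (the simulation size and seed are my own choices, not anything from the thread):

```python
import random

def expected_flips_formula(k):
    # Known closed form for a fair coin: expected flips to first see
    # k heads in a row is 2^(k+1) - 2.
    return 2 ** (k + 1) - 2

def simulate_flips_until_run(k, rng):
    # Flip a fair coin until k consecutive heads appear; count the flips.
    flips = 0
    run = 0
    while run < k:
        flips += 1
        run = run + 1 if rng.random() < 0.5 else 0
    return flips

rng = random.Random(1)
trials = 20_000
mean = sum(simulate_flips_until_run(3, rng) for _ in range(trials)) / trials
print(expected_flips_formula(3), round(mean, 2))  # formula says 14; simulation lands near 14
```

For k = 3 the formula gives 14 flips on average, and the simulation agrees; for k = 1,000,000 the formula's answer dwarfs the age of the universe, which is exactly why the "possible versus how long" distinction matters.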
@Nyarl
Jo has clearly read some religious apologist claiming the universe's existence by natural processes is wildly improbable, and is parroting it here.
He doesn't seem to understand that the existence of the universe and natural phenomena are two things we can objectively evidence, whereas an unevidenced deity using unevidenced and inexplicable magic are two things for which no objective evidence can be demonstrated.
I think it's fair to say he doesn't understand the science that religious apologetics is misrepresenting here. However, he might look at every major news network and ask himself why they are not emblazoned with the news that God's existence has been confirmed by physicists and logic.
Right, even the form a calculation (of the probability of the universe existing in its current state) would take isn't clear. Sometimes it is presented along the lines of: what is the probability of an astronomer finding themselves in a universe that is capable of supporting astronomers? It seems the answer to that question is 1 (100%). Now I'd be lying if I said that didn't raise my skeptic alarm a little, but it seems at least as good as any other method I've seen suggested.
/e Oftentimes, the hard part is phrasing the question the right way, with everything falling into place quickly once you hit on the right question. That does not seem to have happened yet.
Actually we know nature is a source of information, as information is ubiquitous in nature. So it's not a deduction; it's pure assumption by religious apologists. We know information exists, and we know natural phenomena exist; one assumes you don't disagree?
Now in violation of Occam's razor, you are adding an unevidenced deity using inexplicable magic, and solely because this satisfies your a priori religious belief.
How are the odds on this universe's existence improving exactly, when Occam's razor flatly refutes your assumption that adds things you can neither evidence nor explain?
@ Sheldon
"Jo "From there I deduce, that we only know of one source of information and that is intelligence.""
I don't think that is a quote from me.
When did I say that?
@ Sheldon
I have not seen your answer to this quote you attribute to me.
Please note that I did not take this opportunity to accuse you of lying or misrepresenting.
Would you do the same for me?
Referring to Jo's post above ...
[1] The "evolution news" website isn't a proper scientific website. It's a religious apologetics website pushing creationism. It's not a reliable source of information, because it treats science as a branch of apologetics, a duplicitous practice that should be shunned by anyone possessing a proper regard for the rules of discourse.
[2] Others have already pointed out the dubious provenance of Hoyle's statements, given that he was writing in opposition to the idea that the universe is expanding, despite a large body of observational data pointing to this.
[3] I've been seeing spurious "probability" calculations from pedlars of creationist apologetics for over a decade, and they are precisely that - spurious. In the case of the origin of life, the spurious "probability" calculations from creationists invariably involve the commission of two blatant fallacies - the serial trials fallacy and the "one true sequence" fallacy. I shall now deal with both of these at length.
The Serial Trials Fallacy
Typically, what happens in the world of creationist apologetics, is that a probability calculation is constructed, usually on the basis of assumptions that are either left unstated altogether (conveniently preventing independent verification of their validity), or if they are stated, they usually fail to survive intense critical scrutiny. However, even if we allow these assumptions to remain unchallenged, the appearance of the Serial Trials fallacy means that destruction of the validity of the spurious probability calculation is easy even without resorting to the effort of destroying those other assumptions.
Basically, the Serial Trials Fallacy consists of assuming that only one participant in an interacting system is performing the necessary task at any one time. While this may be true for a lone experimenter engaged in a coin tossing exercise, this is assuredly NOT true of any system involving chemical reactions, which involves untold billions of atoms or molecules at any given moment. This of course has import for abiogenesis as well, against which bad probability calculations and the Serial Trials Fallacy are routinely deployed. I shall concentrate here on abiogenetic scenarios, but what follows applies equally to nuclear DNA replication and any absurd arguments based upon bad probability calculations and the Serial Trials Fallacy that mutations cannot occur in a given amount of time.
The idea is simply this. If you only have one participant in the system in question, and the probability of the desired outcome is small, then it will take a long time for that outcome to appear among the other outcomes. But, if you have billions of participants in the system in question, all acting simultaneously, then even a low-probability outcome will occur a lot more quickly.
For example, if I perform trials that consist of ten coin tosses in a row per trial, and this takes about 20 seconds, then I'm going to take a long time to arrive at 10 heads in a row, because the probability is indeed 1/(2^10) = 1/1024. In accordance with a basic law of probability, namely that if the probability of the event is P, the expected number of serial trials required will be 1/P, I shall need to conduct 1,024 serial trials to obtain 10 heads in a row (averaged over the long term of course), and at 1 trial every 20 seconds, this will take me nearly six hours of doing nothing but tossing coins. If, however, I co-opt 1,024 people to perform these trials in parallel, on average one of them should arrive at 10 heads from the very outset. If I manage by some logistical wonder to co-opt the entire population of China to toss coins in this fashion, then with a billion people tossing the coins, we should expect 1,000,000,000/1024, which gives us 976,562 coin tossers who should see 10 heads in a row on their very first trial.
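The arithmetic above is easy to check directly; the numbers below are simply the ones from the example (probability 1/1024 per trial, 20 seconds per trial, a billion parallel tossers):

```python
# Serial vs parallel trials for an event with probability p per trial.
p = 1 / 2**10                  # probability of 10 heads in a row: 1/1024
seconds_per_trial = 20

# Serial case: expected trials = 1/p, so one lone tosser waits on average...
expected_trials = 1 / p
serial_hours = expected_trials * seconds_per_trial / 3600

# Parallel case: with N participants each running one trial simultaneously,
# the expected number of successes in the FIRST round is N * p.
N = 1_000_000_000
expected_successes = N * p

print(round(serial_hours, 1))   # 5.7 hours for the lone tosser
print(int(expected_successes))  # 976562 first-round successes in parallel
```

The moral: a rare outcome that takes one participant hours (or aeons) to hit becomes a routine first-round event once enough participants act at once, which is precisely the point being made about reacting molecules.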
Now given that the number of molecules in any given reaction even in relatively dilute solutions is large (a 1 molar solution contains 6.023 × 10^23 particles of interest per litre of solution, be they atoms, molecules or whatever) then we have scope for some serious participating numbers in terms of parallel trials. Even if we assume, for the sake of argument in a typical prebiotic scenario, that only the top 100 metres of ocean depth is available for parallel trials of this kind (which is a restriction that may prove to be too restrictive once the requisite experimental data are in from various places around the world with respect to this, and of course totally ignores processes around volcanic black smokers in deep ocean waters that could also fuel abiogenetic reactions) and we further assume that the concentration of substances of interest is only of the order of millimoles per litre, then that still leaves us with the following calculation:
[1] Mean radius of Earth = 6,371,000 m, and 100 m down, that radius is 6,370,900 m
[2] Volume of sea water of interest is therefore (4/3)π(R^3 - r^3), which equals 5.1005 × 10^16 cubic metres
1 litre of solution at 1 mmol per litre will contain 6.023 × 10^20 reacting particles of interest, which means that 1 cubic metre of solution will contain 6.023 × 10^23 particles, and therefore the number of particles in the 100 metre layer of ocean around the world will be 3.0721 × 10^40 particles. So already we're well into the territory where our number of parallel trials will make life a little bit easier. At this juncture, if we have this many interacting particles, then any reaction outcome that is computed to have a probability of greater than 1/(3.0721 × 10^40) is expected to occur in the very first round of reactions.
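Those figures are straightforward to verify from the stated inputs (Earth's mean radius, a 100 metre layer, 1 mmol per litre; I use the modern Avogadro value of 6.022 × 10^23):

```python
import math

R = 6_371_000.0            # mean Earth radius in metres, as stated above
r = R - 100.0              # bottom of the 100 metre surface layer
AVOGADRO = 6.022e23        # particles per mole

# Volume of the 100 m spherical shell of ocean, in cubic metres:
shell_volume = (4.0 / 3.0) * math.pi * (R**3 - r**3)

# 1 mmol per litre = 1 mol per cubic metre (1,000 litres per cubic metre),
# so the particle density is one Avogadro number per cubic metre:
particles_per_m3 = AVOGADRO
total_particles = shell_volume * particles_per_m3

print(f"{shell_volume:.4e}")     # ~5.10e16 cubic metres
print(f"{total_particles:.4e}")  # ~3.07e40 particles
```

Any outcome with probability much better than one in 3 × 10^40 per "trial" is therefore expected in the first round of simultaneous reactions, before a second round even begins.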
Now, of course, this assumes that the reactions in question are, to use that much abused word by reality denialists, "random" (though their usage of this word tends to be woefully non-rigorous at the best of times). However, chemical reactions are not "random" by any stretch of the imagination (we wouldn't be able to do chemistry if they were!), which means that once we factor that into the picture alongside the fact that a parallel trial involving massive numbers of reacting molecules is taking place, the spurious nature of these probabilistic arguments against evolution rapidly becomes apparent.
The same parallel trials of course take place in reproducing populations of organisms. The notion falsely propagated by reality denialists is that we have to wait for one particular organism to develop one particular mutation, and that this is somehow "improbable". Whereas what we really have to wait for is any one organism among untold millions, or even billions, to develop that mutation, for evolution to have something to work with. If that mutation is considered to have a probability of 1/10^9 per replication, then on average we only have to wait for 10^9 DNA replications in germ cells to take place before that mutation happens. If our working population of organisms already comprises 1 billion individuals (last time I checked, the world human population had exceeded 6.6 billion), then that mutation turning up somewhere very quickly is all but inevitable.
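The "any one organism will do" point can be made precise: the chance that at least one of n independent replications carries a mutation with per-replication probability p is 1 - (1 - p)^n. With the 1/10^9 figure from the paragraph above, 10^9 replications already give roughly a 63% chance, and each further order of magnitude of replications pushes it to near certainty:

```python
def p_at_least_one(p, n):
    # Chance that AT LEAST ONE of n independent replications carries a
    # mutation whose per-replication probability is p.
    return 1.0 - (1.0 - p) ** n

p = 1e-9                                   # the 1/10^9 figure from the text
for n in (10**9, 10**10, 10**11):
    print(n, round(p_at_least_one(p, n), 5))
```

So over even a handful of generations of a billion-strong population, the mutation's appearance somewhere is effectively guaranteed, which is all evolution requires.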
Now it's time to move on to ...
The "One True Sequence" Fallacy
This fallacy asserts that one, and ONLY one, DNA sequence can code for a protein that performs a specific task. This is usually erected alongside assorted bogus "probability" calculations that purport to demonstrate that evolutionary processes cannot achieve what they plainly do achieve in the real world, but those other probability fallacies will be the subject of other posts. Here I want to destroy the myth that one, and ONLY one, sequence can ever work in a given situation.
Insulin provides an excellent example for my purposes, because insulin is critical to the health and well being of just about every vertebrate organism on the planet. When a vertebrate organism is unable to produce insulin (the well-known condition of diabetes mellitus), then the ability to regulate blood sugar is seriously disrupted, and in the case of Type 1 diabetes mellitus, in which the beta-cells of the Islets of Langerhans in the pancreas are destroyed by an autoimmune reaction, the result is likely to be fatal in the medium to long term due to diabetic nephropathy resulting in renal failure.
Consequently, the insulin molecule is critical to healthy functioning of vertebrate animals. The gene that codes for insulin is well known, and has been mapped in a multiplicity of organisms, including organisms whose entire genomes have been sequenced, ranging from the pufferfish Tetraodon nigroviridis through to Homo sapiens. There is demonstrable variability in insulin molecules (and the genes coding for them) across the entire panoply of vertebrate taxa. Bovine insulin, for example, is not identical to human insulin. I refer everyone to the following gene sequences, all of which have been obtained from publicly searchable online gene databases:
[1] Human insulin gene on Chromosome 11, which is as follows:
atg gcc ctg tgg atg cgc ctc ctg ccc ctg ctg gcg ctg ctg gcc ctc tgg gga cct gac
cca gcc gca gcc ttt gtg aac caa cac ctg tgc ggc tca cac ctg gtg gaa gct ctc tac
cta gtg tgc ggg gaa cga ggc ttc ttc tac aca ccc aag acc cgc cgg gag gca gag gac
ctg cag gtg ggg cag gtg gag ctg ggc ggg ggc cct ggt gca ggc agc ctg cag ccc ttg
gcc ctg gag ggg tcc ctg cag aag cgt ggc att gtg gaa caa tgc tgt acc agc atc tgc
tcc ctc tac cag ctg gag aac tac tgc aac tag
which codes for the following protein sequence (using the standard single letter mnemonics for individual amino acids). I colour coded the sequence in the original presentation of this data, but since this forum doesn't support generation of multi-coloured text, I'll split the relevant orthologous sequence segments onto their own individual numbered lines:
1: MALWMRLLPLLALLALWGPDPAAA
2: FVNQHLCGSHLVEALYLVCGERGFFYTPKT
3: RR
4: EAEDLQVGQVELGGGPGAGSLQPLALEGSLQ
5: KR
6: GIVEQCCTSICSLY
7: QLENYCN
Now, I refer everyone to this data, which is the coding sequence for insulin in the Lowland Gorilla (in the original post the differences from the human sequence were highlighted in boldface):
atg gcc ctg tgg atg cgc ctc ctg ccc ctg ctg gcg ctg ctg gcc ctc tgg gga cct gac
cca gcc gcg gcc ttt gtg aac caa cac ctg tgc ggc tcc cac ctg gtg gaa gct ctc tac
cta gtg tgc ggg gaa cga ggc ttc ttc tac aca ccc aag acc cgc cgg gag gca gag gac
ctg cag gtg ggg cag gtg gag ctg ggc ggg ggc cct ggt gca ggc agc ctg cag ccc ttg
gcc ctg gag ggg tcc ctg cag aag cgt ggc atc gtg gaa cag tgc tgt acc agc atc tgc
tcc ctc tac cag ctg gag aac tac tgc aac tag
this codes for the protein sequence:
1: MALWMRLLPLLALLALWGPDPAAA
2: FVNQHLCGSHLVEALYLVCGERGFFYTPKT
3: RR
4: EAEDLQVGQVELGGGPGAGSLQPLALEGSLQ
5: KR
6: GIVEQCCTSICSLY
7: QLENYCN
which so happens to be the same precursor protein. However, Gorillas are closely related to humans. Let's move a little further away, to the domestic cow, Bos taurus (whose sequence is found here):
atg gcc ctg tgg aca cgc ctg cgg ccc ctg ctg gcc ctg ctg gcg ctc tgg ccc ccc ccc
ccg gcc cgc gcc ttc gtc aac cag cat ctg tgt ggc tcc cac ctg gtg gag gcg ctg tac
ctg gtg tgc gga gag cgc ggc ttc ttc tac acg ccc aag gcc cgc cgg gag gtg gag ggc
ccg cag gtg ggg gcg ctg gag ctg gcc gga ggc ccg ggc gcg ggc ggc ctg gag ggg ccc
ccg cag aag cgt ggc atc gtg gag cag tgc tgt gcc agc gtc tgc tcg ctc tac cag ctg
gag aac tac tgt aac tag
Already this is a shorter sequence - 318 nucleotides instead of 333 - so we KNOW we're going to get a different insulin molecule with this species ... which is as follows:
1: MALWTRLRPLLALLALWPPPPARA
2: FVNQHLCGSHLVEALYLVCGERGFFYTPK
3: AR
4: REVEGPQVGALELAGGPGAGGLEGPPQKRGI
5: VE
6: QCCASVCSLY
7: QLENYCN
clearly a different protein, but one which still functions as an insulin precursor and results in a mature insulin molecule in cows, one which differs in exact sequence from that in humans. Indeed, prior to the advent of transgenic bacteria, into which human insulin genes had been transplanted for the purpose of harnessing those bacteria to produce human insulin for medical use, bovine insulin harvested from the pancreases of slaughtered beef cows was used to treat diabetes mellitus in humans. Now, of course, with the advent of transgenically manufactured true human insulin, from a sterile source, bovine insulin is no longer needed, much to the relief of those who are aware of the risk from BSE.
Moving on again, we have a different coding sequence from the tropical Zebrafish, Danio rerio, (sequence to be found here) which is as follows:
atg gca gtg tgg ctt cag gct ggt gct ctg ttg gtc ctg ttg gtc gtg tcc agt gta agc
act aac cca ggc aca ccg cag cac ctg tgt gga tct cat ctg gtc gat gcc ctt tat ctg
gtc tgt ggc cca aca ggc ttc ttc tac aac ccc aag aga gac gtt gag ccc ctt ctg ggt
ttc ctt cct cct aaa tct gcc cag gaa act gag gtg gct gac ttt gca ttt aaa gat cat
gcc gag ctg ata agg aag aga ggc att gta gag cag tgc tgc cac aaa ccc tgc agc atc
ttt gag ctg cag aac tac tgt aac tga
And this sequence codes for the following protein:
1: MAVWLQAGALLVLLVVSSVSTNPG
2: TPQHLCGSHLVDALYLVCGPTGFFYNP
3: KR
4: DVEPLLGFLPPKSAQETEVADFAFKDHAELI
5: RK
6: RGIVEQCCHKPCSI
7: FELQNYCN
so again we have a different insulin precursor protein that is ultimately converted into a different insulin molecule within the Zebra Fish.
I could go on and extract more sequences, but I think the point has already been established, namely that there are a multiplicity of possible insulin molecules in existence, and consequently, the idea that there can only be ONE sequence for a functional protein, even one as critically important to life as insulin, is DEAD FLAT WRONG. Now, if this is true for a protein as crucial to the functioning of vertebrate life as insulin, you can be sure that the same applies to other proteins, including various enzymes, and therefore, whenever the "One True Sequence" fallacy rears its ugly head in various places, the above provides the refutation thereof.
Quite simply, what is important is not one particular sequence, but any sequence that confers function. If there exist a large number of sequences that confer function, then this on its own destroys creationist apologetics involving the "one true sequence" fallacy, even before we delve into more involved scientific rebuttals thereof.
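The "many sequences, one function" point can even be checked mechanically with the standard genetic code. The sketch below translates the human and gorilla coding sequences quoted above; the two genes differ at four codons yet yield the identical precursor protein (the codon table is the standard one, built in the conventional TCAG ordering):

```python
from itertools import product

BASES = "tcag"
# Standard genetic code in the conventional TCAG ordering; '*' marks stop codons.
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODON_TABLE = {"".join(c): a for c, a in zip(product(BASES, repeat=3), AMINO)}

def translate(dna):
    # Translate a coding DNA sequence codon by codon, stopping at a stop codon.
    dna = "".join(dna.split())
    protein = []
    for i in range(0, len(dna) - 2, 3):
        aa = CODON_TABLE[dna[i:i + 3]]
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

# The two coding sequences quoted earlier in this post:
HUMAN = """
atggccctgtggatgcgcctcctgcccctgctggcgctgctggccctctggggacctgac
ccagccgcagcctttgtgaaccaacacctgtgcggctcacacctggtggaagctctctac
ctagtgtgcggggaacgaggcttcttctacacacccaagacccgccgggaggcagaggac
ctgcaggtggggcaggtggagctgggcgggggccctggtgcaggcagcctgcagcccttg
gccctggaggggtccctgcagaagcgtggcattgtggaacaatgctgtaccagcatctgc
tccctctaccagctggagaactactgcaactag
"""
GORILLA = """
atggccctgtggatgcgcctcctgcccctgctggcgctgctggccctctggggacctgac
ccagccgcggcctttgtgaaccaacacctgtgcggctcccacctggtggaagctctctac
ctagtgtgcggggaacgaggcttcttctacacacccaagacccgccgggaggcagaggac
ctgcaggtggggcaggtggagctgggcgggggccctggtgcaggcagcctgcagcccttg
gccctggaggggtccctgcagaagcgtggcatcgtggaacagtgctgtaccagcatctgc
tccctctaccagctggagaactactgcaactag
"""

human = translate(HUMAN)
gorilla = translate(GORILLA)
print("".join(HUMAN.split()) == "".join(GORILLA.split()))  # False: the genes differ
print(human == gorilla)                                    # True: identical proteins
```

Four synonymous substitutions, zero change in the protein: the map from sequence to function is many-to-one even before we consider the bovine and zebrafish variants, which differ in the protein itself and still work.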
In short, spurious creationist "probability" calculations are rendered null and void, by [1] large numbers of simultaneously participating entities, and [2] large numbers of functionally viable sequences for a given function.
So, next time you see a spurious probability calculation appearing that purports to "disprove evolution", or claims to render a natural origin of life untenable, in order to insert a mythological magic man into the picture, look out for these salient features, namely:
[1] Base assumptions that are either not stated altogether (thus conveniently preventing independent verification) or base assumptions that fail to withstand critical scrutiny;
[2] The Serial Trials Fallacy above, and;
[3] The "One True Sequence" Fallacy above.
Next, it's apposite to destroy once and for all the fatuous "fine tuning" excrement that creationists masturbate over so much, courtesy of the following two scientific papers:
Stars In Other Universes: Stellar Structure With Different Fundamental Constants by Fred C. Adams, Journal of Cosmology and Astroparticle Physics, Issue 08, 1-29 (August 2008) [Full paper downloadable from here]
A Universe Without Weak Interactions by Roni Harnik, Graham D. Kribs and Gilad Perez, Physical Review D, 74(3): 035006 (2006) [Full paper downloadable from here]
Let's take a look at these papers, shall we?
First, the Adams paper ...
In the opening pages, Adams sets out his assumptions and works through the relevant stellar physics, and I quoted several passages from the paper at this point. [The quoted excerpts have not survived in this copy of the post; see the linked full paper for the passages discussed.]
Quite a lot of work there just in the first couple of pages, I think you'll agree.
I'll skip the section on nuclear reactions, primarily because I'm not a trained nuclear physicist, and some of the material presented is beyond my scope to comment upon in depth, but anyone who is a trained nuclear physicist, is hereby invited to comment in some detail upon this. :)
Moving on past a lot of calculations, we come to the author's treatment of how star formation fares as the constants are varied. [Quoted excerpts missing from this copy.]
Much of the rest of the paper consists of technical discussions on the effects of various other parameters, such as the Eddington luminosity (which determines the maximum rate of energy liberation of the star, and sets a lower bound upon stellar lifetime), along with some in-depth discussion on unconventional stellar objects in other universes, and the likely physics affecting these. We can move directly on to the conclusion. [The quoted conclusion is missing from this copy.]
That on its own drives a tank battalion through the idea that the universe is "fine-tuned". However, it's even better than that. [A further quoted excerpt is missing from this copy.]
Of course, there are some caveats with respect to this, but in the main, these are of a fairly technical nature, and do not adversely affect the above conclusions. In short, vary the so-called "fine-tuned" constants over a wide range of values, and star formation of the sort we observe in the universe remains intact within 25% of that parameter space.
But it gets even better than this. Now it's time for the Harnik et al paper:
As part of the preamble, I point everyone to the authors' framing of the problem. [Quoted excerpt missing from this copy.]
This paper involves a fairly in-depth knowledge of particle physics, and the physics of various intricate quantum actions, so I'll spare everyone the spectacle of my trying to comment on a field about which I do not possess sufficient knowledge (a lesson that some supernaturalists would do well to learn). However, I shall point everyone to one paragraph of interest. [Quoted excerpt missing from this copy.]
Later on, after a large amount of technical discussion relating to particle generation and relative abundances thereof, we have another key passage. [Quoted excerpt missing from this copy.]
After an in-depth discussion of such topics as matter domination, density perturbations, dark matter candidates, the stability of light and heavy elements (along with exotic isotopes of the former), star formation, stellar nucleosynthesis, stellar lifetimes, supernovae, and the population of the interstellar medium with heavy elements, we reach another significant result. [Quoted excerpt missing from this copy.]
However, as the authors themselves state earlier, the cosmological constant is an unnatural parameter of the relevant effective field theories, and is therefore possibly itself a derived parameter, arising from as yet uninvestigated more fundamental natural parameters.
On to the conclusion. [The quoted conclusion is missing from this copy.]
So, the authors provide a demonstration that it is possible for one of the four fundamental forces of the universe to be omitted, namely the weak nuclear force, and still produce a habitable universe. I think that more or less wraps it up for "fine tuning", don't you?
Let's make this short and sweet, so that even a pedlar of apologetics can understand this. It's possible to vary so-called "fine-tuned" constants over a wide range, and still produce working stars such as the ones we observe today, with practically identical nucleosynthesis of chemical elements in place, and it is ALSO possible to REMOVE THE WEAK NUCLEAR FORCE ALTOGETHER FROM THE UNIVERSE, and still produce a habitable universe differing only from our own in subtle details.
Game Over for "fine tuning".
@Cali
To introduce a coarse note ....I fucking love this and read every word....
Thanks again Cali
@ Calilasseia
I did a web search for Penrose odds of universe existing.
The first site I saw that had the 10 to 10 to 123 (or whatever it is) was the one I referenced.
That is all, was not saying it was a proper scientific website.
Doesn't the four-letter DNA alphabet look like code, or language?
What do you mean by complicated? For starters: how complicated is a result of 1 million heads (presumably out of 1 million fair coin tosses)?
At the very least can you tell us the dimensions of this property: "complicated"? For example: the dimensions of speed are distance * time^(-1).
@ Nyarlathotep
Another good point. Am I ever going to pass this course? :-)
Maybe a better word would be unlikely, or greater odds.
Well that is a lot better. I'll go further and suggest using the word probability; and will attempt to rephrase your question using that:
How much more probable is getting 1 million heads on 1 million fair coin tosses, when compared to the probability of the universe and life existing? Calculating the first part is easy: it's (1/2)^(1,000,000), which is about 10^(-301,030). No one knows how to calculate the 2nd part, but the suspicious calculation you cited was 10^(-123).
So if we accept that calculation about the universe (which I don't), then the universe and life existing is about 10^300,907 times more probable than your coin toss scenario. But remember you had it the other way around, making your suggestion the largest error I've seen in a while (but I've seen worse). But at least you have the courage to allow your beliefs to be examined, instead of protecting them with undefined terminology, which is what typically happens.
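Numbers this small are best handled as base-10 logarithms; the comparison above takes two lines (the 10^(-123) figure is the disputed one, used here only for the sake of argument):

```python
import math

# log10 of the probability of 1,000,000 heads on 1,000,000 fair tosses:
log_p_coins = 1_000_000 * math.log10(0.5)
# The disputed Penrose-style figure for "the universe existing":
log_p_universe = -123

# How many powers of ten more probable the universe figure is than the coins:
ratio_exponent = log_p_universe - log_p_coins
print(round(log_p_coins))      # -301030
print(round(ratio_exponent))   # 300907
```

Note that working with logarithms sidesteps any underflow: (1/2)^1,000,000 is far below the smallest representable floating point number, but its logarithm is a perfectly ordinary value.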
What it looks like and what it is may be two entirely separate entities. Indeed, I've already dealt with canards about "information" and spurious assertions about "codes" in this previous post. Which includes as a bonus a nice selection of scientific papers covering the evolvability of the "genetic code". NOTE: that post was written before I learned how to use the board tags for text formatting, so, to make your life a little easier, I'll now reproduce that post here, with the formatting reinstated as I originally intended, along with some minor abridgements. Hold on to your hat for a very undulating roller coaster ride ...
Creationist Canards About "Information" And "Codes"
Information is nothing more than the observational data available with respect to the current state of a system of interest. That is IT. Two rigorous mathematical treatments of information, namely Shannon's treatment and the treatment by Kolmogorov & Chaitin, are predicated on this fundamental notion. Indeed, when Claude Shannon wrote his seminal 1948 paper on information transmission, he explicitly removed ascribed meaning from that treatment, because ascribed meaning was wholly irrelevant to the analysis of the behaviour of information in a real world system. Allow me to present a nice example from the world of computer programming. Be aware that multiple conventions exist with respect to the writing of hexadecimal numbers: in the world of Intel x86 programming, hexadecimal numbers are represented using a 'H' postfix, as in 2A00H, whilst in the world of Motorola CPU programming, the convention is a '$' sign prefix, e.g., $2A00. Both will appear below in the exposition that follows.
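Shannon's measure itself can be illustrated in a few lines; the strings below are arbitrary examples of mine, not anything from his paper, but they show that the measure depends only on symbol frequencies, never on ascribed meaning:

```python
import math
from collections import Counter

def shannon_entropy(data):
    # Shannon entropy in bits per symbol, computed purely from symbol
    # frequencies; ascribed meaning plays no part, exactly as in
    # Shannon's 1948 treatment.
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

h_flat = shannon_entropy("aaaaaaaa")   # one symbol only: zero bits per symbol
h_mixed = shannon_entropy("abcdabcd")  # four equally frequent symbols: 2 bits
print(h_flat == 0, h_mixed)            # True 2.0
```

A string of one repeated symbol carries no information per symbol; a string drawing evenly on four symbols carries exactly two bits per symbol, regardless of whether anyone finds either string "meaningful".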
Now, with that preamble over, allow me to present to you a string of data (written as hexadecimal bytes):
81 16 00 2A FF 00
Now, to an 8086 processor, this string of bytes codes for a single 8086 machine language instruction, namely:
ADC [2A00H], 00FFH
which adds the immediate value of 00FFH (255 decimal) to whatever value is currently stored at the 16-bit memory location addressed by DS:2A00H.
Note that 8086 processors, and their later relations, use segmented memory addressing in what's known as "real mode". The actual memory address referenced by an 80x86 processor in 'real mode', is the address given by adding the offset (here 2A00H), to 16 times the contents of a segment register (this was how these processors accessed a 1 MB address space in the earliest days of the processor family). Four such segment registers exist - DS, the data segment register, CS, the code segment register, SS, the stack segment register, and ES, the extra segment register. For the majority of data access instructions, DS is implied as the default segment register, unless the base address is of the form [BP+disp], in which case the default segment register is SS. A complication that makes complete description of the above instruction all the more tedious.
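The segment arithmetic just described is simple enough to sketch; the DS value of 1000H below is an arbitrary example of mine, not anything dictated by the instruction:

```python
def real_mode_address(segment, offset):
    # Real-mode 80x86 physical address: 16 * segment + offset, wrapped to
    # the 20-bit address bus of the original 8086.
    return (segment * 16 + offset) & 0xFFFFF

# With DS = 1000H, the instruction's operand [2A00H] addresses:
print(f"{real_mode_address(0x1000, 0x2A00):05X}")  # 12A00
```

The 20-bit wrap is also why segment:offset pairs near the top of the address space famously wrap around to low memory on the original 8086.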
However, on an older, 8-bit 6502 processor, the above sequence codes for multiple instructions, namely the following sequence:
CLC
ASL ($00,X)
LDX #$FF
BRK
The first of these instructions clears the carry flag in the processor status register P. The second instruction takes the operand $00, adds to it the contents of the X register (8-bit addition only), and uses that computed address (call this N) as an index into the first 256 bytes of memory (page zero). The contents of address N and address N+1 together then constitute a 16-bit pointer into another location in memory. The 8-bit contents of this location are then shifted one bit position left (ASL stands for Arithmetic Shift Left). The third instruction loads the X register with the immediate value $FF (255 decimal). The fourth instruction, BRK, is a breakpoint instruction, and performs a complex sequence of operations. First, it takes the current value of the program counter (PC), which is now pointing at the BRK instruction, adds 2 to that value, and pushes it onto the stack (2 bytes are therefore pushed). It then pushes the contents of the processor status register P. Then, it loads the contents of the memory locations $FFFE and $FFFF (the top 2 locations in the 6502 address space) into the program counter and continues execution from there. The top end of memory in a 6502 system typically consists of ROM, and the hard-coded value stored in locations $FFFE/$FFFF is typically a vector to a breakpoint debugging routine in ROM, but that's an implementation dependent feature, and the exact contents of $FFFE/$FFFF vary accordingly from system to system.
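That BRK sequence is fiddly in prose but compact in code. Here is a toy Python model of just that instruction's behaviour (names such as memory and push are my own illustrative choices, not from any real emulator; the vector value is an arbitrary example):

```python
# Toy model of the 6502 BRK instruction's behaviour, as described above.
# Illustrative sketch only -- not a real emulator.

memory = {0xFFFE: 0x47, 0xFFFF: 0xE0}   # example BRK/IRQ vector -> $E047
S = 0xFD                                 # stack pointer (stack lives in page 1)
PC = 0x2000                              # address of the BRK opcode
P = 0b00100100                           # processor status register

def push(byte):
    global S
    memory[0x0100 + S] = byte & 0xFF
    S = (S - 1) & 0xFF                   # stack grows downward within page 1

# BRK: push PC+2 (high byte first), push P (with the B flag set in the
# pushed copy), then load PC from the vector at $FFFE/$FFFF.
ret = (PC + 2) & 0xFFFF
push(ret >> 8)
push(ret & 0xFF)
push(P | 0b00010000)                     # B flag set in the pushed status
PC = memory[0xFFFE] | (memory[0xFFFF] << 8)

print(hex(PC))   # execution continues at the vectored routine, here $E047
```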
To make matters even more interesting, the bytes also have meaning to a Motorola 6809 processor, viz:
CMPA #$16
NEG $2AFF
NEG ??
The first instruction is "compare accumulator A with the value $16 (22 decimal)". This performs an implicit subtraction of the operand $16 from the current contents of accumulator A, sets the condition codes (CC) according to whether the result is positive, zero or negative (and also sets other bits allowing more intricate comparisons to be made) but discards the actual result of the subtraction. The next instruction, NEG $2AFF, takes the contents of the memory location at address $2AFF (decimal 11,007), and negates it (so that a value of +8 becomes -8 and vice versa, assuming 2's complement storage). The next instruction is incomplete, hence the ?? operand, because the NEG opcode (the 00 byte) needs two following bytes to specify a memory address in order to specify which memory location's contents to negate. So, whatever two bytes follow our 6-byte stream will become the address operand for this NEG instruction.
Now, that's ONE stream of bytes, which has THREE different meanings for three different processors. Therefore ascribing meaning to the byte stream as part of the process of analysing transmission of the data is erroneous. Meaning only becomes important once the data has been transmitted and received, and the receiver decides to put that data to use. If we have three different computers receiving this 6-byte stream from appropriate sources, then the Shannon information content of each bit stream is identical, but our three different computers will ascribe totally different meanings to the byte stream, if they are regarded as part of a program instruction sequence. An 8086-based computer will regard the byte stream as an ADC instruction, the 6502-based computer will regard it as a CLC, ASL, LDX, BRK sequence, and the 6809-based computer will regard it as a CMPA, NEG, NEG sequence (and the latter will demand two more bytes to be transmitted in order to complete the last instruction).
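The claim that the Shannon information content is identical for all three machines can be checked directly. Here is a short Python sketch computing the per-byte Shannon entropy of the stream - the result depends only on byte frequencies, never on which processor will ascribe meaning to them:

```python
from collections import Counter
from math import log2

stream = bytes([0x81, 0x16, 0x00, 0x2A, 0xFF, 0x00])

# Shannon entropy in bits per byte: H = -sum(p * log2(p)) over the observed
# byte frequencies. Ascribed meaning plays no part in the calculation.
counts = Counter(stream)
n = len(stream)
H = -sum((c / n) * log2(c / n) for c in counts.values())

print(round(H, 4))   # ~2.2516 bits/byte, whichever CPU ends up decoding it
```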
Consequently, ascribed meaning is wholly irrelevant to the rigorous treatment of information. Creationists routinely introduce the error of assuming a priori that "information" and "ascribed meaning" are synonymous, which the above example refutes wholesale (along with thousands of others that could be posted if I had the time). Of course, creationists conflate information with ascribed meaning deliberately, because they seek to expound the view that information is a magic entity, and therefore requires an invisible magic man in order to come into existence. This is complete rot, as the Shannon and Kolmogorov/Chaitin analyses of information demonstrate readily, not to mention Turing's large body of work with respect to information. All that matters, at bottom, is that the entities and interactions applicable to a given system of interest produce different results when applied to different states of that system. Information sensu stricto, namely the observational data available with respect to the current state of a system, only becomes "meaningful" when different states lead to different outcomes during appropriate interactions applicable to the system, and the only "meaning" that matters, at bottom, is what outcomes result from those different system states, which in the case of the computer data above, differs from system to system.
Plus, Marshall erects the argument that DNA is a "code". This IS bogus. DNA is simply an organic molecule that is capable of existing in a large number of states, each of which results in a different outcome with respect to the chemical interactions that the molecule takes part in. Because it can exist in a large number of states, because those states are all associated with specific, systematic interactions (such as the production of a particular protein after transcription), and because those states are coupled to those systematic and well-defined interactions in a largely one-to-one manner (for the time being, I'll leave to one side complications such as selenocysteine, which were afterthoughts grafted onto the original system), they can be treated in an information-theoretic manner as if they constituted a "code", because doing so simplifies our understanding of those systematic interactions, and facilitates further detailed analysis of that system. That, again, is IT. The idea that DNA constitutes a "code" intrinsically is merely a baseless creationist assertion resulting from deliberate apologetic misrepresentation of the code analogy. A misrepresentation that is itself then subject to rampant discursive misuse, because the argument erected consists of:
[1] DNA is a code (unsupported baseless assertion);
[2] All codes are produced by "intelligence" (deliberately omitting the fact that the only "intelligence" we have evidence of that produces codes is human intelligence);
[3] Therefore an "intelligence" produced DNA (the inference being that this "intelligence" is supernatural, which doesn't even arise as a corollary from [2] when one factors in the omitted detail, that the only "intelligence" we have evidence for as a code producer is human, and therefore natural, intelligence).
This argument is fatuous as it stands, even without factoring in extra scientific knowledge that has been acquired in relatively recent times, but when we do factor in this knowledge, it becomes absurd to the Nth degree. That scientific knowledge consists of at least twenty-three (as of 2011, when I last updated the list - there ARE more now) scientific papers demonstrating that the "genetic code" is itself an EVOLVABLE ENTITY. Those papers are:
[1] A Co-Evolution Theory Of The Genetic Code by J. Tze-Fei Wong, Proceedings of the National Academy of Sciences of the USA, 72(5): 1909-1912 (May 1975)
[2] A Mechanism For The Association Of Amino Acids With Their Codons And The Origin Of The Genetic Code by Shelley D. Copley, Eric Smith & Harold J. Morowitz, Proceedings of the National Academy of Sciences of the USA, 102(12): 4442-4447 (22nd March 2005)
[3] An Expanded Genetic Code With A Functional Quadruplet Codon by J. Christopher Anderson, Ning Wu, Stephen W. Santoro, Vishva Lakshman, David S. King & Peter G. Schultz, Proceedings of the National Academy of Sciences of the USA, 101(20): 7566-7571 (18th May 2004)
[4] Collective Evolution And The Genetic Code by Kalin Vetsigian, Carl Woese & Nigel Goldenfeld, Proceedings of the National Academy of Sciences of the USA, 103(28): 10696-10701 (11th July 2006)
[5] Emergence Of A Code In The Polymerization Of Amino Acids Along RNA Templates by Jean Lehmann, Michael Cibils & Albert Libchaber, PLoS One, 4(6): e5773 (3rd June 2009) DOI:10.1371/journal.pone.0005773
[6] Encoding Multiple Unnatural Amino Acids Via Evolution Of A Quadruplet Decoding Ribosome by Heinz Neumann, Kaihang Wang, Lloyd Davis, Maria Garcia-Alai & Jason W. Chin, Nature, 464: 441-444 (18th March 2010)
[7] Evolution And Multilevel Optimisation Of The Genetic Code by Tobias Bollenbach, Kalin Vetsigian & Roy Kishony, Genome Research (Cold Spring Harbor Laboratory Press), 17: 401-404 (2007)
[8] Evolution Of Amino Acid Frequencies In Proteins Over Deep Time: Inferred Order Of Introduction Of Amino Acids Into The Genetic Code by Dawn J. Brooks, Jacques R. Fresco, Arthur M. Lesk & Mona Singh, Molecular Biology and Evolution, 19(10): 1645-1655 (2002)
[9] Evolution Of The Aminoacyl-tRNA Synthetases And The Origin Of The Genetic Code by R. Wetzel, Journal of Molecular Evolution, 40: 545-550 (1995)
[10] Evolution Of The Genetic Code: Partial Optimization Of A Random Code For Robustness To Translation Error In A Rugged Fitness Landscape by Artem S. Novozhilov, Yuri I. Wolf & Eugene V. Koonin, Biology Direct, 2: 24 (23rd October 2007) DOI:10.1186/1745-6150-2-24
[11] Exceptional Error Minimization In Putative Primordial Genetic Codes by Artem S. Novozhilov & Eugene V. Koonin, Biology Direct, 4(1): 44 (2009)
[12] Expanding The Genetic Code Of Escherichia coli by Lei Wang, Ansgar Brock, Brad Herberich & Peter G. Schultz, Science, 292: 498-500 (20th April 2001)
[13] Experimental Rugged Fitness Landscape In Protein Sequence Space by Yuuki Hayashi, Takuyo Aita, Hitoshi Toyota, Yuzuru Husimi, Itaru Urabe & Tetsuya Yomo, PLoS One, 1(1): e96 (2006) DOI:10.1371/journal.pone.0000096
[14] Importance Of Compartment Formation For A Self-Encoding System by Tomoaki Matsuura, Muneyoshi Yamaguchi, Elizabeth P. Ko-Mitamura, Yasufumi Shima, Itaru Urabe & Tetsuya Yomo, Proceedings of the National Academy of Sciences of the USA, 99(11): 7514-7517 (28th May 2002)
[15] On The Origin Of The Genetic Code: Signatures Of Its Primordial Complementarity In tRNAs And Aminoacyl-tRNA Synthetases by S. N. Rodin & A. S. Rodin, Heredity, 100: 341-355 (5th March 2008)
[16] Origin And Evolution Of The Genetic Code: The Universal Enigma by Eugene V. Koonin & Artem S. Novozhilov, IUBMB Life, 61(2): 99-111 (February 2009) (Also available at arXiv)
[17] Protein Evolution With An Expanded Genetic Code by Chang C. Liu, Antha V. Mack, Meng-Lin Tsao, Jeremy H. Mills, Hyun Soo Lee, Hyeryun Choe, Michael Farzan, Peter G. Schultz & Vaughn V. Smider, Proceedings of the National Academy of Sciences of the USA, 105(46): 17688-17693 (18th November 2008)
[18] Protein Stability Promotes Evolvability by Jesse D. Bloom, Sy T. Labthavikul, Christopher R. Otey & Frances H. Arnold, Proceedings of the National Academy of Sciences of the USA, 103(15): 5869-5874 (11th April 2006)
[19] Reassigning Cysteine In The Genetic Code Of Escherichia coli by Volker Döring & Philippe Marlière, Genetics, 150: 543-551 (October 1998)
[20] Recent Evidence For Evolution Of The Genetic Code by Syozo Osawa, Thomas H. Jukes, Kimitsuna Watanabe & Akira Muto, Microbiological Reviews, 56(1): 229-264 (March 1992)
[21] Rewiring The Keyboard: Evolvability Of The Genetic Code by Robin D. Knight, Stephen J. Freeland & Laura F. Landweber, Nature Reviews Genetics, 2: 41-58 (January 2001)
[22] Thawing The Frozen Accident by C. W. Carter Jr., Heredity, 100: 339-340 (13th February 2008)
[23] A Simple Model Based On Mutation And Selection Explains Trends In Codon And Amino-Acid Usage And GC Composition Within And Across Genomes by Robin D. Knight, Stephen J. Freeland & Laura F. Landweber, Genome Biology, 2(4): research0010.1–0010.13 (22nd March 2001)
This collection of papers is incomplete, as more have been published in the relevant journals since I compiled this list.
So, since we have peer reviewed scientific papers demonstrating that the "genetic code" is itself an evolvable entity, and indeed, since scientists have published experimental work investigating the behaviour of alternative genetic codes arising from this research, the idea that an invisible magic man was needed for this is recrudescently nonsensical.
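To make the code-analogy point concrete: because the codon-to-amino-acid coupling is systematic and largely one-to-one, it can be modelled as a simple lookup table. Here is a minimal Python sketch using a deliberately tiny fragment of the standard genetic code (the table and function names are my own illustrative choices, and real translation machinery involves complications, such as selenocysteine, that are deliberately omitted):

```python
# A tiny fragment of the standard genetic code, treated as a lookup table.
# This is the analogy in action: the molecule's states map systematically to
# outcomes, so a dictionary models the coupling. Illustrative sketch only.
CODON_TABLE = {
    "AUG": "Met",   # also the usual start codon
    "UUU": "Phe",
    "GGC": "Gly",
    "UAA": None,    # stop codon
}

def translate(mrna):
    """Translate an mRNA string codon by codon until a stop codon."""
    peptide = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue is None:       # stop codon reached
            break
        peptide.append(residue)
    return peptide

print(translate("AUGUUUGGCUAA"))   # ['Met', 'Phe', 'Gly']
```

The point of the papers listed above is precisely that this mapping is not a frozen given: the table itself is an evolvable entity, and variant tables exist in nature.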
However, an essential concept is required to be covered in more detail here, namely, the use of analogy to aid understanding. Scientists generate analogies as a means of summarising interactions and entities that would, if expounded in detail, result in truly frightening levels of verbosity. Analogies are constructed for two purposes - disseminating understanding of a system of interest, and brevity. Analogies are conceptual tools we press into service to make sense of intricate systems of entities and interactions. Those analogies are NOT the systems in question, an elementary concept that is frequently discarded in a duplicitous manner by pedlars of creationist apologetics, who frequently deploy improper conflations not to enlighten, but to obfuscate in pursuit of an agenda. That concept is summarised succinctly as "the map is not the terrain" (a phrase donated to me by an acquaintance with a particularly keen eye for such matters).
Indeed, thanks to Turing and his successors in the relevant fields, we can see that all systems of interaction that can be determined by observation to obey well-defined rules can be modelled by a suitably constructed Turing machine - this is, indeed, what every simulation program in existence does. A simulation models the behaviour of a system of interest, by applying the well-defined rules determined to be in operation therein, and generating appropriate output, so that [1] the correlation with observational reality can be checked, and [2] we can investigate, within a "sandbox" of sorts, what is likely to happen if that system is taken into regions of operation that would be impractical or dangerous to take the real system into. I'm reminded at this juncture that Turing's seminal discovery can be summarised as follows: "every process in the universe can be reduced to a meaningless string of symbols", just as Gödel's Incompleteness Theorem can be summarised as "every idea in the universe can be reduced to a meaningless string of symbols", which is what he did to number theory in order to demonstrate said incompleteness. :)
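To illustrate the "suitably constructed Turing machine" point, here is a minimal Python sketch (all names hypothetical) of a one-state Turing machine whose transition table inverts a binary string - once the rules of a system are written down, simulating it is a purely mechanical affair:

```python
# Minimal Turing machine sketch: a one-state machine that flips every bit on
# the tape, then halts at the blank. The rules table is the entire "physics"
# of this toy system -- simulation just applies it mechanically.
RULES = {
    # (state, symbol) -> (new_symbol, head_move, new_state)
    ("scan", "0"): ("1", +1, "scan"),
    ("scan", "1"): ("0", +1, "scan"),
    ("scan", "_"): ("_", 0, "halt"),   # "_" is the blank symbol
}

def run(tape_string):
    tape = list(tape_string) + ["_"]
    head, state = 0, "scan"
    while state != "halt":
        new_symbol, move, state = RULES[(state, tape[head])]
        tape[head] = new_symbol
        head += move
    return "".join(tape).rstrip("_")

print(run("1011"))   # 0100
```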
It should come as no surprise that chemistry, a discipline whose entities obey well-defined rules of interaction, to the point where chemists have been able to investigate syntheses and reactions by the million, is itself amenable to such modelling, and as a corollary, amenable to the construction of numerous analogies to facilitate understanding of those interactions. DNA, as an organic molecule, falls within this remit admirably, not least because particular subunits have their own well-defined interactions, which are, as a corollary, modellable and subject to representation by analogy. The so-called "genetic code" is simply another one of those analogies, and, courtesy of the above papers (along with MANY others), has itself been demonstrated to be an entity subject to evolutionary processes. Once again, it's testable natural processes all the way down, resulting in system state changes in collections of relevant entities. That is IT. We don't need to introduce superfluous mythological entities to understand any of this; we simply need to expend diligent effort learning from antecedent biochemists.
Indeed, every time I've seen a creationist try to erect a fake "gotcha" by pointing to some gap in scientific knowledge, it transpires that either [1] the gap ends up being filled quickly by relevant research, or [2] it wasn't a gap in the first place, because extant research had already answered the relevant questions. Furthermore, that research sometimes answers questions that creationists didn't even know existed when the research was being conducted, but which, in an all too familiar mendacious manner, then become co-opted into the apologetic fabrications that creationists mistakenly think enjoy the same imprimatur as real scientific research.
There are, of course, other fallacies relevant to cover here, but space is limited, and it will be apposite to return to those at a later date. But for now, the key concepts to remember are:
[1] Information is NOT a magic entity. It is simply the data extant with respect to the current state of a system of interest.
[2] Ascribed meaning is also not a magic entity. It is simply the set of subsequent interactions that are set in motion, when the current state of a system of interest is modified by other systems of interest.
[3] That pithy phrase, "the map is not the terrain", is suitably illuminative with respect to the above.
[4] All of the above come into play, the moment any well defined rules of interaction exist, describing the behaviour of a system of interest.
Indeed, that is, at bottom, what science does - it examines systems of interest, determines what entities and interactions are present therein, and what well-defined rules describe the behaviour thereof. And that brings me to the other concept to take note of here - science is a DEscriptive enterprise, NOT a PREscriptive enterprise. Science works as well as it does, because it pays attention to observational data, including when said data tells us we need to revise our view of a system of interest, and as a consequence, DEscribes what happens, instead of attempting to PREscribe what happens. That distinction is important, because it results in the emergence of a vast canyon separating science and religion.
Religion purports to declare by decree, that the universe and its contents operate in a given manner, regardless of how frequently observational reality laughs at the pretension inherent therein, whilst science lets the data determine what is being said. Another massive difference, is that religion attempts to pretend that its blind assertions constitute The Truth™, unswerving and unbending forever, regardless of how often reality says otherwise, whilst science simply says "this is our best current model, and so far, works well enough to allow us to do the following when we use it", and remains open to changing that model when the data tells us change is needed. The power and flexibility arising from letting the data do the talking, is one of science's greatest gifts, and one we should be openly celebrating. Not least because it also tells us, in no uncertain terms, which ideas are wrong.
And that's the beauty of a genuine scientific hypothesis. When constructed, a genuine scientific hypothesis is prepared to be wrong. It's prepared to be given short shrift by the data, once the experiments are conducted. A genuine scientific hypothesis results in predictions about the behaviour of the system of interest, which can be searched for, and if not found, send the authors back to the drawing board. On the other hand, if the data says that said hypothesis is in accord with observation, we've learned something special. Something that can never be learned from mythological assertion, because mythological assertion is upheld by fabricating excuses to hand-wave away inconvenient falsifying data, in a desperate attempt to preserve the so-called "sacred" status of the assertion. The bad news for those who think this is the way forward, is that nothing is sacred. EVERY assertion is, by definition, a free-fire zone for every bearer of discursive miniguns to open fire at. If you don't want your precious assertions subject to such attention, don't parade them in public.
I think this covers relevant important bases.
@Cali
Hot damn! That was fuckin' awesome to read. Honestly, you lost me with a small bit of the computer code stuff, but I at least understood the gist of it. But the rest was simply fantastic! Great having you around here... *flourishing bow*...
You can tell I was an assembly language programmer in the late 1980s and early 1990s, can't you? :)
@Cali Re: "You can tell I was an assembly language programmer in the late 1980s and early 1990s, can't you? :)"
...*chuckle*... I'm afraid I'll have to take your word for it. I was a whiz with BASIC language in high school during mid-eighties. Matter of fact, me and a buddy of mine pretty much taught the programming class, because the assigned teacher was more or less tossed into the job without a life jacket. (As I recall, it was only the first or second year the computer course had been available. TRS-80's, if that tells you anything... *grin*...) After high school, though, when the Commodores and Apples started taking over, I ended up losing interest in all of it for some reason I have never really been able to pinpoint.... *shrugging shoulders*... Funny sometimes how life throws little curveballs like that... *chuckle*...
@ Cali
Thank you, I love learning this stuff from you.
Ok, in that earlier post, I listed some papers covering the evolution and evolvability of the genetic code. Let's take a look at some of these papers in more detail, shall we? First, the PNAS paper by Wong:
In more detail, the author opens with the following:
The rest of the paper can be read in full by downloading the PDF from here.
Moving on, let's look at the Copley et al paper, which can be downloaded from here. This opens as follows:
The authors continue thus:
Once again, I'll let everyone read the full paper at leisure, as it's a fairly large and complex one. :)
Moving on, we have the Vetsigian et al paper, which is downloadable from here. This paper opens as follows:
The authors continue with:
I'll break off from here, because this paper is very heavy with respect to mathematical content, and some of the relevant expressions are extremely difficult to render in board tags. However, this paper should prove interesting to read. :)
Next, we have the Lehmann et al paper, which can be downloaded from here. This opens as follows:
The authors continue with:
Next, we have the Brooks et al paper, which can be downloaded in full from here. The authors begin with:
I'll move quickly on, and cover in slightly more detail the Novozhilov et al (2007) paper, which opens as follows:
Again, this paper involves some heavy mathematics, and a rather involved computer simulation, so I'll jump straight to the discussion and conclusion:
Needless to say, the rest of the papers in my above list are freely downloadable via Google Scholar, and also contain much of interest to the serious student of this topic. :)