Wikipedia:Reference desk/Archives/Science/2012 December 3

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


December 3[edit]

Chromosomes Question[edit]

I apologize if this is a stupid question, but are the chromosomes of every human being different? Also, in regard to the chromosomes in cancer cells, these chromosomes are always duplications or alterations of one or more of the (original) chromosomes of the human being where this cancer now resides, correct? Futurist110 (talk) 03:40, 3 December 2012 (UTC)[reply]

Since the genes of every human being are different, the chromosomes (the structures that contain the DNA) must also be different. Wikipedia has a (fairly poor) article titled Genetics of cancer which has a little bit of information for your second question. --Jayron32 04:13, 3 December 2012 (UTC)[reply]
The one exception is identical twins, who, theoretically, could have identical DNA. However, even though they may start out identical (or nearly so), there are subtle changes in the DNA which occur throughout our lives, such as telomere shortening (based partly on the environment), causing twins to slowly diverge. In fact, your DNA isn't even completely identical from one cell to the next, due to oxidation damage, etc. StuRat (talk) 06:22, 3 December 2012 (UTC)[reply]
BTW, in case there is any confusion on the part of the OP, Genetic recombination in the form of Chromosomal crossover during Prophase I of meiosis means that each chromosome is different; people do not inherit chromosomes wholesale from their parents. (Chromosome abnormalities as discussed below in some cases arise from problems occurring during crossing over.) Edit: So yes, those genetic differences reflect differences in chromosomes (genes which are close together in a chromosome will not be inherited independently, hence genetic linkage). Probably should also mention independent assortment. See also Mendelian inheritance. (End edit.) Even sex chromosomes generally undergo limited crossing over (mentioned in our article on meiosis and discussed in more detail at Pseudoautosomal region). Nil Einne (talk) 18:44, 3 December 2012 (UTC)[reply]
Most humans have the same number of chromosomes in most of their somatic cells - 23 pairs, so 46 altogether. What differs from one human to another is the contents of those chromosomes - the genes. A small proportion of humans have a non-standard number of chromosomes, usually due to a chromosome abnormality such as Down syndrome or Turner syndrome. Chromosome abnormalities such as aneuploidy (having an abnormal number of chromosomes) are also often found in cancer cells, although I am not sure whether they are considered a cause of the cancer or a side-effect. Gandalf61 (talk) 10:39, 3 December 2012 (UTC)[reply]
Generally speaking, widespread chromosomal abnormalities – multiple translocations and changes in chromosome number with this sort of ugly karyotype – are a consequence of the overall genomic instability associated with cancer, caused by a combination of loss of cell cycle checkpoints and failure of mechanisms associated with DNA lesion detection and repair. (Of course, this instability increases the heterogeneity of the population of malignant cells, increasing their likelihood of accumulating further harmful mutations and developing drug resistance.)
On the other hand, there are certainly a few specific chromosomal abnormalities that are linked to cancer. The so-called Philadelphia chromosome is probably the canonical example; this translocation of parts of chromosomes 9 and 22 produces an oncogenic fusion protein that is the primary cause of chronic myelogenous leukemia. TenOfAllTrades(talk) 14:31, 4 December 2012 (UTC)[reply]
  • In theory, with a drop of blood from a crime scene, you should be able to clone segments of DNA (by polymerase chain reaction) from regions of V(D)J recombination, which have undergone said recombination (by choosing a short length, which amplifies better anyway). If you know the blood came from one of two identical twins, you could see if the prevalence of B cells matches one twin better than another, which is likely, since each event is random and which events prevail depend on prior exposure to antigens throughout life. I have no idea if this has ever seen a courtroom; you'd probably be paying someone's salary for three months to get the test done and half the time they can't be bothered to do even a simple DNA match for a case. Wnt (talk) 02:01, 5 December 2012 (UTC)[reply]

Are there any blood vessels in the skin? (Skin = Epi, Dermis, Hypo)[edit]

thanks. — Preceding unsigned comment added by 109.64.163.33 (talk) 04:06, 3 December 2012 (UTC)[reply]

See the Wikipedia article titled skin. --Jayron32 04:11, 3 December 2012 (UTC)[reply]
Capillaries, certainly, or you wouldn't bleed when you prick your skin. Unless you have spider veins or varicose veins, you probably don't have much larger blood vessels in your skin. StuRat (talk) 06:18, 3 December 2012 (UTC)[reply]

Bicameralism[edit]

Is this idea accepted as plausible by psychologists, or among scholars of ancient literature? It seems extraordinary to claim that people were not conscious 3000 years ago, yet I can't find much about how mainstream this idea is. --140.180.249.151 (talk) 06:44, 3 December 2012 (UTC)[reply]

"Is this idea accepted"? No. Is it mainstream? No. If it was there'd be more recent discussion of the question. The scholarly consensus is probably that human cognitive abilities have changed little over the last few millennia - but if it has, it may well have deteriorated, for environmental rather than genetic reasons. AndyTheGrump (talk) 07:05, 3 December 2012 (UTC)[reply]
Thanks for your answer. I didn't know what the most recent discussion of the question was, being completely unfamiliar with psychology. But I do think human cognitive abilities have improved drastically due to increased education, better living standards, and a more advanced world--this is reflected in the Flynn effect. --140.180.249.151 (talk) 18:22, 3 December 2012 (UTC)[reply]
Looks like total BS to me. Very little evolution occurs in 3000 years, certainly not enough to change the core nature of the human mind. StuRat (talk) 07:46, 3 December 2012 (UTC)[reply]
To my knowledge, Jaynes did not hypothesize that genetic evolution played any part in the breakdown of bicameralism. Rather, his thesis is more reminiscent of the evolution of memes, which was curiously proposed in the same year as bicameralism. Someguy1221 (talk) 09:52, 3 December 2012 (UTC)[reply]
This bicameralism theory is from 1976; the meme theory was popularized by Dawkins in a publication of the same year, but the idea itself is much older. OsmanRF34 (talk) 17:53, 3 December 2012 (UTC)[reply]
Bicameralism is somewhat akin to the Aquatic Ape Hypothesis in that both are not properly scientific due to being untestable (so far as is known). For that reason alone, beyond whatever anyone actually thinks of these concepts, they're never going to be widely accepted. On the one hand, they are very tempting because they appear to answer so many profound questions at a stroke, while on the other there is frustratingly little rigorous evidence. Matt Deres (talk) 22:42, 4 December 2012 (UTC)[reply]

Binary Black Holes[edit]

This is a long one.

Let us assume that we have an observer O, a distant star S, and a Black Hole H between S and O. Thus the light going straight towards O will enter H; however, due to H's gravity, light can go around H in an arc. Within a plane, there are two solutions: one arc goes through X, and one goes through Y.

ASCII art:

. S'
.
.
.              X
.
.S             H             O
.
.              Y
.
.
. S"

So there will be two apparent images of S, at locations S' and S". In the 3D case, O will see a ring with H and the true S at its center (although they won't see H or S). S' and S" will be red-shifted equally in that case, and the amount of redshift will depend on the escape velocity of S and the celestial body O is on. The redshift can even be negative if, say, S is an asymptotic red giant and O is on Mercury (planet), deep within the Sun's gravity well. But whatever: S' and S" will have the same spectrum.
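(For scale, the angular radius of that ring - the Einstein radius - can be estimated from the standard point-mass lens formula. A minimal Python sketch; the 10-solar-mass lens and all the distances here are invented purely for illustration:)

# Einstein radius of a point-mass gravitational lens:
#   theta_E = sqrt( (4*G*M/c^2) * D_LS / (D_L * D_S) )
# Minimal sketch; the lens mass and distances are invented values.
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg
LY = 9.461e15      # light year, m

M = 10 * M_SUN     # mass of the black hole H (hypothetical)
D_L = 1000 * LY    # distance from O to H
D_S = 2000 * LY    # distance from O to S
D_LS = D_S - D_L   # distance from H to S

theta_E = math.sqrt(4 * G * M / c**2 * D_LS / (D_L * D_S))
mas = math.degrees(theta_E) * 3600 * 1000
print(f"Einstein radius: {theta_E:.2e} rad (~{mas:.1f} milliarcseconds)")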

So far, so good. Now let us assume that H is not a single black hole but a binary, rotating clockwise within the plane shown above, both components of equal mass, centered at H. I'm interested in the spectra in that case.

My friend argued that because the Black Holes emit gravity waves, S" will have a lower redshift; "its" light passes through Y and hits the gravity wave head-on, which will compress the wave. OTOH, the light from S' will pass through X and move in the same direction as the gravity wave; there will be no collision at all, thus the full redshift will be observed.

I suspect that that's wrong, and that the true way of interaction is completely different; that light does not get compressed like some kind of spring. I have the feeling that S" would exhibit the highest redshift, because the gravity wave takes some momentum from the light when it passes through Y. S' would be the other way around; it would gain a comparable amount of momentum, and thus the wavelength will be the shortest.

Bonus question: Is X or Y closer to H? (I'd guess X, because light passing through X gains momentum, which helps escape the gravity well - thus the light which passes X can make the closer pass and still escape at all).

Is the effect significant? From what it looks like, it takes not only a binary Black Hole, but a quite massive pair, lest the orbit size be negligible compared to the distances H-X and H-Y. So I guess the pairs which do exist are unlikely to exhibit a significant asymmetry. - ¡Ouch! (hurt me / more pain) 10:13, 3 December 2012 (UTC)[reply]

You may get an effect from a massive spinning black hole using the Kerr metric. An orbiting pair would have a similar effect. However, the light rays will have to pass fairly close. If a binary black hole merges into one, a significant fraction of its mass is converted to gravitational waves, and as they pass your observer O, space can be shifted permanently. I expect that this will cause a temporary red or blue shift. Observers also hope to detect gravitational waves via variations in pulsar timings. Graeme Bartlett (talk) 10:42, 3 December 2012 (UTC)[reply]
Thanks. Kerr metrics seem to support my point, at least as far as the light paths and the X, Y points are concerned. Particles gain or lose momentum as I predicted, but will that affect the red/blueshift of the light? - ¡Ouch! (hurt me / more pain) 16:06, 3 December 2012 (UTC)[reply]
Shifting of the frequency could happen if the observer or source are near the black hole(s), so that they are in the gravitational potential well, and then experience gravitational redshift. But I do not see that one side of the gravitational arc would be shifted differently to the other. The gravitational potential for emitter and observer should be roughly constant for the light traveling via different parts of the arc. Graeme Bartlett (talk) 20:22, 3 December 2012 (UTC)[reply]
Let's assume that S and O are at the same distance from H. - ¡Ouch! (hurt me / more pain) 08:04, 4 December 2012 (UTC)[reply]
Then the gravitational potential due to the black-holes would be the same, no extra gravitational red-shift. Graeme Bartlett (talk) 09:57, 5 December 2012 (UTC)[reply]
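To make that point concrete: in the Schwarzschild picture, the gravitational redshift between an emitter at radius r_emit and an observer at radius r_obs depends only on those two radii, and vanishes when they are equal. A minimal sketch; the binary's combined mass and both radii below are invented values:

# Gravitational redshift between emission radius r_emit and observation radius r_obs
# around a (non-rotating) mass M:  1 + z = sqrt(1 - r_s/r_obs) / sqrt(1 - r_s/r_emit)
# Minimal sketch; the mass and radii are invented illustrative values.
import math

G, c, M_SUN = 6.674e-11, 2.998e8, 1.989e30

def redshift(M, r_emit, r_obs):
    r_s = 2 * G * M / c**2                        # Schwarzschild radius, m
    return math.sqrt(1 - r_s / r_obs) / math.sqrt(1 - r_s / r_emit) - 1

M = 20 * M_SUN                                    # combined mass of the binary (hypothetical)
print(redshift(M, r_emit=1e12, r_obs=1e15))      # emitter deeper in the well: z > 0
print(redshift(M, r_emit=1e12, r_obs=1e12))      # equal radii, as in the thread: z = 0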

Origin of Intelligence and Mental Illness Linked to Ancient Genetic Accident[edit]

Or so claims this article. The claim is so sweeping I find difficulty in even taking it seriously. What do you think? --Halcatalyst (talk) 12:32, 3 December 2012 (UTC)[reply]

It looks like they just reprinted the University press release [1], which can often be pretty far off the mark. I don't have access to anything other than the abstract of the original paper [2], but that is the place to look for the real (although harder to read) version of the story. 209.131.76.183 (talk) 12:40, 3 December 2012 (UTC)[reply]
The role of paleopolyploidy in vertebrate evolution has been known for some time. Most of our genes form a group of up to four different versions that trace back to that event. As a result, double and triple knockout experiments like this are important. This one focused on a single set of genes, Dlg1, Dlg2, Dlg3, Dlg4, and their effects on some touchscreen tests. The university publicity office took that and spun it in a weird way. Note that when the whole genome duplications actually happened, the animals at the time might have seemed bigger or had some other simple differences, but since the genes were all the same these effects would be no more sophisticated than for any other single mutation with pervasive effects. It was not a recipe for instant brilliance. Wnt (talk) 21:16, 5 December 2012 (UTC)[reply]

relationship between conductivity and mean free path[edit]

This is a very basic formula found in most textbooks, but I don't have my textbook and the absurdity of this simple relationship not being on Wikipedia is aggravating me. Where can I find it? 128.143.1.238 (talk) 13:44, 3 December 2012 (UTC)[reply]

Are you referring to thermal conductivity or electrical conductivity? douts (talk) 13:52, 3 December 2012 (UTC) Not sure if it's of any use, but a quick Google search found this paper [3] entitled "Calculation of the thermal conductivity and phonon mean-free path". douts (talk) 13:57, 3 December 2012 (UTC)[reply]
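For reference, the textbook relations being asked about are, in the free-electron (Drude) picture, sigma = n e^2 l / (m v_F) for electrical conductivity, and, from kinetic theory, kappa = (1/3) C v l for thermal conductivity. A rough sketch of the electrical version, using approximate room-temperature values for copper (all the numbers are textbook approximations, not authoritative):

# Drude / free-electron estimate:  sigma = n * e^2 * l / (m_e * v_F)
# Rough sketch using approximate room-temperature values for copper.
E = 1.602e-19       # electron charge, C
M_E = 9.109e-31     # electron mass, kg

n = 8.5e28          # conduction-electron density of copper, m^-3 (approximate)
v_f = 1.57e6        # Fermi velocity of copper, m/s (approximate)
l = 39e-9           # electron mean free path, m (~39 nm at room temperature)

sigma = n * E**2 * l / (M_E * v_f)
print(f"sigma ~ {sigma:.2e} S/m")   # ~6e7 S/m, close to copper's measured conductivity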

Thermodynamics of a microwave[edit]

Why is there no heat or work transfer in a microwave heating something or a microwave being used for radiotherapy or sterilising? Thanks. — Preceding unsigned comment added by 138.253.210.27 (talk) 16:32, 3 December 2012 (UTC)[reply]

I'm not sure I understand what you're asking here. Heat transfer does occur in a microwave - the water in the food is heated by the microwaves, cooking the food. Microwaves generally aren't used for sterilising objects - normally either gamma rays or temperatures above 200 degrees centigrade are used. If I've misunderstood the question please let me know. douts (talk) 17:03, 3 December 2012 (UTC)[reply]
Less than 200 °C is enough, as is the case with an autoclave. OsmanRF34 (talk) 17:30, 3 December 2012 (UTC)[reply]
About 125 °C in a saturated steam environment, a bit more (I've used 160-180ish before) in a dry environment. Fgf10 (talk) 18:37, 3 December 2012 (UTC)[reply]

Watertight compartments[edit]

Watertight bulkhead compartments were written of by the Song Dynasty Chinese author Zhu Yu, in his book Pingzhou Table Talks of 1119 AD (written from 1111 to 1117 AD). Watertight compartments were frequently implemented in Asian ships, and had been implemented in the warships of Kublai Khan.[2][3][4] Chinese sea-going junks often had 14 crosswalls, some of which could be flooded to increase stability or for the carriage of liquids.

Anyone can build wooden ships with watertight compartments. However, without watertight doors, the compartments are not very useful. You would probably have to install a trapdoor on the deck for each compartment, so that to move between compartments you would climb up to the deck and down through another trapdoor.

I really don't think it's easy to build watertight doors on a ship made of wood. How did Chinese inventors solve this problem? -- Toytoy (talk) 16:36, 3 December 2012 (UTC)[reply]

I don't know about the Chinese, but pitch can be produced from plant material and has been used for many watertight applications in the past. The introduction of the linked article talks a little about it. Dauto (talk) 17:07, 3 December 2012 (UTC)[reply]
Don't know if this is the case, but could it not be that the space between the keel (if they had one) and the lowest deck was compartmentalised? So as to compartmentalise at least part of the buoyant volume. In that case the habitable decks wouldn't need watertight doors. Lower compartments could easily be sealed with something like pitch, as suggested above. Fgf10 (talk) 18:36, 3 December 2012 (UTC)[reply]
Also note that the watertight compartments don't need to be 100% waterproof, from the top. Say the boat is swamped by a rogue wave. You only need to keep it afloat long enough to bail out most of the water. If some water leaks into the watertight compartment in that time, it's OK, so long as the boat doesn't sink before you can bail it out. StuRat (talk) 18:42, 3 December 2012 (UTC)[reply]
Yes, with very primitive watertight compartments, you can still buy time to plug the hole and let the sailors bucket brigade the compartment dry. Without compartments, you're dead with the fish.
However, doors made of cheap wood can be very imprecise. They can also absorb moisture. Unless you pay big money for really good woods and treat the parts with tung oil, I don't think it was a good idea to build doors between compartments. -- Toytoy (talk) 04:18, 4 December 2012 (UTC)[reply]

Neutron decay and atomic weapons[edit]

So I was just watching this video, http://www.youtube.com/watch?v=o_EBqZPCZdw

And the guy in the video (thunderf00t) said that a neutronium bomb would be very destructive but wouldn't release its energy all at once, but rather over the course of 10-14 minutes, because that's the time it takes neutrons to decay (10-minute half-life, 14-15-minute mean lifetime).

Well, in a fusion reaction, about 80% of the energy released is in the form of high-energy neutrons, and that made me think of the Tsar Bomba. According to the article, the bomb was tested without a uranium third stage, so almost 97% of the energy released in the test came from fusion alone. And yet the explosion came all at once, instead of being spread out, continuously exploding for 10-14 minutes. I thought about this, and I reasoned that it must have been due to the lead tamper they used to absorb the neutrons. So I thought about the neutron bomb. If the tamper were made out of nickel or chromium, allowing the neutrons to escape, would the explosion be spread out over the course of 10-14 minutes like the video suggests would occur? ScienceApe (talk) 17:47, 3 December 2012 (UTC)[reply]

All neutrons will quickly lose their energy due to scattering off atomic nuclei in the air/water/soil. Then a significant part of the neutrons will be absorbed by some nuclei. The energy that will be released by radioactive decay of the remaining neutrons is quite small. Ruslik_Zero 19:28, 3 December 2012 (UTC)[reply]
Note that even if neutrons did behave that way — that is, they went until the end of their "life" and then decayed to release their energy — their energy would be incredibly diffuse. D-T fusion neutrons travel at a speed of 52,000 km/s; if they survived for 10 minutes they'd be well, well outside the radius of the planet by the time they decayed! It doesn't work that way, obviously. As for the tampers, they do matter with regards to what happens to the neutrons; neutron bombs are just hydrogen bombs where the fusion neutrons are given a direct route outside of the bomb without being used either for fissioning or for just thermal energy. They still all come out in one burst, and still matter for only a fraction of a microsecond. --Mr.98 (talk) 00:25, 4 December 2012 (UTC)[reply]
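A quick back-of-the-envelope check of that claim, taking the 52,000 km/s figure above and a free-neutron mean lifetime of roughly 880 seconds (both rounded):

# How far would a D-T fusion neutron get in one mean lifetime, if nothing stopped it?
v = 52_000          # D-T fusion neutron speed (from above), km/s
tau = 880           # free-neutron mean lifetime, s (about 14.7 minutes)
r_earth = 6_371     # Earth radius, km

d = v * tau
print(f"{d:.2e} km, or about {d / r_earth:.0f} Earth radii")  # ~4.6e7 km, ~7000 Earth radii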
Also note that even if 80% of the energy were in the radioactivity of neutrons (which is false; much of that energy is kinetic), the initial 20% would equal the first ~4 minutes of decay. Even then, the explosion would hardly be a continuous glow.
Also note that enhanced radiation weapons ("neutron bombs") were not designed to be less physically damaging. The physical effects are somewhat less than those of older nukes of that size, but not by an order of magnitude. They were designed to inflict higher overall damage against armored targets, and since those resist physical force quite well, they maxed their neutron radiation rather than their TNT rating.
Don't know for sure about mean free path of neutrons, though. I doubt many would escape Earth. I would think though that most of them would have been captured by one nucleus or the other, within seconds. - ¡Ouch! (hurt me / more pain) 08:01, 4 December 2012 (UTC)[reply]

Why issue placebos to control groups for objective treatments?[edit]

Reading placebo effect, I saw "The placebo effect is highly variable in its magnitude and reliability and is typically strongest in measures of subjective symptoms (e.g., pain) and typically weak-to-nonexistent in objective measures of health points (e.g., blood pressure, infection clearance)." (Yes, I saw "citation needed", but is the statement really indefensible?). I wonder, if it's an all-but sure bet that sugar pills are *not* really going to be effective against a given cancer, why do testers still do controls with them? It seems about as likely for a control group taking sugar pills to see effective cancer treatment in any of the control-group subjects as it would be if the control group held water balloons. Is the only purpose in objective (not asking patients how they feel) placebo-using trials to have double-blind studies to keep the doctors honest and from tampering with administration protocols, such as giving more to patients they know are on the real thing so there's a better chance to have success and continued funding? 20.137.2.50 (talk) 18:01, 3 December 2012 (UTC)[reply]

The idea is to make the differences between control and test group as small as possible. The placebo effect is a complex thing; for instance, speaking to a doctor will often also have a beneficial effect. By making the control group as close as possible to the test group, you can rule out as much as possible any effects not due to the drug being tested. Another aspect is that without placebos it would be easy for the trial doctors to determine which group is the control, and it's then easy to subconsciously treat the two groups differently, for instance in bedside manner. It has been shown (bear with me while I hunt for a ref) that a drug given with a positive talk by the doctor will have a better effect than the same drug given with a disparaging talk. Just to rule out effects like that, we do double-blind placebo-controlled trials. Fgf10 (talk) 18:33, 3 December 2012 (UTC)[reply]
Blood pressure is something I'd expect to respond to placebos, since just relaxing can change it, and somebody thinking they just took some good meds might be more relaxed. Also note that many meds are approved even though they only have a slight benefit, like in 10% of the patients. So, even a tiny placebo effect could have a major effect on such results. StuRat (talk) 18:37, 3 December 2012 (UTC)[reply]
A sugar pill with a smile can rival an alpha blocker with a frown in terms of effectiveness? 20.137.2.50 (talk) 18:53, 3 December 2012 (UTC)[reply]
Maybe - but it's certainly possible that a sugar pill given by a guy in a white lab coat with a stethoscope casually draped around his neck who says "This is an absolute wonder-drug - it's the latest thing and it'll fix you up for sure" - is more effective than an alpha blocker handed to you by an intern who says "I'm afraid this probably isn't going to help, but please take it anyway.". In the case of blood pressure - where stress is a known cause - it's quite obvious that making the patient think that they'll be cured will reduce their stress and therefore help their blood pressure to go down. But there are other placebo effects where the link is not at all obvious. It's been shown that weight lifters can lift more weights if given a course of sugar pills that they are told are really steroids...people who are told they are drinking alcohol when they are not will show signs of intoxication. It's a weird set of effects going on here. SteveBaker (talk) 17:57, 4 December 2012 (UTC)[reply]
As others have stated, placebos can actually have an effect in patients even in cases of cancer. In fact the American Cancer Society has a long discussion of placebos [4] and while it doesn't mention cancer that much it's a good read if your understanding of the placebo effect is poor. [5] and [6] may discuss the placebo effect and cancer somewhat although I haven't actually checked the articles. As Fgf10 said, there are also other advantages to having a placebo, namely in preventing differences in the interaction between the patient and other people. In other words a placebo or something similar makes it possible to do a double blind which is generally considered the gold standard in medical research. However I would note that not all double blind trials are placebo controlled, as our Blind experiment and Clinical trial mention, it's also common to use existing medicines for the condition being treated. Nil Einne (talk) 20:16, 3 December 2012 (UTC)[reply]
I'll look at those sources promptly. But I'm impatiently curious; do those papers even hypothesize what could possibly be going on (in terms of chemistry) in any single case where someone taking a placebo sees a significant improvement in their cancer treatment? 67.163.109.173 (talk) 22:30, 3 December 2012 (UTC)[reply]
BTW the Declaration of Helsinki article which is linked from the clinical trial article goes in to a fair amount of detail about the controversy surrounding the usage of placebos when existing treatment exist. Nil Einne (talk) 20:44, 3 December 2012 (UTC)[reply]
In a case like that, I'd think using an existing med instead of a placebo would make sense. You would then compare the results to find out if the new med is better than the old med. StuRat (talk) 20:51, 3 December 2012 (UTC)[reply]
Yes definitely, when an effective treatment exists for a specific disease, the control group will most likely receive the current treatment, not just a placebo. You aren't necessarily trying to find out if a new treatment is just better than a placebo; you're trying to find out if it's better than the current treatment. Of course "better" might not just be efficacy, it could also be cost, side-effects, how invasive the treatment is, etc... But in most cases where a treatment already exists, it would be considered unethical to withhold it for the purposes of research. Vespine (talk) 21:58, 3 December 2012 (UTC)[reply]
As I mentioned in my post I was replying to with references to our articles, that is often done. But as the Helsinki article attests and Vespine mentioned, even if it's generally regarded as unethical to withhold an alternative existing treatment (although there is some dispute in cases when the consequences and risks aren't high), that doesn't stop it happening, particularly in the developing world. (In case there's still some confusion, that's the primary reason I brought the Helsinki article in to the mix after already mentioning that it's common to use an existing medication when one exists instead of a placebo.) Nil Einne (talk) 00:43, 4 December 2012 (UTC)[reply]
which brings up my nagging question; if we reject a treatment that is no better than placebo, but we don't give patients placebos, aren't we therefore undertreating them? Gzuckier (talk) 18:35, 4 December 2012 (UTC)[reply]
Because a measured placebo effect is NOT just 'one' thing. It's not, as most people naively assume, a real physiological change in the patient which improves their condition, or at least it's MOSTLY not that. A placebo is everything except the effect of the substance you are trying to test. That does include things like how the patient reports their condition, but it also includes things like regression towards the mean, statistical bias, reporting bias, experimenter's bias, etc.. Vespine (talk) 02:51, 5 December 2012 (UTC)[reply]
good point. Gzuckier (talk) 06:12, 5 December 2012 (UTC)[reply]
The placebo effect article raises two other important issues with administering placebos as medicine. One being that it would be pretty much impossible without a degree of deception on the doctor's part, which is problematic; the second being that the placebo effect is neither consistent nor reliable. Not that any medicine is 100% consistent or reliable, but placebo is, pretty much by definition, the very least consistent and reliable treatment available. Vespine (talk) 21:50, 5 December 2012 (UTC)[reply]
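The regression-towards-the-mean point above is easy to demonstrate: patients enrolled because a fluctuating symptom score happens to be high will, on average, score lower at the next measurement even with no treatment at all. A toy simulation (every parameter is invented, and follow-up scores are idealized as independent redraws, which exaggerates the effect):

# Toy demonstration of regression towards the mean in an untreated "trial".
# All parameters are invented for illustration.
import random

random.seed(1)
TRUE_MEAN, NOISE, THRESHOLD = 50.0, 10.0, 60.0

def symptom_score():
    # One noisy measurement of a symptom that fluctuates around TRUE_MEAN.
    return random.gauss(TRUE_MEAN, NOISE)

# Enroll only patients whose score is high on the day of screening...
enrolled = [s for s in (symptom_score() for _ in range(10_000)) if s > THRESHOLD]
# ...then remeasure them later, with no treatment whatsoever.
followup = [symptom_score() for _ in enrolled]

print(f"mean at enrollment: {sum(enrolled) / len(enrolled):.1f}")  # well above 60
print(f"mean at follow-up:  {sum(followup) / len(followup):.1f}")  # near 50: apparent "improvement"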

Global warming[edit]

Is there a way of identifying exactly whether a certain CO2 molecule was created by man or naturally made? Could this identify whether global warming is caused by man or occurring naturally? I've heard that Carbon isotopes can identify CO2 which has been produced by burning fossil fuels but is this an accurate test? Thanks. 138.253.210.27 (talk) 18:01, 3 December 2012 (UTC)[reply]

No. A particular CO2 molecule will contain the carbon-12, carbon-13, or carbon-14 isotope, with the first two being stable, and carbon-14 being unstable with a half-life of around 5,730 years. So, if it contains carbon-14, the chances are that it's not from a fossil fuel from the age of the dinosaurs. However, the molecules containing carbon-12 or carbon-13 aren't necessarily "old", as only 1 in a trillion carbon atoms is carbon-14, even in new CO2. But, if you look at a large sample, then the ratio of carbon-14 will tell you something about the average age of the carbon atoms.
But now for the complication: carbon-14 is being continually generated in the upper atmosphere: [7]. So, the carbon-14 we find up there might be generated in that way, from carbon from burnt fossil fuels, or it might be from wood somebody burned, or it might be from what a human or other animal exhaled. And, of course, the carbon-14 generated up there doesn't stay aloft, it moves throughout the biosphere. StuRat (talk) 18:19, 3 December 2012 (UTC)[reply]
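The decay arithmetic behind that: with a 5,730-year half-life, the surviving fraction of carbon-14 after t years is 0.5^(t/5730), which is why fossil carbon is essentially carbon-14-free. A quick sketch:

# Fraction of carbon-14 remaining after t years, given a 5,730-year half-life.
HALF_LIFE = 5_730.0

def c14_fraction(t_years):
    return 0.5 ** (t_years / HALF_LIFE)

for t in (5_730, 57_300, 1_000_000):
    print(f"after {t:>9,} years: {c14_fraction(t):.3e}")
# 5,730 y -> 0.5; 57,300 y -> ~1e-3; 1,000,000 y (fossil carbon) -> ~3e-53, i.e. none left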
(edit conflict) No, not by individual CO2 molecules. However, that doesn't mean that reliable correlations cannot be drawn to decide if the bulk of CO2 molecules is not from man-made sources. To take an analogy: Imagine you're watching a New York City street and every single person is an exact clone, with the exact same clothes. Now, imagine you can only take snapshots of the street, so you don't see people moving, but you can count the number of people at any given time. You don't know where each individual person comes from, but you can correlate the number of people on the street with certain times of day, and you can also correlate other events that occur at those times, so you can say that the increase of people on a specific street at a specific time may be due to a train arriving at a nearby subway stop. You don't need to see the people get off the train to consider that a reasonable conclusion. You don't even need to know which specific people got off the train; the fact that there are more of them at the same time every day, immediately successive to the train arriving, is enough to establish the reasonable conclusion that some of those people got off the train. It's the same thing with CO2 increases. You don't need to know which CO2 molecules come from, say, a volcanic eruption and which come from burning coal. We know that some increase in CO2 is going to come from burning fossil fuels, because it is patently obvious to anyone with a high school chemistry level knowledge of combustion that burning hydrocarbons makes CO2. We don't need to watch each CO2 molecule individually from the moment it formed and track each one to know that, in the bulk, burning more stuff makes CO2, any more than you need to track every pedestrian to know that the number of people on the street will increase after the train arrives. --Jayron32 18:25, 3 December 2012 (UTC)[reply]
  • There's another aspect to this. As our article on isotopes of carbon points out, plants take up 12C in preference to 13C, and that preference carries over to fossil fuels. The main sources of atmospheric carbon are fossil fuel burning, respiration, and volcanoes. Volcano-derived carbon has a different isotope ratio than biologically generated carbon -- this ratio is known as δ13C. There is also a smaller difference between carbon derived from fossil fuels and carbon derived from recent respiration, because of differences in the atmosphere when the fossil fuels were created. Looie496 (talk) 18:49, 3 December 2012 (UTC)[reply]
  • The effect of fossil fuel burning changing the isotopic ratio of carbon in the atmosphere is known as the Suess effect and was first described in 1955, when its influence on radiocarbon dating was noted. That said, we have many additional lines of evidence that the increase of CO2 is anthropogenic. The simplest is basic chemistry. We (roughly) know how much fossil fuels we burn, and hence how much CO2 we emit. We also can measure the amount of CO2 in the atmosphere. The increase in atmospheric carbon dioxide corresponds to roughly half of what we release (much of the rest is absorbed by the oceans). We can also see a corresponding decrease in atmospheric oxygen. Note that the question of where the CO2 comes from is different from the question of what its effect is. That said, the spectroscopic properties that give rise to the greenhouse effect have also been known for more than 100 years and the effect of adding carbon dioxide was first quantified by Svante Arrhenius around 1900. --Stephan Schulz (talk) 20:41, 3 December 2012 (UTC)[reply]
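That mass-balance argument sketches out numerically as follows, assuming the commonly used conversion of about 2.13 GtC per ppm of atmospheric CO2, with round illustrative figures for annual emissions and the observed rise (treat both as examples, not measurements):

# Rough carbon mass balance: how much of emitted CO2 shows up in the atmosphere?
# Illustrative round numbers; ~2.13 GtC per ppm is the commonly used conversion.
GTC_PER_PPM = 2.13

emissions_gtc = 9.0        # annual fossil-fuel emissions, GtC (illustrative)
observed_rise_ppm = 2.0    # observed annual CO2 rise, ppm (illustrative)

expected_rise_ppm = emissions_gtc / GTC_PER_PPM
airborne_fraction = observed_rise_ppm / expected_rise_ppm
print(f"expected if all stayed airborne: {expected_rise_ppm:.1f} ppm/yr")
print(f"airborne fraction: {airborne_fraction:.0%}")   # roughly half, as stated above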

Travelling to a star[edit]

Scenario: I pick a star 100 million ly away to travel to. And I can get there, because I can live forever. The star is on the very edge of a galaxy 200,000 ly across. When I plug the coordinates of the star into my spacecraft, HAL, will I travel in a straight line and wind up possibly 200,000 ly off my mark? Or will I have to adjust my course as I get closer? Or something else? 165.212.189.187 (talk) 18:40, 3 December 2012 (UTC)[reply]

No, if you travel in a straight line at a star on the other side of the galaxy, when you get there the star will not be there, because the galaxy is in motion: it is moving relative to where it is now in several ways; it is rotating about an axis, and it is moving in a line away from other galaxies. You would need to aim for where the star will be when you plan to arrive, based on where the star is now and where it will move over the course of your trip. This is not trivial, because the star is not now where you see it (light takes time to travel), and the star's future location must be calculated taking into consideration your current motion, the star's motion, and the complete motion, including acceleration, deceleration, and top travel speed, of your spaceship. This is all hypothetically possible to calculate, so other than the tricky math, there's nothing wrong with calculating a straight-line path that should make you arrive at the same point in spacetime as that star some place in the future. --Jayron32 18:52, 3 December 2012 (UTC)[reply]
And this is similar to planning trips within our own solar system, although the other planets are within a few minutes or hours of where they appear to be. If it's a round trip, that makes it even more complex. StuRat (talk) 19:26, 3 December 2012 (UTC)[reply]
Or you could just program your ship's computer to adjust your course every (insert period of time here), so that as you get closer, the error factor will be reduced with each course change. douts (talk) 19:03, 3 December 2012 (UTC)[reply]
You'll have to compensate continually for the near infinite number of disruptions from gravitational influences along the way. Fgf10 (talk) 19:21, 3 December 2012 (UTC)[reply]
Practically yes, but hypothetically, if you could keep track of the entire galaxy in a single computer, and reliably predict the entire gravitational field of the galaxy, as well as changes to that field over time, you could hypothetically have all those corrections made before the trip. That is, any corrections made to your trip would come from incomplete knowledge rather than anything which is physically impossible to know. Any corrections made along the course due to incomplete knowledge could be corrected before the trip if the information had been known ahead of time. --Jayron32 19:24, 3 December 2012 (UTC)[reply]
I don't think that's even theoretically possible. That is, storing the location and vector of every atom in the universe would require a computer made of more atoms than the universe. There's a thought experiment along these lines, but I forget the name. Of course, if your margin for error is large enough, like a solar system, no corrections may be needed. If, on the other hand, your goal is to dock with a spaceship in that solar system, then corrections will be required at some point. StuRat (talk) 19:31, 3 December 2012 (UTC)[reply]
Well, I suppose yes, you are correct. The question is when the corrections need to be made: constantly along the way, or only at the end. I think the analogy of the airliner is appropriate: Airplane pilots basically aim their plane at the airport and don't adjust much during the length of the trip: autopilot is capable of keeping the plane on the correct course, usually a fairly easy-to-calculate great circle course (the Earth equivalent of a straight line), and the pilots only need to be involved in the take off and landing portions. For most of the trip, the pilots don't do much except monitor the plane to make sure nothing goes wrong. For a properly working plane, 99% of the trip doesn't need any adjustments at all. Hypothetically, traveling to a distant star should involve a similar level of involvement: aim and fire, and then adjust the final destination when you get close. You aren't likely to end up ridiculously off course if the proper calculations are made ahead of time. --Jayron32 19:39, 3 December 2012 (UTC)[reply]
Are you sure that autopilot isn't making tiny corrections all along ? StuRat (talk) 19:51, 3 December 2012 (UTC)[reply]
At a more practical level, unless the target star has a circular orbit with precisely known velocity, making predictions will depend on the distribution of dark matter in the target galaxy, which we know only at poor resolution. Looie496 (talk) 19:34, 3 December 2012 (UTC)[reply]
Add to all of the above that your target star may not exist anymore, or may disappear during your journey. OsmanRF34 (talk) 22:02, 3 December 2012 (UTC)[reply]

So couldn't the galaxy also be moving not just directly away from us but also in some other arbitrary direction? What do you mean by "in a line away from other galaxies"? GeeBIGS (talk) 00:44, 4 December 2012 (UTC)[reply]

The expansion of the universe causing movement away from distant galaxies should be the dominant factor at that distance, but there would also be a smaller factor in another arbitrary direction. At shorter distances this factor is dominant. See for example Andromeda–Milky Way collision. PrimeHunter (talk) 01:22, 4 December 2012 (UTC)[reply]
No, gravitationally bound objects like galaxies do not expand due to metric expansion. -- Finlay McWalterTalk 01:32, 4 December 2012 (UTC)[reply]
The thought experiment could be expanded. If the person can live forever, they could travel to a star that is 50 billion light years away. Under that scenario metric expansion might come into play. Would that be correct? Would it still be possible to make almost all calculations before setting out on the journey? Bus stop (talk) 01:58, 4 December 2012 (UTC)[reply]
I don't think any stars (we know about) are that far away. Plus the star would be unlikely to exist when we got there. StuRat (talk) 02:30, 4 December 2012 (UTC)[reply]
[8] Bus stop (talk) 03:04, 4 December 2012 (UTC)[reply]

What I'm trying to get at is that in some cases the star is actually closer than the calculations predict because the arbitrary direction could be directly at us. Right?GeeBIGS (talk) 05:17, 4 December 2012 (UTC)[reply]

At shorter distances the arbitrary direction becomes dominant = the closer you get, the more the arbitrary direction matters? So it seems like any path looks like a really long straight line with a sharp turn towards the end of the trip? GeeBIGS (talk) 05:30, 4 December 2012 (UTC)[reply]

How the path looks will depend on the kind of corrections applied.
The computationally least expensive way would be:
1. fly toward where you see the star,
2. look for the star and turn towards it,
3. fly toward where you now see the star (and so on)...
But that's not only wasteful of time and fuel; it's possible that you lose track of the star and cannot find it, because it has changed while you were flying.
A more practical way, which does not assume that the star is as unchanging and immortal as the pilot, is to keep track of it, and to apply corrections after, say, 5% of the distance, then after 5% of the remaining distance, etc.
The former method would look like GeeBIGS assumed, but the latter would be quite close to a least-time intercept course. It would still arc more tightly towards the end, but not nearly as much as the first approach. You would be able to account for more and more of the arbitrary components, but quite gradually (a toy simulation of both strategies follows below).
And finally, bright stars don't last for 200 million years. The speed of the ship becomes important, for even a faint star would not last that long if you were going no faster than the Pioneers/Voyagers. But this is probably a moot point, as they would not be fast enough to escape our galaxy... - ¡Ouch! (hurt me / more pain) 08:00, 4 December 2012 (UTC)[reply]
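A toy 2-D simulation of those two strategies - a target star on a circular orbit, a chaser at fixed speed, with re-aiming either nearly every step or after each 5%-of-remaining-distance leg. Every number here is invented for illustration:

# Toy comparison of near-continuous re-aiming versus periodic course corrections.
# All speeds, radii and step sizes are invented for illustration.
import math

def target_pos(t, radius=100.0, omega=0.02):
    # Target star on a circular orbit.
    return (radius * math.cos(omega * t), radius * math.sin(omega * t))

def chase(leg_fraction, speed=3.0, dt=1.0, max_t=100_000.0):
    # Fly straight legs, re-aiming after covering leg_fraction of the
    # distance that remained at the last course correction.
    x, y, t = -400.0, 0.0, 0.0
    hx = hy = 0.0            # current heading (unit vector)
    leg_left = 0.0
    while t < max_t:
        tx, ty = target_pos(t)
        dx, dy = tx - x, ty - y
        dist = math.hypot(dx, dy)
        if dist < speed * dt:
            return t                       # intercept
        if leg_left <= 0.0:                # time for a course correction
            hx, hy = dx / dist, dy / dist
            leg_left = leg_fraction * dist
        x += speed * dt * hx
        y += speed * dt * hy
        leg_left -= speed * dt
        t += dt
    return None

print(chase(leg_fraction=0.001))  # re-aim almost every step: smooth pursuit curve
print(chase(leg_fraction=0.05))   # re-aim after each 5% leg, as described above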

We can detect if a star is moving towards us or away, and measure the speed, using the blueshift or redshift. That's the Doppler effect applied to light. StuRat (talk) 20:41, 4 December 2012 (UTC)[reply]

Then why did others say that the arbitrary direction matters as you get closer? And that it might not be there when you get there? Aren't you only detecting the metric expansion with blue/redshift? It would appear that way, but consider a star in some 200,000 ly orbit: we see the light while it's at 12 o'clock; we travel toward it; when we get to where it was, couldn't it have "come around behind us through 3, 6, 9"? 165.212.189.187 (talk) 21:04, 4 December 2012 (UTC)[reply]

Regarding the metric expansion of space: My understanding is that most galaxies are flying away from each other due to the inertia from the Big Bang, so have a redshift. We expected them to be flying away at a decelerating rate, due to gravitational attraction, but found they are actually flying away at an accelerating rate. The explanation for this is the metric expansion of space. StuRat (talk) 21:10, 4 December 2012 (UTC)[reply]

Mars Curiosity findings[edit]

There are reports of interesting chemicals being found on Mars. However, they caution that it might be contamination from Earth. Why didn't they take care of that possibility before they sent it? They had it in a clean room, why didn't they clean the damn thing to preclude the possibility of what they find being from Earth? Bubba73 You talkin' to me? 23:48, 3 December 2012 (UTC)[reply]

I'm sure they tried, but it's quite difficult to keep it sterile. What if an adjustment needs to be made after it's cleaned ? Do you clean the whole thing again ? And do you store it in a vacuum until launch, to prevent contamination by the air ? StuRat (talk) 23:57, 3 December 2012 (UTC)[reply]
This episode of NPR's Science Friday talks about the degree to which a planetary probe is cleaned and sterilised. -- Finlay McWalterTalk 00:18, 4 December 2012 (UTC)[reply]
Thanks for the info and link. But if they are going to send it off not clean, and they have those doubts about their initial results, will they ever be able to have confidence in them? Bubba73 You talkin' to me? 00:53, 4 December 2012 (UTC)[reply]
I recall reading that initial scoops of Martian soil are discarded because of the possibility that they will be contaminated. If I recall correctly, the operation of the mechanism is designed to clean itself with the first few scoops of soil. After that it is not expected to be able to still contain contamination from Earth. But I am just recalling this from memory (faulty). I think I read it in Scientific American. Bus stop (talk) 02:05, 4 December 2012 (UTC)[reply]
Here it is: "But before Curiosity fired up its CheMin (Chemistry and Mineralogy) instrument to analyze the soil, it first had to purify its sample-collection instruments using Martian sand as a cleansing abrasive."[9] Bus stop (talk) 02:21, 4 December 2012 (UTC)[reply]
They aim for 50 ng/g of sample contamination. Does anybody here have a clue how clean that is? To get lower than that, you would have to build a polymer-free instrument using only ceramics and metals. No cables with insulation, no electronic boards... This is impossible. The cleanliness they have is amazing.--Stone (talk) 03:21, 4 December 2012 (UTC)[reply]
To put 50ng/g in perspective, the standard for the purest form of water calls for less than 100ng/g of solid contaminants.Dncsky (talk) 03:58, 4 December 2012 (UTC)[reply]
Bubba: this is how science works. One must always consider alternate explanations for data. It's not that they didn't send the probe off clean, but rather that no matter how clean you think your probe is, if you get striking results that could be explained by contamination, you are going to say that contamination is a possible explanation for the results. Ideally, there will be other tests they can do to rule out the possibility of contamination.--50.196.55.165 (talk) 19:25, 7 December 2012 (UTC)[reply]

What I like best is the fact that Curiosity found the same stuff Viking found in the 1970s. Klaus Biemann reported that they found dichloromethane and chloromethane, but they argued that it was contamination from solvents used for cleaning. But now it looks more like the chloromethanes were produced by the reaction of organics with the chlorine from the perchlorates.--Stone (talk) 03:25, 4 December 2012 (UTC)[reply]

Perhaps they were using "Earth contamination" as a catch-all for anything that could invalidate the results. Bubba73 You talkin' to me? 18:21, 4 December 2012 (UTC)[reply]