Wikipedia:Reference desk/Archives/Science/2010 February 7

From Wikipedia, the free encyclopedia
Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


February 7

Scurvy - 133 year delay

The scurvy article says that in 1734 a book was published which said "scurvy is solely owing to a total abstinence from fresh vegetable food, and greens; which is alone the primary cause of the disease". But the Rose's lime juice article says "The Merchant Shipping Act of...[1867]... required all ships of the Royal Navy and Merchant Navy to provide a daily lime ration to sailors to prevent scurvy." Why was there a 133 year delay? 78.146.77.179 (talk) 02:01, 7 February 2010 (UTC)[reply]

The difficulty of carrying enough fresh veggies on a long ocean voyage (and keeping them fresh) meant that even though they knew the cause of scurvy, they didn't know what to do about it. It wasn't until MUCH later that it was realised that citrus fruits were sufficient to prevent this horrible disease - and not until it became possible to preserve citrus juice for long enough that the disease could be prevented in practice. Science in the 1700s wasn't what it is today! They didn't have the knowledge or tools to understand what it was in the composition of fresh veggies that prevented scurvy - and they didn't know how to find that in a portable, long-lasting form. SteveBaker (talk) 02:06, 7 February 2010 (UTC)[reply]
Also, there is a difference between knowing a remedy, and prescribing it. People knew that child labor was bad for kids forever, but the first laws restricting it were the Factory Acts in 1878. I think the Royal Navy started carrying and distributing limes and lime juice on long-time missions back in the Napoleonic wars. --Stephan Schulz (talk) 08:04, 7 February 2010 (UTC)[reply]
There is also a difference between prescribing a remedy and mandating it. As you point out, limes were used in the navy before the law was passed. alteripse (talk) 14:43, 7 February 2010 (UTC)[reply]
And of course the very first time they suspected that carrying limes would help with scurvy would have been a bad time to pass a law mandating it. First they needed more evidence - and since scurvy takes a long time to develop, it only appeared on the longest sea journeys. Even after it was well established that lime juice worked, it would have been reasonable to assume that Merchant navy ships would have adopted it voluntarily. No wonder that there was such a delay between suspecting the cause and passing the law. SteveBaker (talk) 15:51, 7 February 2010 (UTC)[reply]
On last week's edition of "Empire of the Sea" on the BBC, Dan Snow claimed that scurvy was virtually unknown among the officers on board ships, and this was due to their having a better diet than normal ratings. There was a resistance towards improving the diet of ratings, even though it had been proved to be of benefit, because of the negative attitudes towards the lower ranks from the Admiralty. However, since the BBC website on this doesn't contain any additional information, I can't cite his sources for this assertion. --TammyMoet (talk) 10:55, 7 February 2010 (UTC)[reply]
The Royal Navy experimented with stuff called Portable soup, which was dehydrated vegetable stock. Sadly, most of the Vitamin C (which they didn't know about) was lost in the processing, so it didn't help much. Alansplodge (talk) 15:05, 7 February 2010 (UTC)[reply]
Yeah - and that would be exactly the kind of thing that would leave them wondering whether they had the cause right in the first place. SteveBaker (talk) 15:51, 7 February 2010 (UTC)[reply]
From the mid to late 1700s Captain James Cook used to take sauerkraut as a preventative for scurvy. The article states that Germany still took sauerkraut even after England changed to limes, which gave the Brits the nickname "Limey" and the Germans "Kraut". Vespine (talk) 05:19, 8 February 2010 (UTC)[reply]

Weird TV interference

My wife is currently watching Before Sunrise on TVOntario via Rogers Cable. We have traditional analog TV. A few minutes ago, an unusual form of interference began appearing on the screen. It takes the form of narrow streaks appearing diagonally on the screen, sloping down from upper left to lower right, about 20° from horizontal. Each streak is slightly wiggly, not straight, and about 8 inches long. They vanish after a moment and new streaks appear. Perhaps due to persistence of vision, it seems as though there are about a dozen streaks on the screen at any particular time, spaced 2-3 inches apart. The overall effect is rather like watching the TV through a blizzard (for those in warm climates, that's a snowfall combined with a strong wind).

We switched on a VCR whose tuner is better shielded than the TV's, and using that tuner the interference disappeared. Switching back to the TV tuner, I see that the interference seems to have gone away while I've been typing this message (although TVO still seems to be more fuzzy than usual).

But I'm curious what sort of signal would generate this interference pattern, because I've been watching TV for decades and I've never seen this effect before. --Anonymous, 02:04 UTC, February 7, 2010.

Does this guide help? Mitch Ames (talk) 12:06, 7 February 2010 (UTC)[reply]
Nope, but thanks for trying. None of those has the sort of diagonal streaks I described. Whatever it was, a frequency close, but not identical, to the horizontal scan frequency must have been involved. --Anonymous, 20:21 UTC, February 7, 2010.
Since you use cable TV you have no control over signal quality. However, your cable company is large enough to have a complaints department that serves millions of subscribers, and you can contact them via the website www.rogers.com. If you can send them a photograph of what you describe happening to your picture, there is a good chance that they can explain and/or prevent it happening again. It may have been a disturbance caused by servicing their own equipment. Cuddlyable3 (talk) 20:08, 7 February 2010 (UTC)[reply]
Since changing to a tuner with better shielding eliminated the interference, it's clear that it wasn't originating from the cable system. (The signal was also weaker than normal, which is a cable system problem and I'll take it up with them if it persists.) --Anonymous, 20:21 UTC, February 7, 2010.
Multiple vertical streaks are given by interference at a multiple or "harmonic" of the horizontal scan frequency. Slight offset from a harmonic gives slanting streaks. Thus the interfering frequency may lie far from the horizontal scan frequency. Cuddlyable3 (talk) 13:24, 8 February 2010 (UTC)[reply]
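Cuddlyable3's explanation can be sketched numerically. This is my own back-of-envelope illustration, assuming NTSC timing (a 15,734.26 Hz line rate) and an interferer frequency invented for the example: the integer part of (interferer frequency ÷ line frequency) sets how many bars appear across the screen, and the fractional part sets how much the bars slant from one scan line to the next.

```python
# Why a carrier near a harmonic of the horizontal scan frequency paints
# slanted bars on an analog TV. All figures are illustrative assumptions.

NTSC_LINE_FREQ = 15_734.26  # Hz, NTSC horizontal scan frequency

def streak_pattern(f_interferer_hz):
    """Return (number of bars, horizontal drift per scan line as a
    fraction of screen width) for a sinusoidal interferer."""
    cycles_per_line = f_interferer_hz / NTSC_LINE_FREQ
    n_bars = round(cycles_per_line)      # bars visible across one line
    frac = cycles_per_line - n_bars      # leftover phase per line
    # Each scan line the pattern shifts by `frac` of a cycle, so the
    # bars slant instead of standing vertical:
    shift_per_line = frac / cycles_per_line
    return n_bars, shift_per_line

# A hypothetical interferer 500 Hz above the 10th harmonic:
bars, slope = streak_pattern(10 * NTSC_LINE_FREQ + 500)
print(bars, slope)   # 10 bars, drifting slightly right on each line
```

An interferer exactly on a harmonic would give zero drift (vertical bars); the 500 Hz offset here is what tilts them.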

Animal/plant hybrid

http://www.wired.com/wiredscience/2010/01/green-sea-slug/

Since it's an animal, but it's evolved to produce chlorophyll and carry out photosynthesis like a plant, which kingdom is it? --75.28.169.54 (talk) 02:30, 7 February 2010 (UTC)[reply]

It's still an animal, since the vast majority of its genes are inherited from its evolutionarily recent ancestors, but if the researchers are right (I haven't read the paper, so I don't know how solid their methods were) then it may have picked up a couple of algal genes by horizontal gene transfer. Exchanging genes in this manner is actually pretty common among microbes, but far less so among most complex organisms, so if this turns out to be real then it's incredibly cool. – ClockworkSoul 02:38, 7 February 2010 (UTC)[reply]
The animal undeniably originates from a less exotic species, so this is 100% animal. What makes a plant a plant is the cell structure, not just one gene. 67.243.7.245 (talk) 02:41, 7 February 2010 (UTC)[reply]
But how many other animals photosynthesize? None. --75.28.169.54 (talk) 02:42, 7 February 2010 (UTC)[reply]
Which is exactly why cladistics makes a better basis for taxonomy than single body characteristics. alteripse (talk) 02:52, 7 February 2010 (UTC)[reply]
Precisely. After all, is a bird no longer a bird if it can't fly? Does a mammal become a reptile when it lays eggs? – ClockworkSoul 02:59, 7 February 2010 (UTC)[reply]
Then the headline of the Wired article I linked to is wrong. --75.34.66.111 (talk) 03:03, 7 February 2010 (UTC)[reply]
They often (usually) are... that's what happens when science articles are written by non-scientists and then headlined by even less science-savvy editors. It's good that you noticed that, though; most people never do. – ClockworkSoul 03:17, 7 February 2010 (UTC)[reply]

Concrete/abstract knowledge

Why is it that students have so much trouble understanding concepts like electric/magnetic fields, algebra limits, and derivatives (to give only a few examples)? The universal answer I've heard is that children start by thinking in concrete terms and become more capable of abstract reasoning as they age. This message is echoed in Wikipedia's articles about child development.

I've never understood how this could make sense. Very young children understand concepts like love, emotion, time, and protection. Those are obviously more abstract than electric fields or the concept of using "x" to represent an unknown; I can explain what an electric field is to somebody who's never heard of it, but how do you even begin to explain time or love? Why is it that five-year-olds have no trouble at all understanding all these abstract things that can't be felt, seen, or imagined, yet high school students struggle with "abstract" physics and math? --99.237.234.104 (talk) 04:10, 7 February 2010 (UTC)[reply]

Are you sure they understand love, emotion, time and protection? Well, perhaps they do. But those things have been around for millions of years - we've evolved to be able to pick up on the things we need in early life very quickly - the ability to learn language just by hearing the sounds made by other people for example. The other things you listed have been around for maybe 100 years - they are a recent invention - we certainly aren't evolved to be able to understand electric and magnetic fields. SteveBaker (talk) 04:34, 7 February 2010 (UTC)[reply]
Children may well have experienced love and protection from their parents, emotions of their own, and time. But do children understand those beyond "this is what it is?" Ask a child how time, love, emotions or protection actually work...By the same token, a child could probably understand magnetism in general (hold this up to the fridge and it sticks), but explaining how it works is a different matter. Vimescarrot (talk) 10:12, 7 February 2010 (UTC)[reply]
I believe the key is that mathematical equations, electrical fields, and things of this nature are neither compelling nor real to high school students (or most people for that matter). So long as the TV turns on and the coffee machine works, your typical person, myself included, is quite content to let the men of science worry about polarities and resistances. These other 'abstract' things you speak of that children do understand are not so abstract at all. Are you really telling me that love is not felt? Then it is not love at all. And time is of course not perceived, but everyone has experience with boredom, or at least having to wait. Protection you say is abstract, but to a five year old this is not abstract at all – surrounded by strange and sometimes gruesome adults, not to mention malefactors their own age, it is comforting to have adult parents looking after your interests. Vranak (talk) 12:42, 7 February 2010 (UTC)[reply]
You are comparing apples with bananas. For instance, three year olds can speak grammatically without much difficulty. These very kids will have trouble later with high school grammar. These are two very different forms of understanding you are talking about here. Dauto (talk) 15:06, 7 February 2010 (UTC)[reply]
I also would echo that the types of love, time, and protection, etc., that a child (or a person) understands are the practical types. Who doesn't understand protection from a practical standpoint? We know what it is to be threatened and to be out of it. Understanding it from an abstract or theoretical standpoint is much harder—requires knowledge of psychology, sociology, etc. We know love because of how it makes us physically feel (hence we associate it with hearts, redness, heat—all physical symptoms relating to our emotional state), not because we abstractly understand what it is (which we don't—unless we study a lot of abstract science). Time we only understand as it is lived. The physics and philosophy of time are mind-bogglingly abstract and very hard to teach (the idea that time and space are linked, for example, takes forever for even clever undergraduates to grok). --Mr.98 (talk) 15:10, 7 February 2010 (UTC)[reply]
I'm sure I understand love, emotion, time, and protection, but shall I ever understand electric/magnetic fields, algebra limits, and derivatives?
[Image: ape shaking head] Cuddlyable3 (talk) 19:53, 7 February 2010 (UTC)[reply]
Bringing this back to the realm of the reference desk (as in, let's give some references for further reading): there are some classic psychologists, sociologists, and other similarly minded people who have worked on this, and who have provided some of the classic frameworks for understanding some of these ideas. Consider reading up on:
  • Jean Piaget's Theory of cognitive development (and later theories based on it, or in refutation of it), which proposes that people move through predictable modes of learning, and that one cannot learn abstract ideas (which Piaget calls the "Formal operational stage") if one is not developmentally ready for it.
  • Kohlberg's stages of moral development, which expands Piaget's ideas into the moral realm.
  • Erikson's stages of psychosocial development, yet another "stages of development" theory based on Piaget's model.
  • Bloom's Taxonomy, a pedagogical tool for meeting children's cognitive needs in a variety of modes, both concrete and abstract.
  • Maslow's hierarchy of needs, which deals with some of this, noting that people need to have their basic physical needs met before they can work on meeting their metaphysical needs. So, if someone is worried about where their next meal is coming from, they aren't spending a lot of time pondering algebraic matrices...
In general, one could learn a lot about these ideas by studying topics like pedagogy and Child development and the like. --Jayron32 05:00, 8 February 2010 (UTC)[reply]
Erikson is based on the Freudian model, not the Piagetian model.
to the point, though: for piaget, it's not concrete vs. abstract, it's concrete vs. formal. concrete understandings are simple, direct, and usually physical/experiential. a child understands love or protection in concrete terms of what s/he feels at any given moment, and what his/her parents do (hug, entertain, approve, shield, prevent). It's why you get those classic adolescent "I hate you!" episodes - the child is caught up in the immediacy of a given feeling, and has (quite literally) forgotten the feeling of love. it takes an adult mind to be capable of viewing love in formal terms (as an object that exists independently of current situation and mind state). the reason it's called formal reasoning is that (in piaget's terms) the concept can be conceived of in formulaic terms, to which formal laws of logic and reasoning can be applied. Thus, adults can adapt their relationships to all sorts of conditions and circumstances that adolescents literally can't even imagine (because if they did imagine them, they'd lose track of the feeling of love).
in terms of something like electromagnetism, most everyone can see the concrete effects of a magnet, but in order to understand the theory one needs to be able to forget about any concrete effects and imagine an entirely formal system of rules that define a universe of potential effects. even visualizing that is a serious exercise in formal reasoning; working with it and manipulating it to produce effects is even harder. --Ludwigs2 06:46, 8 February 2010 (UTC)[reply]
Hmm, that doesn't quite seem to fit the Piaget I read. Might this be a matter of different translations? On the emotional side, while it is dangerous to extrapolate from our own experience, I distinctly remember being about 4 and being told off by my mother. She then said "You know that doesn't mean I don't love you, don't you?" and I was puzzled, because of course she still loved me. Her telling me off or being cross with me didn't change the fact that she loved me, and I didn't know why she'd think she had to tell me. Now, 4 year olds are not adolescents, and not everyone is me, but I'd want to see some extremely convincing studies before I bought into this model of emotional development. 86.179.145.61 (talk) 12:34, 8 February 2010 (UTC)[reply]

How common is the lack of understanding of the probability concept?

Some people might even be capable of performing enough probability calculation to pass an exam in a college-level course like "Introduction to probability and statistics".
Still, many of them seem completely unable to really grasp the basic concept of probability.

Has there been any scientific research on exactly how common this lack of ability is?       Seren-dipper (talk) 06:03, 7 February 2010 (UTC)[reply]

lots. social psychology had a fair-to-middling-sized interest in this back in the 80s. In general, people over-estimate small probabilities, fail to appreciate conditional probabilities, and are largely incapable of appreciating stochastic processes or other weighted or patterned random events. --Ludwigs2 09:28, 7 February 2010 (UTC)[reply]
Hmm. I am not quite sure. Are we talking about the same thing?
One thing is making errors in the assessment of exactly how great or small the probability or likelihood of something is. Another thing entirely is the inability to comprehend the probability concept itself.
It is this latter scenario that I am searching for numbers and hard facts about.        --Seren-dipper (talk) 10:35, 7 February 2010 (UTC)[reply]
So, can you explain what exactly you mean by "the probability concept itself" ? Or maybe give some examples of situations in which you think it is not comprehended ? Gandalf61 (talk) 10:45, 7 February 2010 (UTC)[reply]
Sure! I can give you an illustrative example of someone not comprehending it:
I had a disagreement once, where I argued that if I were to end up in a situation where I had tossed a (regular) coin and five times in a row it had landed "heads up", then there would still not be any better chance of winning a bet by betting that the next toss would give "tails up". My 'friend' would not believe me! After I had explained everything, to the best of my ability, my 'friend' just mumbled: "But you can not KNOW that for SURE!". %-)
--Seren-dipper (talk) 11:46, 7 February 2010 (UTC)[reply]
I just realized that what I really meant was: "a fair coin" (i.e. not tampered with (compare: "Loaded dice")) (I did not mean: "a regular coin", which I mistakenly did say). (English is not my first language).
--Seren-dipper (talk) 06:33, 9 February 2010 (UTC)[reply]
What you describe is the Gambler's fallacy. Gambler's fallacy#See also lists related articles that might be of interest. Mitch Ames (talk) 11:57, 7 February 2010 (UTC)[reply]
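For anyone who would rather see the independence claim than take it on faith, here is a small Monte Carlo sketch (my own illustration, not from any of the linked articles): simulate a fair coin, find runs of five heads, and check what the next toss does.

```python
# Monte Carlo check of the gambler's fallacy: for a fair coin, toss 6
# is still 50/50 even immediately after five heads in a row.
import random

def next_after_streak(trials=200_000, streak=5, seed=42):
    rng = random.Random(seed)       # fixed seed for reproducibility
    heads_after = total_after = 0
    for _ in range(trials):
        flips = [rng.random() < 0.5 for _ in range(streak + 1)]
        if all(flips[:streak]):          # five heads in a row
            total_after += 1
            heads_after += flips[streak] # outcome of the next toss
    return heads_after / total_after

print(next_after_streak())   # close to 0.5 - no drift toward tails
```

Roughly 1 in 32 of the simulated runs begins with five heads, so a few thousand qualifying cases go into the estimate, and the fraction of heads on the following toss comes out near one half.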
inductivists would bet the opposite way. 81.131.60.13 (talk) 18:59, 7 February 2010 (UTC)[reply]

I already gave you a precise reference and answer above. Go to your nearest bookstore and look at The Drunkard's Walk by Leonard Mlodinow. As I said when you asked the question about magical thinking, it gives you a full answer to both questions and many citations. alteripse (talk) 08:05, 6 February 2010 (UTC)[reply]

There are many levels to this misunderstanding. The "coin lands on a head 5 times in a row - what is the probability of a 6th head" issue is one of them - but there are both deeper and shallower versions of this. Even people who think they understand this stuff quite well often fall foul of the classic Monty Hall problem...and at the opposite end of the scale, pointing out the astounding improbability of your Toyota's gas pedal sticking and killing you isn't helping Toyota's stock price out in the slightest! (During the last year only 19 people have died due to sticky gas pedals - yet 100 people die on the US roads every single day. Driving a Toyota is having an utterly negligible effect on your probability of having a car accident - so why are you worried?). People really suck at statistics and that's that. In all likelihood this problem is hard-wired into our brains because we're strongly driven to make conclusions from scrappy bits of data. If I'm a stone-age hunter and I know that there have been nice fat deer down at the waterhole the last 5 times I visited - then I should certainly go there next time I need a meal. In that environment there would be no way to distinguish between random events and non-random ones - was it random that the deer happened to pick that waterhole instead of another one - or is there something statistically significant going on here? So our "common sense" leads us astray sometimes. Perhaps the coin isn't a fair one? Maybe it does come down heads more often than tails? It takes a conscious effort of willpower not to apply that instinct to the tossed coin problem - and if the coin had come up heads a thousand times in a row - I can't imagine any statistician, however anal, betting on tails for the 1,001st toss.
So there is some number of coin tosses where everyone will 'break' with strict logic and pick heads - for your friend that number is 5 - for me, it's maybe 10 - the odds of this being random luck versus it being a biased coin are very high because coins are overwhelmingly fair - but are they better than one in 1024? I suspect that even people who get that right are not doing so at the level of "gut feel" - but by knowingly overriding that gut feel. I would bet that even people who are good at statistics and understand it fully could be fooled into making the wrong decision if you wrap it up sufficiently that they stop thinking consciously about it. SteveBaker (talk) 15:38, 7 February 2010 (UTC)[reply]
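The "at what streak length do you break?" intuition above can be put in Bayes' rule form. The prior (1 in a million coins is crooked) and the crooked-coin model (always lands heads) are my own illustrative assumptions, not figures from the thread:

```python
# Posterior probability that a coin is crooked after n heads in a row,
# assuming a 1-in-a-million prior for crooked coins and that a crooked
# coin always lands heads (both assumptions are illustrative).
def p_biased_given_heads(n_heads, prior_biased=1e-6):
    p_data_fair = 0.5 ** n_heads   # chance a fair coin does this
    p_data_biased = 1.0            # a two-headed coin does it always
    num = p_data_biased * prior_biased
    den = num + p_data_fair * (1 - prior_biased)
    return num / den

for n in (5, 10, 20, 30):
    # Posterior stays tiny at 5 heads, crosses 50% near 20 heads,
    # and is near certainty by 30 heads.
    print(n, p_biased_given_heads(n))
```

With these assumptions the rational "break point" lands around 20 consecutive heads, which fits the qualitative point: somewhere between your friend's 5 and a statistician's stubbornness, betting on the streak stops being a fallacy and becomes an inference about the coin.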
To be fair, Toyota accounts for only about 15% of current U.S. auto sales. If we assume that the number of Toyotas currently on the road is a similar fraction (I couldn't find those numbers in a hurry), and if we assume that automobile deaths are roughly uniformly distributed across cars and manufacturers (kind of hand-wavey, but a passable first approximation) then about 15 deaths per day are in Toyotas. Adding an extra 19 annual deaths is an extra 0.3%. Therefore, if you're killed in a Toyota (conditional probability!) then there's about a 1 in 300 chance that it will be due to a sticky gas pedal.
Another way to look at the situation is to look at the number of recalled vehicles – about 2.3 million in the U.S. [1] – and compare that to the total number of registered vehicles in the United States: about 250 million. If the affected Toyota models represent about 1% of the cars on the road, then they should also represent about 1% of the deaths — call it about 1 per day, or three to four hundred per year. Nineteen deaths represent five percent or more of that total. I further assume for the purposes of this discussion that your risk of dying in all vehicles is otherwise roughly equal (an approximation that almost certainly needs tweaking). In that case, if you're killed in a Toyota covered by the sticky-pedal recall then there's about a 1 in 20 chance that it was the sticky pedal that killed you — and that's starting to get into the realm of legitimate concern. Ain't statistics grand? TenOfAllTrades(talk) 16:55, 7 February 2010 (UTC)[reply]
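For anyone checking the arithmetic, here are the two estimates above redone in code. Every input is a figure quoted in this thread (sales share, recall size, death counts), not an authoritative statistic, and the uniform-risk simplifications are the same ones stated above:

```python
# Reproducing the two back-of-envelope Toyota estimates from the thread.
US_ROAD_DEATHS_PER_DAY = 100
STICKY_PEDAL_DEATHS_PER_YEAR = 19

# Estimate 1: treat Toyotas as ~15% of cars on the road (extrapolated
# from sales share) and deaths as uniform across makes.
toyota_deaths_per_year = 0.15 * US_ROAD_DEATHS_PER_DAY * 365  # ~5475
share1 = STICKY_PEDAL_DEATHS_PER_YEAR / toyota_deaths_per_year
print(f"1 in {1/share1:.0f}")   # prints "1 in 288" - roughly 1 in 300

# Estimate 2: only the ~2.3M recalled cars out of ~250M registered.
recalled_share = 2.3e6 / 250e6                                # ~0.9%
recalled_deaths = recalled_share * US_ROAD_DEATHS_PER_DAY * 365
share2 = STICKY_PEDAL_DEATHS_PER_YEAR / recalled_deaths
print(f"1 in {1/share2:.0f}")   # prints "1 in 18" - roughly 1 in 20
```

The two answers differ by more than an order of magnitude purely because of which denominator you pick, which is rather the point of the discussion.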

The following was written as a reply to

user:alteripse's comment of 08:05, 6 February 2010 (UTC) 

(But I spent so much time typing it in, that two other posts got in between).

Yes! and Thank you! (The book: [Drunkard's Walk: How Randomness Rules Our Lives] went straight to my reading list and I am really looking forward to getting hold of it! :-) (Unfortunately I will probably have to wait a couple of weeks, while the book is shipped overseas).
But your previous answers, above, and the Wikipedia article about Mlodinow, and the reviews at amazon.com, all seem to indicate that this book talks about how everybody (to a greater or lesser degree) makes mistakes in assessing the probability and likelihood of this or that, in our daily lives.
Which is something slightly different from whether or not an individual is able to grasp the concept of probability! For example:
An older person who does not understand the probability concept might still make better probability assessments in his unconscious mind, on account of helpful instincts combined with a long life's worth of experience, than a young person who does understand the concept of probability but lacks the lifelong experience of the older one.
Well. You referred to my earlier posted question (which was about "magical thinking") (I wrote a clarification of it <- ). I believe I still have not gotten quite the answer I am looking for there either. That is because I am interested only in the conscious usage of magical thinking (and maybe half conscious after it becomes a habit).
This article (link below), from which I only have access to the abstract, seem to indicate that there are substantial differences between conscious and unconscious magical thinking.
© 2007 by JOURNAL OF CONSUMER RESEARCH, Inc. • Vol. 34 • April 2008; DOI: 10.1086/523288
"Conscious and Nonconscious Components of Superstitious Beliefs in Judgment and Decision Making" by Thomas Kramer and Lauren Block
So, I am still not quite confident that I have found scientific research telling of the prevalence of conscious use of magical thinking. Nor of the prevalence of what we now may refer to as "immunity to the gambler's fallacy" (by comprehending the concept of probability).
--Seren-dipper (talk) 18:34, 7 February 2010 (UTC)[reply]
I made a clarifying slight rephrase in my above entry.
--Seren-dipper (talk) 01:24, 9 February 2010 (UTC)[reply]
(Directed a link, above, towards the archived version)--Seren-dipper (talk) 22:43, 11 February 2010 (UTC)[reply]
Another book you might like (I certainly do) is The Black Swan by Nicholas Taleb. He talks a lot about the reverse of your observation, how people commonly apply the rules of probability when they absolutely shouldn't. For most things in real life, we have no idea what the "rules" are and so probability is useless - probability as we learn it in high school and university is only useful in casino games and the like. In fact he uses your coin flipping example exactly: he asks "Doctor John", a scientist, what the chances are of flipping a head after 99 tail flips, and he says "50% of course. The prior flips have nothing to do with the probability of the next." He asks "Fat Tony", a street-smart trader from Brooklyn (a place that holds a strange fascination for Taleb) the same question and he answers "tails. You're telling me that you flipped 99 tails in a row and got them every time, and that coin ain't loaded? Getouddahere" or something along those lines. TastyCakes (talk) 18:51, 7 February 2010 (UTC)[reply]
Well, the trouble with that example is that you have to specify in advance whether the coin is known to be fair or not. The probability of getting 99 tails in a row with a fair coin is 2^99:1 against. So if you aren't told that the coin is fair, you have to ask yourself which is more probable - a running streak of 99 tails - or an unfair coin? Now, I would reason thusly: Has there ever - in the entire history of coinmaking - been an unfair coin? The answer to that is obviously "Yes". Have more than 2^99 coins been minted since the dawn of time? The answer to that is clearly "No". So the probability of this being an unfair coin is clearly much larger than the chances of 99 consecutive tails...and Fat Tony is right to choose tails. The statistician is clearly a moron. On the other hand - if you are told that this is definitely a fair coin - then even though you've just witnessed the most astoundingly unlikely thing in the entire history of the universe, the probability of the next flip being a head is still 50:50. In that case, the statistician is right to say it's 50/50. Personally, I'd pick tails even though I was assured it was a fair coin because the probability of the rules of statistics themselves being correct is probably less than 1 - 2^-99! SteveBaker (talk) 19:45, 7 February 2010 (UTC)[reply]
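A quick scale check on the numbers in that argument. The count of coins ever minted is my own loose guess, included only to show how enormous the gap is:

```python
# How big is 2^99 compared with any plausible count of coins ever made?
# The 10**12 figure is an assumed, generous upper bound.
streak_odds = 2 ** 99
coins_ever_minted = 10 ** 12   # illustrative guess

print(f"2^99 = {float(streak_odds):.2e}")    # prints "2^99 = 6.34e+29"
print(streak_odds > coins_ever_minted ** 2)  # prints "True"
```

Even if every coin ever minted were tossed 99 times, a fair-coin streak of 99 tails would remain wildly unlikely, so "the coin is unfair" wins by an astronomical margin.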
I think that is exactly what Taleb is getting at, and he uses the coin flipping as a simple example. He expands it to other, more consequential, areas like, say, economics. While the Dr Johns of the world will try to apply probability to things, the Fat Tonies of the world do not believe the underlying model and so disagree. The "scientist type" mistakes "the map for the landscape"; they put so much faith in their model that they leave themselves vulnerable to all the things that they could not possibly predict and incorporate into their model, the so called "unknown unknowns". He argues that this makes quants, for example, underestimate the risks in some investments, because they are applying probability to a system where they don't know the rules, they don't know the likelihood of terrorists crashing into downtown New York or a global banking collapse. They don't know what tiny event is going to be a "black swan" and radically change things, for the positive or negative, and overshadow (by many magnitudes) the type of probabilities they (and others) do expect and build into their models.
He's not arguing against the bell curve outright, he's just saying that it only describes some things, natural phenomena and the like. In systems where a small part of the population accounts for a large part of the total impact (one elephant can't be a million times bigger than another, but one human can be a million times richer than another), the bell curve does not suitably describe it, and can lead the user far astray.
Anyway, I don't think everyone likes or agrees with his ideas, but I thought it was an interesting read. TastyCakes (talk) 01:39, 8 February 2010 (UTC)[reply]
What about probability theory? ~AH1(TCU) 21:20, 7 February 2010 (UTC)[reply]
Like most of science and mathematics, probability works just fine - but only so long as you specify your initial assumptions correctly. The probability of a tail after a run of 99 tails FOR A FAIR COIN is indeed 50/50 just like probability theory says. However, without knowing that it IS a fair coin, you can use probability theory to show the overwhelming likelihood that this is NOT a fair coin and hence the probability of getting a head on the 100th toss is more like 2^99:1 against...which (as near as dammit) means that the next toss will come up with a tail. There is nothing wrong with probability theory - PROVIDING you state your knowns and unknowns clearly up-front. Fat Tony doesn't believe that the odds of there being an unfair coin are that small - so he bets tails in perfect accordance with probability theory. The supposed stupid statistician (being a complete idiot evidently) somehow assumes that we're talking about a fair coin. If that is indeed the case then he's completely correct - but if that was not clearly stated up-front then he'd have to be a complete moron to assume it without evidence. While it's all good fun to make the street-wise Fat Tony look smarter than one of those stupid statisticians, I think it's really unlikely that a qualified statistician faced with a real question wouldn't look at the odds and doubt the premise of the question. This makes another of those "just so" stories like the one that says that scientists "proved" that a bumblebee can't possibly fly, which turns out to be utterly without foundation. SteveBaker (talk) 13:43, 8 February 2010 (UTC)[reply]
Again, this is obviously a much simpler example than one that would have any consequence, but Taleb's argument is that it's a simple depiction of a fairly common mistake: people build a model that assigns certain probabilities to certain outcomes. But in the real world, such models are sometimes completely invalidated (or their predictions overwhelmed) by an event the model's creator didn't consider - indeed couldn't reasonably have considered - an event that Taleb calls a black swan. In the coin case, the black swan was an unfair coin; in real life it could be anything. The "stupidity" of the scientist, according to Taleb, is not that he doesn't recognize an obviously rigged coin - it's that he assumes he could ever know he has a fair coin to start with. Of course, with a coin you can do a pretty good check and judge whether it's fair or not, but with things more complicated (and more important), verifying "initial assumptions", as you put it, is not possible. I don't think that invalidates models, but it is certainly good reason to take many of them with a grain of salt. For example, if banks were to model the likelihood of a bunch of people defaulting on their bonds at the same time, they might find it a very unlikely event - based on the probability of individual failures combined with the probability of linked factors causing a bunch of failures at once. But their estimate could be completely invalidated by a "game changing" event, such as September 11th, a global financial collapse, an epidemic, a big war or countless other "unknown unknowns". If they have put too much faith in this incorrect model, they could have exposed themselves to too much risk and been devastated - losing a century of profit in one event. In my superficial understanding of the financial industry, that seems to have been the situation for many banks at the end of 2008. TastyCakes (talk) 19:08, 8 February 2010 (UTC)[reply]
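The correlated-defaults point can be illustrated with a toy Monte Carlo simulation. All of the numbers here (portfolio size, per-bond default rate, shock probability) are made-up assumptions for illustration, not real financial data:

```python
import random

random.seed(1)
N_BONDS = 100
P_INDIVIDUAL = 0.02   # assumed per-bond annual default rate (illustrative)
P_SHOCK = 0.01        # assumed annual chance of a "game changing" shared shock
TRIALS = 10_000

def mass_default_rate(shock_possible):
    """Fraction of simulated years in which at least half the portfolio defaults."""
    wipeouts = 0
    for _ in range(TRIALS):
        # In the shock scenario, a rare shared event makes every bond default.
        shock = shock_possible and random.random() < P_SHOCK
        defaults = sum(1 for _ in range(N_BONDS)
                       if shock or random.random() < P_INDIVIDUAL)
        if defaults >= N_BONDS // 2:
            wipeouts += 1
    return wipeouts / TRIALS

indep_risk = mass_default_rate(False)  # model that ignores common shocks
shock_risk = mass_default_rate(True)   # same model plus one rare shared shock
print(indep_risk, shock_risk)
```

With purely independent defaults, losing half the portfolio essentially never happens; adding a single rare shared shock makes it happen roughly 1% of the time - exactly the tail risk the independence assumption declares impossible.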
You just have to think about how people reacted to things like not finding WMDs in Iraq. Did they think, "ah, we've tested the hypothesis and now we have to decrease our original estimate of the probabilities"? Not a bit of it: "he must be even more evil than we originally thought, and has cunningly hidden them, so he is more dangerous" was how many people dealt with it. And that was a simple case compared to the dreadful miscarriages of justice that have resulted from a practically universal failure to handle probabilities correctly in courts. Dmcq (talk) 23:56, 7 February 2010 (UTC)[reply]
Years ago I read of an experiment conducted on participants at a statisticians' conference. They were given a thought problem in which six people were selected, with replacement, from a population specified to have a mean height of 5.5 feet, and the average height of the six came out at 5.0 feet. The statisticians, on average, thought that the next 6 selected would be expected to have an average height greater than 5.5 feet, to make up for the shortness of the first 6. This showed that they had a "book understanding" of statistics, but "their gut" subscribed to the gambler's fallacy. Theoretically, the results from the first sample of six would not influence the expected mean of the next sample of six. Edison (talk) 20:20, 8 February 2010 (UTC)[reply]
I would really like to get my hands on that report! It would be very helpful to me!
Could you please try to think of any clues to help me find it?
  1. Where might you have read about it? Or at least what kind of publication (scientific journal?) did you read it in?
  2. If it was in a newspaper: do you remember anything about how the headline was worded?
  3. Approximately how long ago did you read about this?
  4. Had the conference just ended at that time, or had it happened years earlier?
  5. In which country did the conference take place?
  6. Do you remember any names of the participants?
  7. Anything!?
--Seren-dipper (talk) 06:59, 10 February 2010 (UTC)--Seren-dipper (talk) 20:05, 10 February 2010 (UTC)[reply]
But again, did the questioner specify an infinitely large "population" from which these people were being drawn? If not, then in a finite population with an average height of 5.5, removing six people who are shorter than that will indeed increase the probability that the next six will be taller than 5.5. That's the problem with this kind of thing - unless you very carefully specify the conditions (is the coin fair? is the population infinite?) then you may get answers different than the ones you want. SteveBaker (talk) 23:50, 9 February 2010 (UTC)[reply]
The "sampling with replacement" part removed any contingency where the people left in the population were the tall ones. Having a large population (thousands) would likewise mean that every sample of 6 would have the population mean as the expected value. The "gambler's fallacy" is a powerful phenomenon: if I get heads 4 times in a row from a fair coin, it is very hard to keep believing that heads and tails are equally likely on the next toss. Of course in the "real world" one might eventually wonder if the claim of "fair coin" or "population mean height is 5.5" is untrue. Edison (talk) 01:38, 10 February 2010 (UTC)[reply]
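Edison's thought problem is easy to check by simulation. The sketch below uses an invented population (10,000 normally distributed "heights" with mean 5.5 feet and a deliberately wide spread of 0.6 feet, so that first samples averaging 5.0 feet or less are not vanishingly rare):

```python
import random

random.seed(0)
# Assumed population for illustration: 10,000 heights, mean 5.5 feet, sd 0.6.
population = [random.gauss(5.5, 0.6) for _ in range(10_000)]
pop_mean = sum(population) / len(population)

# Whenever a first sample of 6 (drawn with replacement) averages 5.0 feet or
# less, record the mean of the NEXT sample of 6 from the same population.
next_means = []
while len(next_means) < 1000:
    first = [random.choice(population) for _ in range(6)]
    if sum(first) / 6 <= 5.0:
        second = [random.choice(population) for _ in range(6)]
        next_means.append(sum(second) / 6)

avg_next = sum(next_means) / len(next_means)
print(round(pop_mean, 3), round(avg_next, 3))
```

The second sample's average stays close to the population mean rather than rising above it: the population does not "make up" for an unusually short first sample, which is exactly the gambler's fallacy the surveyed statisticians fell for.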

keeping roses in tip top shape[edit]

It's Singles Awareness Day again but I'm planning to change that by giving someone 12 red roses :) Anyways, so how does one make the roses survive for at least a day? I plan to buy the flowers on Feb 14 but I can only see the girl on Feb 15.--121.97.236.134 (talk) 10:56, 7 February 2010 (UTC)[reply]

Buy ones that are not quite open. The main thing is to keep them cool and in the shade, and in water of course. As long as they are fresh they should last at least a week anyway. Good luck!--Shantavira|feed me 11:14, 7 February 2010 (UTC)[reply]
Some more tips here[2]. How much science is involved I don't know. Alansplodge (talk) 15:00, 7 February 2010 (UTC)[reply]
Just curious: could you perhaps buy them on the 15th? You might well save money and not have to worry about keeping them fresh as long. Nyttend (talk) 20:25, 7 February 2010 (UTC)[reply]
Indeed. Prices on roses might drop significantly on the 15th. Dismas|(talk) 06:47, 8 February 2010 (UTC)[reply]
Or there might be none left, and then it'll be Singles Awareness Day again :-P More seriously, while I agree with the general point, it may be worth seeing if you can order or reserve them. Nil Einne (talk) 23:00, 8 February 2010 (UTC)[reply]

(reset) Thanks for the tips. I'll also consider ordering them in advance as well though there are no nearby flower shops here.--121.54.2.188 (talk) 05:35, 9 February 2010 (UTC)[reply]

radioactivity[edit]

Do radioactive things really glow green in the dark? --Nick —Preceding unsigned comment added by 76.230.229.140 (talk) 16:37, 7 February 2010 (UTC)[reply]

Glow, yes. Radium glows blue (click HERE). I don't know what radioactive material (if any) glows green, though. --220.101.28.25 (talk) 17:16, 7 February 2010 (UTC)[reply]
You can also get other (non-radioactive) materials to glow by bombarding them with radiation. Some of these do glow green, but it's not the radioactive substance itself that's glowing. See Radioluminescence. Buddy431 (talk) 17:23, 7 February 2010 (UTC)[reply]
(edit conflict) The actual colour can be changed, but this is not the 'natural' colour directly produced by radioactive decay; see Tritium illumination. "The electrons emitted by the radioactive decay of the tritium cause phosphor to glow", from Tritium. --220.101.28.25 (talk) 17:33, 7 February 2010 (UTC)[reply]
The whole 'glowing green' thing is actually a comic-book invention based on the effect of phosphorescence - common phosphorescent materials have a green hue and react to radiation. They were, in fact, important to the discovery of radiation, and people may have confused them as a product of radiation rather than an indicator of radiation. --Ludwigs2 18:04, 7 February 2010 (UTC)[reply]
....so Springfield is safe then.--Shantavira|feed me 18:14, 7 February 2010 (UTC)[reply]
With Bart around and Homer in the reactor? No one else has said it, so I will: standard Kryptonite glows green! :-)220.101.28.25 (talk) 04:32, 8 February 2010 (UTC)[reply]
See Phosphor#Radioactive_light_sources, about the green light from radioactive paint on the dials of watches.
  --Seren-dipper (talk) 18:59, 7 February 2010 (UTC)[reply]
Some radioactive things can glow orange or red (because they are hot!), some glow blue in water (for more complex physical reasons). I don't think anything glows green on its own, without a phosphor, though. --Mr.98 (talk) 20:28, 7 February 2010 (UTC)[reply]
Don't forget about neon lights, which are not radioactive, and aurorae, which can glow green. ~AH1(TCU) 21:07, 7 February 2010 (UTC)[reply]
The tradition of "glow green in the dark" radioactivity presumably derives from the radioactive luminous paint mentioned in the article to which Seren-dipper refers above: ("The formula used on watch dials between 1913 and 1950 was a mix of radium-228 and radium-226 with a scintillator made of zinc sulfide and silver (ZnS:Ag).") I recall playing with this paint about fifty years ago. I wonder how much radiation I absorbed! Dbfirs 22:17, 7 February 2010 (UTC)[reply]
Dbfirs, this information |HERE about the paint may help. (Hopefully to allay any fears!) Has anyone accused you of glowing? --220.101.28.25 (talk) 07:26, 8 February 2010 (UTC)[reply]
What, nobody likes mentioning Cherenkov radiation anymore? Comet Tuttle (talk) 17:37, 8 February 2010 (UTC)[reply]
I liked that one! :-) and added it on the Glow-in-the-dark disambiguation page.
--Seren-dipper (talk) 21:24, 10 February 2010 (UTC)[reply]
That reminds me of radium dials and the hitherto linked radioluminescence. ~AH1(TCU) 23:11, 8 February 2010 (UTC)[reply]

Another Avatar question - helicopters[edit]

I finally had time to check out Avatar. I am wondering about the helicopters (or whatever the flying machines are). Could such a design really fly? They are similar to the machines in the Terminator movies, except that those had thrusters rather than rotors. I assume that the machines from Terminator could likely fly as depicted, as evidenced by the Harrier Jump Jet. —Preceding unsigned comment added by 99.250.117.26 (talk) 20:19, 7 February 2010 (UTC)[reply]

It's supposed to be a low-gravity planet, so a flying machine could have smaller rotors/propellers/whatever than would be needed on Earth. This design from the movie looks a lot like the real-life V-22 Osprey, except for the armament and the use of ducted fans, and it looks to me as if it would indeed be practical.
Now, airborne mountains, on the other hand... --Anonymous, 20:30 UTC, February 7, 2010.
Pandora is supposed to be an Earth-like moon, not a planet. ~AH1(TCU) 21:05, 7 February 2010 (UTC)[reply]
Right you are, but that makes no difference to anything I was talking about. --Anon, 06:26 UTC, February 8, 2010.
That design, unlike the Osprey's, appears to have stacked contra-rotating propellers. Any idea why? Cuddlyable3 (talk) 13:02, 8 February 2010 (UTC)[reply]
The Osprey has a single engine - so (a) it is guaranteed that the two rotors turn at the exact same speed and (b) if the engine craps out, you're in a rather heavy glider. A better design might have two engines - but then you'd have to match their RPMs exactly in order to avoid torque-steer, and in the event that one engine died, you'd be spun around too violently to keep flying without shutting down the other engine too. With contra-rotating propellers, there is no torque-steer, and you could even use throttle control to bank the aircraft without needing a cyclic pitch control for the rotors... and even with one engine out, you might have enough lift to carry on flying. Either that, or it just looked cool to the movie's art director. SteveBaker (talk) 13:26, 8 February 2010 (UTC)[reply]
Sorry SteveBaker, Specifications (MV-22B) "Powerplant: Rolls-Royce Allison T406/AE 1107C-Liberty turboshafts". There is a 'cross shaft' so that one engine can power both rotors if either engine loses power. --220.101.28.25 (talk) 18:38, 8 February 2010 (UTC)[reply]
A single pair of contra-rotating propellers must turn on an axis that passes through the center of gravity of a helicopter otherwise torque-steer is inevitable. Cuddlyable3 (talk) 14:54, 8 February 2010 (UTC)[reply]
They are on a common shaft like almost every helicopter the Russian "Kamov" company built. SteveBaker (talk) 17:10, 8 February 2010 (UTC)[reply]
Even in our gravity, something like that could be made to fly if you had a sufficiently large source of power and sufficiently strong materials from which to build the rotors. We're also not told about the density of the atmosphere on that planet, and that also makes a big difference. Worse still, this planet has rocks that float in the air, anchored to the ground only by the plant life. There are hints that this is due to the mysterious material called "unobtainium" - which is the reason humans came there to do mining operations in the first place. If unobtainium has this mysterious antigravity effect - then perhaps that is what lightens these craft to the point where they can fly with improbably tiny rotors and thrusters. The problem is - this is science fiction - and whatever the author says works is what works - regardless of currently known science. Just relax and enjoy the movie. SteveBaker (talk) 13:21, 8 February 2010 (UTC)[reply]
James Cameron, from this PopSci interview: "What can cause a mountain to float? Well, if it was made out of an almost-pure room-temperature superconductor material, and it was in a powerful magnetic field, it would self-levitate. This has actually been demonstrated on a very small scale with very strong magnetic fields. Then my scientists said, 'You’ll need magnetic fields that are so powerful that they would rip the hemoglobin out of your blood.' So I said, 'Well, we’re not showing that, so we may just have to diverge a little bit from what’s possible in the physical universe to tell our story.'" —Akrabbimtalk 13:39, 8 February 2010 (UTC)[reply]
Yeah - well, the thing is, if you want flying rocks to make a good story - or to make it look cool - then have a bunch of flying rocks and leave us to wonder what causes that. But the moment you try to impose a sciencey sounding explanation, you make it very clear that you don't have a clue what you're talking about and that's bad. Worse still, you may inadvertently teach kids something that flat out isn't true - and that makes me angry. I thought they did well to call the mysterious stuff they are doing all of this for "Unobtainium" because that's a term that scientists have used for a long time for materials with magical properties that we don't expect ever to find. That simply clued us science geeks into the "OK - don't worry about this bit - just sit back and enjoy the pretty graphics". Which is fine. SteveBaker (talk) 17:06, 8 February 2010 (UTC)[reply]