Wikipedia:Reference desk/Archives/Science/2012 March 14

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 14

NIST aluminium ion clock

I was watching a popular science program on TV and it said that the aluminum ion experimental clock at the National Institute of Standards and Technology is the world's most precise clock, and is accurate to one second in about 3.7 billion years.

What do they mean by that? If I say my watch is accurate to 1 second a day I mean that it gains or loses no more than 1 second a day relative to GMT or some other standard. In other words, accuracy can only be measured relative to a standard.

But if the NIST clock is truly the most accurate then it is the standard, since there is nothing more accurate to compare it to. In effect, the NIST clock defines time. To check its accuracy you would have to measure 3.7 billion years by some other, more accurate, means, which contradicts the premise.

So, what do they mean by saying it’s the most precise clock? — Preceding unsigned comment added by Callerman (talkcontribs) 00:37, 14 March 2012 (UTC)[reply]

They are translating the clock resonator's Q factor, which is a very technical measurement (of phase noise, or frequency stability), into "layman's terms." Expressing the frequency stability in "seconds of drift per billion years" is a technically correct, but altogether meaningless, unit conversion. Over the time-span of a billion years, it's probable that the Q-factor will not actually remain constant. It's similar to expressing the speed of a car in earth-radii-per-millennia, instead of miles-per-hour. The math works out; the units are physically valid and dimensionally correct ([1]); but we all know that a car runs out of gas before it reaches one earth-radius, so we don't use such silly units to measure speed. Similarly, physicists don't usually measure clock stability in "seconds drift per billion years" in practice.
Here's some more detail from NIST's website: How do clocks work? "The best cesium oscillators (such as NIST-F1) can produce frequency with an uncertainty of about 3 × 10^-16, which translates to a time error of about 0.03 nanoseconds per day, or about one second in 100 million years." And, they link to "From Sundials to Atomic Clocks," a free book for a general audience that explains how atomic clocks work. Chapter 4 is all about Q factor: what it is, why we use it to measure clocks, and how good a Q we can build using different materials. Nimur (talk) 01:04, 14 March 2012 (UTC)[reply]
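To make the figures quoted above concrete, here is a minimal Python sketch (my own arithmetic, not NIST's) that converts a fractional frequency uncertainty into the "layman's terms" units being discussed; the 3 × 10^-16 and "1 second in 3.7 billion years" numbers are taken from the posts above.
```python
# Minimal sketch: convert a clock's fractional frequency uncertainty into
# "nanoseconds of error per day" and "years to accumulate one second of error".

SECONDS_PER_DAY = 86_400
SECONDS_PER_YEAR = 365.25 * SECONDS_PER_DAY

def time_error(fractional_uncertainty):
    """Return (ns of error per day, years to drift by one second)."""
    ns_per_day = fractional_uncertainty * SECONDS_PER_DAY * 1e9
    years_per_second = 1.0 / (fractional_uncertainty * SECONDS_PER_YEAR)
    return ns_per_day, years_per_second

# NIST-F1 caesium figure quoted above: ~0.026 ns/day, ~1 s in ~100 million years
print(time_error(3e-16))

# Fractional uncertainty implied by "1 second in 3.7 billion years" (Al+ clock)
implied = 1.0 / (3.7e9 * SECONDS_PER_YEAR)
print(f"{implied:.1e}")   # roughly 9e-18
```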
Alright Nimur, a question to your answer so you know you're not off the hook yet. Does anyone in science or society-at-large benefit from the construction of a clock that is more accurate than 0.03 nanoseconds per day, or is this an intellectual circle jerk? (which is also fine, by the way, because science is awesome.) Someguy1221 (talk) 01:56, 14 March 2012 (UTC)[reply]
Indeed, there are practical applications. If I may quote myself, from a seemingly unrelated question about metronome oscillations, in May 2011: "This "theoretical academic exercise" is the fundamental science behind one of the most important engineering accomplishments of the last century: the ultra-precise phase-locked loop, which enables high-speed digital circuitry (such as what you will find inside a computer, an atomic clock, a GPS unit, a cellular radio-telephone, ...)." In short, yes - you directly benefit from the science and technology of very precise clocks - they enable all sorts of technology that you use in your daily activities. The best examples of this would be high-speed digital telecommunication devices - especially high frequency wireless devices. A stable oscillator, made possible by a very accurate clock, enables better signal reception, more dense data on a shared channel, and more reliable communication. Nimur (talk) 06:43, 14 March 2012 (UTC)[reply]
Nimur has not correctly understood the relationship between Q and stability. Q is a measure of sharpness of resonance. If you suspend a thin wooden beam between two fixed points and hit it, it will vibrate - i.e. it resonates. But the vibrations quickly die away, because wood is not a good elastic material - it has internal friction losses. If you use a thin steel beam, the vibrations die away only slowly - it is a better elastic material. Engineers would say the wood is a low-Q material and the steel a high-Q material. All other things being equal, a high-Q resonator will give a more stable oscillation rate, because if other factors try to change the oscillation rate the high-Q resonator will resist the change better. But there are other things, and they aren't generally equal. Often a high-Q material expands with temperature - then the rate depends on temperature no matter how high the Q is.
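As an illustration of the "hit a beam and watch the ringing die away" picture, here is a minimal Python sketch with made-up Q values and an assumed 100 Hz resonance (not measured material properties): the free vibration of a resonator decays roughly as exp(-π·f0·t/Q), so a high-Q beam rings for many more cycles.
```python
import math

def ringdown_amplitude(t, f0_hz, q):
    """Relative amplitude of a freely ringing resonator after t seconds."""
    return math.exp(-math.pi * f0_hz * t / q)

f0 = 100.0                                   # assumed resonance for both beams
for name, q in [("wooden beam (low Q)", 30), ("steel beam (high Q)", 3000)]:
    t_1pct = q * math.log(100) / (math.pi * f0)   # time to decay to 1% amplitude
    print(f"{name}: amplitude after 0.5 s = {ringdown_amplitude(0.5, f0, q):.3f}, "
          f"rings down to 1% in about {t_1pct:.1f} s")
```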
In the real world (i.e. consumers and industry, as distinct from esoteric research in university labs) the benefit of precise clocks is in telecommunications - high-performance digital transmission requires precise timing to nanosecond standards - and in metrology - time is one of the 3 basic quantities [Mass, Length, Time] that all practical measurements of any quantity are traceable back to. Precise timing is also the basis of navigation - GPS is based on very precise clocks in each satellite. So folks like NIST are always striving to make ever more precise clocks, so they DO have a reference standard against which they can check ever better clocks used in industry etc.
It is quite valid to state the accuracy of clocks as so many nanoseconds error per year, or seconds per thousand years, or whatever. It's often done that way because it gives you a good feel for the numbers. You don't need to measure for a year or 100 years to know. Here's an analogy: When I was in high school, we had a "rocket club". A few of us students, under the guidance of the science teacher, made small rockets that we launched from the school cricket pitch. We measured the speed and proudly informed everyone - the best did about 300 km per hour. That does not mean our rockets burned for a whole hour and went 300 km up. They only burned for seconds and achieved an altitude of around 400 m, but we timed the rockets passing two heights and did the math to get km/hour. If we told our girlfriends that the rockets did 80 m/sec, that wouldn't mean much to them, but in km/hr they can compare it with things they know, like cars. Keit120.145.30.124 (talk) 03:02, 14 March 2012 (UTC)[reply]
I respectfully assert that I do indeed have a thorough understanding of Q-factor as it relates to resonance and oscillator frequency stability. I hope that if any part of my post was unclear, the misunderstanding could be clarified by checking the references I posted. Perhaps you misunderstood my use of frequency stability for system stability in general? These are different concepts. Q-factor of an oscillator directly corresponds to its frequency stability, but may have no connection whatsoever to the stability of the oscillation amplitude in a complex system. Nimur (talk) 06:59, 14 March 2012 (UTC)[reply]
Does "Q-factor of an oscillator directly correspond to its frequency stability" as you stated? No, certainly not. An oscillator fundamentally consists of two things: a) a resonant device or circuit and b) an amplifier in a feedback connection that "tickles" the resonant device/circuit to make up for the inevitable energy losses. This has important implications. First, you can have a high Q but at the same time the resonant device can be temperature dependent. As I said above it doesn't matter what the Q is, if the resonant device is temperature sensitive, then the oscillation frequency/clock rate will vary with temperature. Same with resonant device aging - quartz crystal and tuning forks can have a very high Q, but still be subject to significant aging - the oscillation rate varies more the longer you leave the system running. Second, the necessary feedback amplifier can have its own non-level response to frequency. This non-level response combines with the resonant device response to in effect "pull" the resonance off the nominal frequency. Real amplifiers all have a certain degree of aging and temperature dependence in teir non-level response. Also, practical amplifiers can exhibit "popcorn effect" - their characteristics occaisonally jump very slightly in value. When you get stability down below parts per 10^8, this can be important. All this means that it HELPS to have high-Q (it make "pulling" less significant), but you CAN have high stability with low Q (if the amplifier is carefully built and the resonant device has a low temperature coefficient), and you can have rotten stability with very high Q. I've not discussed amplitude stability in either of my posts, as this has little or no relavence to the discussion. Keit120.145.166.92 (talk) 12:16, 14 March 2012 (UTC)[reply]
Several sources disagree with you; Q is a measure of frequency stability. Frequency Stability, First Course in Electronics (Khan et al., 2006). Mechatronics (Alciatore & Histand), in the chapter on System Response. Our article, Explanation of Q factor. These are just the few texts I have on hand at the moment. I also linked to the NIST textbook above. Would you like a few more references? I'm absolutely certain this is described in gory detail in Horowitz & Hill. I'm certain I have at least one of each - a mechanical engineering text, a physics textbook, and a control theory textbook - on my bookshelf at home, from each of which I can look up the "oscillators" chapter and cite a line at you, if you would like to continue making unfounded assertions. Frequency stability is defined in terms of Q. Q-factor directly corresponds to frequency stability. Nimur (talk) 18:34, 14 March 2012 (UTC)[reply]
(1) If you read the first reference you cited (Khan) carefully, it says with math the same thing as I did in just words: high Q helps but is not the whole story. It says high Q helps, but for any change there must be a change initiator - so if there is no initiator, there's no need for high Q. Nowhere does Khan say stability directly relates to Q. In fact, his math shows where one of the other factors gets in, and offers a clue on another. (2) I don't have a copy of your 2nd citation, so I can't comment on it. (3) The Wikipedia article does say "High Q oscillators ... are more stable" but this is misleading, as Q is only one of many factors. (4) I don't think you'll find anywhere in H&H where it says Q determines frequency stability. With respect to your good self Nimur, you seem to be making 3 common errors: (a) you are reading into texts what you want to believe, rather than reading carefully; (b) like many, you cite Wikipedia articles as an authority. That's not what Wikipedia is for - the articles are good food for thought and hints on where to look and what questions to ask, but are not necessarily accurate; (c) you haven't recognised that Khan, as a first-course presentation, gives a simplified story that, while correct in what it says, doesn't cover all the details. Rather than dig up more books, read carefully what I said, then go back to the books you've already cited.
A couple of examples: Wien bridge RC oscillator - Q is 0.3, extremely low, but with careful amplifier design temperature stability can approach 1 part in 10^5 over a 30 °C range, and 1 part in 10^4 is easy. 2nd example: I could make up an LC oscillator with the inductor a high-Q device (say 400) and the C a varicap diode (Q up to 200). The combined in-circuit Q can be around 180. That should give a frequency stability much, much better than the Wien oscillator with its Q of only 0.3. But wait! Sneaky Keit decided to bias the varicap from a voltage derived from a battery, plus a random noise source, plus a temperature transducer, all summed together. So, frequency tempco = as large as you like, say 90% change over 30 °C, aging is dreadful (as the battery slowly goes flat), and the thing randomly varies its frequency all over the place. I can't think why you would do this in practice, but it does CLEARLY illustrate that, while it HELPS to have high Q, Q does NOT directly correspond to frequency stability; lots of other factors can and do affect it. Keit121.221.82.58 (talk) 01:25, 15 March 2012 (UTC)[reply]
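As a quick, hedged sanity check of the LC example above, here is a minimal sketch using the usual rule of thumb that component losses add (1/Q_total = 1/Q_L + 1/Q_C). It ignores amplifier loading and any padding of the varicap with a fixed capacitor, so it only shows the ballpark of the in-circuit figure quoted above.
```python
def combined_q(*component_qs):
    """Rule-of-thumb tank Q when component losses simply add."""
    return 1.0 / sum(1.0 / q for q in component_qs)

q_inductor = 400     # high-Q inductor from the example above
q_varicap = 200      # varicap diode Q from the example above
print(f"unloaded tank Q ~ {combined_q(q_inductor, q_varicap):.0f}")   # ~133
```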
Keit, your unreferenced verbiage is no more than pointless pedantry. And nobody believes you had a girlfriend in high school either. — Preceding unsigned comment added by 69.246.200.56 (talk) 01:57, 15 March 2012 (UTC)[reply]
What Keit is saying makes sense to me. According to our article, "[a] pendulum suspended from a high-quality bearing, oscillating in air, has a high Q"—but obviously a clock based on that pendulum will keep terrible time on board a ship, and even on land its accuracy will depend on the frequency of earthquakes, trucks driving by, etc., none of which figure into the Q factor. If you estimate the frequency stability of an oscillator based on the Q factor alone, you're implicitly assuming that it's immune to, or can be shielded from, all external influences. I'm sure the people who analyze the stability of atomic clocks consider all possible external perturbations (everything from magnetic fields to gravitational waves) in the analysis. Those influences may turn out to be negligible, but you still have to consider them.
Also, it seems as though no one has really addressed the original question, which is "one second per 3.7 billion years relative to what?". The answer is "relative to the perfect mathematical time that shows up in the theory that's used to model the clock", so to a large extent it's a statement about our confidence in the theory. I don't know enough about atomic clocks to say more than that, but it may be that the main contributor to the inaccuracy is Heisenberg's uncertainty principle, in which case we're entirely justified in saying "this output of this device is uncertain, and we know exactly how uncertain it is." -- BenRG (talk) 22:59, 15 March 2012 (UTC)[reply]
I'm afraid I don't think anyone has addressed my original question. Answers in terms of frequency stability, etc., do not seem to work. How do you know a frequency is stable unless you have something more stable to measure it against? How do you measure a deviation except with another, more accurate clock? But if this is the most accurate clock, what is there to compare it against? Just looking at my watch alone, for example, if I am in a room with no other clocks and no view of the outside daylight, I cannot say whether it is running fast or slow. I can only tell that by comparing it with something which I assume is more reliable. — Preceding unsigned comment added by Callerman (talkcontribs) 06:32, 16 March 2012 (UTC)[reply]
I think BenRG has answered your question, but I'll see if I can help by expanding on it a bit. If you are in a closed room with only one watch, and you don't know what's inside the watch, then yes, you can't tell if it's keeping correct time or not. But if you have two or more watches made by the same process, and you understand the process, you can reduce the risk of either or both keeping incorrect time. And if you have two or more watches each working in a different way, you can do better still, even if each watch is of different accuracy. That is, it is possible to use a clock of lesser (only a little) accuracy to prove the accuracy of the better clock, to an (imperfect) level of confidence. This is counter-intuitive, so I'll explain.
Any metrology system, including precision clocks, has errors that fall into 2 classes: systematic error (http://en.wikipedia.org/wiki/Systematic_error) and random error. Systematic errors are deterministic/consistent errors implicit in the system. If you 100% understand the system (how it works), you can analyse, correct for, reduce, and test for such errors. For example, a clock may have a consistent error determined by temperature. By holding one clock at constant temperature, we can test a second clock over (say) a 10 °C range, comparing it with the first. If the second changes (say) 1 part in 10^8 compared to the constant-temperature clock, we could reasonably infer that it will stay within 1 part in 10^9 if kept within 1 °C of a convenient temperature. The trouble is, there may be a systematic error you didn't think of - you can't know everything. The risk is reduced (but certainly not eliminated) if you have two or more clocks working on entirely different principles of roughly similar performance. Random errors (e.g. errors due to electrical noise, Heisenberg uncertainty, etc.) are easy to deal with. One builds as many clocks as is convenient, and keeps a record of the variation of each with respect to the average of all of them. There is a branch of statistics (n-sigma analysis, and control charts/Shewhart charts; see http://en.wikipedia.org/wiki/Control_chart) for handling this. After a period of time, the degree of random error, the "accurate" mean, and any clock that is in error due to a manufacturing defect (even tiny errors) will emerge out of the statistical "noise".
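Here is a minimal toy sketch (made-up noise figures, not a real clock model) of the "compare each clock against the ensemble average" idea from the paragraph above: the random errors largely cancel in the mean, so a clock with even a small systematic defect eventually stands out.
```python
import random

random.seed(1)
N_CLOCKS, N_DAYS = 5, 365
RANDOM_NOISE_NS = 5.0        # assumed per-day random error, nanoseconds
DEFECT_NS_PER_DAY = 2.0      # clock 0 has a small systematic drift

errors = [0.0] * N_CLOCKS    # accumulated error of each clock vs ideal time, ns
for day in range(N_DAYS):
    for i in range(N_CLOCKS):
        errors[i] += random.gauss(0.0, RANDOM_NOISE_NS)
        if i == 0:
            errors[i] += DEFECT_NS_PER_DAY

ensemble_mean = sum(errors) / N_CLOCKS
for i, e in enumerate(errors):
    print(f"clock {i}: {e - ensemble_mean:+8.1f} ns relative to ensemble mean")
# Clock 0 ends up several hundred ns from the ensemble mean and stands out
# clearly; the other clocks sit much closer to it (their random scatter is only
# about RANDOM_NOISE_NS * sqrt(N_DAYS) ~ 100 ns).
```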
It's quite true to say that, at the end of the day, how long a second is, is not decided by natural phenomena but by arbitrary human decision. From time to time the standard second is defined in terms of a tested, best-available clock. As better clocks get built, we can confine the error with respect to the declared standard to closer and closer limits, but the length of the second is whatever the standards folks declare it to be.
Perhaps my explanation will annoy Nimur and some turkey who has a problem with girls, but I hope it satisfies the OP. Essentially I'm saying much the same as BenRG - as folks build better and better clocks, they have better and better error confidence, based on both theory and testing multiple examples of a new clock against previously built clocks that are nearly as good. But the duration of the standard second is arbitrary. Keit124.178.61.36 (talk) 07:33, 16 March 2012 (UTC)[reply]
Thanks for taking the trouble to explain. I think I understand, at least in general if not in the detail. — Preceding unsigned comment added by Callerman (talkcontribs) 02:23, 17 March 2012 (UTC)[reply]

Hospital de Sant Pau, Barcelona, Spain

Is this hospital open for medical care to patients today (2012)? 71.142.130.132 (talk) 03:53, 14 March 2012 (UTC)[reply]

Have you seen that we have an article on Hospital de Sant Pau? It seems to suggest that it ceased being a hospital in June 2009. Vespine (talk) 04:04, 14 March 2012 (UTC)[reply]

Mean Electrical Vector of the Heart

Hello. When would one drop perpendiculars from both lead I (magnitude: algebraic sum of the QRS complex of lead I) and lead III (magnitude: algebraic sum of the QRS complex of lead III), and draw a vector from the centre of the hexaxial reference system to the point of intersection of the perpendiculars to find the mean electrical vector? Sources are telling me to drop a perpendicular from the lead with the smallest net QRS amplitude. Thanks in advance. --Mayfare (talk) 04:46, 14 March 2012 (UTC)[reply]

I don't have medical training, so I can only guess, but if I understand correctly:
  • the contractions of different parts of the heart have accompanying electrical signals that move in the same direction as the contraction. Movement towards one of the electrodes of an electrode pair will give a positive or negative signal, while movement perpendicular to that direction would have little influence because it would affect both electrode potentials the same way, increase or decrease.
  • All these movements can be represented by vectors and the mean vector of these is what you're after.
  • For each electrode pair you have measured the positive and negative deflection voltages; the sum of those gives you a resulting vector for each electrode pair, and these correspond to the magnitude of the mean vector in each of those directions.
  • If one of these vectors is zero or very small, you know that the mean vector must be perpendicular to that direction, leaving you only one last thing to determine, which way it points.
  • If the two smallest vectors have the same magnitude, then the mean vector will be on one of the angle bisectors. The info I got has lead I at 0° (to the right), lead II at 60° clockwise rotation, lead III at 120°. If I and III are equal in magnitude (they don't need the same sign), then the mean vector can be 150° or -30°, but in those cases lead II will be smallest, so the only possibilities left are +60° or -120°, depending on the sign of the lead II result. That's how I understood it, but all the different electrodes made it a bit confusing. So far only arm and leg electrodes seemed involved?? A link to a site with the terminology or examples could help. More people are inclined to have a look if the subject is just a click away instead of having to google first. Hmmm, would there be a correlation between links in questions and number of responses... 84.197.178.75 (talk) 19:37, 14 March 2012 (UTC)[reply]
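Here is a minimal Python sketch (my own, not taken from a medical text) of the construction described in the bullets above, done analytically rather than by dropping perpendiculars on paper: the net QRS deflections in leads I and III are treated as projections of the mean vector onto the hexaxial axes at 0° and +120°, and the vector is recovered from them.
```python
import math

def mean_electrical_axis(net_qrs_lead_i, net_qrs_lead_iii):
    """Frontal-plane axis in degrees (ECG convention: 0 deg toward the left
    arm, positive angles pointing inferiorly)."""
    x = net_qrs_lead_i                                  # projection onto 0 deg
    # lead III lies at +120 deg:  III = x*cos(120) + y*sin(120)
    y = (net_qrs_lead_iii + 0.5 * x) * 2.0 / math.sqrt(3.0)
    return math.degrees(math.atan2(y, x))

# Equal net QRS in leads I and III gives the "textbook normal" +60 deg axis
print(round(mean_electrical_axis(5, 5)))       # 60
# Lead I strongly positive, lead III negative -> leftward axis
print(round(mean_electrical_axis(8, -6)))      # about -16
```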

All basically correct! Our article on Electrocardiography has some information about vectorial analysis, but I'm not sure if that's sufficient for you. In the normal heart the mean electrical vector is usually around about 60 degrees (lead II), but anywhere between -30 and +90 is considered normal. The mean vector, of course, will be perpendicular to the direction of the smallest vector, and in the direction of the most positive vector. Mattopaedia Say G'Day! 07:10, 16 March 2012 (UTC)[reply]

Electricity prices

In this Scientific American article, US Energy Secretary Chu says natural gas "is about 6 cents" per kilowatt hour, implying it is the least expensive source of electricity. However Bloomberg's energy quotes say on-peak electricity costs $19.84-26.50 per megawatt hour, depending on location. Why is that so much less? 75.166.205.227 (talk) 09:44, 14 March 2012 (UTC)[reply]

The Bloomberg prices are prices at which energy companies can buy or sell generating capacity in the wholesale commodities markets. An energy company will then add on a mark-up to cover their costs of employing staff, maintaining a distribution network, billing customers etc. plus a profit margin. Our article on electricity pricing says that the average retail price of electricity in the US in 2011 was 11.2 cents per kWh. Gandalf61 (talk) 10:48, 14 March 2012 (UTC)[reply]
(Per the description,) the figures given in the Scientific American article are apparently the Levelised energy cost. This is a complicated calculation: the number depends on several assumptions, and it's not clear to me what market the estimated break-even price is for, although it sounds like it depends on what you count in the costs. Also, I'm not sure why the OP is making the assumption that natural gas is the least expensive source; I believe coal normally is if you don't care about the pollution. Edit: Actually the source says coal is normally more expensive now, although I don't think this is ignoring pollution. Nil Einne (talk) 11:21, 14 March 2012 (UTC)[reply]
The Bloomberg quote for gas is $2.31/MMBtu, and 1 MMBtu is 293 kWh, so that would be about 0.8 cents per kWh. Maybe he was a factor of 10 off? Or he quoted the European consumer gas prices, those are around 0.05€ per kWh... 84.197.178.75 (talk) 12:56, 14 March 2012 (UTC)[reply]
The article I linked above suggests the figures are accurate. For example, the lowest for natural gas (advanced combined cycle) is given as $63.1/megawatt-hour. The cost of generating electricity from natural gas is obviously going to be a lot higher than just the price of the gas. Nil Einne (talk) 13:24, 14 March 2012 (UTC)[reply]
Oops, you're right of course. I was thinking 10% efficiency was way too low for a power plant, I saw your comment but the penny didn't drop then... 84.197.178.75 (talk) 15:27, 14 March 2012 (UTC)[reply]
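To make the arithmetic in the last few replies explicit, here is a minimal sketch; the 50% combined-cycle efficiency is a round assumed figure (real heat rates vary by plant), and the gas price is the Bloomberg quote cited above.
```python
KWH_PER_MMBTU = 293.0

gas_price_per_mmbtu = 2.31       # $/MMBtu, Bloomberg quote cited above
efficiency = 0.50                # assumed combined-cycle thermal efficiency

fuel_cost_kwh_thermal = gas_price_per_mmbtu / KWH_PER_MMBTU
fuel_cost_kwh_electric = fuel_cost_kwh_thermal / efficiency

print(f"gas energy cost:          {100 * fuel_cost_kwh_thermal:.2f} cents/kWh")   # ~0.79
print(f"fuel cost of electricity: {100 * fuel_cost_kwh_electric:.2f} cents/kWh")  # ~1.58
# The rest of the ~6 cents/kWh levelised figure is capital, fixed O&M, etc.
```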
Talk of gas-generated electricity being cheaper than coal is puzzling. In the 1970s, baseload coal and nuke were cheap electricity, and natural gas was used in low-efficiency fast-start peakers, typically 20 megawatt turbine units, which could be placed online instantly to supplement the cheaper-fueled generators when there was a loss of generation or to satisfy a short-duration peak load. The peakers might be 10 or 15 percent of total generation for a large utility. The coal generators were 10 times larger (or more) than the gas turbines, and took hours to bring on line. The fuel cost for gas was over 4 times the fuel cost for other fossil fuels, and 18 times as much as for nuclear. The total cost was over 3 times as much for gas as for other fossil fuels and 8 times as much as for nuclear. Is gas now being used in large base-load units, 300 to 1000 megawatt scale, to generate steam to run turbines, rather than as direct combustion in turbines? Edison (talk) 15:33, 14 March 2012 (UTC)[reply]
I think so. This 1060 MW plant built in 2002 is very typical of the new gas plants installed from the late 1990s to present. They are small, quiet, relatively clean except for CO2, and have become cookie cutter easy to build anywhere near a pipeline. 75.166.205.227 (talk) 18:25, 14 March 2012 (UTC)[reply]

I still do not understand why a power company would quote a wholesale contract price less than 30% of their levelized price. Even if it was only for other power companies, which I don't see any evidence of, why would they lose so much money when they could simply produce less instead? 75.166.205.227 (talk) 18:25, 14 March 2012 (UTC)[reply]

In some areas (for example, in California), energy companies do not have a choice: they must produce enough electricity to meet demand, even if this means operating the business at a loss. Consider reading: California electricity crisis, which occurred in the early part of the last decade. During this period, deregulation of the energy market allowed companies (like the now infamous Enron) to simply turn the power stations off if the sale price was lower than the cost to produce. Citizens didn't like this. To bring a short summary to a very long and complicated situation: the citizens of California fired the governor, shut down Enron, and mandated several new government regulations and several new engineering enhancements to the energy grid. The economics of power distribution are actually very complicated; I recommend "proceeding with caution" any time anyone quotes a "price" without clearly qualifying what they are describing. Nimur (talk) 18:43, 14 March 2012 (UTC)[reply]
You should remember what levelised cost is. It tries to take into account total cost over the lifespan and includes capital expenditure etc. It's likely a big chunk of the cost is sunk. Generating more will increase expenditure, e.g. fuel and maintenance and perhaps any pollution etc. taxes, and may also lower lifespan, but provided your increased revenue is greater than the increased expenditure (i.e. you're increasing profit) then it will still likely make sense to generate more. The fact you're potentially earning less than needed to break even is obviously not a happy picture for your company, but generating less because you're pissed isn't going to help anything; in particular, it's not going to help you service your loans (which, remember, are also part of the levelised cost). It may mean you've screwed up in building the plant, although I agree with Nimur, it's complicated and this is a really simplistic analysis (but I still feel it demonstrates why you can't just say it's better if they don't generate more). (A slightly more complicated analysis may consider the risk of glutting the market, although realistically you're likely only a tiny proportion of the market.) You should also perhaps remember that the costs are (as I understand it at least) based on the assumption of building a plant today (which may suggest it makes no sense to build any more plants, but then we get back to the 'it's complicated' part). Nil Einne (talk) 20:13, 14 March 2012 (UTC)[reply]
Ah, I see, so they have already recovered (or are recovering) their sunk and amortized costs from other contracts and/or previous sales, so the price for additional energy is only the marginal cost of fuel and operation. Of course that is it, because as regulated utilities their profits are fixed. That's great! ... except for the Jevons paradox implications. 75.166.205.227 (talk) 22:08, 14 March 2012 (UTC)[reply]
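A minimal sketch of the sunk-cost reasoning above, with a made-up round number for the marginal cost (only the $63/MWh levelised figure and the ~$25/MWh wholesale quote come from the thread): once the capital is sunk, any price above the marginal (fuel plus variable O&M) cost still improves the bottom line, even though it is far below the levelised cost.
```python
levelised_cost = 63.0   # $/MWh, advanced combined cycle figure quoted above
marginal_cost = 20.0    # $/MWh, assumed fuel + variable O&M (illustrative)

for price in (25.0, 40.0, levelised_cost):     # $/MWh wholesale prices
    margin = price - marginal_cost
    verdict = "worth generating" if margin > 0 else "better to idle"
    print(f"price ${price:5.2f}/MWh: {margin:+6.2f} $/MWh over marginal cost -> {verdict}")
# Selling at $25 or $40/MWh still loses money against the levelised cost, but it
# covers fuel and contributes toward the sunk capital, so idling would be worse.
```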
Natural gas is the cheapest energy source delivered directly to the home, at about 1/3 the cost of electricity per BTU, for those of us lucky enough to have natural gas lines. Sure, our homes explode every now and then, but oh well. StuRat (talk) 22:38, 14 March 2012 (UTC)[reply]
[Figure caption: If you extrapolate these numbers for cumulative global installed wind power capacity, you get this 95% prediction confidence interval.]
When the natural gas which is affordable (monetarily and/or environmentally) has been burned up by 1000 megawatt power plants, then what heat source will folks use who now have "safe, clean, affordable" natural gas furnaces? I have always thought (and I was not alone) that natural gas should be the preferred home heating mode, rather than electric resistance heat. Heat pumps are expensive and kick over to electric resistance heat when the outside temperature dips extremely low (unless someone has unlimited funds and puts in a heat pump which extracts heat from the ground). Edison (talk) 00:08, 15 March 2012 (UTC)[reply]
I agree that we are rather short-sighted to use up our natural gas reserves to generate electricity. Similarly, I think petroleum should be kept for making plastics, not burned to drive cars. We should find other sources of energy to generate electricity and power our cars, not just to save the environment but also to preserve these precious resources. We will miss them when they are gone. StuRat (talk) 07:42, 15 March 2012 (UTC)[reply]
I completely agree, but honestly think there is nothing to worry about. Wind power is growing so quickly and so steadily that it has the tightest prediction confidence intervals I have ever seen in an extrapolation of economics data. Also, there is plenty of it to serve everyone, and it's going to get very much less expensive and higher in capacity on the same real estate and over vast regions of ocean very soon. Npmay (talk) 22:01, 15 March 2012 (UTC)[reply]
Who did that extrapolation and what are the assumptions? It appears to be based on an exponential growth model—why not logistic growth? -- BenRG (talk) 00:35, 16 March 2012 (UTC)[reply]
I agree a logistic curve would be a better model, but when I tried fitting the sigmoids, they were nearly identical -- within a few percent -- to the exponential model out to 2030, and did not bend over until far enough out that the amount of electricity being produced was unrealistic. Npmay (talk) 01:20, 16 March 2012 (UTC)[reply]
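For anyone curious what "nearly identical out to 2030" looks like, here is a minimal sketch in arbitrary units with synthetic numbers (not real wind-capacity data): a logistic curve whose inflection lies far in the future is almost indistinguishable from an exponential over the historical window, and only separates as it approaches its assumed ceiling, which is BenRG's point about the two models.
```python
import math

A0, GROWTH = 7.0, 0.25            # assumed starting size and ~25%/yr growth
CAP = 20000.0                     # assumed logistic ceiling, arbitrary units

def exponential(t):
    return A0 * math.exp(GROWTH * t)

def logistic(t):
    t0 = math.log(CAP / A0) / GROWTH     # inflection chosen to match early growth
    return CAP / (1.0 + math.exp(-GROWTH * (t - t0)))

for t in (0, 10, 20, 30, 40):
    e, l = exponential(t), logistic(t)
    print(f"t={t:2d}  exp={e:9.0f}  logistic={l:9.0f}  logistic/exp={l/e:.2f}")
# For the first couple of decades the ratio stays near 1; the curves only
# separate as the logistic nears its (in reality unknown) saturation level.
```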
God, eventually the missionaries of noise will have to travel to the frozen tundra and the Arctic Ocean to prospect for the last pockets of natural sound to ruin with one of their machines. Wnt (talk) 16:13, 18 March 2012 (UTC)[reply]

Polyethylene

Is the dimer for polyethene butane? If not, what? Plasmic Physics (talk) 12:25, 14 March 2012 (UTC)[reply]

Butene? --Colapeninsula (talk) 12:57, 14 March 2012 (UTC)[reply]
Butane is the saturated hydrocarbon C4H10, and cannot be a dimer for polyethene (more commonly known as polyethylene), a saturated hydrocarbon H.(C2H4)n.H. Perhaps you meant "Is the dimer for butane polyethylene?". For polyethylene with n=2, H.(C2H4)n.H reduces to C4H10, i.e. it IS butane. But for all n ≠ 2, the ratio of C to H changes, so the answer is still no. To form dimers, you need two identical molecules combined without discarding any atoms. Keit120.145.166.92 (talk) 13:12, 14 March 2012 (UTC)[reply]
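A minimal sketch of the bookkeeping in the reply above: an H-terminated chain H.(C2H4)n.H has formula C(2n)H(4n+2), so only n = 2 matches butane's C4H10, and every other n gives a different C:H ratio.
```python
def h_terminated_oligomer(n):
    """Molecular formula of H.(C2H4)n.H, i.e. the 2n-carbon straight-chain alkane."""
    return f"C{2 * n}H{4 * n + 2}"

for n in range(1, 5):
    print(n, h_terminated_oligomer(n))
# 1 C2H6 (ethane), 2 C4H10 (butane), 3 C6H14 (hexane), 4 C8H18 (octane)
```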

The only solution to that would be cyclobutane? Plasmic Physics (talk) 22:21, 14 March 2012 (UTC)[reply]

That is very likely not the answer either: the only solution that preserves the elemental ratio is to have a diradical, and polyethylene is not a diradical. Plasmic Physics (talk) 22:30, 14 March 2012 (UTC)[reply]

@Colapeninsula: Butene would require a middle hydrogen atom to migrate to the opposite end of the chain, which is not likely - the activation energy would be pretty high. Plasmic Physics (talk) 23:51, 14 March 2012 (UTC)[reply]

Dimerization to form butene is apparently easy if you have the right catalyst to solve the activation energy problem :) Googling (didn't even need to use specialized chemical-reaction databases) finds many literature examples over the past decade or so, giving various yields and relative amounts of the alkene isomers. The cyclic result is a pretty common example in undergrad courses for the effects of orbital symmetry: is it face-to-face [2+2] like a Diels–Alder reaction, or crossed (Möbius would be allowed, whereas D–A is Hückel forbidden/antiaromatic), or is it even a radical two-step process (activation energy?) or an electronically excited [2+2] (Hückel-allowed) in the presence of UV light? DMacks (talk) 19:29, 17 March 2012 (UTC)[reply]

OK, so both forms are allowed. Which one has the lower ground state? What if it were an icosameric polymer? Plasmic Physics (talk) 22:19, 17 March 2012 (UTC)[reply]

Dissuading bunnies from eating us out of house and home...

... literally. My daughter's two rabbits, when we let them roam the house, gnaw the woodwork, the furniture, our shoes... Is there some simple means by which we can prevent this? Something nontoxic, say, but unpleasant to their noses that we can spray things with?

Ta

Adambrowne666 (talk) 18:36, 14 March 2012 (UTC)[reply]

From some website: The first thing to do is buy your bunny something else to chew on. You can buy bunny safe toys from many online rabbit shops. An untreated wicker basket works well too. They also enjoy chewing on sea grass mats. To deter rabbits from chewing on the naughty things, try putting some double sided sticky tape on the area that is being chewed. Rabbits will not like their whiskers getting stuck on the tape. You can also try putting vinegar in the area too, as rabbits find the smell and taste very very offensive. Bitter substances tend not to deter rabbits as they enjoy eating bitter foods (ever tried eating endive? very bitter.)
Or google "rabbit tabasco" for other ideas .... 84.197.178.75 (talk) 19:59, 14 March 2012 (UTC)[reply]
Why would you let bunnies run free indoors ? Don't they crap all over the place ? Not very hygienic. Maybe put them in the tub and rinse their pellets down after an "outing". StuRat (talk) 22:26, 14 March 2012 (UTC)[reply]
Fricasseeing might help. Supposedly they taste just like chicken. ←Baseball Bugs What's up, Doc? carrots→ 23:38, 14 March 2012 (UTC)[reply]
Rabbits tend to go to the toilet in the same spot and they're pretty easy to litter-box train. It's harder, but not impossible, to train some rabbits not to chew things, but we never managed it with our two dwarf bunnies. For that reason we just don't leave them in the house unsupervised. We do let them run around the house sometimes and they won't go to the toilet on the floor, but we did catch one of them once chewing on the fridge power cable, which was the final straw for giving them free rein of the house. Vespine (talk) 23:53, 14 March 2012 (UTC)[reply]
As a final point, I do remember reading that some bunny breeds are just more suitable as "house" pets than others. There are plenty of articles on the subject if you google "house rabbit". Vespine (talk) 23:57, 14 March 2012 (UTC)[reply]

Thanks, everyone - yeah, they crap everywhere; we're not masters of the household at all - I don't know how they get away with it: they drop dozens of scats all over the house, but if I do one poo in the living room, people look askance! Will try the double-sided tape and other measures; wish me luck!

Resolved