Wikipedia:Reference desk/Archives/Science/2017 August 17

From Wikipedia, the free encyclopedia
Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


August 17

From static electricity to current

Is there any wiki article about how to convert static electricity into usable current? Could this be productively implemented, obtaining the energy from the wind or something like that? --Hofhof (talk) 13:45, 17 August 2017 (UTC)[reply]

Static electricity is stored capacitively. You just have to provide a conductive path to get a current. However, there is often only a very small charge, so you may only get a few microamps for a few microseconds. You may wish to read Leyden jar, which describes a device with a capacitance of about 1 nF that can store charge. If you put a voltage of 10,000 volts on this you would store 10 microcoulombs. Graeme Bartlett (talk) 14:08, 17 August 2017 (UTC)[reply]
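Graeme's figures follow from the two standard capacitor relations, Q = CV for stored charge and E = ½CV² for stored energy. A quick check using the values quoted above:

```python
# Charge and energy stored in a Leyden-jar-sized capacitor,
# using the values quoted above (1 nF charged to 10,000 V).
C = 1e-9       # capacitance in farads (1 nF)
V = 10_000     # voltage in volts

Q = C * V            # stored charge, Q = C * V
E = 0.5 * C * V**2   # stored energy, E = (1/2) * C * V^2

print(f"charge: {Q * 1e6:.0f} microcoulombs")  # 10 microcoulombs
print(f"energy: {E * 1e3:.0f} millijoules")    # 50 millijoules
```

At 0.05 J, a full discharge carries roughly the kinetic energy of a coin dropped from table height, which puts the "very small charge" point in perspective.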

(edit conflict)

A static electricity device, used in electric circuitry? It seems to me you are talking about a capacitor. Very common.
Wind does not produce any significant amount of static electricity, so converting a non-existent energy source will yield nothing.
Lightning is indeed based on static electricity, so it may be the kind of "something like that" you have in mind. Unfortunately, harvesting lightning energy requires a disproportionate amount of work as of today.
Gem fr (talk) 14:10, 17 August 2017 (UTC)[reply]
See Harvesting lightning energy. Blooteuth (talk) 14:50, 17 August 2017 (UTC)[reply]
I tested a moderate sized Van de Graaff generator once, by discharging it, turning it off, and connecting an analog microammeter from the dome to ground. (If the generator had been running, a spark discharge through the microammeter might have fried it.) The generator produced about 5 microamperes continuously. This was after I fine-tuned it by adjusting the metal points near the belt. Obtaining 5 microamperes continuously, amounting to microwatts of power, at a cost to run the generator of several watts, makes very little sense from a utility perspective. The low current charges up the low capacitance of the dome in a few seconds to tens of thousands of volts, so it makes an impressive and painful but generally nonlethal spark. If the capacitance were higher, it would absorb enough charge to produce a lethal shock. A typical static generator such as a Van de Graaff or Wimshurst generator would have a vanishingly low efficiency. Benjamin Franklin and others in the 18th century were able to convert static electricity into mechanical motion easily by having a swinging metal ball which was attracted then repelled by an object with a static charge, then attracted to a grounded metal bell, then attracted to the charged object again. It is also easy to have a metal pinwheel spin on top of a static machine as the points at the tips of the rotating piece ionize the air. But for a powerful electric motor, there would be some costs and inefficiencies in insulating moving parts to perhaps hundreds of thousands of volts, then having them carry small amounts of current. Something which was powered by the occasional powerful discharge of static electricity, like lightning, would seem to be not cost effective due to the need for massive conductors and a low duty cycle. Edison (talk) 21:18, 20 August 2017 (UTC)[reply]
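Edison's numbers can be sanity-checked with an idealised model of constant belt current charging the dome. The dome capacitance below (30 pF) is an assumed typical value, not a figure from the post, and the model ignores corona and leakage losses, which is why a real machine takes longer to charge than the ideal time computed here:

```python
# Idealised Van de Graaff charging: constant current I into dome capacitance C.
I = 5e-6    # belt current in amperes (the 5 microamps measured above)
C = 30e-12  # dome capacitance in farads (assumed typical value, not from the post)
V = 100e3   # target voltage in volts (order of "tens of thousands")

t_ideal = C * V / I          # time to reach V, ignoring all leakage
E_spark = 0.5 * C * V**2     # energy released by one full discharge

print(f"ideal charging time: {t_ideal:.2f} s")  # 0.60 s
print(f"energy per spark:    {E_spark:.2f} J")  # 0.15 J
```

With real-world leakage the charging stretches from a fraction of a second to the "few seconds" observed, and a ~0.15 J spark is painful but, as noted above, generally nonlethal.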

Is duckweed edible?

Is the type of duckweed found growing in ponds in Europe edible for humans? The Lemnoideae article says it "is eaten by humans in some parts of Southeast Asia", but is that a specific Asian variety or is all duckweed edible? — Preceding unsigned comment added by 62.213.76.106 (talk) 14:04, 17 August 2017 (UTC)[reply]

Common duckweed Lemna minor is edible, see here [1] for more info. It's very rare to have an edible species in a wild genus with toxic congeners (I don't know of any examples). The whole genus Wolffia is edible according to this [2] source, and I'd strongly suspect that the whole subfamily is too, though I don't have a ref that clearly states that, and I'm not sure anyone has tested every single species in the subfamily for safety of human consumption. I am not your forager, and I am not giving foraging advice. But you are most likely seeing common duckweed in your EU ponds, and I would eat it if I wanted to try it out, making sure to wash it thoroughly. SemanticMantis (talk) 14:12, 17 August 2017 (UTC)[reply]
Would potato and belladonna qualify as being closely related, where one is edible and the other is highly toxic? The nightshades have examples of both. --Jayron32 14:21, 17 August 2017 (UTC)[reply]
Not really, because in both cases the leaves and flowers are toxic, while the roots are not. Don't try eating potato flowers, and don't try poisoning anyone with belladonna root.
Also, potato and belladonna are not congeners, though they are confamilial, both being Solanaceae. SemanticMantis (talk) 14:57, 17 August 2017 (UTC)[reply]
Your statement about edible species not having toxic congeners is absolutely untrue. The Amanita genus is rife with counterexamples to this claim. Amanita velosa and Amanita calyptroderma are two choice edible members of the genus, while Amanita phalloides and Amanita ocreata (the death cap and the destroying angel, respectively) are two lethally toxic members. 204.28.125.102 (talk) 19:15, 18 August 2017 (UTC)[reply]
@204.28.125.102: Thanks. A few things. First, I never said it didn't happen. I said "It's very rare", and that I didn't know of any. Secondly, I was speaking of plants, and I thought my meaning was clear from context, though perhaps I should have said "edible plant species with toxic congeners". So, while your fungus example of Amanita is apt and interesting, I stand by my claim that toxic/edible congeneric species are rare among plant genera, and that I personally do not know of any examples. I would indeed welcome any plant examples, as while I am confident that it is rare, I also think it unlikely that there are no examples whatsoever. SemanticMantis (talk) 16:46, 19 August 2017 (UTC)[reply]
The German article on Lemnoideae [3] states that
"Wasserlinsengewächse werden weltweit von Fischen, höheren Tieren und auch von Menschen als Nahrung verwendet und sind die am schnellsten wachsenden höheren Pflanzen weltweit. Als Nahrung werden sie besonders geschätzt, weil sie alle essentiellen Aminosäuren enthalten (im Trockengewicht bis zu 43 % Proteine; zudem bis zu 6 % Fett und 17 % Kohlenhydrate). "
  • "Water lentils are used worldwide by fish, higher animals and also by humans as food and are the fastest growing higher plants worldwide. As a food they are particularly appreciated because they contain all essential amino acids (dry weight up to 43% proteins, up to 6% fat and 17% carbohydrates)."
The French one [4] states that
"Il est arrivé qu'on les donne en complément alimentaire aux cochons, qui, en été, dans le nord de la France, en Belgique ou aux Pays-Bas descendaient parfois eux-mêmes dans les watringues manger les lentilles à la surface de l'eau, ainsi que les escargots et animaux qui peuvent y être fixés. La solution la plus écologique et passive contre la prolifération de la lentille d'eau sur un plan d'eau ou dans une mare est la mise en place de quelques canards d'ornement qui en raffolent."
  • "Sometimes they have been given as a dietary supplement to pigs, which in summer, in the north of France, in Belgium, or in the Netherlands sometimes descended themselves in the watringues, eating the lentils on the surface of the water, as well as snails and animals that can be attached to them. The most ecological and passive solution against the proliferation of the duckweed on a body of water or in a pond is the setting up of some ornamental ducks that love eating it."
So it seems to be edible for humans.
Now, it is not eaten, even in the parts of Europe (marsh and swamp areas) where it is common, where gathering it would require less work than growing common food, and where people used to be poor enough to eat things you wouldn't think of (urtica, for instance), so there must be some issue. Through a hint at oxalate on another site, I came to [5], which indeed states a high oxalate content.
Gem fr (talk) 14:42, 17 August 2017 (UTC)[reply]
It tastes like watercress according to Duckweed – The Food of the Future. See also Nutritional value of duckweeds (Lemnaceae) as human food and Duckweed: A promising new source of plant-based protein? which says that a company in Florida is developing a plant-based protein product from a member of the duckweed family. Alansplodge (talk) 21:18, 17 August 2017 (UTC)[reply]
People advocating a new [something] tend to downplay the downsides, which, for plants, usually involve
1) too high a level of some toxin, giving it a bad taste (bitter, harsh, ...)
2) too high a price, that is, too much work or investment required compared to other food
Yield and nutrient content are not important in themselves; they are just part of the price.
(Lupinus is another example of a "promising" plant with issues that were never really solved, and that in fact doesn't compete with other beans, which were improved more over time.) Now, I see no reason why humans couldn't succeed in turning duckweed into a decent food, for themselves or for livestock (actually most of the plants we grow are turned into food for livestock, not directly for humans), as they did for the current edible plants, if they really had some interest in doing it; the question is rather: do they? It's not a question of high protein content, it's mainly a question of price. Gem fr (talk) 07:10, 18 August 2017 (UTC)[reply]

The moon is moving away from the earth

Here's something I remember from a 2000 Doodle & Fact calendar:

The bad news: the moon is moving away from the earth.

The good news: it's only moving away 1.5 inches a year.

If that figure is correct, the arithmetic still adds up: it will take 42,240 years for the moon to recede by one mile.

Multiply 42,240 by 238,900 (the moon's average distance in miles) and you get 10,091,136,000, the number of years it should take, at that rate, for the moon's distance to become twice as far as it currently is. That is somewhat less than the universe's age of about 13.8 billion years; note also that the solar system is only 4.6 billion years old, less than half of that figure. So this clearly shows that, at the present rate, there never would have been a time when the moon was immediately next to the earth: covering the full current distance would have taken more than twice the solar system's age.

But how about when it comes to the future of the earth and moon? What do you think the status of the earth and moon will be like 10,091,136,000 years from now? Please avoid answering this question with "your numbers have too many significant figures." Georgia guy (talk) 14:31, 17 August 2017 (UTC)[reply]
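The constant-rate arithmetic in the question can be checked in a couple of lines; note that it is only a naive extrapolation, since the actual recession rate changes with distance:

```python
# Naive extrapolation: moon receding at a constant 1.5 inches per year.
INCHES_PER_MILE = 63_360
rate_in_per_yr = 1.5
distance_miles = 238_900   # current average earth-moon distance

years_per_mile = INCHES_PER_MILE / rate_in_per_yr   # 42,240 years per mile
years_to_double = years_per_mile * distance_miles   # years to cover the full distance again

print(f"{years_per_mile:,.0f} years per mile")      # 42,240 years per mile
print(f"{years_to_double:,.0f} years to double")    # 10,091,136,000 years to double
```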

The most relevant articles appear to be Moon#Tidal effects, tidal force and tidal acceleration. The Moon's orbital distance is increasing because the Moon is exchanging angular momentum with the Earth via the tides. The rate at which this occurs is not constant but varies over time. In the past the Earth rotated faster, the Moon was closer, and the Moon was moving away at a somewhat larger (though still very slow) rate. If the solar system could last long enough (50 billion years or so), the Earth and Moon would eventually both become tidally locked (i.e. a synchronous orbit) and the distance between the two would stop increasing. One can calculate the hypothetical distance for a tidally locked Earth-Moon system, though I forget how the numbers play out. Dragons flight (talk) 14:57, 17 August 2017 (UTC)[reply]

(edit conflict)

It is hard to avoid "your numbers have too many significant figures", when
1) it is estimated that ~5,000,000,000 years from now the sun will have turned into a red giant, so large as to engulf the earth and moon.
2) even before that, ~2,300,000,000 years from now, the sun will have evaporated the oceans, where most of the braking force that pushes the moon away lies (see Orbit_of_the_Moon#Tidal_evolution).
Besides, currently the best hint at the moon's formation is the Giant-impact hypothesis. Just read it.
And, of course, your reckoning is utterly wrong. While you can turn 1.5 inches a year into 15 inches a decade or 150 inches a century, you cannot turn it into 1.5 billion inches in a billion years, because a significant change in distance (as experienced over a billion years) also turns into a significant change in the forces involved.
Gem fr (talk) 15:17, 17 August 2017 (UTC)[reply]
So give us your reckoning, Gem fr. {The poster formerly known as 87.81.230.195} 94.12.90.255 (talk) 18:15, 17 August 2017 (UTC)[reply]
You don't seem to realize how much work it would require, and how pointless I find it. Gem fr (talk) 18:57, 17 August 2017 (UTC)[reply]
This statistic is popularly understood to mean that the Moon is "drifting" away from us, but that's not true at all. You can't "drift" into a higher orbit. You need energy to move upwards. As Dragons flight points out above, that energy is coming from tides. The moon is bleeding off Earth's rotational energy, and it'll keep doing that until the Earth/Moon system reaches a tidal lock. (Or is destroyed somehow.)
This article suggests that when this finally happens, Earth's day will be 47 times longer than it is now, and the Moon's distance will be "135% of its current value".
ApLundell (talk) 21:30, 17 August 2017 (UTC)[reply]
And the way you derive those numbers is to work out the angular momentum (A) of the Earth due to its own rotation, (B) of the Moon due to its own rotation, which is once per orbit, and (C) of the Earth-Moon system as a whole. When you work out C, the answer depends on the distance between Earth and Moon. When the Earth and Moon become tidally locked, most of the angular momentum in A will have been transferred to C, so you rewrite the equations so that A and B are both once per orbit and A+B+C is the same as before. Now you solve the equations to get the new distance between Earth and Moon and the new period of rotation. The actual computation I'll leave (as they say) as an exercise for the reader. (There is actually a further complication in that angular momentum is also being transferred to the Sun by a similar mechanism, but this effect should be significantly slower since tides raised on the Earth by the Sun are significantly less than those raised by the Moon.) --69.159.60.147 (talk) 23:05, 17 August 2017 (UTC)[reply]
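The procedure 69.159.60.147 describes can be carried out numerically. The sketch below is a simplified two-body model, with simplifications of my own: it ignores the Sun, the Moon's spin term (which is tiny), and any change in Earth's moment of inertia. It conserves the current spin-plus-orbital angular momentum and finds the tidally locked separation by bisection:

```python
import math

# Physical constants and current values (SI units).
G   = 6.674e-11
M   = 5.972e24         # Earth mass, kg
m   = 7.342e22         # Moon mass, kg
I_e = 8.0e37           # Earth's moment of inertia, kg m^2 (~0.33 M R^2)
w_e = 7.292e-5         # Earth's current spin rate, rad/s
a0  = 3.844e8          # current Earth-Moon distance, m
mu  = M * m / (M + m)  # reduced mass for the orbital term

def orbital_rate(a):
    """Keplerian orbital angular rate at separation a."""
    return math.sqrt(G * (M + m) / a**3)

def total_L(a, w_spin):
    """Earth spin + orbital angular momentum at separation a."""
    return I_e * w_spin + mu * a**2 * orbital_rate(a)

L_now = total_L(a0, w_e)

# At tidal lock the Earth spins at the orbital rate, so solve
# total_L(a, orbital_rate(a)) == L_now for a.  The left side grows
# with a (the orbital term scales as sqrt(a)), so bisection works.
lo, hi = a0, 3 * a0
for _ in range(100):
    mid = (lo + hi) / 2
    if total_L(mid, orbital_rate(mid)) < L_now:
        lo = mid
    else:
        hi = mid

a_f = (lo + hi) / 2
day_f = 2 * math.pi / orbital_rate(a_f) / 86400  # final day length, in current days

print(f"final distance: {a_f / a0:.2f} x current")  # ~1.4x
print(f"final day:      {day_f:.0f} current days")  # ~47 current days
```

The ~47-day figure matches the article quoted above; the final distance comes out nearer 1.4x than 1.35x here, which is about the level of agreement this stripped-down model can claim.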
  • Gem fr hinted at the fact that assuming the current drift speed of 1.5 in/year will remain constant is ludicrous (because it depends on tidal forces etc. that will be modified by the change of distance), but it bears repeating. Knowing that extrapolation far outside the intended range may give spurious results is a much more general and valuable lesson than the exact result of calculations of tidal locking in this particular instance. TigraanClick here to contact me 15:22, 19 August 2017 (UTC)[reply]

Computer languages vs human natural languages

Computer languages seem to be a set of specific logical instructions. Natural language seems to be a form of communication, and this form of communication apparently does not require logic. "I want to eat" is a sentence, but a person can just say "want eat" and point to food and then mouth. What makes natural language different from computer languages? How do computers and humans process information? 140.254.70.33 (talk) 15:50, 17 August 2017 (UTC)[reply]

This book [6] explains computer languages and they are nothing like natural languages. 92.8.219.206 (talk) 16:00, 17 August 2017 (UTC)[reply]
(edit conflict) Your brain does not process information. That's the first major difference. Human languages do also follow rules; they are just not as rigid as computer languages. But that doesn't mean human language consists of random utterances with no set format or rules. If that were the case, communication would be impossible, because you could say literally anything at all, and I would have no way to decode it to give it meaning. The major work in this field was done by Noam Chomsky, who developed concepts like Universal grammar to explain the sort of deep underlying rules all languages follow. There's also Structural linguistics, and studies of Syntax, all of which deal in some way with analyzing the rules of language. --Jayron32 16:21, 17 August 2017 (UTC)[reply]
I recommend Structure and Interpretation of Computer Programs and Origin of language. Artificial languages, including those used in computer science, must be precise and allow one to define specific structures and tasks; you are right that natural languages are very different. Complex enough software can do Natural language processing, and advances in Artificial Intelligence with large semantic databases are quite impressive; but that is still only processing, optimization, classification and simplification inside. Also interesting are constructed languages, which are languages designed for humans but constructed, rather than having evolved over time with culture. These often are better defined, with fewer exceptions than natural languages, and some have also been designed to be accessible to people of various native languages. Unfortunately their success in the real world is limited. For the neurological aspects of human language processing, I'll let those more familiar with neurology comment... —PaleoNeonate – 00:28, 18 August 2017 (UTC)[reply]
For information about what happens when the human brain's language processing abilities go awry, you may check out Broca's aphasia and Wernicke's aphasia. SSS (talk) 04:21, 18 August 2017 (UTC)[reply]
@User:Jayron32: "The brain processes the raw data to extract information about the structure of the environment. Next it combines the processed information with information about the current needs of the animal and with memory of past circumstances. " From our brainy brain article. The link you posted is fringe science.--Hofhof (talk) 23:23, 17 August 2017 (UTC)[reply]
One can actually take that essay and build pretty much the same flawed argument to demonstrate why a computer isn't a computer :). Count Iblis (talk) 23:41, 17 August 2017 (UTC)[reply]
I'm familiar with that text and also think it's misleading. —PaleoNeonate – 00:13, 18 August 2017 (UTC)[reply]
Computer languages, assuming you are referring to programming and markup languages, must be parsed by a computer program. While the parsing program can be designed to handle more and more complication, it usually isn't. Instead, the person writing in the computer language must follow the rules of the parsing program. Natural languages are parsed by the human brain, so there is a lot of room for ambiguity. As mentioned above, too much ambiguity makes it difficult for the human brain to parse properly. This leads to another difference: in computer languages, the parsing program is expected to understand the meaning of the language without ambiguity. In natural language, if something is not understood, it is acceptable to ask questions for clarity. 209.149.113.5 (talk) 17:58, 17 August 2017 (UTC)[reply]
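The contrast 209.149.113.5 draws can be seen directly: a programming-language parser either accepts a string or rejects it outright, with no "close enough", whereas a human reader of "want eat" (from the question above) recovers the meaning from context. A minimal illustration using Python's own parser:

```python
import ast

# A computer-language parser is all-or-nothing: any string outside
# the grammar is rejected with a SyntaxError.
def parses(source):
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False

print(parses("i_want_to_eat()"))  # True: grammatically valid code
print(parses("want eat"))         # False: two bare names, not valid syntax
```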
The main obvious difference is that computer languages are one-way, from the writer to the computer, and only convey positive orders; that is, they have only a non-negative imperative mood (you don't say to a computer "don't do that").
Other differences are that computer languages can convey only a single, plain, truthful meaning, while human languages are very, very contextual, and may convey doublespeak or plain lies (it makes no sense to "lie" to a computer; it doesn't care and will just send back out the garbage you fed it).
Computers using computer languages don't change them. Humans do change the languages they use.
And that's only a small part of the differences.
Gem fr (talk) 18:53, 17 August 2017 (UTC)[reply]
Human language can best be compared to a very high-level computer language, so high that it doesn't yet exist. One can consider smartphones as an analogy. The source code of an Android app looks nothing like a human language, but it is good enough to program smartphones so that they are capable of following your voice commands. So, you can use a low-level language to implement a higher-level language. And this is exactly what is going on in the brain. The low-level information processing happens at the level of the neurons via electrical and chemical signaling. This low-level information processing, which can be simulated using neural networks on computers, gives rise to higher-level systems, which in turn give rise to yet higher-level systems, in humans ultimately to what we call language. The difficulty of letting a computer understand English is then due to the fact that you would end up cutting short many of the layers in between the lowest level of the neurons and the highest level. A modern supercomputer can just about simulate the brain of a bee; therefore a computer (as they exist today) capable of interacting with human beings is as impossible as a bee being able to communicate with us via some translating device. Count Iblis (talk) 20:23, 17 August 2017 (UTC)[reply]
I'll just point out that we have very good articles on natural language and programming language. We also have very good articles on formal language and constructed language. I suggest that reading these articles and links therein will be a good primer on how these concepts are similar and different. SemanticMantis (talk) 20:57, 17 August 2017 (UTC)[reply]
To expand a bit on the excellent answer above, with a computer language you have to get everything right, with no spelling, punctuation, or grammar errors. With a human language you can decide to use non-standard fleemishes and the reader can still gloork the meaning from the context, but there ix a limit; If too many ot the vleeps are changed, it becomes harder and qixer to fllf what the wethcz is blorping, and evenually izs is bkb longer possible to ghilred frok at wifx. Dnighth? Ngfipht yk ur! Uvq the hhvd or hnnngh. Blorgk? Blorgk! Blorgkity-blorgk!!!! --Guy Macon (talk) 22:51, 17 August 2017 (UTC)[reply]
I marklar exactly what you marklar. Sagittarian Milky Way (talk) 02:10, 18 August 2017 (UTC)[reply]

Metal dust versus metal shavings when breathed in

This is not a question asking for medical advice.

The place that I currently work at is able to make keys for customers. The machine that replicates keys has a drawer that all of the metal shavings fall into. I had an image in my head of a cloud of metal shavings being kicked up by a burst of air, and this seems to me like it would be very dangerous for anyone who breathes it in. I think it would be instantly disabling.

I did a search with Google for the effects of breathing this stuff in, but I used the term "dust" instead of "shavings" or "filings", and I didn't realize until now that those terms may not be interchangeable. One of the search results was a thread on Reddit. In it, the commenter stated that iron nanoparticles are dangerous when they get absorbed into the bloodstream through the lungs (and presumably any other way into the bloodstream). That's a lot smaller than shavings or filings. My question is what is the difference in the effects of metal dust versus filings when inhaled? Would dust be able to cause bleeding through any abrasions it causes? — Melab±1 00:52, 18 August 2017 (UTC)[reply]

I suspect the distinction or the qualifier is determined by the size of the particles. For them to become problematic, that is, for a dust mask or other type of respirator to be necessary, the particles must be small enough to be dispersed in the air. Shavings by definition should not be capable of becoming airborne.
What is not clear is whether the key machine makes not only shavings but also a small amount of dust. I did several searches to see if I could find safety data sheets on key-making machines but was unable to find any. Andrew124C41 (talk) 02:33, 18 August 2017 (UTC) — Preceding unsigned comment added by Andrew124C41 (talkcontribs) 02:31, 18 August 2017 (UTC)[reply]
  • As noted, size is crucial. Metal shaping processes can work by either milling (sharp metal cutters) or abrasives (bulk minerals in a disc). As key cutting is done with milling-type cutters, the particles produced are fairly large. Also the key cutting machines work dry, without cutting fluid. Compared to most metalworking environments, these are not a hazard. Fine dusts are, and aerosol clouds of cutting fluid droplets are even worse - but they're not (to a first approximation here) produced by typical key cutting.
Some metal dusts are not only a respiratory hazard, they're potentially flammable or even pyrophoric. Aluminium and titanium are known for this, although aluminium has to be very fine to show it. Andy Dingley (talk) 11:39, 18 August 2017 (UTC)[reply]
The size of the particles is very important. Your nose, nasal cavity and throat are designed to filter out nearly all particles larger than 10 micrometers (0.01 mm, thinner than a human hair). They get stuck in the mucus of your upper respiratory tract and are ejected from the body when you sneeze, cough or spit. Dust between about 10 and 0.5 micrometers mostly settles out in the bronchial area. These particles are harder to eject from your body, but generally don't do much damage there (exceptions are radioactive particles, infectious agents, etc.). Particles smaller than 0.5 micrometer (0.0005 mm) settle in the alveoli, which is bad. The dust just stays there and blocks gas exchange or irritates the lungs (for example, asbestos causes Pulmonary fibrosis). Take a look at this on particulates for some more information. The vast majority of the dust and cuttings produced from cutting keys is going to be larger than 0.01 mm and so will be trapped in your nose if you breathe any of it in. Tobyc75 (talk) 17:37, 18 August 2017 (UTC)[reply]
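Tobyc75's size bands can be written out as a rough lookup. The thresholds below are simply the ones quoted in that reply (10 µm and 0.5 µm); this is an illustrative sketch, not a substitute for occupational-exposure guidance:

```python
def deposition_region(diameter_um):
    """Rough deposition region for an inhaled particle, using the
    size bands quoted above (diameter in micrometres)."""
    if diameter_um > 10:
        return "upper respiratory tract (trapped in mucus, expelled)"
    elif diameter_um > 0.5:
        return "bronchial area (settles out, usually little damage)"
    else:
        return "alveoli (stays put; can block gas exchange)"

# Key-cutting swarf is typically well over 10 um across:
print(deposition_region(100))  # upper respiratory tract (trapped in mucus, expelled)
print(deposition_region(0.3))  # alveoli (stays put; can block gas exchange)
```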

Hugo Gernsback and the invention of radar

Hello all, I was having a discussion with an editor earlier about the role/influence of Hugo Gernsback in the invention of radar.

Thanks. I don't understand what you mean by saying that Ralph 124C 41+ is not a primary source. The book was initially serialized in my grandfather's magazine, Modern Electrics. It was later published as a book. There have been many reprints. I cited one of them that contains the description of the Actinoscope and has a diagram. What is interesting about this is that if you search online you will find references to my grandfather's predictions, not just RADAR but many other things as well, such as Skype... videophones... in Ralph, they are called a Telephot. FYI, there are things missing from the entry about my grandfather, such as the fact that he was the second person to broadcast television signals. (I am not interested in dealing with that at the moment... just the RADAR issue.) Andrew Andrew124C41 (talk) 16:35, 17 August 2017 (UTC)[reply]

Please look at these references....go to 1911: Microwave Journal Live Sciences Steam Man At end of page: New Groups

PS: Am I going to be unblocked? Andrew124C41 (talk) 16:59, 17 August 2017 (UTC)[reply]

Again, this reflects the Wikipedia:No original research policy. Please also take a look at Wikipedia:Verifiability#Self-published sources. The book is simply not appropriate to be used in this context. If you cannot find other proper reliable sources to support a claim, it probably shouldn't be included (see Wikipedia:What Wikipedia is not#Wikipedia is not a publisher of original thought). Alex ShihTalk 17:07, 17 August 2017 (UTC)[reply]
To be unblocked, you would have to agree with the policies that I have linked for you, and also agree to not re-insert the claim arbitrarily, rather to suggest the edit by using the {{request edit}} template. If you have specific questions about these policies, please do so afterwards so you can discuss them with the community, but now is not the time. Alex ShihTalk 17:10, 17 August 2017 (UTC)[reply]

Alex, what you have requested is fine. However, aren't the links I gave you other sources for the claim? For instance, the book on remote sensing was published by a reputable publisher. He makes reference to it beginning at the end of page 14 and continuing on page 15.

I am not naive with respect to writing articles. I have published articles in journals before. If I understand you correctly, the problem with Ralph itself is that it is a primary reference. I get that. However, if the Actinoscope is mentioned in a book that has been published, not by the author, but by a reputable company, I should think that would suffice.

The salient concept here is simply that Gernsback envisioned RADAR before it came into being. That is true. That is factual. I have given you a reference.

In addition, I now have a reference for the claim that the US patent office initially denied Sir Watson-Watt's patent application: reference 48 in The Rocket: The History and Development of Rocket and Missile Technology, David Baker, Crown Publishing. Andrew124C41 (talk) 18:36, 17 August 2017 (UTC)[reply]

@Andrew124C41: Thank you for your response, Andrew. You don't have to prove your point to me, what I am offering is merely the general practices of this community. Can I take your response as a confirmation that you won't make any contested edits until it has been fully discussed, so that we can proceed here?
Just a final comment to wrap up my thoughts on the subject. Based on what I have read from the sources you have provided, it appears to me that Gernsback envisioned/predicted a concept of radar, not the actual radar itself. And readers may point out this can hardly be considered as an "established" claim (see Wikipedia:Fringe theories). Lastly, you have to be careful with the term "truth", as it is not a popular term to be used here (see Wikipedia:Verifiability, not truth). Of course you may disagree with this opinion, but I would respectfully ask you to discuss this subject later at another venue (like the Wikipedia:Teahouse that was suggested earlier). Alex ShihTalk 19:03, 17 August 2017 (UTC)[reply]

Alex... I do agree with you of course. I never thought of my grandfather as having "invented" RADAR. He invented quite a few things, some of them rather strange... but not RADAR. I look at it the same way you do, that he envisioned the concept rather than the nuts and bolts of the actual device. The question that I have to ask you is simply this: how significant is this? Do those who eventually create things benefit from those who previously had an idea but just did not know how to make the idea work? At the moment I do not have any reference to suggest that Sir Watson-Watt had ever read my grandfather's description. (Parenthetically, I have heard that he had, but do not know for certain.) Ultimately, is this information something of historical import that should be shared with the public, or is it trivial? I suspect that is a matter of opinion.

As for "truth", ah, Alex, you are certainly on the mark about that. What we think is true today may not be true tomorrow. Those who of late suggest that the earth is flat have some interesting arguments. This subject has particular import today with respect to our foreign policy with Russia. (I am a US citizen.) Sometimes I think that I must have fallen asleep, at which time the US became at war with Russia. Our astronauts are circling the Earth in the ISS, depending upon the Soyuz to bring them home, while meanwhile the government has twice imposed harmful sanctions upon the country for no valid reason. So, what is the truth? Are Russia and the US allies, partners, or enemies?

Pardon my digression...just to let you know I understand the issue.

And, yes, I will not give editors a hard time. I hope you will unblock me now. Andrew124C41 (talk)

I would like to ask other opinions on two things, 1) Is it possible to incorporate this kind of information, and 2) If so, how would you incorporate the information? Regards, Alex ShihTalk 01:53, 18 August 2017 (UTC)[reply]

It's curious how nationalistic these things can be: an American book, Air and Spaceborne Radar Systems: An Introduction (p. 1) by Philippe Lacomme, mentions "American Hugo Gernsback" but ignores Robert Watson-Watt, who every British schoolboy knows is the "inventor of radar". [7] Alansplodge (talk)
The Wikipedia article about Gernsback's 1911 book Ralph 124C 41+ (you read it correctly as "One to foresee for one") acknowledges a subsequent comment that it contains "the first accurate description of radar". I don't believe Gernsback's incomplete quantitative understanding of "pulsating ether waves", coupled with the infancy of any techniques for wideband pulse processing and waveform display he might have had, allows him to be credited with the invention of a practical RAdio Detection And Ranging system, for which the term RADAR was coined in 1940. Gernsback's observation that radio waves can be reflected was, I think, only reporting the Signal reflection that had been measured by Heinrich Hertz. Blooteuth (talk) 17:03, 18 August 2017 (UTC)[reply]
Radio pioneer Lee De Forest described Gernsback's description of radio detection in Ralph as "...perhaps the most amazing paragraphs in this astonishing Book of Prophesy" (see Hugo Gernsback and the Century of Science Fiction (p.136) by Gary Westfahl) but they were old friends, so perhaps there was some lack of objectivity. Alansplodge (talk) 17:19, 18 August 2017 (UTC)[reply]