Talk:Opinion poll/Archives/2012


Added to disambiguation page

I added a link to this article from the disambiguation page for the word "poll." I can't believe no one's ever put it there before now!

History of Opinion Polls

The first known example of an opinion poll was a local straw vote. It seems very likely that particular types of polling existed as far back as medieval times, or even the ancient Egyptian era. Awareness of local mass sentiment and social consciousness seems to accompany most types of participatory systems, ancient and modern, at least in terms of reaching a general consensus.

Margin of Error

"They are designed to represent the opinions of a population by asking a small number of people a series of questions and then extrapolating the answers to the larger group."

This is one use of opinion polls. The article ignores the use of opinion polls as propaganda, e.g. the Bandwagon effect.--harry 17:12, 14 Sep 2004 (UTC)

Margin of Error

A poll with a random sample of 1000 people has margin of sampling error of 3% for the estimated percentage of the whole population. A 3% margin of error means that 95% of the time the procedure used would give an estimate within 3% of the percentage to be estimated.

Where does this come from? Margin of error should depend not only upon the sample size, but also the population size. If there were exactly 1000 people in the whole population, a sample of 1000 counts them all and therefore should have no error. If there were 1005 people in the population, maybe the sample missed the 5 people who disagreed with the others, so there should be a small amount of error. And so on. I'd like to remove this part from the article unless someone comes up with a source for it. --zandperl 16:57, 8 Jun 2005 (UTC)
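The population-size effect described above is what statisticians call the finite population correction. As an illustrative sketch in Python (the function name and the example numbers are just for illustration, not taken from the article):

    import math

    def moe_finite(p, n, N, z=1.96):
        # 95% margin of error for proportion p, sample size n, population size N,
        # using the normal approximation with the finite population correction.
        fpc = math.sqrt((N - n) / (N - 1))
        return z * math.sqrt(p * (1 - p) / n) * fpc

    print(moe_finite(0.5, 1000, 1000))         # 0.0 -- sampling everyone leaves no sampling error
    print(moe_finite(0.5, 1000, 1005))         # ~0.002 -- a small error, as suggested above
    print(moe_finite(0.5, 1000, 300_000_000))  # ~0.031 -- the correction is negligible for a large population

For any population much larger than the sample, the correction factor is essentially 1, which is why the reported margin of error depends almost entirely on the sample size.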

Why not give figures? What is the range of population sizes for which a random sample of 1000 subjects will give a 3% margin of error? I have seen a similar article elsewhere which stated that 1000 respondents, if carefully chosen, could reliably represent the entire US; so it is simply a matter of making the information universally useful. Axel 23:47, 15 October 2005 (UTC)

Details on the margin of error can be found on the margin of error page linked from the article. Duncan Keith 19:11, 16 October 2005 (UTC)

More serious objections... I'm not certain of these, so I'm not making a change to the page, but:

(1) doesn't the margin of error depend on how the people voted? For instance, if 10 people vote for candidate A in the poll (1%), then the uncertainty on candidate A's percentage is certainly not +/- 3%, because he couldn't have 3% less support than 1% (which would be -2%). I believe that the 3% number comes from taking the square root of 1000 and dividing by 1000 (which gives 3.1%), but I think the relevant square root is not of 1000 but of however many votes each candidate gets. If the poll reveals 500 votes for A and 500 for B, then the margin of error on each would be square root of 500 divided by 1000 (2.2%).

(2) If the margin of error is just the square root of the number of votes for a typical candidate, then it would be an estimate of the standard deviation of the sample means. But the 95% number is for 2 standard deviations, not 1. So which is it? Is the margin of error 1 standard error or 2?

The formula for the standard error for a proportion p with sample size n is \sqrt{p(1-p)/n}. In your example we have p=0.01 and n=1000, so the standard error would be \sqrt{0.01 \times 0.99 / 1000} \approx 0.003, or about 0.3%. As you point out, the 95% confidence interval is about ±2 standard errors, giving a value of 0.6% for the 'margin of error'. The 'margin of error' reported by the media is the maximum 95% confidence interval for any possible proportion, and the maximum occurs when p=0.5. Putting this value into the formula with n=1000 and using the slightly more accurate multiplier of 1.96 gives a 'margin of error' of 1.96 \times \sqrt{0.5 \times 0.5 / 1000} \approx 0.031, or about 3%.
So when the media report 'the margin of error in this poll is 3%' this should be interpreted as 'the 95% confidence interval due to sampling error for any proportion is no more than ±3 percentage points around the reported proportion.' You should also bear in mind that these calculations assume a random sample, and pollsters rarely use pure random samples. Duncan Keith 11:23, 6 September 2006 (UTC)
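For readers who want to check these figures, here is a minimal Python sketch of the normal-approximation formula described above (the function name is just for illustration):

    import math

    def margin_of_error(p, n, z=1.96):
        # 95% margin of error for an estimated proportion p from a simple random sample of size n.
        return z * math.sqrt(p * (1 - p) / n)

    print(round(margin_of_error(0.5, 1000), 4))   # 0.031  -- the worst case, p = 0.5
    print(round(margin_of_error(0.01, 1000), 4))  # 0.0062 -- a 1% proportion has a much tighter interval

The worst-case value at p = 0.5 is the single 'margin of error' figure that pollsters usually report.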

Voodoo polls/Online and phone polls

Our article on Voodoo polls, which includes online and phone-in polls, is undersourced. Some editors have objected to the title, so it may be moved to online and phone-in polls, or something similar. If anyone familiar with the topic can add sources or perspective then please help out. ·:· Will Beback ·:· 18:42, 4 September 2007 (UTC)

Sample and polling methods

I noticed that this section had a short discussion on Tracking polls but not of other kinds of polls that campaigns may use. I added a short description of Benchmark and Brushfire polls. I am still working on adding citations...Bob98b3 (talk) 18:44, 9 November 2010 (UTC)

This is pretty opaque:

Opinion polling developed into popular applications through popular thought

and something is missing from here:

Some polling organizations, such as and Angus Reid Strategies, YouGov, Rate Your Politician LLP and Zogby use Internet surveys

AdeMiami (talk) 16:53, 23 November 2009 (UTC)


Also, this is horrendous:

Verbal, ballot, and processed types can be conducted efficiently, contrasted with other types of surveys, systematics, and complicated matrices beyond previous orthodox procedures.

(Emphasis added)... Could somebody please fix this so that it can be understood by people? The last part is gobbledegook to me. Destynova (talk) 13:51, 13 July 2010 (UTC)


Some polling organizations, such as and Angus Reid Strategies, YouGov, Rate Your Politician LLP and Zogby use Internet surveys
After a cursory examination of the article's edit history, I didn't see any reason for the "and". It seems to me that it's just a typo. Is it okay if I just get rid of it? Quae legit (talk) 22:41, 19 July 2010 (UTC)
Yes, a cursing examination of the edit history reveals it to be a typo (or sloppy pasting). Be bold; you may have the honor. — JohnFromPinckney (talk) 00:28, 20 July 2010 (UTC)
Okay! Thanks! Quae legit (talk) 18:52, 20 July 2010 (UTC)

Fraudulent Polling

I was trying to do some reading on Strategic Visions LLC and I noticed that the entire section on fraudulent polls was blanked on 15 February. The same user also removed another reference to Strategic Visions on another page. There doesn't seem to have been any discussion on removing that section and I'm inclined to put it back in, but since nothing has been done with it in the last few months I thought I'd make sure I hadn't missed some reason for its removal. Alexaxas (talk) 16:08, 18 June 2010 (UTC)

Well, I see that I happened to revert the same user's other edit, in which mention of the alleged fraud was deleted from United States Senate election in Washington, 2006. My reversion is here, and it appears to have kicked off a small edit war, as the mention was ping-ponged in and out of the article for a while. Of course, nobody made any comment about the substance of the deleted/restored paragraph. I guess I somehow didn't notice its removal here, which seems unlikely, or I decided not to revert its removal, for reasons which escape me now.
I don't know if they were eventually exonerated of these accusations (do you?). If so, I guess we'd be better served to leave them out (they certainly wouldn't belong in the Washington Senate article); at the minimum we'd have to carefully phrase it with some form of " was once accused of but was later found to be A-OK", with good sources. In fact, now that I think about it, it's probably appropriate to update the section either way, if you restore it, because by now there must be some sort of consensus about whether they stretched facts or were on the up-and-up. — JohnFromPinckney (talk) 23:15, 18 June 2010 (UTC)

External site redirect

Anyone know why Research 2000's site, www.research2000.us, redirects to this page? The site has also been removed from the Wayback Machine at archive.org, providing the error message "We're sorry, access to http://www.research2000.us/ has been blocked by the site owner via robots.txt." I know they've been under some scrutiny lately, but redirecting to Wikipedia seems odd. Newsboy85 (talk) 03:08, 9 July 2010 (UTC)

See Research 2000. Basically, it is/was a polling outfit that appears to have been fabricating (at least) some of its results. Eventually the owner pulled the plug on the site blaming "hackers". The Politico had a story about it. -- Bfigura (talk) 18:17, 27 July 2010 (UTC)

Polling organisations

Why is there a large list of polling organisations worldwide in the article? It's getting spammy and seems unnecessary. Bearing in mind that Wikipedia is not a link farm, can we just remove the whole section or is some of it especially important? Destynova (talk) 17:29, 7 March 2011 (UTC)

It seems to me there is some useful information there, but it does disrupt the flow of the article. How about we split it out into a separate list? --Avenue (talk) 16:10, 8 March 2011 (UTC)

What on earth is a 'scientific' poll?

I'm sick of news organizations throwing around the phrase 'scientific poll'. What aspects of a poll's methodology make it scientific? 140.180.247.2 (talk) 03:25, 23 October 2012 (UTC)

  • Generally, 'scientific' in the context of polling refers to Survey sampling. A random sample is representative of the population from which it was drawn. Unscientific polls, among other shortcomings, are often conducted on convenience samples rather than random samples of the target population. Thosjleep (talk) 05:35, 23 October 2012 (UTC)
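To make the distinction concrete, here is a small illustrative Python simulation (the population, the support rates, and the "only people under 40 respond" rule are invented for the example); a convenience sample can be badly biased even when it is large:

    import random

    random.seed(1)
    # Synthetic population: support for a proposal differs by age group.
    population = [{"age": random.randint(18, 90)} for _ in range(100_000)]
    for person in population:
        person["supports"] = random.random() < (0.3 if person["age"] < 40 else 0.6)

    true_rate = sum(p["supports"] for p in population) / len(population)

    # Scientific: a simple random sample, every member equally likely to be chosen.
    random_sample = random.sample(population, 1000)
    random_estimate = sum(p["supports"] for p in random_sample) / 1000

    # Unscientific: a convenience sample that only reaches people under 40.
    reachable = [p for p in population if p["age"] < 40]
    convenience_sample = random.sample(reachable, 1000)
    convenience_estimate = sum(p["supports"] for p in convenience_sample) / 1000

    print(true_rate, random_estimate, convenience_estimate)
    # The random-sample estimate lands near the true rate; the convenience estimate is far below it.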

1023 (sample size)

Probably worth mentioning the magic number 1023 which is the standard sample size used by polling organizations (at least in the U.S.). I don't know of a specific source for this, though ...

--Mcorazao (talk) 19:24, 24 May 2010 (UTC)

I didn't think this to be a standard number; I just thought it was 1,000 (see the mention in the article, esp. here as well as some of the sources) plus a few more. So we might see 1,016; 1,032; 1,029; etc. — JohnFromPinckney (talk) 20:12, 24 May 2010 (UTC)
I am not an expert in these things but I always see 1023 as the standard used. Poking around Google, for example
As I say, I've not seen a source that explicitly says this as a standard number but for whatever reason it appears to be popular. --Mcorazao (talk) 16:26, 25 May 2010 (UTC)
Seems like original research to me, unless we can cite a source discussing the practice. Three of your links are Gallup polls, so it might be an artefact of something they do. By the way, I get more Google hits for 1022 and 1024 than for 1023 (and many many more for 1000). --Avenue (talk) 23:13, 25 May 2010 (UTC)
Google hit counts are meaningless. They are rarely indicative of anything. --Mcorazao (talk) 02:33, 26 May 2010 (UTC)
Okay, the specific numbers are not that important, but that wasn't my point. You brought up some examples you found on Google that mention a sample size of 1023; I'm saying you can find plenty of examples for other sample sizes the same way.
Anyway, I stand by my first point: we shouldn't mention this putative phenomenon until we can cite a reliable source discussing it. I've had a quick look for this on Google, without success. --Avenue (talk) 12:18, 26 May 2010 (UTC)
Did I say otherwise? I would've simply edited the article if I had a source. --Mcorazao (talk) 13:59, 26 May 2010 (UTC)
The number 1,024 is the square of 32, and when I took statistics, the number 32 was the 'gold standard' for the number of individuals needed in a sample size, in order to produce reliable statistical results. Janice Vian, Ph.D. (talk) 16:06, 31 October 2012 (UTC)
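Whatever the origin of any particular figure, the margin-of-error formula quoted earlier on this page suggests why the exact sample size around 1,000 hardly matters; a quick check in Python (worst case p = 0.5, 95% level):

    import math

    for n in (1000, 1023, 1024):
        print(n, round(1.96 * math.sqrt(0.25 / n), 4))
    # 1000 0.031
    # 1023 0.0306
    # 1024 0.0306

All three sample sizes give a margin of error of roughly ±3 percentage points.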