
Trump won the popular vote by 1.5%, which is a few percent off from the polling average. If the polling on Mangione is similarly off by a few percent (within the margin of error), it hardly affects our interpretation of the results.


Give me a break. The polls weren't based on who would win the popular vote. They were based on electoral votes, with swing-state polls being an exact toss-up (you can check them again).

He ended up winning every single swing state, muuuch farther than the projections of ANY polls that were getting time of day.

Even if you want to die on that hill of deliberate misinterpretation, it still shows the polls were inaccurate and unreliable.


Let’s take PA since it’s the tipping-point state. The polling average had Harris at a 0.2% lead. The actual result was Trump with a 1.7% lead. That’s a 1.9% error.
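
To spell out the arithmetic: the error is the gap between the polled margin and the actual margin, both on the same sign convention. A minimal sketch in Python (the numbers are the ones quoted above; the sign convention is an assumption for illustration):

    # Polling error = actual margin minus polled margin.
    # Sign convention (assumed): positive favors Trump, negative favors Harris.
    polled_margin = -0.2   # polling average: Harris +0.2
    actual_margin = 1.7    # result: Trump +1.7

    error = actual_margin - polled_margin
    print(f"PA polling error: {error:.1f} points")  # 1.9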

AtlasIntel released a poll which predicted the actual lead: https://projects.fivethirtyeight.com/polls/20241102_SwingSta...

So, “muuuch farther than the projections of ANY polls” is wrong.

Correlated error between states is expected, and not relevant here (i.e., if you think the Mangione poll is affected by this few-percent correlated error, it again doesn’t change the interpretation).

Overall, election results tend to come pretty close to the polls. They are often off by a little, occasionally by a lot, which is exactly what the theory of polling and statistics predicts. The conclusion: asking a large random sample of people what they think, using the best methodologies the polling industry has developed, is a pretty good way of finding out what most people think (within a few percent margin of error), and certainly more objective than a vibe check of your personal echo chamber. If your argument depends on completely dismissing polling results, you’re probably wrong.
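
For reference, that "few % margin of error" comes from the textbook 95% margin-of-error formula for a simple random sample. A sketch (the n = 1,000 and p = 0.5 inputs are illustrative assumptions, not from any specific poll):

    import math

    def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
        """95% margin of error for proportion p in a simple random sample of size n."""
        return z * math.sqrt(p * (1 - p) / n)

    # A typical ~1,000-person poll on a near-50/50 question:
    print(f"{margin_of_error(0.5, 1000):.1%}")  # ~3.1%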


I thought you were someone else, but now that I know you're not, I want to make something clear: I think social desirability bias affected respondents' answers much more [1].

To be clear, that's not the MARGIN of error in those statistics, but the difference between predicted and actual results. A five-tier scale, split between groups, already involves statistical manipulation: fewer respondents per category means a higher margin of error for each estimate. Even a margin of error in the low single digits can completely compromise outcomes presented this way.

[1] https://news.ycombinator.com/item?id=42643027


How wrong do you believe this poll is? Can you use some methodology to come up with a modified margin of error that encompasses what you believe to be more realistic?


"How wrong" is something I can't quantify. It’s tough to pin down a single “modified margin of error,” but I’d say it’s bigger than what the poll shows. Multi-tier questions automatically have smaller sample sizes per option, which inflates the margin of error. Throw in social desirability bias (people not wanting to admit an unpopular opinion) and you get systematic skews on top of random error.



