Polling is still broken

Pollsters finally seemed to get it right in 2024. After years of embarrassing misses, they said the presidential election would be close, and it was.

In fact, the industry had not solved its problem. In 2016, pollsters underestimated Donald Trump by an average of about 3.2 points. In 2024, after eight years of introspection, they underestimated him by 2.9 points. Last year, many of the most accurate pollsters were partisan Republican outfits. Many of the least accurate were rigorous university polls conducted by political scientists.

Polls are imperfect by design; that's why they come with margins of error. But they shouldn't miss in the same direction over and over again. And the problem extends beyond election polls to opinion polling in general. When Trump dismisses his low approval ratings as "fake polls," he may actually have a point.

The media has covered the polling industry's struggles for years, always with the premise that next time might be different. That premise is becoming harder and harder to accept.

Polling used to be simple. You picked up the phone and dialed random numbers. People answered their landlines and took your survey. Then you published the results. In 2000, nearly every national pollster used this method, known as random-digit dialing, and the average error was about two points. Over the next few elections, polls stayed about that close, and the error flipped from overestimating George W. Bush in 2000 to underestimating him in 2004, a good sign that the mistakes were random.

Then came the great polling miss of 2016. National polls actually came close to the final popular-vote total, but at the state level, especially in the swing states, they missed badly, conjuring the impression that Hillary Clinton's victory was inevitable.

The 2016 miss was widely blamed on education polarization: College graduates favored Clinton, and they were also more likely to respond to polls. So, going forward, most pollsters began adjusting, or "weighting," their samples to offset the underrepresentation of voters without a college degree. In 2018, the polls nailed the midterms, and pollsters were ecstatic.

The celebration turned out to be premature. The 2020 election was an even bigger disaster for the polling industry than 2016. On average, pollsters underestimated Trump again, this time by four points. Joe Biden won, but by a much smaller margin than expected.

That sent pollsters searching for solutions once again. If weighting by education wasn't working, there must be something about Trump voters specifically, even college-educated Trump voters, that made them less likely to answer polls. So many pollsters concluded that the best fix was to weight on whether respondents had voted for Trump before or identified as Republicans. Within the polling world, this was a controversial move. The share of the electorate that votes Democratic or Republican changes from election to election; that's the reason to conduct polls in the first place. Wouldn't that kind of heavy-handed modeling make polls more like predictions than surveys?

"This is somewhere some art and science are mixed a little bit," Michael Bailey, a Georgetown professor who studies polls, told me. If you refer to the sample as 30% Republicans, 30% Democrats and 40% Independents, because people are roughly self-identifying when asked - you are assuming the way three groups behave, not just matching polls to people like age, gender, gender, and education.

Those assumptions vary from pollster to pollster, and they often reflect unconscious biases. For most pollsters, those biases seem to point in the same direction: underestimating Trump and overestimating his opponent. "Most pollsters, like most others in the expert class, are probably not big Trump fans," Nate Silver, the election forecaster, told me. Personal dislike shouldn't matter in theory (it's all supposed to be science), but every weighting decision is a judgment call. How many suburban women will turn out in 2024? How many young people? How many 2020 Trump voters? Different answers produce different weights in the adjusted sample, and the weights a pollster chooses reflect the pollster's perception of the election, not the respondents'. A pollster who gets a result they find implausible may even go back and adjust the weights until it looks right. The trouble is that sometimes hard-to-believe things happen, such as Latino voters swinging 16 points to the right.

That dynamic may explain a strange exception to last year's trend. Overall, most polls missed again: The average error was a roughly three-point underestimate of Trump, about the same as in 2016. But Republican-aligned pollsters did better. In fact, according to Silver's model (other models produced similar results), the four most accurate pollsters of 2024, and seven of the top 10, were right-leaning firms, not because their methods were different, but because their biases were.

The most fundamental problem in 2024 was the same as in 2016: nonresponse bias, the error that arises when the people who answer polls differ from the people who don't.

If respondents and nonrespondents differ only on observable demographic traits such as age and gender, pollsters can weight the problem away. If the difference is something harder to observe, and it's correlated with how people vote, the problem becomes devilishly hard.

Trump voters, on average, trust institutions less and are less engaged in politics. Even if you sample exactly the right proportion of men, the right proportion of each age group and education level, and even the right proportion of past Trump voters, you will still pick up the most engaged and trusting voters within each of those groups—who else would spend 10 minutes filling out a poll?—and such people were less likely to vote for Trump in 2024. So after all that weighting and modeling, you still wind up with an underestimate of Trump. (This may also explain why pollsters did well in 2018 and 2022: Disengaged voters tend to stay home during the midterms.)
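A toy simulation, with entirely made-up numbers, shows why demographic weighting can't fix this. Suppose that within every demographic cell, high-trust people answer surveys far more often than low-trust people, and that trust correlates with vote choice:

```python
# Toy simulation of within-cell nonresponse bias. All numbers are invented.
# Two demographic cells (college / non-college); within each, voters split
# into high-trust and low-trust types with different response rates and
# different levels of Trump support.

cells = {
    # cell: (population share, {type: (share within cell, response rate, Trump support)})
    "college":     (0.40, {"high": (0.60, 0.080, 0.35), "low": (0.40, 0.020, 0.55)}),
    "non_college": (0.60, {"high": (0.40, 0.060, 0.45), "low": (0.60, 0.015, 0.70)}),
}

true_trump = sum(
    pop * sum(share * support for share, _, support in types.values())
    for pop, types in cells.values()
)

# Weighted estimate: each cell is restored to its true population share,
# but within the cell, respondents skew toward the high-trust type.
est_trump = 0.0
for pop, types in cells.values():
    responders = {t: share * rate for t, (share, rate, _) in types.items()}
    total = sum(responders.values())
    cell_est = sum(responders[t] / total * types[t][2] for t in types)
    est_trump += pop * cell_est

print(f"true Trump support:     {true_trump:.1%}")   # 53.2%
print(f"weighted poll estimate: {est_trump:.1%}")    # 46.2%
```

Even though each demographic cell is weighted back to exactly its correct population share, the estimate comes in roughly seven points below the truth, because within every cell the respondents skew toward the high-trust type.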

The same issue almost certainly plagues presidential-approval polls, even though there's no election to test their accuracy against. Low-trust voters who wouldn't answer polls before the election don't suddenly become reliable respondents once it's over. According to Nate Silver's Silver Bulletin polling average, Trump's approval rating is currently about 6 percentage points underwater. But if those approval polls suffer from the same nonresponse bias as last year's election surveys (which is likely the case), then he's really only about 3 points negative. That may not sound like a big difference, but it would put Trump's approval rating at roughly the historical norm for this point in a presidency rather than at a historic low.

Jason Barabas, a political scientist at Dartmouth College, knows something about nonresponse bias. Last year, he directed the new Dartmouth Poll, which the college described as an initiative aimed at establishing the best polling in New Hampshire. Barabas and his students mailed more than 100,000 postcards to New Hampshire households, each with a unique code for completing the poll online. The method isn't cheap, but it promised the kind of randomness once provided by old-fashioned random-digit dialing.

The Dartmouth Poll also used all the latest statistical techniques. Responses were weighted by gender, age, education, party, county, and congressional district, then fed through a turnout model based on further biographical details about the respondents. The methodology was set in advance, in line with scientific best practices, so that Barabas and his research assistants couldn't fiddle with the weights after the fact to make the results match their expectations. They also experimented with ways to boost response rates: Some respondents were offered a chance to win $250, some received reminders to respond, and some were given a version of the poll framed around "issues" rather than the upcoming election.
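For intuition, here is a toy version of that kind of multi-variable weighting, a technique known as raking, sketched in Python. The cross-tab and target shares are invented, and the Dartmouth Poll's actual scheme, with six weighting variables plus a turnout model, is far more elaborate:

```python
import numpy as np

# Toy raking (iterative proportional fitting): adjust weights until the
# sample matches target margins on two variables at once. Numbers invented.

# Cross-tab of respondents: rows = education (college, non-college),
# columns = party (D, R, independent).
counts = np.array([[220., 120., 160.],
                   [ 90., 110., 100.]])

row_targets = np.array([0.40, 0.60])        # assumed education shares
col_targets = np.array([0.30, 0.30, 0.40])  # assumed party shares

weights = np.ones_like(counts)
for _ in range(50):  # alternate between margins until both match
    w = counts * weights
    weights *= (row_targets / (w.sum(axis=1) / w.sum()))[:, None]
    w = counts * weights
    weights *= (col_targets / (w.sum(axis=0) / w.sum()))[None, :]

w = counts * weights
print(np.round(w.sum(axis=1) / w.sum(), 3))  # -> [0.4 0.6]
print(np.round(w.sum(axis=0) / w.sum(), 3))  # -> [0.3 0.3 0.4]
```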

In the end, none of it mattered. The Dartmouth Poll was a disaster. Its final survey showed Kamala Harris up 28 points in New Hampshire. That was an error of, literally, an order of magnitude: Days later, she would win the state by 2.8 points. A six-figure budget, a sophisticated methodology, the integrity to preregister its methods, and the courage to publish an outlier result anyway, all of it produced what appears to have been the least accurate poll of the entire 2024 cycle and one of the worst results in the history of American polling.

Barabas isn't sure what went wrong. But he and his students have a theory: the poll's name. Trust in higher education has become politically polarized. According to this theory, Trump-voting New Hampshirites saw a postcard from Dartmouth, an Ivy League school with a mostly liberal faculty and student body, and didn't respond, while the state's anti-Trump voters jumped at the chance to help an institution they admire. The Dartmouth Poll is an extreme example, but a version of the same thing happens everywhere: The people who take surveys are the people who trust institutions, and people who trust institutions are less likely to vote for Trump.

Once pollsters wrap their heads around this point, their options get slim. They can pay respondents to entice those who wouldn't otherwise answer. The New York Times tried this in partnership with the polling firm Ipsos, paying up to $25 per respondent. They found that the payments reached more reluctant respondents, people who usually don't answer the phone and who were more likely to vote for Trump, but reported that the difference was "relatively small."

Or pollsters can get more creative. Jesse Stinebring, a co-founder of the Democratic polling firm Blue Rose Research, told me that his company asks respondents whether they believe that sometimes children need "a good hard spanking," a belief held disproportionately by the kinds of Americans who tend not to respond to surveys, and then weights on that answer just like a normal demographic variable.
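In effect, an attitudinal question gets treated like one more weighting margin. Here is a minimal sketch with an invented population benchmark; Blue Rose's actual benchmarks and models are not public:

```python
# Sketch: treating an attitudinal question like a demographic weight.
# All figures are invented for illustration.

# Suppose 45% of American adults agree that children sometimes need
# "a good hard spanking" (hypothetical benchmark), but only 30% of
# survey respondents do, because low-trust Americans respond less often.
benchmark_agree = 0.45
sample_agree = 0.30

w_agree = benchmark_agree / sample_agree                  # weight for agreers
w_disagree = (1 - benchmark_agree) / (1 - sample_agree)   # weight for everyone else

print(f"weight if agree: {w_agree:.2f}, if disagree: {w_disagree:.2f}")
# weight if agree: 1.50, if disagree: 0.79
# In a real poll, this margin would be raked jointly with age, gender,
# education, party, and so on.
```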

Bailey, the Georgetown professor, has proposed something more radical. Suppose you run one poll with a 5 percent response rate that shows Harris winning by four points, and a second poll with a 35 percent response rate that shows her winning by one point. You could then infer that every 10-point increase in the response rate shifts the margin one point toward Trump, Bailey said. So if turnout in the election is 65 percent, that would imply Trump winning by two points. Bailey concedes that this is "a new way of thinking," which is an understatement. But can you blame him?
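Bailey's thought experiment amounts to a linear extrapolation in the response rate. A sketch of the logic, using the hypothetical numbers from the example above:

```python
# Linear extrapolation of the margin as a function of response rate,
# using the two hypothetical polls described above.

r1, margin1 = 5.0, +4.0    # 5% response rate, Harris +4
r2, margin2 = 35.0, +1.0   # 35% response rate, Harris +1

slope = (margin2 - margin1) / (r2 - r1)
# slope = -0.1: each 10-point rise in response rate shifts the margin
# one point toward Trump.

turnout = 65.0  # treat the full electorate as a poll whose "response rate" is turnout
margin_at_turnout = margin1 + slope * (turnout - r1)
print(f"extrapolated margin at {turnout:.0f}% turnout: {margin_at_turnout:+.1f} for Harris")
# -> -2.0, i.e., Trump by two points
```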

To be clear, political polls can be valuable even if they underestimate Republicans by a few points. Biden, for example, might have stayed in the 2024 race if polls hadn't shown him losing to Trump by seemingly insurmountable margins, margins that were almost certainly themselves underestimates.

The problem is that close elections are when people demand the most from polls, and, given the inevitability of error, that is exactly when polls are least dependable. And as long as the act of answering a survey, or of being plugged into politics at all, is so strongly correlated with support for one party, there is only so much pollsters can do.

Ann Selzer, the legendary Iowa pollster, long hated the idea of baking her own assumptions into her polls, which is why she weighted on only a few variables, all of them demographic. For decades, that stubborn refusal to second-guess her data won her both accurate results and the admiration of those who study polling: In 2016, a FiveThirtyEight article called her "the best pollster in politics."

Selzer's final poll of 2024 showed Harris leading by three percentage points in Iowa. Three days later, Trump won the state by 13 points, a stunning 16-point miss.

Weeks after the election, Selzer published an investigation into what might have gone wrong, writing that her pursuit of an explanation had turned up nothing that elucidated the miss. The same day the analysis appeared, she retired from election polling.