Hi everyone. I'm part of The Economist's data team and analyse large data sets, create models and write data-driven articles about the US election, among other things. I helped create our US presidential forecast, which currently sees Joe Biden in the lead to win the American presidency. I can answer questions about how I build election models, why this year is different to 2016 and any other questions you might have about data journalism at The Economist.

Proof: https://twitter.com/gelliottmorris/status/1319312310125678595

EDIT: Hi everyone. Thank you for all your questions! I might come back later to respond to some stray queries, but unfortunately I don't have unlimited time to answer them all right now. If you follow me on Twitter, I actively answer follower questions there as well.

Also, you might want to sign up to get Checks and Balance, The Economist's newsletter on US politics, in your inbox. It's delivered weekly and includes a post from me and the best of our analysis of the election (and it will continue after the election, too).

So long!

Comments: 1330 • Responses: 60

bbbsssjjj681 karma

If we could run hundreds of presidential elections, we would be able to get a pretty good idea of whether the 538 model or the Economist model is better calibrated. But we only get one every four years, and the models get refit between those elections anyway.

So suppose Biden wins, as both models currently say is quite likely. What evidence would we look for, ex post, to help make the call that one performed better than the other? To put it another way, what evidence (conditional on a Biden win) would convince you that 538 had actually done a better job?

Keep up the awesome work.

theeconomist277 karma

You're right to suggest that we can't really know which model is better based on one event. But I answered another question already about how we might evaluate which did "better" in this specific reality that we ended up observing this year. Ctrl + F for "Brier score".

slakmehl334 karma

Hi Elliott. Thank you for all the work you do, and in particular the transparency of your modeling (available on github for those interested!).

There appears to be a significant disagreement between your model and others about the degree of uncertainty that should be simulated based on polling distributions. Your model tends to have greater certainty, such that you tend to ascribe significantly higher probabilities to pink states going to Trump and light blue states going to Biden. Obviously there are lots of ways to probe this, but I wanted to flag one in particular that seems on its face like it might indicate too much certainty in your model:

On your histogram of Electoral College outcomes, your model appears to ascribe something like a 5x to 10x greater probability that Biden will win with precisely 374 Electoral Votes than the combined probability that Trump will win by any margin. Even when your model was at like 92% for Biden, this specific case was around 2x to 3x the total Trump probability.

Other models seem to have shorter, more numerous peaks in their histograms, so I wonder if this is an artifact of too little uncertainty at lower levels propagating in chunky ways up to overall outcomes. How would you tend to explain this aspect of your model's prediction?

theeconomist185 karma

It also has something to do with the between-state correlations in our model. We tend to simulate stronger relationships in polling error between similar ("like") states than other models do, producing those higher peaks.
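To build intuition for why that happens, here is a minimal sketch in R. This is not our actual model code (that's on GitHub); the margins, error size and correlation below are made-up illustrative numbers:

```r
# Illustrative only: three states with correlated polling errors.
library(MASS)  # for mvrnorm

set.seed(42)
margins <- c(PA = 0.05, MI = 0.08, WI = 0.08)  # assumed Biden polling leads
ev      <- c(20, 16, 10)                       # electoral votes at stake
sigma   <- 0.04                                # assumed polling-error SD
rho     <- 0.9                                 # strong between-state correlation

cov_mat <- sigma^2 * ((1 - rho) * diag(3) + rho * matrix(1, 3, 3))
errors  <- mvrnorm(n = 10000, mu = rep(0, 3), Sigma = cov_mat)
sims    <- sweep(errors, 2, margins, "+")      # simulated election-day margins

# With rho near 1 the states flip together, so electoral-vote totals
# pile up on a few modal outcomes instead of spreading out smoothly.
biden_ev <- (sims > 0) %*% ev
table(biden_ev) / 10000
```

Lower `rho` and the histogram flattens into the shorter, more numerous peaks you see in other models.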

mrmanager237272 karma

Thoughts on Fivey Fox? And do you think the Economist model would be helped by a mascot?

theeconomist1324 karma

If we did have a mascot, it probably wouldn't have as fat a tail as 538's.

theaspiringchimp244 karma

Something that blew my mind was that you're only 24. What was your path to becoming a data journalist at The Economist?

theeconomist442 karma

The Economist recruited me while I was still in school after I published successful forecasts of the 2017 French and UK elections.

shoe7525198 karma

Do you expect the polls to be off in any particular direction?

As an example, Dave Wasserman wrote a piece showing that Democrats have been undervalued in the Southwest and overvalued in the Midwest the last two cycles - https://www.nbcnews.com/politics/2020-election/polls-could-be-wrong-may-help-biden-not-just-trump-n1244753

theeconomist266 karma

Good question! Our training data revealed that polling error is usually randomly distributed around 0, meaning that knowing the direction of the error in the past election doesn't help you predict the error in the next one. That being said, there are some reasons why we might be more suspicious of polls in Hispanic-heavy states (the population is generally hard to poll, even if you weight by all the right variables), so Dave might be on to something here!

Wild_Marker34 karma

What makes them harder to poll?

theeconomist54 karma

Nate Cohn has a good explanation here.

thegrooseisloose18187 karma

538's model has the race at a near tossup if Biden loses PA IIRC, so from your perspective how likely is a Biden victory if he loses PA but still wins MI and WI? What is his best backup plan in that scenario?

theeconomist284 karma

I just ran the math for you. With the caveat that this is a relatively rare outcome in our model, conditional on losing PA and winning MI and WI we would give Biden a 60% chance of winning.
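For anyone who wants to replicate that kind of conditional math, the trick is just filtering simulation draws. Here is a toy sketch in R; the margins, correlation structure and electoral-vote bookkeeping are all invented assumptions, so the number it prints won't match our 60%:

```r
# Toy conditional probability from simulation draws; not model output.
set.seed(7)
n        <- 20000
national <- rnorm(n, 0, 0.03)          # shared national polling error
state    <- function(mu) mu + national + rnorm(n, 0, 0.02)

sims <- data.frame(pa = state(0.05), mi = state(0.08), wi = state(0.08),
                   az = state(0.03), nc = state(0.02))
sims$biden_ev <- 232 +                 # states Clinton carried in 2016
  (sims$pa > 0) * 20 + (sims$mi > 0) * 16 + (sims$wi > 0) * 10 +
  (sims$az > 0) * 11 + (sims$nc > 0) * 15

# Condition on the scenario: Biden loses PA but wins MI and WI.
scenario <- subset(sims, pa < 0 & mi > 0 & wi > 0)
mean(scenario$biden_ev >= 270)         # P(Biden win | scenario)
```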

jagershark185 karma

If you could only follow one state on election night, and we’re completely blind to any info about the others, which would it be?

theeconomist555 karma

Florida — if Trump loses, he has a 0% chance to win, according to our model.

Idejder174 karma

How panicked (or not?) are you about the fact your model is going to crest 95%? Any lingering worries?

theeconomist342 karma

I have a lot of faith in our model. It has produced very good predictions in the past and I have no reason to think that polls are worse now than they were in the past (eg, 2016).

However, there is still the chance that something outside the scope of our model — like vote-counting shenanigans, voter suppression or something covid-related — causes a big difference between how people say they'll turn out and vote and who actually does. That's the type of error we can't model.

murphysclaw1148 karma

If I was a pollster in, say, Pennsylvania who got burnt in 2016 I would make damn sure that I didn't undervalue Trump this time.

Is this a good argument for why polls might be biased in favor of Trump in 2020?

theeconomist163 karma

I think it's good intuition, though I don't see any empirical evidence that pollsters who got 2016 wrong are more bullish on Trump than the replacement-level pollster.

cbsteven117 karma

Once the election is over, what will you be looking at to judge your model’s performance vs the performance of competing models?

jagershark38 karma

This is a good question. How, if at all, can you judge how ‘good’ a model was?

theeconomist158 karma

We will be evaluating our forecast using two measures of predictive accuracy: first, the root-mean-square error in Democratic vote share in each state, and second, the Brier score on each state's prediction.
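Both are easy to compute once the results are in. A minimal sketch in R, with made-up numbers standing in for our published state-level forecasts:

```r
# RMSE on vote share, and Brier score on win probabilities.
rmse  <- function(pred_share, actual_share) sqrt(mean((pred_share - actual_share)^2))
brier <- function(pred_prob, outcome) mean((pred_prob - outcome)^2)  # outcome: 1 = Dem win

# Illustrative three-state example:
rmse(c(0.52, 0.48, 0.55), c(0.50, 0.47, 0.56))  # error in Democratic vote share
brier(c(0.80, 0.30, 0.95), c(1, 0, 1))          # calibration of win probabilities
```

Lower is better on both; for reference, always saying 50/50 earns a Brier score of 0.25.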

lifeinaglasshouse104 karma

After several errors in the Rust Belt in 2016, election forecasters re-worked their methodologies and were largely accurate during the 2018 midterms. The notable exception? Florida, where polling indicated both Ron DeSantis and Rick Scott would lose their respective races. This time around, The Economist states that Joe Biden has a 79% chance of winning Florida, despite the fact that he has a narrower polling lead than both Andrew Gillum and Bill Nelson had on the eve of the 2018 midterms.

After forecasters missed Florida in 2018, how can you be so confident that Joe Biden will win the state?

theeconomist86 karma

Our model operates under the assumption that polling error randomly varies from election to election, which has historically been the case. It includes the chance of a 2016- or 2018-style error in the polls.

Cobalt_Caster88 karma

What do you make of David Wasserman's claims that district level polling is the most accurate, and where it points this election? Is there a way to incorporate district level polling into your model?

theeconomist134 karma

Dave has access to a lot of private polling at the district level that is (a) more numerous and (b) seemingly more accurate than the stuff we get publicly. So while I don't use the data in our model because it's pretty noisy, I have no reason to doubt the insights he is gleaning from it.

That said, I don't know about the comparative accuracy of eg state v district v national polls. Our models have done a pretty darn good job in the past using only state and national polls.

ajibajiba81 karma

Hi Elliott,

I’ve been a follower and huge fan for several years now. Your polling analysis is consistently the best out there.

Here’s my question. David Shor was recently asked in a Q&A about some of the “forecasting wars,” specifically about your model vs. Nate Silver’s / 538’s. He said Silver comes at forecasting from more of a “sports/gambling prediction for predictions sake” perspective, while you come with more of a poli sci perspective. Do you agree with that being the fundamental difference between your approaches or is it something else?

Would also love to hear what books, research papers, etc. have contributed most to your understanding of US politics, polling, etc.

Thanks!

theeconomist104 karma

I think that's a pretty good basic description of the differences, and it's what led us to bet on polarization and stability earlier this election cycle. I think that bet has worked out well for us.

In terms of the academic work that has shaped my understanding of politics and polling, please see work from: Chris Wlezien (my former prof), Andrew Gelman (the statistician and polling expert who helped build our model), Pamela Johnston, Lilliana Mason and Nate Silver.

vikinick75 karma

Considering the fact that both you and your opponent give Biden a 97% and an 88% shot to win, do you think people are doomposting too much on Twitter about the presidential election?

Obviously there should be articles about how it's possible Trump wins (see Harry Enten's article right before Trump won), but it seems almost every journalist has been consumed with writing the take that Trump will win. It's even convinced a majority of voters that Trump will win, which is dangerous if/(probably when) the bubble gets popped!

theeconomist87 karma

Yes I do.

NewshoundDad75 karma

If you had to venture a guess, when do you think the election will be called?

A.) 11 p.m. - 12 a.m. Election Night

B.) 12 a.m. to 2 a.m. Nov. 4

C.) 2 a.m. to 6 a.m. Nov. 4.

D.) Days later after absentee ballots are fully counted.

Enjoying your analysis on the website and Twitter. As a digital content news person myself, I have a tremendous amount of respect for people who do stuff like this for a living. It takes a special person.

theeconomist140 karma

A, if not earlier -- but by "call" I mean what I will be able to call, not necessarily what media outlets will feel comfortable doing.

blindboydotcom56 karma

Are you going to tweet your "call" or anything of the sort?

theeconomist106 karma

You bet

EricGreen123471 karma

How do you account for the discrepancy b/t national polls (very good for Biden) vs state polls (merely good for Biden) vs district polls (holy shit amazing for Biden)?

theeconomist148 karma

¯\_(ツ)_/¯

jagershark63 karma

Are the betting markets (Biden 4/9, Trump 7/4) completely insane?

If your model is saying 95%, would you bet your house on Biden?

theeconomist192 karma

I don't have a house.

murphysclaw162 karma

I once had a literal nightmare that I went to a news stand and I was told that The Economist was out of print. I woke up slightly unnerved.

A few months later I had a date with a girl who worked for The Economist. I told her at great length about my nightmare and implored her to reassure me about the safety of the Economist's finances. Instead of doing this she just looked kinda concerned and started answering questions quite briefly.

Since she was unable to, could you confirm for me that The Economist is doing ok?

theeconomist92 karma

If you want The Economist to live on forever like I do, the best thing you can do is ask your friends and family to subscribe! https://www.economist.com/subscribe/

fabulousfantabulist57 karma

Which state or congressional district outcome are you most looking forward to seeing? Is there one in particular that you think would prove your model superior to the others on offer?

theeconomist110 karma

Two answers. First, the MT senate race will tell us a lot about the potential diminishing returns to fundraising data in small media markets. And second, early voting numbers in Texas make it look a lot bluer than the polls suggest. That could be a point of weakness for our model, since it also hedges against the polls with the implied result based on the national swing since 2016.

PigNasty51 karma

Do you vote in elections? And do you think pollsters and/or modellers should vote? It was interesting to learn that Ann Selzer (an excellent pollster) never votes in elections!

theeconomist117 karma

Yeah, I see no reason why we can't separate our political opinions from our modeling work.

dirtystacks48 karma

I know you didn't have a model in 2016, so this question might not have an answer. But what chances would your model have given Hillary in 2016? And that question flipped... what do you think a 2016 model would say about the 2020 election?

theeconomist40 karma

Check out our GitHub repo, which has predictions for 2008-2016 and a lot of calibration/diagnostic info too! https://github.com/TheEconomist/us-potus-model

Calgakus43 karma

This election is unique because of (1) the pandemic itself, (2) increased use of mail-in ballots, and (3) a partisan divide with regards to mail-in ballots. To what extent do these novel factors add uncertainty and increase the chance of a large, systematic polling error?

theeconomist84 karma

I am going to say something that I think other forecasters might disagree with, but for which we have a ton of evidence: Covid-19 has not mattered at all for modeling the fundamental dynamics of variance in the polls. The assertion that politics are more unpredictable right now because of covid-19 seems plainly wrong. See these charts for more.

But we might still run into problems with vote-counting, which our models can't evaluate quantitatively. That's outside the scope of what we predict.

IamnotHorace42 karma

Hi Elliott, most election models are based on predictions of who will come out to vote.

What effect will the seemingly larger-than-normal number of people voting in this election, via mail-in and early voting, have on those models?

What, if anything, can the higher participation levels tell us, and in what ways might this affect the election?

theeconomist83 karma

Horace — Good question. We have found that higher turnout benefits Democrats. See this piece from last year.

Polls that ask voters if they will turn out will already be accounting for this.

IamnotHorace109 karma

Thanks for answering.

But, I am NOT Horace.

theeconomist77 karma

Are you sure?

Three_Amigos41 karma

How does your model differ from Nate Silver’s 538 Bayesian model? What assumptions do you build in that he does not or vice versa?

theeconomist96 karma

The biggest difference this year was that Nate added 30% extra volatility and polling error because of various uncertainties from covid-19. Much of that extra error has disappeared from his model by now, but it still explains a lot of the residual difference between our models. Early on, we bet that political polarization and the "fundamentals" of the election would create a large and stable lead for Biden that, on average, would persist to election day. I think we were right, but there's no way to know for sure unless we repeat the election a million times over.

R_K_M67 karma

I'm not Elliott, but I have stalked them enough on Twitter to delude myself into thinking that I can answer that.

First off, I think it's important to point out what is similar between the two models. They both have somewhat sophisticated poll-averaging models. They both try to estimate EV outcomes by looking at the outcomes of the individual states. They both have extensive correlation matrices to estimate how the results are related between the states. They both have a list of factors to introduce uncertainty into the models. Despite their "Twitter forecasting wars", their models are more alike than different.

The differences are legion, but mostly minute. Sadly, 538 hasn't opened up its model, so it's impossible to compare everything. We can make a list of some of the most important or interesting differences by looking at some of Nate's/538's statements and some reverse engineering:

  1. Nate Silver generally uses much more uncertainty in his models. This is partly due to him just using fatter tails in general, partly because he introduced some "fuzzy" measurements such as the number of full-width headlines in the NYT or special covid uncertainties, and partly because Elliott has introduced a "polarisation" term, which basically states that polls move less now than they did 50 years ago. In general their models should slowly converge the closer we come to election day, as a lot of this uncertainty isn't about how wrong polls are, but about how much polls will move until election day.

  2. The 538 model has some negative correlations between states. I.e. if Trump is underestimated by 1% in state X, they think polls will underestimate Biden by 0.y% in state Y. The Economist's model caps the correlation at 0% instead, because they say that negative correlations are nonsensical.

  3. They have different priors. E.g. Elliott uses presidential approval as an important measure to estimate the vote share months before the election, but Nate says that approval is not useful as a fundamental because it is itself subject to all the same downsides as head-to-head polls. Again, this difference should get reduced the closer we are to the election, because the fundamentals won't play a large role anymore. I think Nate was more bullish on Trump than Elliott from a fundamentals sense?

  4. I think (?) Nate mostly uses state polls to estimate the vote in each state and only uses national polls to fill in the gaps in states where little polling is done. Elliott, on the other hand, does use national polls plus a "uniform swing" model to a small extent to estimate the vote share in each state.

  5. The way their polling averages are done is slightly different. Both use a model that takes into account "house effects" of the individual polling firms, but Elliott additionally penalises pollsters who don't weight by education and/or don't weight by past vote/party registration.

I think this is mostly it? Please correct anything that is wrong.

theeconomist40 karma

This is a pretty comprehensive list. The only thing I would add is to #5, to clarify that our house effects are dynamic and based on detecting consistent differences between a pollster's numbers and the average in any given state. This is different from 538's specification, in which house effects are a sort of residual from errors in predicting past election results. This means that 538's are much smaller, and probably less able to pick up on huge house effects from the likes of Rasmussen, Trafalgar or polls for Trump's Super PAC.
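As a rough sketch of the "difference from the average" idea (our actual house effects are estimated jointly inside the model; the `polls` data frame below is invented for illustration):

```r
# Toy dynamic house effect: a pollster's mean deviation from the
# all-pollster average in the same state. Data are made up.
polls <- data.frame(
  pollster = c("A", "A", "B", "B", "C", "C"),
  state    = c("WI", "PA", "WI", "PA", "WI", "PA"),
  margin   = c(8, 5, 1, -2, 9, 6)   # Biden margin, percentage points
)

# Stand-in for the model's smoothed, time-varying state average:
state_avg <- ave(polls$margin, polls$state, FUN = mean)

# House effect: positive = leans Biden, negative = leans Trump.
sort(tapply(polls$margin - state_avg, polls$pollster, mean))
```

In this toy example pollster B comes out about 5 points more Trump-friendly than the state average, which is the kind of signal a residual-based specification tends to shrink.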

YumChickens40 karma

What tools (free ideally) would you recommend to use if I wanted to get into mapping or polling analysis?

All this stuff you do is so cool but I never know where to start.

theeconomist76 karma

I use the statistical programming language R the most often. It's free and there are tons of resources online — as well as a community for advice that is relatively welcoming of newcomers.

mmm_toasty37 karma

Hi Elliott, thanks for doing this AMA!

First, I wanted to let you know how deeply offended I am that you haven’t followed me on Twitter.

Second, how did you get into data journalism in the first place? Do you have recommendations for people interested in political forecasting who already have a significant background in statistics/data science?

theeconomist56 karma

Hey toasty — Don't take it personally, Twitter is a hellscape anyway.

On the second, more serious question, I usually give four recommendations to young people who want to get started in political data journalism:

  1. Practice writing
  2. Read a lot in your target domain
  3. Learn to code (I prefer R)
  4. Start a blog (which helps you practice all of the above).

EricGreen123434 karma

Do you think the unrest in Philly will move PA in one direction or another?

theeconomist49 karma

If it matters (and I don't think it will), it will get picked up in the polls.

cbsteven31 karma

Do you think the weird correlations between states in the 538 model are due to a bug?

This for people who don’t obsessively follow model wars on Twitter

theeconomist27 karma

Yes, I agree with Andrew on this and don't have much more to add than what we discussed in the comments of that post.

Belostoma14 karma

I'm not OP, but it would be pretty weird to have a bug that causes those issues without anything more noticeable getting caught. In my experience as a modeler (in a different field), it's far more likely that this is an unanticipated artifact of some model design choices that make sense at first glance but combine to produce a quirky result. I've had things like that happen many times and had to dig deep into the model to figure out what to adjust.

theeconomist6 karma

Yeah, that's my suspicion as well.

Chance_Shape420126 karma

Your model shows Biden@70% in FL? However, most recent EV and VBM numbers plus recent polls indicate a much tighter race (or a Republican advantage). Any comments?

theeconomist57 karma

Our model has Biden up 3 in FL. I stand by that average.

quixotemoses25 karma

What would happen if there were no polls at all (other than you being out of a job)? Not trying to be a troll. I genuinely wonder what value polling offers voters. I understand how it might help campaigns, but it doesn't seem like it helps voters in any meaningful way.

theeconomist32 karma

u/Hobbes_Novakoff gave a really good answer below. I'm writing a book about this too. It's supposed to be out next year, if I can get enough sleep after the election to write it!

Topher199923 karma

What do you anticipate being the biggest "surprise" on Election Day? I'm sure the polls are missing something you're picking up on.

theeconomist39 karma

Topher — that's a good question. I think the turnout data in Texas are prompting a lot of justified enthusiasm about the Democratic odds there. And given that polls in TX in the past 2 elections have underestimated Democrats, I wouldn't be too surprised if Biden wins it, even though we have him at 30% there.

dialleft21 karma

40+ year subscriber to The Economist. Data journalism, like the daily charts, is one of its best features. Love the Big Mac Index. Is there an issue with the app version of The Economist that makes it difficult to include all the charts? Does your election forecast appear in the app? If so, I missed it.

Also, it appears that The Economist prefers folks with subject expertise (and maybe a quirky sense of humor) who can write, versus journalism majors. Per LinkedIn, you fit that description. Is my perspective about The Economist's hiring practices correct?

You’re doing a terrific job. Thank you.

theeconomist15 karma

I'll forward your various questions about the Economist app, as I do not know the answer to any of them.

I would say that's partially right, yeah. I was certainly hired with my expertise in US politics + polling in mind.

murphysclaw118 karma

Can you explain in a bit more detail your decision to remove Trafalgar Polling from your model?

With other forecast models, they basically only exclude pollsters if they think they are genuinely creating fake polls. Is this what you think has happened with Trafalgar?

theeconomist26 karma

Answered above. Ctrl + F

BroadCityChessClub18 karma

How do you balance the importance of incorporating historic data with the threat of overfitting your model to the past?

theeconomist28 karma

We use a train-validate-test split using leave-one-out cross-validation to ensure we're not over-fitting. So far this is working out pretty well for us!

hondaacura2017 karma

Hi Elliott! I appreciate the work that you do. If many pollsters have improved their methodology and weight by education compared to 2016, do you expect less of a polling error this time? How does the model account for uncertainty relating to polling errors or potential October surprises?

theeconomist24 karma

The short answer is yes, I do expect polls to be a bit more accurate this year.

The more complicated answer is that it's hard to predict how polls will be biased from any one year to the next, so there could be a new source of error this year. I have talked about so-called "differential partisan non-response" bias a lot. You can google my tweets on this.

NelsonMinar9 karma

Do you have any thoughts on responsible election-night reporting of results as ballot counts come in? The rush to declare a winner is always a mess, but will triply be so this year because of all the mail-in ballots. Particularly curious if you have any ideas of how to visualize the statistical uncertainty. Other than an anxiety needle, that is ;-)

Also is The Economist going to do a live scoreboard election night? Seems off-brand for a weekly newspaper!

theeconomist10 karma

No live forecast at The Economist, though you can follow along on my Twitter for some live analysis and predictions.

Crownie6 karma

There's been a lot of... heat over the differences between the Economist's model and 538's, particularly with regard to the handling of uncertainty. Do you think there is a methodological problem leading to the divergence, a philosophical difference, or just two teams forced to make choices with limited information?

Also, is there any manual pruning or curation in your model, or do you publish the output as is?

theeconomist16 karma

I think you'd like this academic paper that we published on exactly this subject.

dwinddy6 karma

Why do you toss out firms like Trafalgar? Wouldn’t it provide some balance to an ABC +17 result in Wisconsin to get a more conservative view on what the electorate will look like?

theeconomist20 karma

We toss out data from pollsters that meet any of the following conditions:

  1. Run polls with biased questions that could prime respondents to answer a certain way
  2. Have a history of faking data or otherwise biasing/making up numbers
  3. Sample responses from sources that are so unrepresentative that the data can't be weighted to account for differences (eg, MTurk)
  4. Won't tell me (either publicly or over the phone) the methodology by which they reach their numbers

Trafalgar violates at least 2 of these conditions, so I toss them out.

TehWhiteRose3 karma

Does ABC lean Democratic the same way Trafalgar leans Republican? (this is a good faith question, btw)

theeconomist5 karma

Definitely not. Our model's house effects picked up a 6-7 percentage point (and statistically significant) bias toward Trump for Trafalgar, and a measly 1-point advantage to Biden for ABC/WaPo polls. So, this is a bit of a false equivalency.

Matonly1T6 karma

Scenario question - What are likely pathways for a Biden victory that don't include Pennsylvania?

Extra question, are you aware of any polling questions surrounding the suspension of the extended unemployment benefit?

theeconomist21 karma

AZ + NC + MI gets him past 270

miscsubs5 karma

Hi Elliott,

Three questions:

  1. Approval polls: they are definitely a good signal, but do they amplify the existing signals (polls) or do they complement them and provide new information? How did you decide to include them in your projections? Do you think they would be similarly useful in other elections, like senate, house, governor, etc.?

  2. In one tweet reply you said The Economist hasn't paid you a dime yet. Um, why?

  3. You're pretty opinionated when it comes to politics. Do you worry it could cloud your judgement or introduce bias in your forecasts?

theeconomist13 karma

  1. Approval polls are definitely good at predicting vote shares. We include them in our model.
  2. I don't know what you're talking about, The Economist definitely pays me (or else, who has been putting that money into my bank account!?!?)
  3. We are an editorial paper with beliefs and opinions, but we are very careful not to let that color our empirical analyses. I actually think it's better for forecasters to be aware of their biases than to pretend they're neutral savants of non-partisan, impartial, oracular prediction.

AwakenedEyes4 karma

How can you attempt a realistic prediction, with the incredible amount of shenanigans going on, from voter suppression to intimidation, to postal delays, lost ballots and fake ballot drop boxes, not to mention micro advertising, troll farms and Russian hacks?

theeconomist14 karma

To the degree that those things have happened in the past, our model takes them into account in the error term. But new efforts from some party actors to try to get ballots discarded or stoke fears of voting could be a new source of error that we can't model. Still, I have calculated that they are relatively small, and won't matter much in a Biden blowout.

gorbot3 karma

Does your model add error by region? I saw a tweet that showed the average polling error by Midwest, Southwest, etc., for the past 2 elections. Does the model use previous error data to predict 2020 error and account for that? Thx!

Edit: I was referring to this https://twitter.com/redistrict/status/1321273340007534594?s=21

theeconomist5 karma

We detect regional differences in the states based on their demographic and geographical characteristics. Scroll down on this page to see more detail.

murphysclaw13 karma

What is Trump's most plausible route to victory?

theeconomist12 karma

He can afford to lose MI and WI and still win the electoral college, but even that doesn't seem very likely.

vogon1013 karma

You discussed (and detailed on your GitHub) the backtesting you did on your model. How did you separate your training and validation sets to perform this?

theeconomist3 karma

We use a leave-one-out cross-validation split, where we train the model for every year on all the other years of data, never letting it see any information about the target year.
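In miniature, the loop looks like the sketch below, with invented data and a deliberately trivial two-variable model, just to show the leave-one-year-out mechanics:

```r
# Leave-one-year-out backtesting on toy data: for each election year,
# fit on every other year and score the held-out year.
set.seed(1)
toy <- data.frame(
  year  = rep(seq(2000, 2016, by = 4), each = 10),
  prior = runif(50, 0.35, 0.65)        # e.g. last election's Dem vote share
)
toy$result <- toy$prior + rnorm(50, 0, 0.02)

loo_rmse <- sapply(unique(toy$year), function(target) {
  fit  <- lm(result ~ prior, data = subset(toy, year != target))
  pred <- predict(fit, newdata = subset(toy, year == target))
  sqrt(mean((pred - subset(toy, year == target)$result)^2))
})
setNames(round(loo_rmse, 4), unique(toy$year))
```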

Isaact7142 karma

Great job so far on this election cycle.

Question regarding FL early voting. We have seen a sizeable Democratic lead in returned ballots so far, but every day that passes the GOP is chipping into that number.

How did 2018's early voting results in FL (at this time 6 days prior to the election) compare to the results there so far?

theeconomist3 karma

This is easily explainable and not worth the alarms that others are sounding because of it.

We know from polls that mail-in ballots favor Dems massively this year and that early in-person ballots are closer to 50-50, maybe even favoring Trump.

So as we get more early in-person votes in Florida, the margin in early votes will trend back toward 50-50. This is not a sign of the tide turning in Florida but just a product of the way partisans have chosen different vote methods this year.
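The arithmetic behind that is simple. A sketch with made-up method-level margins (the real splits come from polls of early voters, not these numbers):

```r
# Blended early-vote margin as in-person ballots grow as a share of
# the early vote. Both method-level margins are assumptions.
mail_margin      <- 0.30    # assumed: mail ballots heavily Democratic
in_person_margin <- -0.02   # assumed: early in-person slightly Republican

in_person_share <- seq(0, 0.6, by = 0.2)
blended <- (1 - in_person_share) * mail_margin + in_person_share * in_person_margin

round(data.frame(in_person_share, blended_margin = blended), 2)
```

The headline margin shrinks every day even though nobody's vote intention has changed.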

maxstolfe2 karma

Hello! Thanks for doing this, and for your constant stream of updates on Twitter. I have followed you and some of the other prominent analysts near-religiously for what feels like years now in this new world. I have two questions:

1) You tweeted this morning that the model is getting close to uncertainty interval separation. What does it mean exactly if/when the model's electoral college projection does separate?

2) I remember back in 2016 and 2012, the election projection models stayed largely consistent up to Election Day, but changed significantly ON Election Day - especially 2016, of course. FiveThirtyEight and the New York Times models, I remember, went from something like 90% for Hillary the morning of to 50/50 and by evening had swung to Trump's favor. It was night and day, which definitely left me totally shocked and unprepared. There are so many unprecedented factors to try to account for in this election (the dueling crises, Biden's consistent lead, stability in the race, vote-by-mail, the list goes on), but could you explain whether a similar projection swing like we saw in 2016 is possible next Tuesday, what it could look like, why you think so / think not, your own expectations and what you are mentally preparing yourself for?

Thank you so much!!

theeconomist6 karma

I will answer the first question. What I mean when I say we're getting close to separation in the uncertainty interval is that Biden's probability of victory is approaching the traditional one-sided test for statistical significance, p = 0.025 or a 97.5% chance of winning. But p thresholds are kinda made up anyway so.
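Mechanically, with a vector of simulated electoral-vote totals (`ev_draws` below is invented; the real draws come from the model), separation just means the bottom of the 95% uncertainty interval clears 269:

```r
# Invented draws for illustration; the real ones come from the model.
set.seed(3)
ev_draws <- round(rnorm(40000, mean = 350, sd = 40))

quantile(ev_draws, 0.025)  # lower bound of the 95% interval
mean(ev_draws >= 270)      # win probability; separation roughly means this > 0.975
```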

Schroederlaw1 karma

Hi Elliott. Based on the change in demographics alone, and everything else (turnout, vote share going R vs. D, etc.) staying the same, would the 2020 electorate elect Biden or Trump? What states would shift?

theeconomist2 karma

Biden, though I don't have a link at hand for you.

physicianmusician1 karma

How would you respond to the theoretical results from this paper that state even a large sample size won't save you?!

https://statistics.fas.harvard.edu/files/statistics-2/files/statistical_paradises_and_paradoxes.pdf

theeconomist5 karma

"Won't save you" is the operative phrase here -- we have a ton of empirical evidence that polls are good at measuring the "will of the people," on average, but aren't oracular predictions of the future. Once you start thinking of polls as measurements instead of crystal balls, you start to realize their limits.