A survey about future behavior is not future behavior

People lie their asses off (fine: give incorrect answers) to survey questions about future actions all the time.  This is not news.  Any analysis that requires treating such survey results as reality should be careful to validate them in some way first, and when simple validation tests show that the responses are complete bullshit (significantly inaccurate in systematically biased ways), well, the rest of the analysis is quite suspect, to say the least.

Let’s start with a simple hypothetical.  You see a movie on opening weekend (in a world without COVID concerns).  You like it and recommend it to a friend at work on Monday.  He says he’ll definitely go see it.  6 weeks later (T+6 weeks), the movie has left the theaters and your friend never did see it.  Clearly his initial statement did not reflect his behavior.  Was he lying from the start? Did he change his mind?

Let's add one more part to the hypothetical.  After 3 weeks (T+3 weeks), you ask him if he's seen it, and he says no, but he's definitely going to go see it.  Without doing pages of Bayesian analysis to detail every extremely contrived behavior pattern, it's a safe conclusion under normal conditions (the friend actually does see movies sometimes, etc.) that his statement is less credible now than it was 3 weeks ago.  In most of the worlds where he was telling the truth initially, he would have seen the movie by now.  In most of the worlds where he was lying, he would not have.  So compared to the initial statement three weeks ago, the new one is weighted much more heavily toward lies.

There's also another category of behavior: he was actually lying before but has since changed his mind and is now telling you the truth, or he was telling you the truth before but has changed his mind and is lying now.  Even if you somehow knew with certainty (at T+3 weeks) that he had changed his mind one way or the other, you still wouldn't have great confidence in which statement was true and which one wasn't.

But once another 3 weeks pass without him seeing the movie, by the same reasoning above, it’s now MUCH more likely that he was in the “don’t want to see the movie” state at T+3 weeks, and that he told the truth early and changed his mind against the movie.  So at T+6 weeks, we’re in a situation where the person said the same words at T and T+3, but we know he was 1) much more likely to have been lying about wanting to see the movie at T+3 than at T, and 2) at T+3, much more likely to have changed his mind against seeing the movie than to have changed his mind towards seeing the movie.

Now let’s change the schedule a little.  This example is trivial, but it’s here for completeness and I use the logic later.  Let’s say it’s a film you saw at a festival, in a foreign language, or whatever, and it’s going to get its first wide release in 7 weeks.  You tell your friend about it, he says he’s definitely going to see it.  Same at T+3, same at T+6 (1 week before release).  You haven’t learned much of anything here- he didn’t see the movie, but he *couldn’t have* seen the movie between T and T+6, so the honest responses are all still in the pool.  The effect from before- more lies, and more mind-changing against the action- arises from not doing something *that you could have done*, not just from not doing something.
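To make "less credible now than it was 3 weeks ago" concrete, here's a toy Bayesian version of the movie example.  All of the probabilities are made up purely for illustration (and it ignores the mind-changing category for simplicity), but it shows both effects: the same statement losing credibility as opportunities to act pass unused, and no loss at all when there was no opportunity.

```python
# Toy Bayesian update for the movie example.  All probabilities are invented for illustration.
def posterior_honest(prior_honest, p_seen_if_honest, p_seen_if_lying=0.0):
    """P(he genuinely intended to see it | he hasn't seen it yet), via Bayes' rule."""
    p_not_seen_if_honest = 1 - p_seen_if_honest
    p_not_seen_if_lying = 1 - p_seen_if_lying
    numerator = prior_honest * p_not_seen_if_honest
    return numerator / (numerator + (1 - prior_honest) * p_not_seen_if_lying)

# At T you take him at his word 80% of the time.
# By T+3 weeks, someone who really meant it would have seen the movie ~70% of the time.
print(posterior_honest(0.80, 0.70))  # ~0.55 -- same words, noticeably less credible
# By T+6 weeks, ~90% of honest intenders would have seen it.
print(posterior_honest(0.80, 0.90))  # ~0.29
# The festival-film case: the movie wasn't released yet, so nobody could have seen it.
print(posterior_honest(0.80, 0.00))  # 0.80 -- no update at all
```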

The data points in both movie cases are exactly the same.  It requires an underlying model of the world to understand that they actually mean vastly different things.

This was all a buildup to the report here https://twitter.com/davidlazer/status/1390768934421516298 claiming that the J&J pause had no effect on vaccine attitudes.  This group has done several surveys of vaccine attitudes, and I want to start with Report #43, a survey done in March, and focus on the state-by-state data at the end.

We know what percentage of eligible recipients had been vaccinated (in this post, vaccinated always means having received at least one dose of anything) in each state as of when I pulled CDC data on 5/9, so we can compare the survey results to that number.  First off, everybody (effectively every marginal person) in the ASAP category could have gotten a vaccine by now, and effectively everybody in the "after some people I know" category has seen "some people they know" get the vaccine.  The sum of Vaxxed + ASAP + After Some comes out, on average, 3.4% above the actual vaccinated rate.  That, by itself, isn't a huge deal.  It's slight evidence of overpromising, but not "this is total bullshit and can't be trusted" level.  The residuals, on the other hand...

Excess vaccination = actual % of adults vaccinated (CDC, 5/9) – (survey Vaxxed% + ASAP% + After Some%)

State  Excess vaccination (percentage points)
MS -17.5
LA -15.7
SC -15.3
AL -12.9
TN -12.5
UT -10.5
IN -10.2
WV -10.1
MO -9.5
AK -8.7
TX -8.5
NC -7.9
DE -7.8
WA -7.6
NV -7.4
AZ -7.3
ID -7.2
ND -7.2
GA -7
MT -6.2
OH -5.1
KY -5
CO -4
AR -3.9
FL -3.7
IA -3.5
SD -3.4
MI -3.3
NY -2.7
KS -2.2
CA -2.1
NJ -1.8
VA -1.8
WI -1.7
OR -1.5
WY -1.2
HI -0.7
NE -0.4
OK 1.3
PA 2.3
IL 2.5
CT 3.4
RI 3.9
MN 4.1
MD 5.5
VT 7.8
ME 10.2
NH 10.7
NM 11.1
MA 14.2

This is practically a red-state blue-state list.  Red states systematically undershot their survey numbers, and blue states systematically overshot them.  In fact, correlating excess vaccination % with Biden%-Trump% gives an R^2 of 0.25, while the R^2 of a linear regression of the survey-implied numbers against the actual CDC vaccination %s on 5/9 is only 0.39.  Partisan response bias is a big thing here, and the big takeaway is that answer groups are not close to homogeneous and can't properly be modeled as collections of identical respondents.  There are plenty of liars/mind-changers in the survey pool, many more than could be detected by just looking at one top-line national number.
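For anyone who wants to reproduce this kind of check, here's a minimal sketch of the two calculations.  The arrays below are placeholder values just to show the shape of the computation; the real inputs would be the per-state survey shares from Report #43, the CDC 1+ dose percentages pulled on 5/9, and the 2020 two-party margins.

```python
import numpy as np

# Placeholder per-state values (illustrative only, not the real survey/CDC data).
survey_implied = np.array([63.0, 55.0, 71.0, 48.0, 66.0])   # Vaxxed% + ASAP% + After Some%
actual_vaxxed  = np.array([46.0, 42.0, 74.0, 40.0, 69.0])   # CDC 1+ dose % on 5/9
biden_margin   = np.array([-16.0, -20.0, 33.0, -25.0, 7.0]) # Biden% - Trump%, 2020

excess = actual_vaxxed - survey_implied  # negative = state fell short of its survey

def r_squared(x, y):
    """R^2 of a simple least-squares fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1 - residuals.var() / y.var()

print(r_squared(biden_margin, excess))           # excess vaccination vs. partisan lean
print(r_squared(survey_implied, actual_vaxxed))  # survey-implied vs. actual CDC %
```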

Respondents in blue states who answered ASAP or After Some are MUCH more likely to have been vaccinated by now, in reality, than respondents in red states who answered the same way (ASAP:After Some ratio was similar in both red and blue states).  This makes the data a pain to work with, but this heterogeneity also means that the MEANING of the response changes, significantly, over time.

In March, the ASAP and After Some groups were composed of people who were telling the truth and people who were lying.  As vaccine opportunities rolled out everywhere, the people who were telling the truth and didn't change their mind have (effectively) all gotten vaccinated by now, and the liars and mind-changers mostly have not.  By state average, 46% of people answered ASAP or After Some, and 43% got vaccinated between the survey and May 8 (including some from the After Most or No groups, of course).  I can't quantify exactly how many of the ASAP and After Some answer groups got vaccinated (in aggregate), but it's hard to believe it's under 80%, and it could well be 85%.
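As a back-of-envelope check on that range, here's the arithmetic with the figures quoted above.  The share of post-survey vaccinations assumed to come from the After Most / No groups is my guess for illustration, not a number from the report.

```python
# Back-of-envelope conversion rate for the March (Report #43) favorable groups.
favorable_share   = 0.46  # answered ASAP or "after some people I know" in March
vaxxed_since      = 0.43  # share of adults getting a first dose between the survey and May 8
from_other_groups = 0.04  # assumed share of that 43% coming from After Most / No respondents

conversion = (vaxxed_since - from_other_groups) / favorable_share
print(f"implied conversion of the ASAP/After Some groups: {conversion:.0%}")  # ~85%
```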

That means most people in the two favorable groups were telling the truth, but there were still plenty of liars as well, so that group has morphed from a mostly honest group in March to a strongly dishonest group now.  The response stayed the same- the meaning of the response is dramatically different now.  People who said it in March mostly got vaccinated quickly.  Now, not so much.

This is readily apparent in Figure 1 in Report #48.  That is a survey done throughout April, split into before, during, and after the J&J pause.  Their top-line numbers for people already vaccinated were reasonably in line with CDC figures, so their sampling probably isn't total crap.  But if you add the numbers up, Vaxxed + ASAP + After Some is 70%, and the actual vaccination rate as of 5/8 was only 58%.  About 16% of the US got vaccinated between the median date of the pre-pause survey and 5/8, and from other data in the report, 1-2% of that was likely to be from the After Most or No groups, so 14-15% got vaccinated from the ASAP/After Some groups, and that group comprised 27% of the population.  That's a 50-55% conversion rate (down from an 80+% conversion rate in March), and every state had been fully open for at least 3 weeks.  Effectively any person capable of filling out their online survey who made any effort at all could have gotten a shot by now, meaning that aggregate group was now up to almost half liars.

During the pause, about 8% got vaccinated between that wave's midpoint and 5/8, with ~1% of that from the After Most and No groups, so 7% vaccinated out of 21% of the population, meaning that aggregate group was now 2/3 liars.  And after the pause: 4% vaccinated, maybe 0.5% from After Most and No, out of 18% of the population, so the aggregate group is now ~80% liars.  The same responses went from 80% honest (really were going to get vaccinated soon) in March to 80% dishonest (not so much) in late April.
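Putting the three waves side by side, using the percentages quoted above (the split of post-survey vaccinations credited to the After Most / No groups is approximate):

```python
# Rough conversion-rate arithmetic for the three waves in Report #48, Figure 1.
# Each entry: (% of US vaccinated between that wave's midpoint and 5/8,
#              approximate share of that coming from the After Most / No groups,
#              ASAP + After Some share of the population in that wave).
waves = {
    "pre-pause":    (16.0, 1.5, 27.0),
    "during pause":  (8.0, 1.0, 21.0),
    "post-pause":    (4.0, 0.5, 18.0),
}

for wave, (vaxxed, from_unfavorable, favorable) in waves.items():
    conversion = (vaxxed - from_unfavorable) / favorable
    print(f"{wave}: ~{conversion:.0%} of the ASAP/After Some group actually got a shot")
# roughly 54%, 33%, and 19% -- the same favorable answers kept meaning less and less
```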

The crosstabs in Figure 2 (still Report #48) also bear this out.  In the ASAP group, 38% really did get vaccinated ASAP, 16% admitted shifting to a more hostile posture, and 46% still answered ASAP, except we know from Figure 1 that this means ~16% were ASAP-honest and ~30% were ASAP-lying (maybe a tad more honest here and a tad less honest in the After Some group below, but in aggregate, it doesn't matter).

In the After Some group, 23% got vaccinated and 35% shifted to actively hostile.  9% upgraded to ASAP, and 33% still answered After Some, a response that is more dishonest than it was initially.  This is a clear downgrade in sentiment even if you don't acknowledge the increased dishonesty, and an even bigger one if you do.

If you just look at the raw numbers and sum the top 2 or 3 groups, you don't see any change, and the hypothesis of J&J not causing an attitude change looks plausible.  Except we know, by the same logic as the movie example above, that the same words *coupled with a lack of subsequent action* (not getting vaccinated between the survey and 5/8) carry a heavily discounted meaning.

Furthermore, we know from Report #48, Figure 2 that plenty of people self-reported mind-changes, and we have a pool that contained a large number of people who lied at least once (because we're way, way below the implied 70% vax rate, and also the red-state/blue-state thing), so it would be astronomically unlikely for the "changed mind and then lied" group to be a tiny fraction of the favorable responses after the pause.  These charts show the same baseline favorability (Vaxxed + ASAP + After Some), but the correct conclusion, because of the lack of subsequent action, is that this most likely reflected a good chunk of mind-changing against the vaccine and lying about it, on top of the people who were lying all along, and that "effectively no change in sentiment" is a gigantic underdog to be the true story.

If you attempt to build a coherent model of the world in which the survey responses, the actual vaccination data, actual vaccine availability over time, and the hypothesis that the J&J pause caused no attitude change all fit together, it simply doesn't work at all.  The alternative model, selection bias turning the ASAP and After Some groups increasingly and extremely dishonest (as the honest got vaccinated and left the response group while the liars remained), fits the world perfectly.

The ASAP and After Some groups were mostly honest when it was legitimately possible for a group of that size to not have vaccine access yet (not yet eligible in their state or very recently eligible and appointments full, like when the movie hadn’t been released yet), and they transitioned to dishonest as reality moved towards it being complete nonsense for a group of that size to not have vaccine access yet (everybody open for 3 weeks or more).


P.S. There's another line of evidence, one that has nothing to do with the data in the report, that strongly suggests attitudes really did change.  First of all, comparing the final pre-pause week (4/6-4/12) to the first post-resumption week (4/24-4/30, or 4/26-5/2 if you want lag time), the vaccination rate was down 33.5% (35.5% with the lagged window) post-resumption and was down in all 50 individual states.  J&J came back and everybody everywhere was still in the toilet.  Disambiguating an exhaustion of willing recipients from a decrease in willingness is impossible using just US aggregate numbers, but grouping states by when they fully opened to 16+/18+ gives a clearer picture.

Group 1 is 17 states that opened in March.  Compared to the week before, these states were +8.6% in the first open week and +12.9% in the second.  This all finished before (or the exact day of) the pause.

Group 2 is 14 states that opened 4/5 or 4/6.  Their first open week was pre-pause and +11%, and their second week was during the pause and -12%.  That could have been supply disruption or attitude change, and there’s no way to tell from just the top-line number.

Group 3 is 8 states that opened 4/18 or 4/19.  Their prior week was mostly paused, their first open week was mostly paused, and their second week was fully post-resumption.  Their opening week was flat, and their second open week was *-16%* despite J&J returning.
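For concreteness, this is the shape of the calculation behind those group numbers.  The dose counts below are placeholders; the real inputs would be CDC first-dose counts per state per week around each state's opening date.

```python
# Sketch of the week-over-week comparison for a group of states that opened on the same date.
# Placeholder weekly first-dose counts; the real inputs would come from CDC per-state data.
weekly_first_doses = {
    # state: [week before opening, first open week, second open week]
    "state_A": [50_000, 50_500, 42_000],
    "state_B": [80_000, 79_500, 67_000],
}

def pct_change(new, old):
    return 100.0 * (new - old) / old

prior = sum(v[0] for v in weekly_first_doses.values())
week1 = sum(v[1] for v in weekly_first_doses.values())
week2 = sum(v[2] for v in weekly_first_doses.values())
print(f"first open week vs. prior week:  {pct_change(week1, prior):+.1f}%")
print(f"second open week vs. first week: {pct_change(week2, week1):+.1f}%")
```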

We would have expected a week-1 bump and a week-2 bump.  It’s possible that the lack of a week-1 bump was the result of running at full mRNA throughput capacity both weeks (they may have even had enough demand left from prior eligibility groups that they wouldn’t have opened 4/19 without Biden’s decree, and there were no signs of flagging demand before the pause), but if that were true, a -16% change the week after, with J&J back, is utterly incomprehensible without a giant attitude change (or some kind of additional throughput capacity disruption that didn’t actually happen).

The "exhaustion of the willing" explanation was definitely true in plenty of red states, where vaccination rates were clearly going down before the pause even happened, but it doesn't fit the data from the late-opening states at all.  Those numbers make absolutely no sense without a significant change in actual demand.

 
