Wednesday, January 23, 2013

Public Opposition to Drones in Pakistan - A Question of Wording

Professors C. Christine Fair, Karl Kaltenthaler, and William J. Miller have a new article in the Atlantic on public attitudes toward U.S. drone attacks in Pakistan. The piece is a shortened version of a longer working paper that looks at the factors affecting knowledge of and opposition to the drone program among Pakistanis. They argue that Pakistani citizens are not as universally opposed to the drone program as is commonly believed and that public opinion is fragmented. According to Pew Research, only a slim majority report that they know anything about the drone program, and of those who do know "a lot" or "a little," only 44% say that they oppose the attacks.

Fair, Kaltenthaler, and Miller valuably point out that Pakistani attitudes are not homogeneous and that only a minority of Pakistanis even know about the drone program. Making broad, sweeping claims about Pakistani opinion on the drone program is difficult because the average citizen tends to know little about foreign policy issues (this is just as true in the United States as it is in Pakistan).

However, I think Fair, Kaltenthaler, and Miller may be going too far in asserting that "the conventional wisdom is wrong." Their argument that only 44% of Pakistanis oppose the drone program is highly sensitive to the choice of survey question. While Pew asks a series of questions on attitudes toward drones, Fair et al. choose to focus on only one of them. In doing so, they place a lot of faith in the reliability of that question as an indicator of respondents' opposition to the drone program. This is a persistent issue in all forms of survey research: scholars are interested in some unobservable quantity (public opinion) and have to use proxies (survey responses) to infer what cannot be directly observed. They must assume that their proxy is a good one.

Looking at the other survey questions suggests that this faith may be misplaced. Although only 44% of respondents say that they "oppose" drone attacks, 74% of respondents think that the drone attacks are "very bad" and 97% think that they are either "bad" or "very bad." If both questions were proxying for the same latent variable, we would not expect this extreme gap. If 17% of respondents "support" drone strikes and only 2% of respondents think that drone attacks are a good thing, then the vast majority of those who say they "support" strikes must also think that they are "bad" or "very bad" - a strange puzzle. While it's not inconceivable for people to say that they support policies that they think are bad, a more likely explanation is that respondents' answers are strongly affected by the way the questions are worded and that the question used by Fair et al. may not be a good proxy for the quantity of interest.
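To make the arithmetic behind that puzzle explicit, here is a quick back-of-envelope sketch. It assumes only that both questions were put to the same pool of informed respondents (as in the Pew survey) and that the published percentages are shares of that pool.

```python
# Back-of-envelope bound: how many "supporters" must also rate the strikes badly?
# Assumes both questions were asked of the same pool of informed respondents,
# and treats the published percentages as shares of that pool.
support = 0.17            # say they would "support" the strikes
good_or_very_good = 0.02  # rate the strikes "good" or "very good"
bad_or_very_bad = 0.97    # rate the strikes "bad" or "very bad"
no_rating = 1 - good_or_very_good - bad_or_very_bad  # the ~1% who gave no rating

# At most 2 points' worth of supporters can also rate the strikes favorably,
# so at least 15 of the 17 points must fall in the "bad"/"very bad" columns
# (or the tiny no-rating remainder).
min_unfavorable_supporters = support - good_or_very_good
print(f"At least {min_unfavorable_supporters:.0%} of respondents 'support' strikes "
      f"yet do not call them 'good' ({min_unfavorable_supporters / support:.0%} of supporters).")
```

In other words, roughly nine in ten self-described "supporters" decline to call the strikes a good thing, and nearly all of them must sit in the "bad" or "very bad" columns.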

So what's the problem with the question? The main issue is that it asks respondents to evaluate a hypothetical future scenario rather than reflect on the existing drone program.
I'm going to read you a list of things the US might do to combat extremist groups in Pakistan. Please tell me whether you would support or oppose...conducting drone attacks in conjunction with the Pakistani government against leaders of extremist groups. 
The question is not about what the US is currently doing to combat extremist groups; it asks instead whether respondents would support a course of action that the US might take. Importantly, this course of action is framed in a way that is likely more appealing than the status quo.

First, it states that these drone attacks will be conducted in "conjunction" with the Pakistani government. While I don't know how this term was translated, it certainly suggests a lot more involvement by the Pakistani government than currently exists. The Pakistani government may "tacitly approve" the existing US drone program, but it is difficult to characterize drone attacks as being conducted in "conjunction" with Islamabad. Pew's survey respondents seem to agree - a significant majority (69%) believe that the U.S. is exclusively "conducting" the drone attacks, but a plurality (47%) believe that the attacks carry the approval of the government of Pakistan. Given that the unilateral nature of the strikes is often cited as a reason for their unpopularity, a respondent may support (or at least not oppose) the proposal in the question while still opposing the drone program as it is currently conducted.

Second, the question makes no mention of possible civilian casualties or other drawbacks while highlighting the benefits of combating extremists, a threat that many Pakistanis are concerned about. This may seem minor, but it matters a lot. As Fair et al. point out, the drone debate is a low-information environment. When respondents' attitudes about a policy are not well crystallized, subtle differences in question wording that highlight different costs or benefits can have a major impact on the responses given. For example, Michael Hiscox found [gated/ungated draft] that question framing had a sizable effect on Americans' support for or opposition to international trade liberalization - another case where the average respondent is typically not well informed. Respondents exposed to a brief anti-trade introduction were 17% less likely to support free trade.
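To get a feel for how far wording alone can move a top-line "oppose" number, here is a toy simulation. Everything in it is an invented assumption for illustration - the latent-attitude scale, the population mean, and the size of the framing nudge are not estimates from the Pew data or from Hiscox's study.

```python
import random

random.seed(0)

# Toy model: each respondent holds a latent attitude toward drone strikes
# (higher = more favorable) and reports "oppose" when that attitude, nudged
# by the framing of the question, falls below zero. All numbers are invented.
N = 100_000
latent = [random.gauss(-0.5, 1.0) for _ in range(N)]  # population leans negative (assumption)

def oppose_rate(framing_shift: float) -> float:
    """Share reporting 'oppose' under an additive framing nudge (assumption)."""
    return sum((a + framing_shift) < 0 for a in latent) / N

print(f"neutral wording:       {oppose_rate(0.0):.0%} oppose")
print(f"benefit-heavy framing: {oppose_rate(+0.6):.0%} oppose")  # e.g. cooperation with Islamabad
print(f"cost-heavy framing:    {oppose_rate(-0.6):.0%} oppose")  # e.g. mentioning civilian casualties
```

The underlying attitudes never change across the three runs; only the question does, and the reported opposition still swings by tens of percentage points.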

Certainly, the question that Fair et al. use does not explicitly present respondents with a pro-drone viewpoint. However, it is still very likely that framing effects are skewing the responses. Consider another question asked by Pew that is simpler and more direct:

Do you think these drone attacks are a very good thing, good thing, bad thing, or very bad thing?
1% said "very good", 1% said "good", 23% said "bad" and 74% said "very bad."

I'm not arguing that this question is a superior measure. For example, it may not be measuring approval of drone strikes themselves but rather a general sentiment toward the US (among the heavily male/internet-savvy subset interviewed). It may overstate "true" opposition: respondents may be expressing discontent with the way the program has been conducted while still supporting the idea of drone attacks. It might even be a consequence of social desirability bias. The point is that we don't know from the data that we have. Question framing has a substantial impact on survey responses, and researchers should be careful about drawing conclusions without clarifying their assumptions about what the survey questions are measuring.

The question Fair et al. use to measure support for or opposition to the US drone program is not simply asking whether Pakistanis support or oppose that program. It is asking whether Pakistanis would support a hypothetical drone program coordinated with the Pakistani government. Moreover, it exclusively highlights the benefits of such a program rather than the costs. This would not be a problem if support were invariant to question wording, but that is decidedly not the case. And even with this rather favorable framing, only 17% of respondents said that they would approve of drone strikes against extremists.

So I'm a little skeptical of the claim that commentators have been grossly overestimating the level of opposition to drone strikes. I certainly would not call Pakistani opposition to the drone program a "vocal plurality." This isn't to say that improved transparency and better PR on the part of the US would do nothing to improve perceptions, but improving them is a very difficult task that is constrained at multiple levels. Even if the Pakistani government were to become directly involved in the drone program (very unlikely given the political consequences), the survey results suggest that such a program would win the support of only about 17% of the subset of Pakistanis who are currently informed about the drone program.
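As a rough sense of scale (treating the share of Pakistanis who report knowing about the program as an assumed parameter bracketing Pew's "slim majority," and assuming nothing about the views of those currently unaware of the program):

```python
# Rough scale of support for a hypothetical joint US-Pakistan drone program.
# The "informed share" values are assumptions, not Pew point estimates;
# support among the uninformed is left unknown.
support_among_informed = 0.17
for informed_share in (0.50, 0.55, 0.60):
    supportive_overall = support_among_informed * informed_share
    print(f"informed share {informed_share:.0%}: "
          f"~{supportive_overall:.0%} of all adults both informed and supportive")
```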

The most important takeaway is that we simply don't know enough from the survey data; it's too blunt an instrument. However, it does point to issue framing and elite rhetoric as important elements of opinion formation on drones, suggesting interesting avenues for future survey-experiment work.

h/t to Phil Arena for tweeting the link to the Atlantic article.
