urban sustainability & resilience

A blog about governance for urban sustainability and resilience

Why I use Qualitative Comparative Analysis (QCA)

Wall in New York – not much green paint in this artwork.

 

Sometimes people ask me why I use Qualitative Comparative Analysis (QCA) in my studies. Whilst this technique is quickly gaining traction in the social sciences, it is still unknown to some. And being unknown, it is, I fear, sometimes looked upon with some suspicion.

So, here I take to heart what are said to be Einstein’s words: “If you can’t explain it to a six-year-old, you don’t understand it yourself.”

 

Explaining it to the six-year-old

Imagine you see two painters working on a painting. They are both halfway through painting a large part of their canvas green. You wonder what green paint they are using and decide to inspect their paint boxes.

To your surprise, neither of the painters is using green! It turns out that one has blue, red and yellow in her box of paints and the other only has blue and yellow. What is going on?

Well, if they don’t have green paint in their boxes, they must somehow have made green with the paints they do have. But which paints, at a minimum, are needed to make green?

This is probably where logic (and some years of playing around with paint at kindergarten) kicks in for the six-year-old.

If one painter could have mixed “blue”, “yellow” and “red” to get “green”, and the other could have mixed “blue” and “yellow” but not “red” to get “green”, then “red” is not needed for making green.

In other words, red can be eliminated (although that word may be a bit too hard for the six-year-old) as a condition for arriving at green. Also, based on this information it is likely that the combination “blue and yellow” results in “green”.
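For readers who think in code rather than paint: the six-year-old’s reasoning is, at its core, Boolean minimisation, which is also the core operation of QCA. A toy sketch in Python of that reduction rule (just the idea, not QCA software) could look like this:

    # Toy illustration of the pairwise reduction rule behind QCA: two observed
    # "recipes" with the same outcome that differ in exactly one condition can
    # be merged, and the differing condition drops out as redundant.

    def reduce_pair(recipe_a, recipe_b):
        """Merge two recipes if they differ in exactly one condition."""
        differing = [c for c in recipe_a if recipe_a[c] != recipe_b[c]]
        if len(differing) != 1:
            return None  # cannot be merged
        merged = dict(recipe_a)
        del merged[differing[0]]  # the differing condition is redundant
        return merged

    # Painter 1 mixed blue, yellow and red; painter 2 mixed only blue and yellow.
    painter_1 = {"blue": True, "yellow": True, "red": True}   # outcome: green
    painter_2 = {"blue": True, "yellow": True, "red": False}  # outcome: green

    print(reduce_pair(painter_1, painter_2))
    # {'blue': True, 'yellow': True}  -- i.e. "blue AND yellow" is enough for green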

 

Explaining it to a somewhat older audience

The painters above are an oversimplified example of why I use QCA to analyse the type of data in my current study:

The object of my study: Replace “painter” with “non-coercive collaborative governance arrangement”, for instance one of the benchmarking tools that I described in my previous blog post.

Number of observations: Replace the number two (for painters) with 50 or more (for non-coercive collaborative governance arrangements).

The outcomes I am interested in: Replace “green” with “the number of participants in an arrangement”, “the number of buildings built or retrofitted under an arrangement”, and “spill-over effects such as media attention for an arrangement”.

The conditions that I think may cause these outcomes: Replace “blue”, “yellow” and “red” with “financial gain”, “market conditions”, “role of government”, and a whole series of other conditions that I think cause the outcomes of non-coercive collaborative governance arrangements.

See? That comes pretty close to the example of the painters.

 

Same principle, but more complicated

That having been said, tracing the (combinations of) conditions that do or do not likely cause particular outcomes is more complicated here than in the example of the painters. After all, I am interested in (at least) three outcomes of non-coercive collaborative governance arrangements:

  • The number of participants they attract
  • The number of buildings that are built or retrofitted within them
  • Spill-over effects such as media attention

I am further interested in a large number of conditions that I consider to cause these outcomes. Some of these relate to their design:

  • The direct financial gain for participants (including cost savings)
  • Non-monetary gain for participants, such as obtaining knowledge or building networks
  • The ability for participants to be altruistic
  • The ability for participants to showcase leadership
  • The stringency of participation criteria
  • The stringency of enforcement of these criteria

Others relate to these arrangements’ context:

  • The existing regulatory context of these arrangements
  • Environmental awareness and environmental activism in the country context of these arrangements
  • Disposable income per capita in the country context of these arrangements

And yet others relate to the role governments take up in these arrangements:

  • Leading and initiating roles of government in these arrangements
  • Guarding roles of governments in these arrangements
  • Assembling roles of governments in these arrangements
  • Financial support from government for these arrangements
  • Administrative support from government for these arrangements
  • Active roles of government as launching customer or participant in these arrangements

In other words, with this set of conditions I am looking at a very large number of combinations of conditions that may cause any of the three outcomes (that is, 2^6 = 64 combinations for the six design conditions, another 2^6 = 64 for the six roles of government, and 2^3 = 8 combinations for the three context conditions).
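To get a feel for how quickly that space of combinations grows, here is a back-of-the-envelope sketch in Python (the labels are simply my shorthand for the conditions listed above):

    from itertools import product

    design = ["financial_gain", "non_monetary_gain", "altruism",
              "leadership", "criteria_stringency", "enforcement"]
    context = ["regulatory_context", "environmental_awareness", "disposable_income"]

    # each condition can be present (1) or absent (0) in an arrangement
    design_combinations = list(product([0, 1], repeat=len(design)))
    context_combinations = list(product([0, 1], repeat=len(context)))

    print(len(design_combinations))   # 2**6 = 64
    print(len(context_combinations))  # 2**3 = 8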

What is more, where in the example of the painters “green” was a rather unspecified outcome, I am interested in understanding why some arrangements have performed better than others. For instance, some arrangements have clearly resulted in many more buildings than initially expected by their administrators. Others, however, have resulted in some buildings, but nowhere near their expected numbers. Why this (qualitative) difference?

This could have to do with the fact that the conditions also show (qualitative) differences. For instance, in some arrangements governments financially support all activities participants undertake. In other arrangements governments only give token financial support.

Long story short, whilst the principle is the same, my study and the data I have are much more complicated than the example of the painters. I need a structured approach to analyse my data, and a systematic approach to compare the non-coercive collaborative governance arrangements I am interested in.

 

QCA: I’m lovin’ it

This is where QCA comes in. Since the mid-1990s it has quickly evolved into an accepted research practice that allows for

  • a systematic exploration of (qualitative) data
  • checking the coherence of this data
  • checking hypotheses or existing theories
  • a quick test of conjectures
  • developing new theoretical arguments

I first started using QCA during my PhD research (2005-2009). During that research I realised that it is both a technique and an approach to systematically compare qualitative case studies. In my PhD I predominantly used the logic of QCA, but over the years I have more and more applied QCA as a technique as well (i.e., a structured approach to analysing my data).

So far my experiences with QCA have been great. It helped me to go ‘deeper’ into my cases and flesh out what makes them comparable and different, why they are comparable and different, and how these similarities and differences may be related to the outcomes of the cases I have studied thus far.

My current study on non-coercive collaborative environmental governance arrangements is the first truly large-scale study that I have designed from the outset to be able to use QCA as a technique. I am looking forward to applying it soon to the data I have been collecting over the past few years.

 

Further reading

The technique has developed quickly over recent years. In particular, the software that supports QCA has become much more user-friendly.

Also, there are now a number of really good handbooks available. I particularly like the following because they all provide a stepwise approach to good QCA practice:

  • Ragin, C. (2008). Redesigning Social Inquiry: Fuzzy Sets and Beyond. Chicago: University of Chicago Press.
  • Rihoux, B., & Ragin, C. (2009). Configurational Comparative Methods. London: Sage (from which I have taken the above list).
  • Schneider, C., & Wagemann, C. (2012). Set-Theoretic Methods for the Social Sciences. Cambridge: Cambridge University Press.

These handbooks are good further references for those unfamiliar with the foundations of the method (which I will not dwell on here).

Finally, for those interested in QCA the COMPASS website is absolutely worth a visit. Over the years it has grown into a worldwide network of scholars who share their interest in and experiences with QCA.

 

Post-script: QCA in my current study (no longer suitable for the six-year-old)

Early studies on non-coercive collaborative governance arrangements (such as building benchmarking tools) indicate that

  • Their outcomes (say, the number of participants or the number of buildings built or retrofitted under an arrangement) are likely caused by different interacting conditions (i.e., conjunctural causation) – e.g., the combination of good market conditions and a focus on financial gain.
  • Different (sets of interacting) conditions may cause a similar outcome (i.e., equifinality) – e.g., a tool with a focus on financial gain but without a focus on showcasing leadership may attract a similar number of participants as a tool with a focus on showcasing leadership but not on financial gain. Here the conditions “financial gain” and “showcasing leadership” both (but independently) cause “a number of participants” (see the notation sketch after this list).
  • The presence of a (set of interacting) condition(s) causing the outcome is of limited help in explaining the inverse situation – that is, the causal role of the absence of that condition in the non-occurrence of the outcome (i.e., asymmetry). E.g., the fact that a set of tools attracts a high number of participants because of their focus on financial gain does not mean that a set of tools without a focus on financial gain will not attract a high number of participants.
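In the shorthand notation of the QCA literature (uppercase = condition present, lowercase = condition absent, * = logical AND, + = logical OR, → = “is sufficient for”), and using my own illustrative labels FIN (focus on financial gain), LEAD (focus on showcasing leadership) and PART (high number of participants), the equifinality example above could be written as:

    FIN*lead + fin*LEAD → PART

Each of the two recipes on the left is on its own sufficient for the outcome (equifinality), each recipe is a combination of conditions (conjunctural causation), and the statement says nothing about which recipes produce a low number of participants (asymmetry).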

I have chosen QCA as a data analysis methodology precisely because it allows for ‘unraveling causally complex patterns in terms of equifinality, conjunctural causation, and asymmetry’ (Schneider & Wagemann, 2012, 8).

QCA differs from other data analysis methods in its focus:  ‘The key issue [for QCA] is not which variable is the strongest (i.e., has the biggest net effect) but how different conditions combine and whether there is only one combination or several different combinations of conditions (causal recipes) of generating the same outcome’ (Ragin, 2008, 114).

In my study I apply fsQCA, as it allows for a rather precise insight into the qualitative differences in my empirical data – i.e., the degree of presence or absence of a condition or outcome in the units of observation.

 

 

Good fsQCA practice, step 1: The choice of outcomes and the conditions that may cause these

Good fsQCA practice asks the researcher to clearly spell out the outcomes she or he is interested in and to spell out the conditions she or he assumes are related to these outcomes. In my study I bring together and test a range of expectations (hypotheses) that have been stated by other scholars in the field. To come to these I have carried out an extensive review of the literature, which is publicly available.

In the book (and articles) that result from this study I will add an appendix that spells out the various outcomes and conditions that I study. This appendix is already available from the project website. To give an example of how I support my choices:

  • O3_MEDIA: Spill-over effects are more difficult to measure. After all, measuring them would imply a focus on (a sample of all) non-participants in the arrangements in this study and on goods and services produced outside of these arrangements, which is too time-consuming an activity (cf. Darnall & Sides, 2008; Lyon & Maxwell, 2007). To gain some insight into the potential spill-over effects of the cases studied, I have taken as a proxy measure the relative media attention in major building-sector practitioner journals (open access, accessible online) and on major sustainable-building websites for all cases for each year from 2011 to 2013. This gives some insight into whether and to what extent non-participants could be exposed to the arrangements under scrutiny.

 

Good fsQCA practice, step 2: Calibrating the data

The strength of fsQCA as compared to other forms of QCA is that it allows for a rather precise insight into the qualitative differences in the units of observation. In other words, it allows distinguishing among different stages of these observations and comparing sets of observations at a particular stage with sets of observations at other stages.

For instance, one set of observed arrangements may have attracted a number of participants beyond the expectations of their administrators, another set of arrangements may have attracted a number that is in line with such expectations, yet another set may have attracted a number that falls short of such expectations, and a final set may have attracted a number of participants that is negligible compared with the originally expected numbers.

Of course, when comparing arrangements such as I do here there is no precise number of participants that provides a cut-off point for these different categories across all the cases studied. One arrangement may have aimed for 10,000 participants, but only achieved 1,000; another may have aimed for 50 participants and have achieved 60.

From a qualitative point of view the latter is more successful than the former; from a quantitative point of view it is not.

Good fsQCA practice asks the researcher to explain the choices made here: what are the various qualitative categories, what are their top and bottom boundaries, and what is the crossover point at which an observation is considered to have maximum ambiguity?

I will support the way I have calibrated the data in the same appendix, which is already available from the project website. To give another example, this time of how I have calibrated the data:

  • O3_MEDIA: Spill-over effects. As discussed above, I have taken media attention as a proxy. I have aimed to adjust media attention to reflect the locality or scale of an arrangement. After all, the smaller the scale of an arrangement, the less likely it is to be addressed (multiple times) in leading media outlets. For instance, one of the arrangements studied was a local Boston-based programme, whilst another is applied throughout the United States and even in other countries. I have therefore added a weighting of 5 to each observed instance of media attention given to local arrangements, 3 to regional arrangements, 2 to national arrangements and 1 to international arrangements. This resulted in an average of 36 (weighted) media-attention counts per arrangement. Interestingly, it appears that if an arrangement generates media attention at all, it (quickly) attracts much media attention. Of the arrangements with positive scores (i.e., any media attention at all, 38% of the total sample studied), only two arrangements showed average media attention, whilst nine had generated media attention that was often three to ten times the (weighted) average of all cases studied (i.e., 36 weighted counts of media attention). I have therefore specified full membership as having generated at least ten times the (weighted) average of media attention. Full non-membership represents no observed instances of media attention. The crossover point is set relatively low, at half of the (weighted) average observations of media attention, because of the relatively large pool of arrangements with lower than (weighted) average observations of media attention. I have applied time bonuses and penalties as per O1_PART. (See the sketch below for how these anchors translate into membership scores.)
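For those curious what calibration looks like under the hood, below is a minimal Python sketch of the direct method of calibration described by Ragin (2008), fed with the O3_MEDIA anchors just mentioned. In practice the QCA software does this for you; the constants here simply tie the three anchors to membership scores of 0.05, 0.5 and 0.95.

    import math

    def calibrate(raw, full_non, crossover, full_mem):
        """Direct calibration (after Ragin, 2008): map a raw score onto a
        fuzzy membership score in [0, 1] using three qualitative anchors."""
        scale = math.log(0.95 / 0.05)  # log-odds at the 0.95 / 0.05 anchors
        if raw >= crossover:
            log_odds = scale * (raw - crossover) / (full_mem - crossover)
        else:
            log_odds = -scale * (crossover - raw) / (crossover - full_non)
        return 1.0 / (1.0 + math.exp(-log_odds))

    # O3_MEDIA anchors as described above: full non-membership at 0 counts,
    # crossover at half the weighted average (18), full membership at ten
    # times the weighted average (360).
    for counts in (0, 18, 36, 360):
        print(counts, round(calibrate(counts, 0, 18, 360), 2))
    # 0 -> 0.05, 18 -> 0.5, 36 -> 0.54, 360 -> 0.95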

 

Good fsQCA practice, step 3: Carrying out the analysis

Based on the above choices I now have to carefully score my data. I have studied 58 non-coercive collaborative governance arrangements and use three outcome variables and 16 condition variables. This implies a total of roughly 1,100 data points (1,102 to be precise).
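As a quick sanity check on that number, the data matrix is simply cases times variables:

    # 58 cases scored on 3 outcome variables and 16 condition variables
    n_cases = 58
    n_outcomes = 3
    n_conditions = 16
    print(n_cases * (n_outcomes + n_conditions))  # 1102 data points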

That’s the task for this week. Luckily I have already scored large parts of my data for articles that are due for publication later this year.

Next week (or maybe even later this week) I will venture into the analysis. I will use QCA software that is freely available online.

From there on I will follow Schneider and Wagemann’s chapter 11, “Recipe for a good QCA”. In a later blog post I will report whether my next experience with QCA is as positive as my earlier experiences with it.
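For the programmatically inclined, the two headline measures such software reports – consistency (how reliably a condition or recipe is sufficient for the outcome) and coverage (how much of the outcome it accounts for) – boil down to a few lines. A minimal sketch with made-up fuzzy membership scores:

    def consistency(x, y):
        """Consistency of X as sufficient for Y: sum(min(x, y)) / sum(x)."""
        return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(x)

    def coverage(x, y):
        """Coverage of Y by X: sum(min(x, y)) / sum(y)."""
        return sum(min(xi, yi) for xi, yi in zip(x, y)) / sum(y)

    # Illustrative (made-up) membership scores for five arrangements in the
    # condition "financial gain" (x) and the outcome "many participants" (y).
    x = [0.9, 0.7, 0.2, 0.8, 0.1]
    y = [0.95, 0.8, 0.4, 0.6, 0.3]

    print(round(consistency(x, y), 2))  # 0.93
    print(round(coverage(x, y), 2))     # 0.82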
