The distinction between ‘qualitative’ and ‘quantitative’ data/methods irritates me. It is often presumed that the latter is indicative of – at least in principle – greater scientific or analytical rigour. That presumption is not wrong in some cases, but it is only pertinent to one particular methodology of (social) science; other methodologies can be rigorous as well despite no quantification taking place.
I will now half-assedly defend this position.
By rigour, I mean the extent to which an analysis is coherent, complete, and precise. The highest level of rigour would be that of a deductively valid proof – i.e. a formal, including mathematical, one.
1. The Speciousness of Qual vs Quant
Pragmatically speaking, ‘qual’ and ‘quant’ do generally help us differentiate between analyses or data that are represented or studied according to substantially different degrees of formalisation. But a problem arises when we start to think that this distinction is coherent at a deeper philosophical level. It isn’t. There isn’t actually a difference, at least not an essential one, between these two supposedly different types of measurements or analyses. Qualitative data speaks to the qualities of some phenomenon or property, and qualitative analysis is the study or explication of those qualities. But quantitative data is the same thing, only expressed mathematically. There’s no deep methodological difference between saying something is ‘big’ or ‘small’ or what have you and describing it according to some unit of measurement, because units of measurement are themselves simply conventions against which other things can be compared; to employ them is simply to make use of a more refined set of qualitative indicators.
I see fit to mention this because the presence or absence of maths should not be the criterion of demarcation between different types of data or analysis. Sometimes ‘quantify that!’ is a reasonable request for more rigour, but often it isn’t. In many cases the reason why a scholar has not quantified their data and analyses is not because they are maths-phobic or because their observations and concepts are too ill-defined to be amenable to quantification, but rather because they are employing a (social) scientific methodology in which it is not useful or possible to translate their information into a mathematical language; the questions they seek to answer and the ontological/epistemological/metaphysical wagers upon which their methodology rests limit the usefulness of mathematical formalisation.
This brings me to my next point.
2. The Neopositivist Chauvinism of Qual and Quant
Within one quite popular methodology of science, there is excellent reason to view quantitative analyses as, generally speaking, holding the possibility of more rigour. This is the methodology that has given us the power to predict and alter the natural world in startling and amazing ways: Positivism (and its descendants). Within the Positivist tradition, scientific explanation typically consists of law-like statements which take the following form:
If X, then Y follows.
For example, in my field one candidate law might be ‘if two countries are democratic, they will not go to war against one another’. In another field, it might be something like ‘the introduction of chemical X at time T1 will lead to a reaction R at time T2’.
These laws are usually tested by hypothetico-deduction: the consequences that would obtain if the law were true are predicted in the form of a hypothesis, then tested by way of observation and experimentation. Sometimes the law itself is not discussed; a scientist may feel that it suffices to show that R did indeed occur, and allow the scientific community to infer what it will from this result. But explanation and causality within this methodology nevertheless reduce to laws of nature.*
This methodology pressures one into using increasingly larger samples, increasingly precise measurement (in order to test how much an increase in X leads to an increase in Y), and increasingly complex methods of analysis to test whether, across a wide range of cases, Y indeed does follow X when all other factors are controlled. Since the type of reasoning is inductive, an increase in the number of cases in which the temporal (read: causal) relationship between X and Y holds will lead to an increase of confidence in the truth of the law. It is easy to see how this will lead the scientist to want to use statistical techniques and formal analyses to determine with maximum confidence that it is not some intervening third factor or ‘fluke’ that is responsible for the apparent ‘X then Y’ relation (i.e. significance tests), and therefore that greater rigour is made possible through ‘quantification’, whereby the qualities of the data are made amenable to mathematical analysis.
In the social sciences, for reasons of terminological clarity, this methodology is usefully referred to as ‘Neopositivism’.
However, Neopositivism – that is, social science enquiry via the Positivist tradition – is not very good for answering a whole host of interesting and important empirical questions, nor is it without strong competition in enabling a coherent conception of causality in complex systems.
When we study social behaviour, we are often interested not so much in finding predictive laws – ‘if we see X, we know Johnny will do Y’ – but rather in specifying the motives that led someone to act as they did – ‘because of his perception X, Johnny felt it appropriate to do Y’. This is the core of explanation in ‘interpretive social science’: the scientist explains social outcomes by clarifying and explicating the Reasons why an actor took a certain action or set of actions. While it might be possible to test competing explanations across cases via Neopositivist methods, it is the near-consensus of Interpretivist social scientists that a far better, more rigorous method involves ‘thick description’ or other ways of developing a rich and nuanced picture of the cultural conventions and personal narratives that serve as the context that makes action meaningful; that makes action something other than a reflexive twitch.
Meanwhile, when we study complex systems, we often find that prediction is impossible, and we’re thus moved to seek an account of causality that doesn’t require us to reduce phenomena to laws of nature. Social scientists, as students of complex systems, often make use of single-case analyses in which one particular configuration of factors, entities, or processes is examined for how it caused a given outcome. Causation cannot be reduced to any particular factor or set of factors; rather, all factors ‘came together’ exactly in such a way as to produce the effect. This kind of analysis enables the scientist to determine what is possible and how that possibility can be realised. The social scientist may attempt to specify certain ‘causal mechanisms’ that constitute cross-case regularities of causation, but these mechanisms are not laws; rather, they are types of patterns or processes that connect cause and effect via their instantiation, independent of the observable phenomena they produce. In this methodology, quantification and mathematical analysis are less useful than methods such as ‘process tracing’, because apparent regularities in conjunctions/correlations of initial conditions and outcomes indicate nothing about the causal sequences that lead to those outcomes.*
As I have tried to show, there are other methodologies for engaging in social enquiry that can be and are conducted rigorously, and which do provide interesting scientific conclusions. They are not well-aided by quantification, and yet they still feature explanations that are formally coherent, deductively valid, and meticulously grounded in empirical analyses.
*This highly simplistic summary should not be considered adequate by anyone.