Said Simon

Inchoate thoughts on my stuff

Memes and Cultural Evolution

Many people are familiar with the concept of the ‘meme’: units of cultural material that are transmitted throughout populations and evolve in a manner somewhat analogous to genes. Richard Dawkins coined the term and introduced the general idea [1], prompting some measure of scientific activity devoted to the study of memetics, including a dedicated journal [2]. However, memes and memetics never gained much traction amongst social scientists and philosophers, and ‘meme theory’ currently enjoys essentially no credibility as a scientific theory.

In this post, I will explain why that is, and I will point to some alternative, sounder approaches to thinking about and studying the way knowledge and practice diffuse and evolve throughout societies.

Evolutionary epistemology and social science seem to go well together. Theories spread, transform, grow, live and die – ‘are selected for’ – not just in science but in society as a whole. Nor is this pattern of diffusion, evolution, and selection restricted to substantive or propositional content (i.e. to claims about the world); music, language, food, fashion, and technology all seem to fit this model throughout history. If we want to understand why that is, we must answer two questions:

  1. What is the thing that is being transmitted/diffused, selected-for, and transformed? That is, what is the ‘genetic unit’ of cultural evolution?
  2. What mechanism(s) are responsible for cultural evolution? That is, what kinds of recurring processes lead to the units of cultural evolution spreading and changing?

Meme theorists seem to offer these answers:

  1. Memes are independent units of cultural information, such as ideas, behaviours, or theories, that move between human hosts and influence what those hosts do, thereby causing changes in their environment.
  2. Memes ‘leap from brain to brain’ [3] by somehow generating imitation.

These answers, as I will explain, are not very good.

The notion of independent, self-replicating units of cultural data is both conceptually and empirically problematic. Conceptually, it appears to rest on dubious ontological foundations; that is, it seems to be a very strange kind of thing. Memes are not cognitive phenomena, according to Dennett, although they clearly can produce something cognitive (beliefs). They are contained within human beings, and so they are not social structures or systems, unless we conceive of structures and systems in highly reductionist terms. So what are they? Perhaps they are just a convenient shorthand for a bunch of other stuff, and are not meant to refer to something real? But if that’s the case, then (i) they are not analogous to genes, which we probably think are real, and (ii) we only have reason to use the concept of a meme at all if it provides considerable empirical value.

It doesn’t, though. Provide empirical value, that is. As any anthropologist or sociologist will attest, culture isn’t made up of little, discrete bits of behaviour or knowledge. It is a big, inter-subjective, inter-related mess of interacting and continually changing practices, tastes, dispositions, and interpretations oriented around social life. It exists in holistic ways, with one particular bit of culture only making sense when placed within the context of the larger whole. It’s not just that culture could be a ‘memeplex’ [4], but that culture is a web of symbols and meanings [5] surrounding us and making us who we are even as we continually recreate it through our actions [6]. Hence dividing it into independent units deprives us of our ability to appreciate culture as something emergent, and completely ignores the way that culture is not only something that seems to dwell within us but also constitutes the actual social environment in which we live and act [7]. Whatever empirical value we get from keeping the concept of the meme must be weighed against the enormous empirical value we lose by adopting a concept that is unsuited for appreciating vast and relevant parts of culture and social life.

Not only that, but the mechanisms of evolution proposed by meme theorists seem either trivial or absurd. Nothing simply leaps from brain to brain; people imitate other people due to processes of socialisation and influence. These can be such things as direct peer pressure, in-group solidarity, coercion, persuasion, observation, or adaptation, to name a few possible candidates. These mechanisms range from the level of individual psychology to society-level structural influences, and thus do not appear to correspond to anything remotely similar to the ways in which genes engineer the machinery of their own reproduction; again, we must appreciate cultural or social evolution by taking into account emergent structures and systemic wholes, as well as their component parts. The reductionist approach of memetics just won’t do the conceptual and empirical job.

Of course, if we were left with no alternatives, we might decide that memetics is good enough. Luckily, though, we have alternatives. Better alternatives. So many alternatives, actually, that there is a robust debate among actual social scientists over where and when one alternative is better than another for a given problem or area of social life. For example, one of the most popular approaches is to conceive of culture as made up of symbols. Associated with hermeneutic theorists [8] such as Wilhelm Dilthey, Max Weber, Clifford Geertz, and Paul Ricoeur, and semiotic theorists [9] such as Ferdinand de Saussure and Roland Barthes, this approach offers a much more helpful way of thinking about what culture is, ontologically or cognitively. However, on its own, this approach makes it difficult to understand how culture spreads and changes. To address this, theorists such as Anthony Giddens, Pierre Bourdieu, Margaret Archer, Jeffrey Alexander, and Charles Tilly have treated the symbolic structures of culture as existing in a mutually-constituting relationship with the individual actions and practices of people as they go about interacting with one another and living their lives. Practices concatenate or chain together in various ways to produce emergent changes at the structural or system level, which circle back to the level of individuals as their environment changes accordingly, in a dialectic process stretching back into history.

Drawing more explicitly from evolutionary theory, evolutionary epistemologists such as Karl Popper and Donald Campbell have suggested that theories spread and are selected-for in a way somewhat analogous to Darwinian evolution, where more empirically successful (and perhaps more accurate) theories win out over less successful ones. And pragmatist philosophers such as Charles Sanders Peirce and John Dewey have used evolutionary metaphors to produce highly influential metaphysical theories [10] and theories of mind and action [11], which have formed the basis for more recent attempts to theorise change and innovation based on something like spontaneous mutation or novel synthesis, such as by Hans Joas [12].

It may seem daunting to look at this long list of names and theoretical traditions for thinking about what culture is and how it changes, but are you really going to stick with memes out of laziness? This is all material that can be covered in an introductory course in sociology or anthropology with enough detail to make it possible to talk about culture or society without resorting to unhelpful or incoherent Darwinian metaphors. And if taking such a course is not feasible, buying and reading a textbook surely is. I’ll even take questions by email.

Tl;dr almost everything conveyed by the term ‘meme’ can be conveyed by the terms ‘practice’, ‘approach’, or ‘tradition’, and what cannot be conveyed with those terms can be conveyed with other slightly more complex vocabulary associated with an actually credible theory in the social sciences.

Which social science?

Unlike the natural/physical sciences, the social sciences show no settling of method. By this I mean that within the social sciences, scientists work from multiple and fundamentally disparate approaches to research and enquiry. This varies somewhat across disciplines; anthropologists almost entirely employ ethnography while economists almost all use formal models, though there are differences even within these fields as to what the ontological and epistemic status of data and theory actually is. But on the whole, there doesn’t seem to be any single way to define what social science is and how it is distinguishable from other sciences. Rather, there are several dominant views as to what it means to commit a social science, and there is no obvious way to select one as superior to the others in any categorical sense.

In this post I will provide a bit of an introduction to these different approaches. I’ll go over some of the main philosophical problems that confront philosophers of social science, and trace how different solutions to those problems have led to different social sciences (in the methodological rather than the disciplinary sense). I’ll offer a few critiques of each of those approaches, and finally I’ll situate my own views and preferences, providing a brief defence for them.

What does it mean to commit a social science?

There are three key questions that social scientists must, at least implicitly, answer in order to get on with the business of doing social science.

  1. What is the domain of social science? In other words, ‘what is the social world?’ This is a deceptively complex question in that multiple answers seem plausible. Is social science about theorising how humans behave around other humans? Is it about studying meaning – about situating people within ‘webs of significance that [they themselves] have spun’? Is it about studying all action? Is the domain of social science continuous with the domain of language? Is it the study of rules? Or is there no rigorous ontological way to demarcate the social from the non-social, such that the domain of social science is really no more than whatever social scientists want it to be?
  2. What is an explanation of social things? Again, there are multiple plausible answers to this. Some answers should be familiar to anyone who has read even a little into the philosophy of science. Is a good explanation one which subsumes events under general or predictive laws? Or is it one that specifies the entities of an objective reality and describes their causal interactions? Or is it that, in social science, a good explanation is simply one that provides us with the reasons that motivated people to have done what they did?
  3. What counts as evidence in the social sciences? Simply put, what does it mean to observe or document the social world, and how do these data allow us to build whatever it is that an explanation of social things should be? This is not just a question of methodology, but requires us to take a stand on the relationship between observer and observed, and on the status of knowledge claims in the sciences. Depending on our answers to the preceding two questions, we might decide that data are limited to recordings of events, or we might say that descriptions of real structures count, or perhaps that only reports about motives or beliefs tell us anything at all.

There are huge differences in the methodological implications that follow from how we answer these; that is, social science research will proceed in very different ways, based on how the researcher solves these philosophical puzzles. And these questions remain contested within the social sciences. Of course, philosophers of science still contest similar kinds of questions when it comes to the natural sciences, but actual practicing physicists, chemists, biologists, and the like rarely need to wonder about them. For whatever reason (more on that later), natural scientists have been able to avoid major fractures on these matters, and their disciplines tend to be more or less unified when it comes to matters of basic methodology.[1]


Positivism

Though ‘positivism’ as a mode of enquiry in the social sciences dates, at least nominally, back to Auguste Comte, what typically receives the label today in the social sciences is significantly different from what Comte proposed. In discussing its roots, it makes more sense to relate it back to the tradition of logical positivism and to critical responses to that tradition made by Popper and Hempel. For positivist social scientists – who comprise the bulk of scholars in political science and sociology – the social world is constituted by the behaviours characteristic of interpersonal interaction. In other words, positivists are studying (a) people engaging in actions that (b) engage with other people and with things like institutions or ideologies. If this seems like a vague or under-specified definition of what the social is, I suggest setting aside this concern. Positivists typically have. Positivist explanations typically seek to subsume social behaviour under general, predictive, though typically probabilistic laws that specify how a change in one variable ‘causes’ a change in another variable, ideally though not always through hypothetico-deduction. True to its Humean roots, evidence for positivists consists of recordings of events, sorted into types and coded to indicate occurrence or non-occurrence at a given value. And, true to their roots in logical positivism, positivist scholars in the social sciences have advocated for a unity of method that entails a similar ‘logic of enquiry’ for both social and natural scientists.

Though positivist social science for a while truly embraced the search for laws of behaviour, and did not aspire to anything more (on the grounds that such aspirations were doomed), it has since expanded its horizons to incorporate the search for causal mechanisms, typically defined as ‘intervening variables’. It has always quite frequently referred to the behaviour of entities such as institutions, states, or firms, rather than to only that of individuals. These broader horizons, though they reflect the practical necessity of referring to collectives and to processes when talking about social reality, bring with them certain philosophical tensions. To refer to causal mechanisms as ‘intervening variables’ appears to dissolve the ontological implications of a mechanistic theory of causality; it turns talk about the role that certain kinds of interactions play in bridging cause and effect into observations of nothing more than finer-grained sequences of events. This is in keeping with Humean empiricism, but it doesn’t seem to preserve what makes mechanistic theorising helpful in the first place – namely, that it allows us to step beyond ‘sense events’. Meanwhile, important questions about the relationship between individuals and the larger social entities in which those individuals live – the relationship between structure and agency – become answerable only in terms of behavioural patterns, effectively shoving into a ‘black box’ questions of meaning, subjectivity, and normativity that are quite central to other modes of social enquiry.

Hermeneutics and Interpretivism

From the mid-19th century onwards, philosophers have sought to bifurcate the natural sciences and the social sciences according to a fundamental difference in method. While the natural sciences involve the search for the laws of nature, the ‘sciences of man’ (Geisteswissenschaften, as those in this almost-exclusively-German tradition called them, taking after Dilthey, who popularised the term) involve uncovering the reasons according to which actions are rational. The bifurcation of ‘explanation’ and ‘understanding’ defines this methodological difference; the social sciences, according to this tradition, require us to interpret social life and reconstruct the cultural or normative backdrop of a given set of events. In other words, the social world is one of meaning-laden actions taken by subjects moved by moral and instrumental reasons, and an explanation of those actions is an interpretation of them that shows why they made sense.

While this approach is historical, in that it involves rooting explanations in particular points of social time and space, it still leaves plenty of room for theorising. However, theory in this tradition is very different from theory in the positivist tradition; finding laws is not the goal (nor is it even held to be possible), but taxonomy and ideal-typification are common. The prototypical examples of theory in this tradition would be Weber’s taxonomy of authority and his attempt to explain the apparent affinity between capitalism and certain Christian communities in his book The Protestant Ethic and the Spirit of Capitalism. But in the past several decades, the interpretive tradition has taken a sharp post-modern turn, in which social scientists are less interested in trying to explain big social processes spanning tens or even hundreds of years and more interested in limiting their investigations to local and often overlooked discourses or forms of life, without trying to abstract from them. In other words, the interpretive tradition has taken on a distinctly anthropological sensibility. This is not inconsistent with the roots of the hermeneutic or interpretivist tradition, for a consistent thread throughout is to treat theory more as a means of organising our data and less as a means of uncovering deeper relationships, but the critical and post-modern shade of much scholarship entails a lack of interest in truth or objectivity that many scholars find off-putting. It is also difficult to talk of anything other than discourses and meanings in this tradition, which makes theorising social structures related to class-relations or institutional orderings like states or bureaucracies difficult if not impossible.

Structural Realisms: Historical Materialism, Structural Functionalism and Critical Realism

A third stream of social science, which sometimes gets conflated with positivism but which really, really isn’t, is Critical Realism and its antecedent in the social sciences[2], historical materialism. Historical materialism is the Marxist approach to theorising society, and classically involves explaining all social phenomena as the product of underlying economic structures – of ‘basic’ relations of exploitation and exchange – and explaining variation across social time and space in terms of differences in the broader form (or ‘mode’) that those underlying structures take. This traditionally takes the form of mainly economic analyses of particular historical cases; knowledge is produced not through empirical cross-case comparison but through explicating the causal processes and relationships specific to those cases. In some ways similar to historical materialism, though not coming from the Marxist tradition, is structural functionalism. Structural functionalists hold that society is made up of interlocking sets of self-regulating systems, and thus that actions can be understood according to their role (or function) in maintaining the stability of those systems. Thus like historical materialism, structural functionalism involves explaining specific cases by making reference to a delimited set of social structures which determine who people are, what their interests are, and why they do what they do. The ‘economic determinism’ of historical materialism has fallen out of favour, and many aspects of the Marxist approach to thinking about the social world have been taken up by postmodern discourse theorists, while structural functionalism has fallen out of favour for all sorts of reasons, largely being replaced by historically-minded interpretivists or by rational choice theorists. But there remains an interest in ‘real structures’ and their properties. This interest has, over the past three decades, been increasingly taken up by Critical Realists, working from the philosophical foundations set forth by Roy Bhaskar.

Critical Realists retain the historical materialist view that society is constituted by structured relations between actors whose roles are defined by those relations, but drop the economic determinism; now structures of ideas can be just as causally important, and the structures salient to explaining society can comprise a nexus of interacting ideas, institutions, even physiological attributes. For example, gender can be as important as class. What really distinguishes Critical Realism here is the addition of a dispositional theory of causality to that structural ontology. Critical Realists hold that structures have ‘causal powers’ – the inherent disposition to generate a discrete set of effects under the right conditions. Explanation thus consists of the identification of the structures, causal powers, and conditions that generate social events – going beyond correlations of outcomes to find the underlying social objects and relationships that make society what it is. Thus while it is historical like the Weberian or hermeneutical tradition, it is not about trying to understand how situated social actors make sense of their worlds, but about describing the world as it really is, with theories serving not merely as ideal types but as possibly correct representations of objective reality.

While Critical Realism is considerably more sensitive to the power of ideas than historical materialism, it depends heavily upon highly contentious ontological and epistemological presuppositions, including strong emergence and the possibility of isomorphically representing the social world through our theories.[3] While a Critical Realist philosophical framework makes it possible to use the language of causation and constitution to discuss a wide range of social phenomena, it ceases to be much more than a very complex set of ideal types if it can’t be realist. At this point, it becomes nothing more than a curiously blithe form of interpretivism, in which relatively little explanatory role is given to the uncovered reasons that people had for doing what they did, and most attention is paid to arranging the data into coherent structural patterns – in other words, hermeneutics that ignore the very thing that would make them explanatorily adequate, out of the vain desire to transcend the perspectives of individual subjects. Suffice it to say, this would also deprive Critical Realism of much of what makes it ‘critical’ to begin with.

Some Final Words, and My Take

I broadly locate myself in the hermeneutical or interpretivist tradition, but I find a great deal of value in Critical Realism as well. Ultimately, I find many of the criticisms of realist philosophy of social science to be compelling: I do not think that it is possible to describe some ‘really real reality’ through the use of language, and even if there is enough stability to the natural world for some sort of quasi-realism, I do not think that the social world exhibits that level of stability. I join with the American Pragmatists in viewing the entire project of theorising as an attempt to make sense of the world, and thus I ultimately see theories as ideal-types. That said, it is common for hermeneutic and interpretivist approaches to focus so much on individual minds or, alternatively, on mind-less discourses containing no agents but only symbols, that they make it hard to think about the big picture in social life. It may be that the idea of a social structure, like class or gender, is nothing more than a convenient way for us to organise our experiences of the world as we live through it, but it is a very helpful idea nonetheless. It is a challenge to link the concept of a structure, which is basically ‘objective’ in character and which sits at a ‘macro-level’ position in our social ontologies, with the phenomenal bedrock of life and action that underpins the hermeneutical approach.[4] But that challenge need not prevent us from maintaining our broadly hermeneuticist philosophical foundations and still building theories that make reference to macro- or meso-level social things like structures, processes, causal mechanisms, and the like. It merely means that we must ultimately recognise that our theories about such things are about solving conceptual problems arising out of our need to understand, and not about painting a picture of the social world as it really is.

Finally, one might notice that I’ve devoted little time to my position on positivism. This is ultimately because I find positivism an uninteresting and unhelpful form of theorising. Its fundamental mode is to locate conjunctions of events, but this provides me with no satisfying account of causal relationships or of actors’ reasons. In other words, it provides neither explanation nor understanding.


[1] This is not to say that the natural sciences don’t feature many significant methodological debates, nor that natural scientists never consider questions of philosophical relevance. Rather, I mean that the basic questions about being and knowledge that must be given at least implicit answers in order for a science to proceed don’t really come into the picture like they do in the social sciences.

[2] It is important to note that Critical Realism is not supposed to be specific to the social sciences, even if philosophers of the natural sciences have not given it much attention.

[3] This would be contentious for a few reasons. First, many physicalists – and Critical Realism is a kind of physicalism – would deny that mental states could be anything other or more than token ‘brain’ states, meaning that agency, which is an essential feature of the social world, is a sort of ‘illusion’ resulting from the vast complexity of the mind. Were this true, Critical Realist theories would be guilty of reducing actors to ‘structural dopes’, and therefore would suffer from the same undesirable determinism as their historical materialist and structural functionalist predecessors. Second, all of the criticisms that philosophers have made of the notion that we can ‘step outside of language’ or in any way represent an ‘external reality’ through language apply to Critical Realism in a way that they do not to hermeneutical/interpretivist or even to positivist alternatives.

[4] A challenge I hope to take up in some of my own work.


Radicalisation, Belief, and Violence

An awfully popular and depressingly always-relevant topic of discussion these days is the relationship between radicalisation, violence, and ideological or religious belief. One of the most salient questions people seem to want answered concerns the connection between scriptural or textual interpretation (i.e. what it says in the holy book) and the motives that members of radical groups have for committing violent acts. This post is my attempt to introduce some of the academic literature on radicalisation to the discussion. In it I will tackle several key questions by reviewing some of the most well-regarded scholars and articles:

  • What is radicalisation?
  • Who becomes a radical?
  • How does one become a radical?
  • What is the role of religion or ideological belief?

Specifically, I will rely as much as possible on four articles. They are ‘Mechanisms of Political Radicalization’ (2008), by McCauley and Moskalenko [pdf], ‘The Staircase to Terrorism’ (2005), by Moghaddam [pdf], ‘The Role of Religious Fundamentalism in Terrorist Violence’ (2007), by Rogers et al [pdf], and ‘The Trouble with Radicalization’ (2013), by Neumann [pdf]. There are two reasons for this. The first is that it lets me defer as much as possible to scholars whose expertise on this issue vastly exceeds my own. Peter Neumann, for example, is among the world’s foremost authorities on radicalisation, and has co-directed the ICSR for many years. The second is that it allows anyone who finds my claims contentious to easily check my sources and confirm for themselves that I am accurately representing the opinion of these experts. To facilitate this, I will be detailed in my citations.

What is radicalisation?

As Neumann points out, the word ‘radical’ has no meaning outside of a larger social context in which certain views are seen as more extreme than others (876). Nevertheless, he suggests some basic definitions. ‘Radicalization’, he writes, ‘is the process whereby people become extremists’ (874). He identifies a conceptual divide ‘between notions of radicalization that emphasize extremist beliefs (‘cognitive radicalization’) and those that focus on extremist behaviour (‘behavioural radicalization’)’ (873). He uses this divide to examine a debate within the scholarly literature on radicalisation between those who focus on the way in which individuals develop extremist beliefs and then decide, for whatever reason, to engage in violence based on those beliefs, and those who believe attention should be directed primarily at behaviour alone. Neumann’s conclusion, however, is that both the substantive beliefs that actors hold and the social mechanisms by which extremists engage in violence (the ‘how’ rather than the ‘why’) are essential to the picture, and he suggests that social movement theory provides a way forward in thinking about radicalisation and violence (884).

The conceptual distinction between cognitive and behavioural extremism is paralleled in how McCauley and Moskalenko approach the nature of radicalisation: ‘Functionally, political radicalization is increased preparation for and commitment to intergroup conflict. Descriptively, radicalization means change in beliefs, feelings, and behaviors in directions that increasingly justify intergroup violence and demand sacrifice in defense of the ingroup’ (416). As this shows, they are interested specifically in what leads to violence.

Fathali Moghaddam takes the more practical approach of looking at what moves people to engage in terrorism, defined as ‘politically motivated violence, perpetrated by individuals, groups, or state-sponsored agents, intended to instill feelings of terror and helplessness in a population in order to influence decision making and to change behavior’ (161). By extension, an extremist or a radical is someone willing to engage in such actions. Rogers et al, meanwhile, seem interested in whatever will make a person from a religious group willing to kill and die for a cause, as they are both casting their net wide and looking mainly at religious radicalism, as opposed to other forms (254).

For what it’s worth, I prefer the definition proposed by McCauley and Moskalenko, as I think it is the most flexible and unproblematic, while taking into account some of the normative concerns that Neumann brings up regarding cultural context.

Who becomes a radical?

It is evident simply from a brief consideration of the wide range of violent ideological and religious groups that have popped up around the world over the past century that many types of people can become a radical. In addition to Islamist groups, which surely need no listing, there have been Jewish groups (such as IZL and the Stern Gang), Buddhist groups (see the ongoing ethnic cleansing of Muslims by Buddhists in Burma), Sikh groups (such as those responsible for the Air India bombing), groups drawn from Hindu populations (such as the largely secular LTTE, who were until 2000 responsible for more suicide bombings than every other group in the world combined), Christian groups and individuals (such as the Phalangists or, arguably, Anders Breivik), Marxist-Leninist groups (such as the Red Army Faction), purely nationalist groups (such as the IRA), and anarchists (such as Action Directe).

Psychologists and psychiatrists have consistently found that individuals who become terrorists are not, as a rule, psychopathic or otherwise psychologically abnormal (Rogers et al 2007, 254 citing a long list of other experts). Neumann, Moghaddam, McCauley and Moskalenko, and Rogers et al all find that radical or terrorist violence is a ‘last resort’ for most people, coming only after they have gone through a long process of increasing individual radicalisation and commitment to radical activities and groups.

How does one become a radical?

While Neumann does not really discuss this issue in the article of his that I’m citing here, instead reviewing major debates in the literature, he has elsewhere offered a model of radicalisation claiming that individuals begin with sets of social/political grievances, develop an ideology that explains the origins of these grievances and proposes possible solutions for them, and finally mobilise through various social mechanisms to engage in extremist or violent actions.

Moghaddam uses the metaphor of a staircase to explain how individuals move through increasing levels of radicalisation until the point that they are finally ready to engage in violent actions. Each of the steps represents an individual’s progress towards more radical values and activity, and while many people may climb the lower steps, only a few ascend all the way. The mechanisms that drive radicalisation, in his model, differ per ‘step’ of the staircase, but include such things as the displacement of aggression, the solidification of categorical thinking, and the sidestepping of inhibitions (164-166).

McCauley and Moskalenko use the metaphor of a pyramid, with a wide range of mechanisms, spanning the very personal/individual to the broadly social, all having the power to tug people towards the apex, although, like Moghaddam, they observe that an ever-diminishing number of people reach that apex and engage in radical violence. Their mechanisms are helpfully laid out in a table in their article.

[Table: mechanisms of radicalisation]
Rogers et al. conclude, based on an extensive literature review, that radicals are usually ‘rational, psychologically healthy individuals’ (256) who are drawn into violent activity by a range of social drivers, such as ‘the loss of parents or loved ones (fragmented families), severe conflict, especially with parents, and the existence of a criminal record’ (Ibid.). While they affirm the importance of cultural values validating self-sacrifice or struggle, they find these values wholly inadequate for explaining why terrorism happens (257), preferring to focus on the role of group social dynamics and the ways in which personal identity becomes subordinated to group identity and violence becomes a necessary action for an individual to take in order to affirm their sense of self (258-259). In this they draw upon the literature on social identity theory, from social psychology.

What is the role of religion and ideological belief?

As Neumann points out, looking at the substance of radical beliefs is essential to understanding why some radical individuals engage in violence and others do not (880). However, in many cases the substance of religious belief has been found to be less important than tactical or strategic pressures. As Rogers et al. note, a near consensus has formed that religious beliefs poorly explain suicide bombing, and that other situational factors are far more significant, such as concrete grievances and self-esteem issues on the part of the bomber (254). Furthermore, while the LTTE represents a primarily Hindu ethnic group, its aspirations and organisational institutions are for the most part secular, yet it has made extensive use of suicide bombing.

None of the models describing the process of radicalisation that I have cited make much reference to the substance of belief. They refer instead to the power of interpersonal dynamics, such as peer pressure, or socio-economic dynamics, such as competition for resources or state use of repression, to explain why individuals come to the conclusion that extremist beliefs and actions are justified. This suggests, I think, that while looking at such things as religion is important, both because it helps us to understand the rhetoric and vocabulary with which extremists express their grievances and demands, and because it helps us to understand the historical basis of certain social identities and groups, it is less helpful to look at theology or ideology as an explanation for radical violence.

Instead, as these various radicalisation experts show, it is more productive to look at psychological and social mechanisms that drive increased involvement in radical activities and which progressively limit the options available to individuals until they feel like violence is the only thing left to them.

Experiments with vlogs

I’ve been working on developing a series of short YouTube videos on major topics in social theory and the philosophy of social science. The first batch of footage is already filmed and is being edited. But I thought, just for the hell of it and also to show how the concept began, that I’d post my two early ‘drafts’. As will hopefully soon be evident, there is a big leap in quality and organisation between these and the real thing.

DISCLAIMER: these totally suck and I was just rambling about interesting stuff in front of a camera. Nevertheless, as they might contain a nugget of worth, however small:


Scientism and its Problems

Scientism is, more or less, the position that all important questions of meaning, morality, and knowledge are best solved through scientific methods. On this view, science has increasingly colonised those areas of enquiry that were once the preserve of philosophers and theologians, and this trend will continue until philosophy and theology have basically nothing left to contribute.

In this post I will discuss several problems with this position, and propose some possible solutions to those problems that may satisfy those who find scientism appealing.

The first problem with scientism is that there appear to be questions of value that are essentially unanswerable by any enquiry into facts. Consider what may be the bible of scientism, at least of the ethical variety, Sam Harris’s book The Moral Landscape. In it, Harris proposes that it is possible to use neuroscience to determine, as a matter of fact, what states of affairs produce the maximum amount of ‘wellbeing’ for human beings. Setting aside the exegetical question of what Harris really means to say, I have seen many people interpret his work as demonstrating that there is no longer a need, or at least, there is an ever-diminishing need, for moral philosophy.

Yet there are several questions that need answering here for this to make sense:

  • How do we define wellbeing and why is this particular definition better than another? Put differently, why should we care about wellbeing of one type over another?
  • How do we deal with distributional issues? Consider the ‘utility monster’ problem, or the ‘organ donor’ problem.

These are strong challenges against utilitarianism, which appears to generally be the position that follows from this brand of scientism. Curiously enough, it is philosophers who try to answer these challenges. Perhaps there is at least an interim need for moral philosophy after all?

The second problem of scientism is that scientific enquiry itself involves all sorts of philosophical assumptions, and scientists themselves are usually unprepared for and uninterested in discussing them. As the meta-methodological turn in the philosophy of science has shown [1], scientific research is only possible within methodological packages; that is, scientists do research on the basis of assumptions about what kinds of things exist (i.e., on the ontological status of the objects of enquiry) and how knowledge about them is possible (i.e., on the epistemological status of claims about the world). There is also the apparent fact that certain commitments are bound up in scientific practice, such as a commitment to establishing an open discourse on theories and to diligence in evaluating all claims. As Popper discussed, there is an affinity between the ideals of scientific enquiry and those of a liberal democracy. This doesn’t mean that such commitments cannot ultimately be justified with reference to their efficiency in truth-seeking, but the process by which scientists are socialised into their disciplines leads most to hold commitments as moral obligations rather than only as means to ends.

Oddly enough, many advocates of scientism I encounter still operate from the view that science either does or should proceed through naive falsificationism. This speaks volumes about their philosophical literacy, but also about the extent to which such people are actually themselves engaged in any form of scientific practice.

Nevertheless, there is a strong argument to be made that scientists do not have to think about their own philosophical assumptions – or at least not in the natural sciences [2] – because at a meta level, their methodological packages will emerge, transform, and be selected-for based entirely on how much empirical or cognitive traction they bring in studying the world. In other words, while philosophers of science may be interested in the metaphysics and epistemology of science, these interests bring no value-added to the actual process and progress of science. Hence the dreadful refrain, ‘philosophy has had two thousand years to progress, and I defy you to name a single major discovery in philosophy in the last two decades’. Or something along those lines.

This is simply false, as a survey of journals in such fields as cognitive science, computer science, biology, and physics shows active engagement by scientists with these assumptions and also, often, collaboration with scholars whose disciplinary background lies in philosophy. [3]

But also, and perhaps most importantly, it proceeds from a misunderstanding of what philosophy is for – indeed, what knowledge is for. This is the third problem. Philosophers are not in the business of progressively accumulating facts about the natural world. Rather, philosophers are in the business of developing sophisticated and critical instruments for analysing concepts and methods. Moral philosophy, for example, grapples with many of the same problems that it has for millennia because these problems are not solvable in any simple sense. Instead, philosophers have accumulated bodies of scholarship that allow us to explore the problems, complexities, and implications of our ethical stances, developing methods for interrogating our own and others’ views, and in some cases putting the nail in the coffin of views that once had currency but are now known to be terrible, or demolishing popular but philosophically bankrupt ideologies.

We need ideology. We need morality. We cannot go about our lives without some idea, even if often implicit, of what the good life is and how society should be organised – what sort of deeds should be subject to sanction, how violence should be used, and so on. And while empirical input from the sciences is incredibly useful for resolving questions about these sorts of things, ultimately, empirical input only helps us undermine empirical premises or instruct us on efficient means to ends. What ends we choose and how we go about conceiving of them and their validity as such is not, in the end, amenable to empirical confirmation and disconfirmation.

Does scientism therefore have absolutely no leg to stand on? Is it irredeemably ignorant, arrogant, and smug? Well, mostly. Even usually. But there is perhaps one small way in which it captures an approach to the world that I find helpful.

I like to take a problem-driven approach to enquiry. What this means is that I do not see it as some attempt to find ultimate truth, but rather to solve pressing dilemmas in accomplishing whatever it is I feel like doing. This sounds prosaic, but it has significant implications. When I am going about doing science, for example, what I feel like doing is making claims about the world that satisfy my standards for descriptive accuracy. And this is a dilemma, in that I can’t just say anything, but rather have to follow certain procedures and engage in certain kinds of directed research. When I am doing ethics, I am recognising that I can’t just do anything to anyone, but rather have to find ways of juggling competing or mutually exclusive obligations, desires, and courses of action that have interpersonal significance. Proponents of scientism often seem interested in the practical nature of science: scientists don’t spend aeons debating whether the smallest things are made of substances or processes, for example, but just get on with finding stuff out about the world so that it’s less confusing and so that we can do new things. This is certainly a problem-driven view of enquiry, even if we recognise that many of the problems that scientists want to solve are influenced not just by method [4] but also by social concerns [5].

According to the above view, there is actually a way to view philosophy as a kind of science, and a progressive one at that. The pragmatist philosopher of science Larry Laudan argues that science consists of attempts to solve conceptual and empirical problems, and that this problem-solving occurs within ‘research traditions’:

A set of beliefs about what sorts of entities and processes make up the domain of inquiry [and] a set of epistemic and methodological norms about how the domain is to be investigated, how theories are to be tested, how data are to be collected, etc.

As Laudan explains:

A theory solves an empirical problem when it entails, along with appropriate initial and boundary conditions, a statement of the problem [and] a theory solves a conceptual problem when it fails to exhibit a conceptual difficulty of its predecessor.

In an important way, philosophy as it is done by academic philosophers also conforms to this model. As you will see from a quick scan of philosophy journals, philosophers are responding to current problems and debating matters of method and concept in a way that cumulatively sorts good solutions to philosophical problems from bad ones and leaves a body of accepted knowledge in its wake, albeit one that is confined to particular research traditions. If we set aside for a moment the term ‘science’ and think instead about ‘enquiry’, we find common ground in the epistemic nature of what chemists do and what moral philosophers do. This doesn’t exactly vindicate the scientistic notion that all research should be ‘scientific’, but it does suggest that there is something deeper about what scientists do that makes their enquiries into the world especially conducive to the production of good knowledge, and that one doesn’t need to use special instruments or mathematics to follow their lead on many issues not typically thought of as falling within the domain of science.

In other words, the notion that science will bring us all the answers is unforgivably ignorant and daft. But thoroughgoing, self-conscious, critical problem-solving carried out within a community of investigators committed to producing a kind of public knowledge need not be the sole province of STEM practitioners, but can provide a model for tackling all sorts of puzzles and dilemmas, including those that cannot be resolved in any final sense by reference to empirical fact.

[1]As exemplified by Thomas Kuhn, Imre Lakatos, and Larry Laudan, who all wrote extensively on the nature of scientific practice.

[2] In the social sciences, where theorising does not take place entirely within one settled paradigm or tradition, but rather features multiple competing, contrasting, and often non-interacting modes of enquiry, active reflection upon fundamental assumptions happens more frequently. Probably not frequently enough though.

[3] Indeed, Sam Harris’s own modest publication shows just such an engagement.

[4] For example, by the appearance of an anomaly in the course of research that needs to be investigated and theorised.

[5] For example, enquiries into gender or welfare or climate change that are driven by political or moral concerns related to society as a broader whole, but are still conducted ‘scientifically’ rather than, say, polemically.

The Many Kinds of Rationality!

The nature of rationality and its relationship to action features significantly in pretty much all social science theory and is an essential component of virtually all moral philosophy as well. In my main subfield, international relations, the orthodox view has been and still is that states or other forms of corporate international-level actors behave as ‘rational agents’, and that this rationality consists in attempts to maximise benefits, minimise costs, and otherwise select efficient means to ends. Nevertheless, there are other views, notably those coming from the constructivist tradition, which typically focus on reasoning oriented not around a ‘logic of consequences’ but a ‘logic of appropriateness’ – i.e., a focus on moral or deontic rationality.

The thing is, there are actually quite a few ways to think about rationality besides these two approaches, and while they have featured in a few important pieces of scholarship in my field, they are nevertheless not frequently discussed. One of the reasons for this might be that these alternative views of rationality stem largely from traditions in social theory and philosophy that most practicing political scientists do not normally engage with. While it is fair to allow research traditions to focus on some questions over others, and to prefer certain social ontologies (and their views of the subject) over others, I think it’s valuable to develop a broad literacy on such a central thing as rationality and action.

Besides, even non-academics would benefit from a wider view of what rationality is.

To this end, I list here in numbered form a brief overview of different facets, varieties, or conceptions of rationality as it has been theorised in sociology and philosophy.


  1. Instrumental rationality is what we students of politics are most used to. Also called teleological, goal-oriented, or strategic rationality, this is the form of reasoning in which actors conceive of a desired outcome, represent it to themselves (cf. Searle on intentionality), and attempt to make the world conform with that representation. Reason is thus a matter of efficiently selecting amongst means. Consequentialist ethics and game theory both rest upon an instrumental view of rationality.
  2. Value rationality is where reasoning is about aligning actions to rules. When actors engage their capacity for value rationality, they are reflecting upon how their conduct relates to their underlying moral principles. While it is possible to derive preferences based upon these principles on a situational basis, and indeed this is sort of what practical ethics is about, doing so runs the risk of missing the point. As Searle amusingly points out somewhere, if we were beings of pure instrumental rationality, then there would be some odds at which I’d bet my [hypothetical] child’s life for a quarter. And yet we see people willing to sacrifice everything for their principles. Reason is thus about rule-following and value commitments. Deontological ethics and the scholarship in my field on taboos against slavery or the use of chemical weapons rest upon value rationality.
  3. Practical rationality is what many in the recent ‘practice turn’ in the social sciences want to talk about. It is where reasoning is largely sub-intentional; action does not stem from deliberation upon means or upon norms, but is the engaging of skills or habits that the actor need not even think about. The stereotype here would be the craftsperson who works the loom or the lathe with an alacrity and precision that can only be the result of thousands of hours of practice, or the judo fighter who just knows when the time is right for a seoi nage. Reason is thus about performances, rather than about conscious reflection. Aristotle’s phronesis and Bourdieu’s concepts of habitus and field are examples of theories that rest upon practical rationality.
  4. Affective rationality is the means by which the actor seeks authentic expression of their feelings. This is the kind of thinking that the artist uses to determine which colours or brush-strokes to use. It’s the kind of rationality that led Michelangelo to claim that David already existed, and that it was simply a matter of chiselling the stone away to reveal him – and the kind of rationality that makes such an utterance sensical. Reason is thus a matter of self-expression.
  5. Theoretical rationality is, if I recall, what Weber refers to as the kind of rationality that the scientist deploys when studying the world. Theoretical rationality concerns the formation of concepts and cognitive representations. Abductive reasoning would be a great example, in that the actor is attempting to develop knowledge of the world rather than simply to follow rules of inference or spot patterns. Bayesian reasoning might also fall into this category.
  6. Communicative rationality is Habermas’s banner, and he develops it in a fascinating synthesis and discussion of basically all of the above. Communicative rationality is what the actor deploys in argumentation, where the goal is neither principled action nor strategic action, but truth-seeking and justificatory validity. It (according to Habermas) presupposes such things as the recognition of one’s interlocutors as rational fellow agents and a good-faith commitment to following the methods of logic; one cannot participate in discourse simply out of obligation or to manipulate others through rhetoric. Reason is thus the capacity to engage in authentic discursive interchange.
  7. Creative rationality is my favourite, and it is the view of reason that one finds in the American pragmatists and more recently in, inter alia, the superb book by Hans Joas, The Creativity of Action. The general idea is that there is a sort of spontaneity and innovation that underlies all action – bearing in mind that thought is, on the pragmatist account, itself a form of action. So rather than creativity being something that people have in measure, with the artist having more than the accountant, creativity is a quality in everyone’s life. Creative rationality is not so much a kind of rationality, therefore, as a way of viewing what rationality is. One example of creative rationality would be the pragmatist view of ends and means. As John Dewey noted, ends and means interact with one another. As we pursue our ‘ends in view’, we become aware of new capacities hitherto unrecognised. Thus new ends become achievable and old ones begin to change. Instrumentality still persists, but it becomes one part of a complex interchange of other kinds of rationality, ultimately revolving around an inherent human creativity: a capacity for re-orientation and reflection or meta-cognition that makes our struggles in the world ultimately stochastic. In other words, thinking is doing, and doing always involves some unpredictable spontaneity and novelty. Joas’s creative rationality is developed in a meticulous and, if I may be honest, rather difficult (for me, anyway) critical journey through over a hundred years of social theory, and takes into account all of the above forms of rationality.
  8. Other stuff, like maybe functional rationality, would be the reasoning that impels action in theories such as those of Talcott Parsons and perhaps Niklas Luhmann and to some degree Marx, where the actor isn’t so much treated as a reflecting agent whose behaviour stems from considered desires, values, or communicative relationships, but is understood in terms of its role in social system maintenance. In the same way as we explain the action of the heart and lungs in terms of their role in maintaining the life of the body, functionalists explain actors and their reasons in terms of their role in maintaining some kind of broader social whole. This is a curious view of rationality, in that it ultimately seems to deny the agency of the actor, reducing agents to what Giddens famously referred to as ‘structural dopes’ of ‘stunning mediocrity’.
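
The passing mention of Bayesian reasoning under theoretical rationality can be made concrete with a toy worked example; the probabilities below are invented purely for illustration.

```python
# Bayes' rule as a worked toy example: P(H|E) = P(E|H) * P(H) / P(E).
# All numbers here are illustrative assumptions, not from the text.
prior = 0.3             # P(H): initial confidence in hypothesis H
p_e_given_h = 0.8       # P(E|H): probability of the evidence if H is true
p_e_given_not_h = 0.2   # P(E|not H): probability of the evidence otherwise

# Total probability of the evidence, P(E), by the law of total probability
evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)

# Posterior confidence in H after seeing the evidence
posterior = p_e_given_h * prior / evidence

print(round(posterior, 3))  # 0.632
```

The point, in the terms above, is that the actor is not merely following rules of inference: they are revising a cognitive representation of the world in the light of evidence.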

Ends, Means, and John Dewey

Apropos of nothing beyond my appreciation for his phrasing and for the general idea he expresses, I quote here Dewey’s explanation of what he would come to call the difference between ultimate ends and ‘ends-in-view’ – that our goals are never final but rather are formed in continual conversation with the means available to us and our understanding of possibility.

‘[T]he process of growth, of improvement and progress, rather than the static outcome and result, becomes the significant thing. Not health as an end fixed once and for all, but the needed improvement in health—a continual process—is the end and good. The end is no longer a terminus or limit to be reached. It is the active process of transforming the existent situation. Not perfection as a final goal, but the ever-enduring process of perfecting, maturing, refining is the aim in living.’[1]


[1] MW 12:181

For Dewey, then, there is no such thing as instrumental rationality in the sense that it is usually understood, and nor is there a value rationality to be counterposed against it. There is adaptation and transformation, and thus there is only continual growth, as the passage of time, and creativity, as the essence of action.

What really is the difference between being an ‘agnostic’ and being an ‘atheist’?

Most people I speak to about this seem to think that they actually refer to positions on two different spectra. The first spectrum is between ‘gnosticism’ and ‘agnosticism’, or ‘knowing vs not-knowing’. The second spectrum is between ‘theism’ and ‘atheism’. Thus we can end up with something like the following 2×2 matrix:

(A)gnosticism \ (A)theism   Theism            Atheism
Gnosticism                  Gnostic Theist    Gnostic Atheist
Agnosticism                 Agnostic Theist   Agnostic Atheist

We could define the four possible options as follows:

  1. Gnostic Theist: Believes that there is at least one god and that it is possible to know this.
  2. Gnostic Atheist: Believes that there are no gods and that it is possible to know this.
  3. Agnostic Theist: Believes that there is at least one god and that it is impossible to know this.
  4. Agnostic Atheist: Believes that there are no gods and that it is impossible to know this.

The idea behind dividing the positions like this is to allow for differences in opinions as to whether divine entities exist, but also differences in opinions as to whether or not it is possible to know whether divine entities exist.

The main problem with this typology is that it contains a contradiction, a tautology, and a false premise. The contradiction is to assert a belief while simultaneously asserting that there is no reason to accept this belief as true. I should clarify that this is more of a pragmatic contradiction, in that while it may be formally possible to assert ‘I believe that P’ while also asserting ‘I believe that there is no reason to believe that P’, almost nobody would do this. Rather, the vast majority of people are likely to claim that they have a justification for every belief — that there is a reason to think that their beliefs are true.

Thus 1 and 2 are trivial, while 3 and 4 are contradictions, because basically everyone who makes a claim about the world believes that this claim can be justified, and anyone who didn’t would appear pretty irrational.

Of course, the reason why this typology looks so broken is that a core premise is itself false.

The false premise is that ‘(a)gnosticism’, as the possibility of knowing/not-knowing, can condition atheism in the way outlined above. Rather, let us for a moment instead say that (a)gnosticism refers to the degree of certainty that one has in their belief. Almost everyone is capable of meta-cognition, of looking at some belief P and coming up with an answer to the question, ‘how confident am I that P is true?’ This necessarily admits that it is possible for P to be true, but also that there are conditions, more or less likely, under which P would also be false.

This gives us a different range of possibilities:

  1. Gnostic Theist: Believes that there is at least one god, with high confidence.
  2. Gnostic Atheist: Believes that there are no gods, with high confidence.
  3. Agnostic Theist: Believes that there is at least one god, with low confidence.
  4. Agnostic Atheist: Believes that there are no gods, with low confidence.

However, this excludes one important position: the position that no justification exists for any belief about the divine. Hence:

  1. Theism (T): (it is warranted to claim that) at least one god exists
  2. Atheism (A): (it is warranted to claim that) no god exists
  3. Agnosticism (X): (it is warranted to claim that) no belief about the existence of gods can be justified*

As we can see, 3 is not simply a matter of knowing or not knowing that T/A, but is a substantively different proposition, one in which we can have more or less confidence. That is, we could believe that ‘agnosticism is true’ but with low confidence. So if we’re going to propose a typology, I propose this one:

  1. Theist, high confidence (by definition, ‘gnostic’ aka not agnostic)
  2. Atheist, high confidence (by definition, ‘gnostic’ aka not agnostic)
  3. Theist, low confidence
    1. tends towards agnosticism
    2. tends towards atheism
  4. Atheist, low confidence
    1. tends towards agnosticism
    2. tends towards theism
  5. Agnostic, high confidence
  6. Agnostic, low confidence
    1. tends towards theism
    2. tends towards atheism
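
The proposed typology is just a small classification scheme, so it can be sketched in code. This is a minimal illustration under my own naming choices (`Stance`, `label`); nothing here is a standard representation.

```python
# A sketch of the proposed typology: three substantive positions
# (theist, atheist, agnostic), each held with high or low confidence,
# with low-confidence stances optionally 'tending towards' a rival position.

from dataclasses import dataclass
from typing import Optional

POSITIONS = {"theist", "atheist", "agnostic"}

@dataclass
class Stance:
    position: str                # which claim the person takes to be warranted
    high_confidence: bool        # degree of certainty in that claim
    leans: Optional[str] = None  # for low-confidence stances only

def label(s: Stance) -> str:
    """Render a stance as a label in the typology."""
    if s.position not in POSITIONS:
        raise ValueError(f"unknown position: {s.position}")
    out = f"{s.position}, {'high' if s.high_confidence else 'low'} confidence"
    if not s.high_confidence and s.leans:
        out += f" (tends towards {s.leans})"
    return out

print(label(Stance("atheist", True)))              # atheist, high confidence
print(label(Stance("agnostic", False, "theism")))  # agnostic, low confidence (tends towards theism)
```

Note that high-confidence stances never carry a ‘tends towards’ qualifier, which mirrors the fact that only the low-confidence positions in the typology have sub-options.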

Besides all of this, though, there is another, somewhat less complicated approach. While I have just outlined what I think is the most logical way to break down the question, this is not necessarily a description of what is actually the case. Thus I advance two empirical hypotheses:

H1: There is often a significant practical difference between how people who would identify as theists or as atheists live their life, in terms of regular attendance at a place of worship, rituals such as prayer, or justifying fatalism or lack thereof in terms of God’s will.

H2: There is often very little practical difference between how people who would identify as agnostics or as atheists live their life, in that neither will attend places of worship, engage in prayer, or refer to divine will in any way.

I believe these hypotheses are largely true, and I believe, therefore, that the main difference between atheists and agnostics is, practically, how much they care about beliefs about the divine and whether or not they, within the context of their community, really want to be associated with other people who call themselves atheists – something that seems to matter a lot, actually, in places where declaring oneself to be an atheist can lead to marginalisation or punishment.


*Note that while A is not the same as Not-T, A nevertheless proposes something that is contradictory to what T proposes, and thus A XOR T rather than A OR T.

A random remark on cultural norms and targeted killing

I’ve occasionally run into the argument that once the fighting starts, cultural norms or the finer points of military ethics become less relevant. Rational calculations dictate actions on all sides, abrogated only by terror when terror rises. This may be the case in some situations, but it isn’t the case in all; indeed, I can think of an excellent example of a counter-insurgent military action that quite neatly illustrates the relationship between cultural norms and operational success.

On 8 May 1987, British special forces ambushed and killed an entire veteran IRA cell: the eight men of the East Tyrone Brigade, as they attempted to attack the Loughgall police station. The East Tyrone Brigade was a bit special; rather than stage ambushes, snipe at police patrols, or plant bombs, they preferred more spectacular and ‘kinetic’ attacks, using construction vehicles as bomb-bearing battering rams and leaping out in full force to rake their enemies’ positions with gunfire. So removing them from the scene appealed to the security forces in Northern Ireland.

Thus, when information came from one of the many double agents that the RUC and British intelligence services had cultivated within the IRA that an attack on Loughgall was in the works – the third such attack on an RUC station by the East Tyrone Brigade – it occurred to the people in charge that there might be a good way to prevent there from ever being a fourth. Enter the SAS.

The use of the SAS in Northern Ireland had been controversial. Earlier excess violence by the special forces unit had led to a five-year operational hiatus, while the 1985 killing of two IRA militants in Strabane had led to a public outcry after reports that SAS troopers had delivered a coup de grâce to each of the two men as they lay wounded and pleading for mercy. But with Loughgall, this sort of public relations problem could be averted.

Anyway, in rolls the East Tyrone Brigade, with their digger-with-a-big-bomb-in-the-bucket. They blast their way up to the police station, then shoot the building up. The building is, of course, empty, and nobody is about because it is the evening. Then up pops the SAS, who had been lying in wait around the site, and they do their thing, and thus ends the tale of this particular IRA cell.

The following day, Sinn Fein leader Gerry Adams stated, when asked his view of the rather lopsided engagement (no SAS casualties), ‘I believe that the IRA volunteers would understand the risks that they were taking.’ What else could he say? The IRA’s own narrative was that they were an army fighting in a war. They could hardly complain about losing men when those men were on an operation, and had themselves set out with bombs, guns, and the intent to kill members of the security forces.

Counter-terrorism or counter-insurgency is particularly sensitive to the broader propaganda or political messaging surrounding uses of force, and the way that the Irish public reacted to British military actions had an effect upon the strategic environment. By staging a ‘counter-ambush’ of the East Tyrone Brigade instead of just picking them up (or off) in their homes – which might have been safer, from an operational perspective – the SAS avoided being labelled as excessively violent or invasive (for the most part) while removing a significant terrorist threat.

Identity Crises and IR Scholars

What am I?

It’s very difficult for me to precisely orient myself and my work within political science as it is traditionally organised and conducted in North America. As those familiar with how most social science PhD programmes are structured in N.A. will know, students typically take major field exams in two subfields of their discipline. In my case, I ‘majored’ in international relations (IR) and ‘minored’ in comparative politics. I might have instead made my second field political theory, except that my theoretical interests are much closer to what might be called social or sociological theory. In fact, if I trundled over to the department of sociology later today and took the major field exam for sociological theory, I’d probably pass it, which is currently not something I could say of comparative politics. And while the things I am learning in the seminar on comparative politics that I am currently taking, to prepare me to write the exam in that subfield, help to give me a sense of what political science is, as a discipline, they have not helped me at all in my own research. Those few covered topics which do relate to my research are ones I’ve already read rather deeply on, while most other covered topics relate to things I don’t care to study, employing methods I don’t care to use.

When people who are generally unfamiliar with disciplinary distinctions in the academy ask me what I do, I usually answer ‘the sociology of war’. Does this mean that I’m distancing myself from political science, or misrepresenting myself? Does this indicate an academic identity crisis? What does this say about international relations scholars and scholarship, that I find myself identifying in this way? I think these are interesting questions to explore as a way of considering how my field and my discipline work, and what sort of knowledge we produce in studying international or global politics.

One important thing to remember about being part of a discipline is that it disciplines you. Comparative politics as a field is, basically, the kernel of American political science, and is in many respects what ‘political science’ is, in terms of what goes on in departments that carry this name. My journey through modernisation theory and the study of electoral politics might as well be through shards of broken glass, but these are the sorts of things that many of my colleagues do. And, quite saliently, my interest in methodology and the philosophy of social science does give me some motivation to look at the methods my colleagues use for their investigations. Critically. Because people like me are nudniks.

An interesting quality of IR is that it actually has quite a lot of space for scholars whose approaches and theoretical backgrounds more closely resemble those of philosophers and social theorists than of typical political scientists. Indeed, this is why there are a number of notable universities where IR is its own separate department, and not a subfield of political science. I have been fortunate enough to secure the supervision, in my work, of one of the more ‘sociological’ scholars in the field, and this goes a long way to making me feel secure in my position within the academy, and not to feel like I’m in the wrong university or the wrong department. And while I probably could also be content in the sociology department – it is perhaps worth noting that both my BA and my MA are neither in political science nor in sociology – it is the case that the study of war, particularly at the systemic or inter-/trans-national level, is traditionally carried out within IR.

However, many of the conditions that lead to my feeling comfortable calling myself a sociologist of war, and using my preferred methods of enquiry, while being a student in a political science department are partially idiosyncratic. They are a fortuitous confluence of having a particular kind of committee and having a few colleagues whose approaches bear an affinity to mine. Were I in another department, even (or perhaps especially) at an American Ivy League institution, there’s a good chance that these conditions wouldn’t obtain. Toronto is unusual in that respect, in that we have a number of IR scholars who draw upon social theory in ways I find interesting, but it’s also because it turns out that I have, like, two or three fellow PhD students to talk about my work with. And had his or my circumstances been a little different, I might not have ended up with such an appropriate supervisor.

I think this highlights the need for departments of political science to recognise that IR is unusually inter-disciplinary, and to make space for IR grad students to maximise their cross-disciplinary education, such as by allowing people like me to fulfil the second field requirement by taking field exams in other departments. It also means encouraging and facilitating scholars with degrees from other disciplines applying for IR jobs, and vice versa. I recognise that there are plenty of professional/institutional disincentives to this, but it is a possibility worth discussing.