What’s in a ranking? Considering the implications of the 2016 draft APSA journal rankings

At the moment I am revising my chapter on Qualitative Methods for the fourth edition of the textbook Theory and Methods in Political Science. Today I was also invited to referee a paper for the American Political Science Review, for the first time. These two events point to a cognitive dissonance in my professional life, and in political science more generally. As a political scientist based in Australia who considers herself methodologically pluralist, I am starting to worry about the declining appreciation and reward of qualitative methods and analysis. The release last week of the 2016 draft Journal Rankings by the Australian Political Studies Association (AusPSA) has only heightened my worries.


But first some context. In 2000 a debate started within the powerful American Political Science Association (APSA) that signalled dissatisfaction with the mainstream, dominant quantitative approaches in political science. The movement came to be known as ‘Perestroika’, after the pseudonym used by an anonymous email writer, ‘Mr Perestroika’, who questioned the continuing dominance of positivist, quantitative methods within the most powerful journals, editorial boards and associations in the discipline. The aims of the movement converged around the need for methodological pluralism and political science research that was relevant to the real world. Broad successes and ‘incremental reforms’ from the Perestroika movement included founding a qualitative methods section in APSA, and democratising the leadership of APSA and its premier journal, the American Political Science Review (APSR), for example to include more women and more people from non-elite research universities. The journal Perspectives on Politics was launched by APSA in the wake of Perestroika, intended to open up political science to new (or very old) epistemologies, methods, and questions. However, in 2010 Dvora Yanow and Peregrine Schwartz-Shea argued in PS that there remained a “poor record of methodological inclusivity, diversity or pluralism in the APSA’s flagship (APSR) and other journals”. Greg Kasza, in the same issue of PS, examined whether top journals had started to publish more articles using qualitative data and analysis, which included case studies, interviews, focus groups, comparative history, participant observation, and interpretivism. Of 92 articles in the American Political Science Review, only 4 used qualitative data, though another 22 were political philosophy or theory with no data; of 64 articles published in the American Journal of Political Science, none were qualitative and 3 were theory-based.
This is marginal change from an earlier study by Andrew Bennett, Aharon Barth, and Kenneth Rutherford in PS in 2003, which found that only 1% of articles on US domestic politics published in the top journals were qualitative. In area studies-focussed comparative work and in international relations, they found that qualitative analysis fared better, appearing in a majority of articles in journals such as Comparative Politics, Political Science Quarterly, International Organization, and World Politics. The authors were concerned, called for more inclusion of qualitative political science in top journals, and suggested that more extensive teaching of qualitative data collection and analysis was also necessary.

The data above are now out of date, and by all accounts the inclusion of qualitative analysis in these top journals is in further decline. For example, Perspectives on Politics published a Perestroika retrospective in 2015, in response to an article by John Gunnell. While there was some agreement that change had occurred in the discipline, including a broader acceptance of methodological pluralism and interpretive qualitative methods, there was still little evidence that qualitative political science had achieved new respect or better publication outcomes. If anything, there is increasing evidence that the Perestroika movement was short-lived and barely influential on mainstream political science.

In 2010 Yanow and Schwartz-Shea pointed to the increased focus in many high-impact journals on requiring that research data be made available for future replication. Sharing primary data sources is more challenging in qualitative research than supplying a de-identified quantitative dataset, often for ethical reasons and to preserve the anonymity of research subjects. Yet by 2015, 27 journals had signed on to DA-RT, a commitment to increasing data access and research transparency as a social science standard. One of the few vocal opponents of DA-RT is Jeffrey Isaac, the editor of Perspectives on Politics, on the grounds that it unnecessarily narrows the mission of political inquiry. The next step in increased transparency is pre-registration, whereby authors pre-submit their theory, hypotheses, and intended empirical analysis before they submit an article for review. Where does inductive, qualitative analysis fit in this model of doing social science research? In a recent issue of Comparative Political Studies, a pilot study of pre-registration and “results-free review” was published. Unsurprisingly, no qualitative researchers volunteered to participate in this pilot. The authors, Michael Findley, Nathan Jensen, Edmund Malesky and Thomas Pepinsky, note that:

“Qualitative case studies, comparative historical analyses, and other similar types of research cannot be preregistered and that results cannot be removed from case studies. These are types of research in which scholars generally accept that theories and arguments are informed by the interaction between a researcher’s initial hypotheses—in some cases little more than hunches—and the specifics of a case. We believe this type of work is a valuable contribution to political science scholarship, but we can imagine the complexity of submitting this work preregistered and/or results free” (p. 24).

and then later:

“It is hard to escape the conclusion, though, that any requirement that research manuscripts have been preregistered will almost certainly affect the types of submissions that a journal receives. One possible consequence is a bifurcation of publication outlets, and as a result, of researchers. One set of researchers adheres strictly to a normal science template to produce manuscripts that are eligible for journals that insist on results-free review, while others adhere to and are assessed on a very different set of standards in a different set of journals. For the discipline as a whole, this would almost certainly generate divisions and inequalities” (p. 26).

So what does all this mean for Australian-based political science, and the recently released ranking of journals by quality? Quite a lot. AusPSA’s Journal Ranking reveals a substantial amount about which journals are valued, but also about norms on what kinds of methods political scientists ought to use. The likely bifurcation of publication outlets and the divisions noted by Findley et al. above are only being consolidated by the journal list. However, it will affect some political science sub-disciplines much more than others.

It has long been acknowledged that the study of politics in Australia has been underpinned by a qualitative tradition, more or less inherited from the UK. For example, Marian Sawer and Jennifer Curtin have an article in press with European Political Science in which they present data on the broad methods used by 95 Professors at Australian universities: only eight use mainly quantitative methods, all of them in politics rather than international relations, and six of the eight are men. Sawer and Curtin acknowledge that this imbalance will change, as in recent years there has been more recruitment of US-trained political scientists at junior levels in some universities.

The 2016 AusPSA journal list ranks over 600 journals on a four-point scale from A* to C; 31 journals, or around 5%, receive the ranking of A*, and about 90 are ranked A. These rankings achieved discursive significance when they were first used in the Australian university context in the 2010 Excellence in Research for Australia (ERA) survey of discipline-based research quality, and again when AusPSA released a new Journal Ranking list in 2013. By discursive significance I mean that ‘A*’ and ‘A’ articles started to appear in promotion, job and grant applications, with little reflection on what those classifications meant or were indeed measuring. There has also been anecdotal evidence that some scholars were being told to publish only in A* or A ranked journals. And there has been path dependency in the construction and legitimacy of the list since the 2010 ERA, with only small changes made each time as journals are promoted or demoted: in 2016 about 15 new journals were added, and around 40, or 6%, had their rank changed.

My primary argument is not that we should have no list at all, but that we should have a realistic conversation about how such a list is constructed, and what it means in the context of an Australian-based political science that still largely uses qualitative methods.

I categorised the 31 A* journals into four core sub-disciplines:

  1. comparative politics – 13 journals
  2. international relations – 11 journals
  3. public policy – 6 journals
  4. political theory – 1 journal

I then coded whether the journals regularly publish articles that include qualitative methods. Only four of the 13 journals coded as comparative politics do (African Affairs; European Journal of Political Research; Perspectives on Politics; and Political Geography), severely limiting the capacity of most qualitative comparativists in Australia to publish in a top-ranked outlet. In contrast, the international relations and public policy A* journals have strong quantitative traditions of research but remain more pluralist, and are thus more likely to publish qualitative analysis. Constructivist, interpretive and historical case study research and analysis are mainstream in these two sub-disciplines, in Australia and internationally.

At this stage, what can be done? I am still unsure about the best options for dealing with the systematic bias in the Journal Ranking list, but three come to mind; they are neither exhaustive nor mutually exclusive.

  1. Sub-disciplines should have more say on their journal rankings. From the outside there seems to be more consensus in International Relations and Public Policy on the highest quality journals in the field. Comparative Politics (including Australian politics) needs to have this discussion, as does Political Theory.
  2. The A* list could be expanded to deliberately include more pluralist comparative politics journals.
  3. The A* and A list could be combined to have only three rankings: A, B and C. This still amounts to only 20% of the list overall receiving the highest ranking, but also becomes more inclusive of a range of approaches to political science undertaken in Australia.

As noted earlier, the Perestroika movement has not made great gains in changing the dominant paradigm in US-based political science; and with data access, research transparency and pre-registration looming as new norms, qualitative research will become even more marginal. These new pathways will have significant effects on future generations of political scientists, as the expectations placed on them increase and their research options narrow. Australia does not need to choose the narrow path; we can better celebrate our pluralism by more realistically ranking the journals our top comparative politics scholars are likely to publish in.

Ariadne Vromen is Professor of Political Sociology at the University of Sydney