Until several years ago, researchers in the humanities and social sciences were blessed with a sort of serenity amidst an otherwise frantic academia. Most of them, including ourselves, had never heard of ‘impact factor’, let alone ‘h-index’. Our research outputs had been, and in fact still are, underrepresented in citation databases such as Scopus and Web of Science, but we could not care less. We would love to be left alone in peace, aloof from distractions such as bibliometrics, rankings and even grant writing.

“Those were such happy times, and not so long ago,” to use the words of a familiar song. Amid the competition for excellence among top universities around the world, there is a growing demand for visibility and accountability of research performance in every field, including the humanities and social sciences. The position of the humanities and social sciences is particularly precarious in Japan. In June 2015, the Japanese Ministry of Education, Culture, Sports, Science and Technology sent a notification to all national universities urging them to “actively try to abolish organizations of humanities and social sciences or to convert them to those of fields with higher societal demands”[1]. If Japanese researchers in the humanities and social sciences, including ourselves, are not to be left out, we should care more about the visibility and accountability of our research performance.

It is known, however, that the humanities and social sciences have been underrepresented in citation-based metrics mainly because of two technical limitations: the language and the type of publication. Citation databases currently contain very little data from papers written in languages other than English, yet in some fields, such as Japanese literature and German law, papers written in Japanese or German are generally more highly esteemed than those in English. In addition, in the humanities and social sciences, books often have more influence on the scholarly community than journal papers. Elsevier and Thomson Reuters are aware of these limitations and have already started to address them[2], but there is still a long way to go. Lacking sufficient data from papers written in languages other than English and from books, current citation-based metrics manage to capture only a fraction of the actual research performance in the humanities and social sciences.

As a solution to the language limitation, we propose a new way of quantifying one aspect of the quality of an academic journal, simple enough to be applied to any field of study, including the humanities and social sciences, and to any language, at little cost and effort. We call it the ‘Diversity Factor’ (DF). At present, DF is calculated by the following formula: DF = (A + C) / I, where ‘A’ stands for the number of distinct affiliations of the journal’s contributors in a given year, ‘C’ for the number of distinct countries in which those institutions are located, and ‘I’ for the number of issues published in that year. DF thus quantifies the diversity of contributors per issue in terms of their institutional affiliation and international distribution.
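
To make the formula concrete, here is a minimal sketch in Python of how DF could be computed; the function name, record format and toy data are our own illustration, not part of any existing tool or dataset.

```python
# A minimal sketch of the DF calculation, assuming contributor records
# are available as (affiliation, country) pairs for one journal-year.

def diversity_factor(contributors: list[tuple[str, str]], issues: int) -> float:
    """Compute DF = (A + C) / I for a journal in a given year.

    contributors -- (affiliation, country) pairs, one per contributor
                    listed in that year's tables of contents
    issues       -- number of issues (I) published that year
    """
    affiliations = {a for a, _ in contributors}  # A: distinct affiliations
    countries = {c for _, c in contributors}     # C: distinct countries
    return (len(affiliations) + len(countries)) / issues

# Toy example: five contributors from four distinct affiliations in three
# countries, spread over four issues, give DF = (4 + 3) / 4 = 1.75.
toy_contributors = [
    ("University of Tokyo", "Japan"),
    ("University of Tokyo", "Japan"),
    ("Kyoto University", "Japan"),
    ("Heidelberg University", "Germany"),
    ("Harvard University", "United States"),
]
print(diversity_factor(toy_contributors, issues=4))  # 1.75
```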

We believe that DF does not substitute for Impact Factor (IF), but rather supplements it. IF quantifies the impact that a journal has on a scientific community. DF captures a different quality of an academic journal, based on the assumption that journals whose contributions come from, and are peer-reviewed by, a more diverse set of researchers should be weighted more heavily than those drawing on a less diverse set (e.g. a single institution or school).

While IF depends on citation data, which requires intricate processing of vast amounts of bibliometric information and therefore enormous cost and effort, DF relies on no more than a journal’s table of contents and a list of contributors with their affiliations. The formula above can be calculated easily, even manually, by bibliometric data specialists and the rest of us alike.

Fig. 1: Diversity Factor (blue) and Impact Factor (red) compared

The blue bars in Figure 1 above illustrate how journals might be measured by DF. It turns out that DF does not largely contradict researchers’ qualitative and more or less tacit evaluation of these journals.

DF also appears to be reasonably stable over time, as can be seen in Figure 2 below.

Fig. 2: Changes in Diversity Factor over time

Comparison with IF (the red bars in Figure 1) confirms that DF does not essentially contradict IF either, yet at the same time it presents a remarkably different landscape in that it makes visible what IF leaves invisible.

[1] The original Japanese text reads: 組織の廃止や社会的要請の高い分野への転換に積極的に取り組むよう努める.
[2] The Books Expansion Project by Elsevier and the Book Citation Index by Thomson Reuters.