Insights from the American Philosophical Association’s Journal Surveys Project

Introduction

The Journal Surveys Project of the American Philosophical Association offers an outlet for philosophers to report their experiences with journals. Authors can submit information about how long a journal took to review their manuscript, what their experience with the editors was like, whether they received comments and of what quality, and whether the manuscript was accepted for publication. Since its creation by Andy Cullison in 2009, over 9,400 surveys have been completed as of April 2024.

There are drawbacks to using author-reported surveys to glean insights into journal metrics: we do not know how accurate the surveys are, and most journals do not publish the statistics that would allow us to verify the survey results. For example, the acceptance rate for most journals is widely believed to be overstated in the surveys (see Jonathan Weisberg’s ‘Visualizing the Philosophy Journal Surveys’).

However, it is reasonable to assume that the survey results correlate consistently with actual journal performance; that is, the best-performing journals in the Journal Surveys are likely to be the best-performing journals in practice. Hence, valuable insights can be drawn simply from comparing journals’ relative performance in the Journal Surveys. Consider a researcher who wants to send their manuscript to a venue that will review it quickly: while the Journal Surveys may not provide an accurate estimate of how long ‘quick’ is, they can provide a reasonable estimate of which journals will be ‘quicker’ than others. As such, this report uses the data obtained from the Journal Surveys to analyse which are the best (and worst) performing journals relative to one another in the following areas: response length, comment chances, quality of comments, and experience with journal editors.

Setup

All survey responses are readily available and easily exported from the Journal Surveys site, which is great for anyone looking to use them as the basis for further analysis. However, there are several issues with the raw data which, if used to create journal-level metrics, would undermine their credibility. Problems identified include:

  • Some surveys report that no comments were received but also record a value for the quality of comments (231 cases).
  • Some surveys report response times of 0 months (280 cases).
  • Seven surveys report review times of more than five years (one of which was 2,911 months).
  • In several columns, both ‘0’ and empty cells are used to represent missing data.
  • Several journals have entries listed under slightly different names such as:
    • ‘Ergo’, ‘Ergo an Open Access Journal of Philosophy’, ‘Ergo: an Open Access Journal of Philosophy’
    • ‘Philosophers Imprint’, ‘PhilosophersImprint’
  • Note: PhilPapers’ David Bourget, who hosts the Journal Surveys site, has since deployed several improvements to the Journal Surveys addressing some of these issues. These include:
    • Merging journal duplicates and making changes that will prevent future duplicates from being created.
    • Adding validation to the review duration field requiring the value to be greater than 0.

Some of these issues are less problematic than others and can be fixed simply by removing certain surveys from the overall data or merging entries for journals with name variants. For others, the impact hinges on how journal-level metrics are calculated. For example, the ‘Comment Quality’ column has valid values of ‘1, 2, 3, 4, 5’ but uses both ‘0’ and ‘NULL’ to indicate that the survey respondent did not answer the comment quality question. If the journal metric is calculated over all integer values, the ‘0’ entries will be included and will incorrectly lower the overall average.
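To make the difference concrete, here is a minimal SQL sketch; the table and column names (surveys, journal, comment_quality) are my own placeholders rather than the actual export schema. Standard SQL’s AVG already ignores NULLs, so the distortion comes only from the literal ‘0’ scores:

```sql
-- Hypothetical schema: surveys(journal, comment_quality, ...)
-- AVG skips NULLs, so only the literal '0' entries distort the mean.
SELECT
    journal,
    AVG(comment_quality) AS avg_including_zeros,              -- '0' rows pull the average down
    AVG(CASE WHEN comment_quality > 0
             THEN comment_quality END) AS avg_excluding_zeros -- '0' treated as missing
FROM surveys
GROUP BY journal;
```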

To ensure a robust basis for the analysis, the raw data has been cleaned by removing problematic entries, merging journal name variants, and standardising ‘0’ and null values (and, where it is unclear, removing surveys that might distort the data). The cleaned data for surveys completed before March 2024 is accessible here. The simple SQL script used is also available here for anyone who wants to conduct a similar analysis with the most recent data available.
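By way of illustration, here is a minimal sketch of the kind of cleaning steps described above; the table and column names (surveys, journal, review_months, comment_quality) are assumptions rather than the actual schema used by the script:

```sql
-- Merge journal name variants onto a single canonical name.
UPDATE surveys
SET journal = 'Ergo'
WHERE journal IN ('Ergo an Open Access Journal of Philosophy',
                  'Ergo: an Open Access Journal of Philosophy');

-- Drop implausible review times: zero-month responses and reviews longer than five years.
DELETE FROM surveys
WHERE review_months <= 0
   OR review_months > 60;

-- Make '0' and NULL uniform: a '0' comment-quality entry means "no answer", not a score.
UPDATE surveys
SET comment_quality = NULL
WHERE comment_quality = 0;
```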

The clean dataset contains the results of 8,512 surveys. However, many of these were submitted some time ago; only 1,568 surveys (18.42%) were submitted in the last 5 years. While older data might still be relevant to current journal operations, particularly as philosophy journal editors tend to have long tenures and journal practices change infrequently, I will focus on the last 5 years.

Figure 1 – Annual Survey Responses 2019-2023

Each year averaged 313.6 surveys. 2021 saw a reduction in the number of people submitting surveys, from 314 in 2020 down to 184 in 2021, but numbers have since recovered to record highs, with 481 surveys completed in 2023. I would speculate that the 2021 slump was related to the pandemic, but I have no means of corroborating that claim. Otherwise, the surveys seem to be more popular than ever, and if this trend continues, it will make future results increasingly reliable.

Survey responses are spread across 97 journals, an average of 16.2 per journal. However, they are not evenly distributed, and a small number of journals account for a large proportion of the overall responses; the 10 journals with the most responses account for 42% (660) of all surveys in the last 5 years. Conversely, many journals have only a handful of surveys; 52 journals (53.6%) had fewer than 10 surveys in the last 5 years. This makes journal-level metrics unreliable for journals with a limited number of surveys.

To create metrics with a modest degree of credibility, the focus will be restricted to the journals with the most surveys. The cut-off is somewhat arbitrary: only 19 journals recorded more than 30 completed surveys in the last 5 years, and only those journals feature from here on. While the threshold could be higher, I am assuming that 30 surveys are enough to produce reasonably trustworthy results. Those journals are: Philosophical Studies (P Studies), Analysis, Australasian Journal of Philosophy (AJP), Synthese, Ergo, Philosophical Quarterly (PQ), Philosophy and Phenomenological Research (PPR), Philosophers’ Imprint (Imprint), Nous, Journal of Ethics & Social Philosophy (JESP), Mind, Ethics, Pacific Philosophical Quarterly (PPQ), Journal of Philosophy (JoP), European Journal of Philosophy (EJP), Journal of Moral Philosophy (JMP), Canadian Journal of Philosophy (CJP), Philosophy and Public Affairs (PPA), and the Journal of the American Philosophical Association (JoAPA). These are ordered by descending number of surveys; the number of surveys for each journal is depicted in Figure 2.

Figure 2 – Survey Responses by Journal 2019-2023

Lastly, it is worth noting that if the journals receiving the most submissions are also those with the most survey responses, then these are likely the journals whose results most philosophers will be interested in.
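For completeness, the selection step can be expressed with the same hypothetical schema used above: count each journal’s surveys from the last five years and keep only those above the threshold (the survey_year column is an assumption):

```sql
-- Journals with more than 30 completed surveys between 2019 and 2023.
SELECT journal, COUNT(*) AS surveys_2019_2023
FROM surveys
WHERE survey_year BETWEEN 2019 AND 2023
GROUP BY journal
HAVING COUNT(*) > 30
ORDER BY surveys_2019_2023 DESC;
```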

Results of the Journal Surveys: Response Length

Survey respondents are asked about the ‘Initial verdict review time’ for their submitted manuscript. The answer is recorded in months, with decimals used to represent partial months. Across all surveys, the average time for journals to return the initial verdict was 3.22 months. 866 surveys (55.3%) reported a response time of less than 3 months, 468 (29.8%) reported between 3 and 6 months, and 234 (14.9%) reported longer than 6 months. Individual journal averages are depicted in Figure 3.

Figure 3 – Average Response Time by Journal (in months)
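As a rough sketch of how per-journal figures like these can be reproduced from the cleaned data (again using assumed column names), the averages and the three buckets quoted above reduce to a single grouped query:

```sql
-- Average review time and the share of surveys in each response-time bucket, per journal.
SELECT
    journal,
    AVG(review_months) AS avg_months,
    AVG(CASE WHEN review_months < 3 THEN 1.0 ELSE 0 END)             AS share_under_3_months,
    AVG(CASE WHEN review_months BETWEEN 3 AND 6 THEN 1.0 ELSE 0 END) AS share_3_to_6_months,
    AVG(CASE WHEN review_months > 6 THEN 1.0 ELSE 0 END)             AS share_over_6_months
FROM surveys
WHERE survey_year BETWEEN 2019 AND 2023
  AND review_months IS NOT NULL
GROUP BY journal
ORDER BY avg_months;
```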

Journals that recorded a notably lower average than the overall survey average include Ergo with an average time of 1.47 months, Analysis with 1.75 months, and Philosophy and Public Affairs with 1.75 months. While one might expect Analysis to have the quickest response times, since the journal only publishes papers of 4,000 words or fewer, Ergo has the lowest overall average. Moreover, Ergo and Philosophy and Public Affairs are the only two journals of the 19 where no survey reported a response taking longer than 6 months.

Journals that recorded a notably higher average than the overall survey average include Pacific Philosophical Quarterly with an average time of 7.05 months and the Journal of Philosophy with 5.55 months. Pacific Philosophical Quarterly’s high average is largely due to 2021, when its average climbed to 9.77 months. It has since come down to 4.6 months in 2023, which, although greatly improved, would still place it among the worst performers. The Journal of Philosophy’s response times are consistently on the lengthier side, with each year’s average topping 5 months.

Results of the Journal Surveys: Comment Chances

Survey respondents are asked about the ‘Sets of reviewer comments initially provided’, where the choices of response are: ‘0’, ‘1’, ‘2’, ‘3’, ‘4’, or ‘5’. The results of this question can be used to determine what proportion of surveys received comments from the journal on the submitted manuscript. Across all surveys, 67% reported receiving one or more sets of reviewer comments. Individual journal results are depicted in Figure 4.

Figure 4 – % of Manuscripts Receiving Comments by Journal
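The comment-chance figure can be sketched the same way, treating a survey as having received comments when it reports one or more sets (column names remain assumptions):

```sql
-- Share of surveys per journal that report at least one set of reviewer comments.
SELECT
    journal,
    AVG(CASE WHEN comment_sets >= 1 THEN 1.0 ELSE 0 END) AS comment_rate
FROM surveys
WHERE survey_year BETWEEN 2019 AND 2023
  AND comment_sets IS NOT NULL   -- exclude surveys that left the question blank
GROUP BY journal
ORDER BY comment_rate DESC;
```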

Journals that recorded a notably higher result than the overall survey result include Synthese with 99% of manuscripts returned to authors with comments, Journal of the American Philosophical Association with 90%, the Australasian Journal of Philosophy with 89%, and Analysis with 85%.

Journals that recorded a notably lower result than the overall survey result include Philosophy and Phenomenological Research with 26% of manuscripts returned to authors with comments, the Journal of Moral Philosophy with 28%, Pacific Philosophical Quarterly with 31%, and Philosophers’ Imprint with 31%.

One might expect review times to correlate with comment percentage: sending more papers out for review should increase the average review time, while desk rejecting more papers should decrease it. This holds for some journals: Synthese’s average review time is on the lengthier side but it provides comments on 99% of manuscripts, while Philosophers’ Imprint’s average review time is on the shorter side but it provides comments on only 31% of manuscripts. However, for many journals, factors beyond the proportion of papers sent for external review must be at play; journals with quick review times and a high comment percentage include Ergo, the Journal of the American Philosophical Association, Analysis, and Philosophical Quarterly.

Results of the Journal Surveys: Comment Quality

Before diving into individual journal results for comment quality, it is worth noting that because many journals provide comments on only some manuscripts, the 19 journals chosen as the focus (those with the most completed surveys) are not necessarily those with the most responses on comment quality. The number of surveys with a value for comment quality is depicted for each journal in Figure 5. It shows that most journals see a significant drop in the number of surveys with comment-quality values, which detrimentally impacts the reliability of those journal averages. To remain consistent, the same 19 journals will be compared, but for those on the lower side, caution is advisable.

Figure 5 – Surveys with answers to ‘Comment Quality’ by Journal

Survey respondents are asked about the ‘Quality of reviewer comments’ provided on their submitted manuscript. The choices of response are: ‘1 (Very poor)’, ‘2 (Poor)’, ‘3 (OK)’, ‘4 (Good)’, and ‘5 (Excellent)’. Across all surveys, the average score for comment quality was 3.44 out of 5. Individual journal averages are depicted in Figure 6.

Figure 6 – Average Comment Quality (out of 5) by Journal
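A comment-quality sketch along the same lines, which also surfaces the sample-size caveat from Figure 5 by reporting how many non-null quality responses sit behind each average (column names are assumptions):

```sql
-- Average comment quality per journal, alongside the number of responses behind it.
SELECT
    journal,
    COUNT(comment_quality) AS quality_responses,    -- COUNT(col) ignores NULLs
    AVG(comment_quality)   AS avg_comment_quality   -- so does AVG(col)
FROM surveys
WHERE survey_year BETWEEN 2019 AND 2023
GROUP BY journal
ORDER BY avg_comment_quality DESC;
```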

Journals that recorded notably higher averages than the overall survey average (each with more than 60 responses to the comment quality question) include Synthese with an average score of 3.72 out of 5 and Analysis with 3.52. Journals that recorded notably lower averages than the overall survey average (again, each with more than 60 responses) include the Australasian Journal of Philosophy with an average of 3.17 out of 5 and Philosophical Studies with 3.18. While Philosophers’ Imprint, Pacific Philosophical Quarterly, and Philosophy and Phenomenological Research performed poorly, they were also among those with the fewest responses to the comment quality question, with 15, 11, and 14 respectively.

Results of the Journal Surveys: Experience with Editors

Survey respondents are asked about their ‘Overall experience with editors’. The choices of response are: ‘1 (Very poor)’, ‘2 (Poor)’, ‘3 (OK)’, ‘4 (Good)’, and ‘5 (Excellent)’. Across all surveys, the average score for experience with editors was 3.51 out of 5. Individual journal averages are depicted in Figure 7.

Figure 7 – Average Experience with Editors (out of 5) by Journal

Journals that recorded a notably higher average than the overall survey average include Synthese with an average score of 4.24 out of 5, Analysis with 4.01, and the Journal of the American Philosophical Association with 4.

Many journals recorded a lower average than the overall survey average, from Philosophers’ Imprint at 3.21 down to Ethics at 2.94, but Pacific Philosophical Quarterly stands out with an average score of 2.03. Unlike its response times, where one year had a sizable impact on the journal’s overall performance, Pacific Philosophical Quarterly is consistently one of the worst performers on this measure each year.

Conclusion

As I stated at the start, all one can hope to discern from this report with any degree of credibility are comparisons between journals. For the 19 journals compared, one can look at individual journal performance and draw conclusions about where a manuscript is likely to be reviewed more quickly than elsewhere, where it is more likely to be returned with comments, where those comments are likely to be of higher quality, and where the experience with the journal’s editors is likely to be more pleasant. To make assessing relative journal performance easier, the results contained within this report have been placed into an interactive table here.

There are many ways the analysis in this report could be improved. The simplest one is increasing the sample size used to create journal averages. Anyone is free to submit their experience with a journal and doing so will both improve the accuracy of journal averages and allow for new journals to be included in future reports of this kind.

However, if we want to understand journal operations beyond their relative performance compared to other journals (i.e., to know how long a review is likely to take with any given journal), another approach is required. The most obvious would be for more journals to provide this information themselves, either on their websites or as part of another survey project (the Philosophy Journal Insight Project will be running this type of survey in the summer of 2024).

Sam Andrews

Sam Andrews is a recent PhD graduate from the University of Birmingham who specializes in Metaphysics, Epistemology, and the Philosophy of Science. He is the director of the Philosophy Journal Insight Project, a project that aims to improve the transparency of journal practices and operations.
