NSF Funding for Economics Research: Good or Bad?

The latest Journal of Economic Perspectives includes a pair of papers debating the social value of economics research funding from the National Science Foundation, featuring Robert Moffitt from Johns Hopkins and Tyler Cowen and Alex Tabarrok from George Mason. The abstracts of their respective viewpoints follow:

Robert Moffitt: “In Defense of the NSF Economics Program”
The NSF Economics program funds basic research in economics across all its disparate fields. Its budget has experienced a long period of stagnation and decline, with its real value in 2013 below that in 1980 and having declined by 50 percent as a percent of the total NSF budget. The number of grants made by the program has also declined over time, and its current budget is very small compared to that of many other funders of economic research. Over the years, NSF-supported research has supported many of the major intellectual developments in the discipline that have made important contributions to the study of public policy. The public goods argument for government support of basic economic research is strong. Neither private firms, foundations, nor private donors are likely to engage in the comprehensive support of all forms of economic research if NSF were not to exist. Select universities with large endowments are more likely to have the ability to support general economic research in the absence of NSF, but most universities do not have endowments sufficiently large to do so. Support for large-scale general purpose dataset collection is particularly unlikely to receive support from any nongovernment agency. On a priori grounds, it is likely that most NSF-funded research represents a net increase in research effort rather than displacing already-occurring effort by academic economists. Unfortunately, the empirical literature on the net aggregate impact of NSF economics funding is virtually nonexistent.

Tyler Cowen & Alex Tabarrock: “A Skeptical View of the National Science Foundation’s Role in Economic Research
We can imagine a plausible case for government support of science based on traditional economic reasons of externalities and public goods. Yet when it comes to government support of grants from the National Science Foundation (NSF) for economic research, our sense is that many economists avoid critical questions, skimp on analysis, and move straight to advocacy. In this essay, we take a more skeptical attitude toward the efforts of the NSF to subsidize economic research. We offer two main sets of arguments. First, a key question is not whether NSF funding is justified relative to laissez-faire, but rather, what is the marginal value of NSF funding given already existing government and nongovernment support for economic research? Second, we consider whether NSF funding might more productively be shifted in various directions that remain within the legal and traditional purview of the NSF. Such alternative focuses might include data availability, prizes rather than grants, broader dissemination of economic insights, and more. Given these critiques, we suggest some possible ways in which the pattern of NSF funding, and the arguments for such funding, might be improved.

The Editorial Process in Economics and Social Sciences

Marc Bellemare offers some thoughts about the editorial review process in economics and the social sciences…from an editor’s perspective. His insights are helpful for new and younger scholars, and a good reminder for those who are more seasoned.

On May 1, I will become editor of Food Policy, replacing the University of London’s School of Oriental and African Studies’ Bhavani Shankar, and sharing the role of editor with the University of Bologna’s Mario Mazzocchi, serving for an initial term of three years.

Given that, I thought now would be as good a time as any to write my thoughts about the editorial process. This will allow me to go back to these thoughts once my term as editor ends, to see what else I might have learned. So here goes–in no particular order–some thoughts I’ve accumulated on the editorial process in the social sciences. I hope others with editorial experience can chime in with their own additional thoughts in the comments.

The (Fake) Academic Publishing Game

Last month Vox reported on a “scientific paper” written by Maggie Simpson, et al., being accepted by two scientific journals. The paper, a spoof generated by engineer Alex Smolyanitsky using a random text generator, was allegedly peer reviewed and accepted for publication by two of the many for-profit open access science journals that have sprung up over the past decade. The article (here) provides a nice overview of how rampant the trolling by fake scientific journals has become and some of the economic incentives behind them.

If you’re in academia, you probably receive email solicitations from these predatory journals regularly. I probably delete a handful of solicitations per day from such journals. I just assumed they were bogus, but the Vox article also provided a link to a useful listing of suspected predatory publishers created by Jeffrey Beall. Sure enough, my most recent email was from one of the publishers on this list.

While the article focuses on the problems these journals create for trust in scientific publications, the credibility of real peer reviewed scientific research, and the evaluation of a given scholar’s publication record, it fails to mention the complementary cause of the problem…

Research Productivity of New Economics PhDs

The Economist posted a blog entry last week about the research productivity of new economics PhDs. It points to a recent paper by John Conley and Ali Sina Önder in the Journal of Economic Perspectives. Below is the abstract:

We study the research productivity of new graduates from North American PhD programs in economics from 1986 to 2000. We find that research productivity drops off very quickly with class rank at all departments, and that the rank of the graduate departments themselves provides a surprisingly poor prediction of future research success. For example, at the top ten departments as a group, the median graduate has fewer than 0.03 American Economic Review (AER)-equivalent publications at year six after graduation, an untenurable record almost anywhere. We also find that PhD graduates of equal percentile rank from certain lower-ranked departments have stronger publication records than their counterparts at higher-ranked departments. In our data, for example, Carnegie Mellon’s graduates at the 85th percentile of year-six research productivity outperform 85th percentile graduates of the University of Chicago, the University of Pennsylvania, Stanford, and Berkeley. These results suggest that even the top departments are not doing a very good job of training the great majority of their students to be successful research economists. Hiring committees may find these results helpful when trying to balance class rank and place of graduation in evaluating job candidates, and current graduate students may wish to re-evaluate their academic strategies in light of these findings.

I remember one of my graduate advisers, Lee Benham, claiming that the modal number of publications among PhD economists was zero. I think that was Lee’s way of encouraging grad students sweating out their dissertations and trying to get papers out for publication. Conley and Önder’s results would seem to substantiate his claim.