Medical Malpractice Data and Inquiries

The current issue of Journal of Empirical Legal Studies includes an interesting data resource and survey by Bernard Black, et al., titled Medical Liability Insurance Premia: 1990–2016 Dataset, with Literature Review and Summary Information. Having just talked briefly about med mal premia and healthcare regulation last week, I was interested to read through the review and description of some of the data and trends. The authors have compiled data from the Medical Liability Monitor, “the only national, longitudinal source of data on med mal insurance rates.”  But they don’t stop there.

We link the MLM data with several related datasets: county rural-urban codes (from 2013); annual county- and state-level data on population (from the Census Bureau); number of total and active, nonfederal physicians, with a breakdown by specialty (from the Area Health Resource File, originally from the American Medical Association); annual state-level data on paid med mal claims against physicians from the National Practitioner Data Bank (NPDB), available through 2015; and data on direct premiums written by med mal insurers from the National Association of Insurance Commissioners (NAIC), available through 2015. We also provide a literature review of papers using the MLM data and summary information on the association between med mal insurance premia and other relevant features of the med mal landscape.

The data appendix, public data, and Stata code book (for cleaning the dataset) are also available from SSRN here. The survey includes a summary of research into possible explanations for and consequences of medical malpractice premia: the effect of med mal risk on healthcare spending, of med mal reform on premia, of med mal rates on C-section rates and physician supply, and of med mal payouts on premia.
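For those who would rather work with the linked data outside of Stata, a minimal sketch of the kind of county- and state-level merge the authors describe might look like the following. The file names and column names here are hypothetical stand-ins for illustration; the released dataset and code book define the actual variables.

```python
import pandas as pd

# Hypothetical file and column names -- the released data and code book define
# the actual variables; this only illustrates the merge logic described above.
premia = pd.read_csv("mlm_premia.csv")            # insurer/state/county/year premium quotes
rucc = pd.read_csv("rural_urban_codes_2013.csv")  # county FIPS -> 2013 rural-urban code
claims = pd.read_csv("npdb_paid_claims.csv")      # state/year paid med mal claims

# County-level merge: attach the 2013 rural-urban code to each premium observation.
panel = premia.merge(rucc[["county_fips", "rucc_2013"]], on="county_fips", how="left")

# State-year merge: attach paid-claim data from the NPDB.
panel = panel.merge(claims, on=["state", "year"], how="left")

# Example summary: average premium by rural-urban code and year.
print(panel.groupby(["rucc_2013", "year"])["premium"].mean().head())
```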

Noticeably absent from the literature they summarize, which they claim are the “principal” prior studies using MLM data, is any attention to market structure issues. That absence is doubly surprising given the consistent drop in rates over the past 15 years that goes largely unexplained in the cited literature. Now, I don’t specialize in health care industry research, but I do know that over those same 15 years there has been an ongoing trend of consolidation among both health insurance companies and medical providers (e.g., hospital networks and physician groups). I could easily hypothesize a couple of potential dynamics:

  • Increased consolidation among insurance companies may lead to contractual incentives (by way of contract rates and performance measures) that affect the expected cost of med mal insurance.
  • Increased consolidation among hospital networks and physician groups may lead to more consistent or standardized practices across larger populations of patients and services, reducing uncertainty or volatility in the quality of medical services and, in turn, the expected cost of med mal insurance.

I suspect there are several potential channels, but this would seem a fruitful area of research, and now there is a more convenient dataset with which to play.

How mergers affect innovation…maybe?

Justus Haucap and Joel Stiebale of the Düsseldorf Institute for Competition Economics (DICE) at the University of Düsseldorf have a recent paper analyzing the effects of mergers on innovation in the European pharmaceutical industry. They develop a model that suggests mergers reduce innovation not only in the merged firms, but among industry competitors as well. Their data bear this out, as explained in the abstract:

This paper analyses how horizontal mergers affect innovation activities of the merged entity and its non-merging competitors. We develop an oligopoly model with heterogeneous firms to derive empirically testable implications. Our model predicts that a merger is more likely to be profitable in an innovation intensive industry. For a high degree of firm heterogeneity, a merger reduces innovation of both the merged entity and non-merging competitors in an industry with high R&D intensity. Using data on horizontal mergers among pharmaceutical firms in Europe, we find that our empirical results are consistent with many predictions of the theoretical model. Our main result is that after a merger, patenting and R&D of the merged entity and its non-merging rivals declines substantially. The effects are concentrated in markets with high innovation intensity and a high degree of firm heterogeneity. The results are robust towards alternative specifications, using an instrumental variable strategy, and applying a propensity score matching estimator.

While I haven’t yet read the paper in detail, a cursory examination suggests they have ignored another possibility: mergers in high-intensity R&D industries could be a leading indicator of decreased innovation productivity (i.e., lower returns to investment in R&D). As research advances, the “low-hanging fruit” is collected first, before the more difficult (and lower-return) investments are pursued. As companies in a high-intensity R&D industry exhaust their own low-hanging fruit, one might expect mergers as a way of expanding the available set of lower-cost, higher-return R&D investment opportunities. And since firms are competing in the same science space, a slowdown in one firm is likely to be spuriously correlated with slowdowns throughout the industry.

“Affect” is a word of causation. To suggest that mergers cause a reduction in innovation is a strong statement, especially when paired with a merger policy implication. It is something that bears more scrutiny since, as the authors note, relatively little light has thus far been shed on the subject.
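To make the alternative story concrete, here is a toy simulation, entirely invented and not based on the paper’s data, in which mergers have no causal effect on rivals’ innovation: an industry-wide decline in research productivity both triggers mergers and drags down everyone’s patenting, so a naive before/after comparison still shows rivals “losing” innovation after a merger.

```python
import numpy as np

rng = np.random.default_rng(0)
n_industries, n_years = 500, 10

# Industry-wide research productivity drops at a random point in time
# ("low-hanging fruit" gets exhausted); it is the only driver of patenting here.
break_year = rng.integers(2, n_years - 2, n_industries)
years = np.arange(n_years)
productivity = np.where(years[None, :] < break_year[:, None], 1.0, 0.6)

# Mergers become likely once productivity has fallen -- no causal effect on rivals.
merger_year = break_year + rng.integers(0, 2, n_industries)

# Rivals' patenting depends only on productivity (plus noise).
patents = rng.poisson(10 * productivity)

# Naive comparison: rivals' mean patenting after vs. before the merger.
post = np.array([patents[i, merger_year[i]:].mean() for i in range(n_industries)])
pre = np.array([patents[i, :merger_year[i]].mean() for i in range(n_industries)])
print("Average post-merger change in rivals' patenting:", (post - pre).mean())
```

The negative number this prints reflects only the shared productivity decline, which is exactly the identification problem the instrumental variable and matching strategies described in the abstract would need to overcome.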

Flipping a Coin for Happiness

Steven Levitt of Freakonomics fame (and professor of economics at the University of Chicago) has a new paper out on a not-so-new research project. In “Heads or Tails: The Impact of a Coin Toss on Major Life Decisions and Subsequent Happiness” (gated at NBER; a summary article is available here), Levitt finds that individuals whose coin toss directed them to make a change were more likely to make the change and reported being happier two and six months afterward. Based on these findings, Levitt suggests that individuals may be too cautious in making major decisions. The abstract reads:

Little is known about whether people make good choices when facing important decisions. This paper reports on a large-scale randomized field experiment in which research subjects having difficulty making a decision flipped a coin to help determine their choice. For important decisions (e.g. quitting a job or ending a relationship), those who make a change (regardless of the outcome of the coin toss) report being substantially happier two months and six months later. This correlation, however, need not reflect a causal impact. To assess causality, I use the outcome of a coin toss. Individuals who are told by the coin toss to make a change are much more likely to make a change and are happier six months later than those who were told by the coin to maintain the status quo. The results of this paper suggest that people may be excessively cautious when facing life-changing choices.
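The design is essentially a randomized encouragement study: the coin toss serves as an instrument for actually making the change. With invented numbers (not Levitt’s data), the Wald estimator recovers the effect of making a change on later happiness from two simple differences in means:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000

# Randomized instrument: the coin says "make a change" for roughly half of subjects.
coin_change = rng.integers(0, 2, n)

# Compliance is imperfect: people are more likely to change if the coin says so.
made_change = rng.random(n) < np.where(coin_change == 1, 0.65, 0.35)

# Invented data-generating process: changing raises later happiness by 0.5 points.
happiness = 5 + 0.5 * made_change + rng.normal(0, 1, n)

# Wald / IV estimate: (difference in happiness by coin) / (difference in change rates by coin).
itt = happiness[coin_change == 1].mean() - happiness[coin_change == 0].mean()
first_stage = made_change[coin_change == 1].mean() - made_change[coin_change == 0].mean()
print("Intention-to-treat:", round(itt, 3), " Wald IV estimate:", round(itt / first_stage, 3))
```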

So if in doubt, maybe you should reach into your pocket for a coin. Or you could do it like this guy:

NSF Funding for Economics Research: Good or Bad?

The latest Journal of Economic Perspectives includes a pair of papers debating the social value of economics research funding from the National Science Foundation (NSF), featuring Robert Moffitt from Johns Hopkins and Tyler Cowen and Alex Tabarrok from George Mason. The abstracts of their respective viewpoints follow:

Robert Moffitt: “In Defense of the NSF Economics Program”
The NSF Economics program funds basic research in economics across all its disparate fields. Its budget has experienced a long period of stagnation and decline, with its real value in 2013 below that in 1980 and having declined by 50 percent as a percent of the total NSF budget. The number of grants made by the program has also declined over time, and its current budget is very small compared to that of many other funders of economic research. Over the years, NSF-supported research has supported many of the major intellectual developments in the discipline that have made important contributions to the study of public policy. The public goods argument for government support of basic economic research is strong. Neither private firms, foundations, nor private donors are likely to engage in the comprehensive support of all forms of economic research if NSF were not to exist. Select universities with large endowments are more likely to have the ability to support general economic research in the absence of NSF, but most universities do not have endowments sufficiently large to do so. Support for large-scale general purpose dataset collection is particularly unlikely to receive support from any nongovernment agency. On a priori grounds, it is likely that most NSF-funded research represents a net increase in research effort rather than displacing already-occurring effort by academic economists. Unfortunately, the empirical literature on the net aggregate impact of NSF economics funding is virtually nonexistent.

Tyler Cowen & Alex Tabarrok: “A Skeptical View of the National Science Foundation’s Role in Economic Research”
We can imagine a plausible case for government support of science based on traditional economic reasons of externalities and public goods. Yet when it comes to government support of grants from the National Science Foundation (NSF) for economic research, our sense is that many economists avoid critical questions, skimp on analysis, and move straight to advocacy. In this essay, we take a more skeptical attitude toward the efforts of the NSF to subsidize economic research. We offer two main sets of arguments. First, a key question is not whether NSF funding is justified relative to laissez-faire, but rather, what is the marginal value of NSF funding given already existing government and nongovernment support for economic research? Second, we consider whether NSF funding might more productively be shifted in various directions that remain within the legal and traditional purview of the NSF. Such alternative focuses might include data availability, prizes rather than grants, broader dissemination of economic insights, and more. Given these critiques, we suggest some possible ways in which the pattern of NSF funding, and the arguments for such funding, might be improved.

11th Annual Conference on Empirical Legal Studies

11th Annual Conference on Empirical Legal Studies (CELS)
Duke Law School, Durham, North Carolina
Friday, November 18 and Saturday, November 19, 2016

Duke Law School is pleased to host the 11th Annual Conference on Empirical Legal Studies (CELS) on November 18-19, 2016. CELS is a highly regarded interdisciplinary gathering that draws scholars from across the country and internationally and is sponsored by the Society for Empirical Legal Studies. The conference brings together hundreds of scholars from law, economics, political science, psychology, policy analysis, and other fields who are interested in the empirical analysis of law and legal institutions. Papers are selected through a peer review process and discussion at the conference includes assigned commentators and audience questions.

Paper submissions are due by July 31, 2016.

For more information about the conference click here (https://law.duke.edu/cels2016/).

Database of Federal Regulations

Omar Al-Ubaydli and Patrick McLaughlin (both at George Mason University) have an article in the most recent issue of Regulation & Governance documenting their RegData database, which “measures [federal] regulation for industries at the two, three, and four-digit levels of the North American Industry Classification System.” While any attempt to quantify regulations is fraught with problems, as the authors note in their paper, their text-based approach would seem as good a method as any (and superior to some) for providing a numerical measure of regulation that could be used for empirical research. And what’s even better, the data are freely available here. The abstract of the paper reads:

We introduce RegData, formerly known as the Industry-specific Regulatory Constraint Database. RegData annually quantifies federal regulations by industry and regulatory agency for all federal regulations from 1997–2012. The quantification of regulations at the industry level for all industries is without precedent. RegData measures regulation for industries at the two, three, and four-digit levels of the North American Industry Classification System. We created this database using text analysis to count binding constraints in the wording of regulations, as codified in the Code of Federal Regulations, and to measure the applicability of regulatory text to different industries. We validate our measures of regulation by examining known episodes of regulatory growth and deregulation, as well as by comparing our measures to an existing, cross-sectional measure of regulation. Researchers can use this database to study the determinants of industry regulations and to study regulations’ effects on a massive array of dependent variables, both across industries and time.
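As a rough illustration of the text-based approach, a sketch of counting binding-constraint words in a block of regulatory text might look like this. The term list and the sample passage are my assumptions for illustration; the actual RegData methodology is considerably more careful about both the restriction terms and the mapping of text to industries.

```python
import re
from collections import Counter

# Assumed list of "binding constraint" terms -- RegData's own term set may differ.
RESTRICTION_TERMS = ["shall", "must", "may not", "required", "prohibited"]

def count_restrictions(text: str) -> Counter:
    """Count occurrences of each restriction term in a block of regulatory text."""
    lowered = text.lower()
    return Counter({
        term: len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))
        for term in RESTRICTION_TERMS
    })

# Made-up sample passage standing in for a section of the Code of Federal Regulations.
sample_cfr_text = (
    "Each operator shall maintain records for five years. "
    "Operators may not discharge waste without a permit and are required "
    "to file an annual report; falsified reports are prohibited."
)

counts = count_restrictions(sample_cfr_text)
print(counts, "total:", sum(counts.values()))
```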

Now, if only there were such a database of state-level regulations.

Craft Beer in the US: History, Stats and Geography

Ken Elzinga (Virginia) and Carol and Victor Tremblay (Oregon State) have a paper in the latest Journal of Wine Economics titled “Craft Beer in the United States: History, Statistics and Geography.” The paper provides a great overview of the history of the craft brew industry as well as some interesting analysis of the industry’s geographic development. The history section seems to draw heavily on Tom Acitelli’s 2013 book The Audacity of Hops: The History of America’s Craft Beer Revolution, but provides a much more concise summary. Paired with the statistical overview of the beer industry in general and the empirical analysis of the craft segment that follows, the paper offers a nice, short primer for anyone interested in the history (and economics) of the craft brew industry in the US. The paper’s abstract follows:

We provide a mini-history of the craft beer segment of the U.S. brewing industry with particular emphasis on producer-entrepreneurs but also other pioneers involved in the promotion and marketing of craft beer who made contributions to brewing it. In contrast to the more commodity-like lager beer produced by the macrobrewers in the United States, the output of the craft segment more closely resembles the product differentiation and fragmentation in the wine industry. We develop a database that tracks the rise of craft brewing using various statistical measures of output, number of producers, concentration within the segment, and compares output with that of the macro and import segment of the industry. Integrating our database into Geographic Information Systems software enables us to map the spread of the craft beer segment from its taproot in San Francisco across the United States. Finally, we use regression analysis to explore variables influencing the entrants and craft beer production at the state level from 1980 to 2012. We use Tobit estimation for production and negative binomial estimation for the number of brewers. We also analyze whether strategic effects (e.g., locating near competing beer producers) explain the location choices of craft beer producers.
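For readers curious what the count-model portion of such an analysis looks like in practice, here is a minimal sketch of a negative binomial regression for the number of craft brewers on a simulated state-year panel. The covariates and data are invented for illustration and are not drawn from the paper.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 300  # simulated state-year observations

# Invented covariates: log population and log median income.
log_pop = rng.normal(15, 1, n)
log_income = rng.normal(10.5, 0.3, n)

# Simulated brewer counts: more populous, richer state-years get more entrants.
mu = np.exp(-14 + 0.9 * log_pop + 0.1 * log_income)
brewers = rng.poisson(mu)

# Negative binomial count model for the number of craft brewers.
X = sm.add_constant(np.column_stack([log_pop, log_income]))
model = sm.GLM(brewers, X, family=sm.families.NegativeBinomial(alpha=1.0))
print(model.fit().summary())
```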