Tuesday, April 17, 2018

He who must not be named....or can we say 'causal'?

Recall that in the Harry Potter series, the wizarding community refused to say the name 'Voldemort', and it got to the point where they almost stopped teaching and practicing magic (at least officially, as mandated by the Ministry of Magic). In the research community, by refusing to use the term 'causal' when and where appropriate, are we discouraging researchers from asking interesting questions and putting forth the effort required to implement the kind of rigorous causal inferential methods necessary to push forward the frontiers of science? Could we somehow be putting a damper on teaching and practicing economagic...I mean econometrics...you know, the mostly harmless kind? Will the credibility revolution be lost?

In a May 2018 article in the American Journal of Public Health, Miguel Hernán (Departments of Epidemiology and Biostatistics, Harvard School of Public Health) offers an important discussion of the somewhat tired mantra 'correlation is not causation' and the disservice to scientific advancement it can do in the absence of critical thinking about research objectives and designs. Some might find this ironic, since the phrase is often invoked to point out fallacious conclusions that have been uncritically based on mere correlations found in the data. However, the pendulum can swing too far in the other direction and cause just as much harm.

I highly recommend reading this article! It is available ungated and will be one of those you hold onto for a while. See the reference section below.

Key to the discussion are important distinctions between questions of association, prediction, and causality. Below are some spoilers:

While it is wrong to assume causality based on association or correlation alone, refusing to acknowledge a causal approach in an analysis because of growing cultural 'norms' is harmful too....and should stop:

"The resulting ambiguity impedes a frank discussion about methodology because the methods used to estimate causal effects are not the same as those used to estimate associations...We need to stop treating “causal” as a dirty word that respectable investigators do not say in public or put in print. It is true that observational studies cannot definitely prove causation, but this statement misses the point"

All that glitters isn't gold, as the author notes on randomized controlled trials:

"Interestingly, the same is true of randomized trials. All we can estimate from randomized trials data are associations; we just feel more confident giving a causal interpretation to the association between treatment assignment and outcome because of the expected lack of confounding that physical randomization entails. However, the association measures from randomized trials cannot be given a free pass. Although randomization eliminates systematic confounding, even a perfect randomized trial only provides probabilistic bounds on “random confounding”—as reflected in the confidence interval of the association measure—and many randomized trials are far from perfect."

There are important distinctions between analytical and methodological approaches when asking questions related to prediction and association vs. causality. To say a bit more, this is not just about model interpretation. We are familiar with discussions about the challenges of interpreting predictive models derived from complicated black box algorithms, but causality hinges on much more than the ability to interpret the impact of features on an outcome. Also note that while we are seeing applications of AI and automated feature engineering and algorithm selection, models optimized to predict well may not explain well at all. In fact, a causal model may perform worse in out-of-sample predictions of the 'target' while giving the most rigorous estimate of causal effects:

"In associational or predictive models, we do not try to endow the parameter estimates with a causal interpretation because we are not trying to adjust for confounding of the effect of every variable in the model. Confounding is a causal concept that does not apply to associations...By contrast, in a causal analysis, we need to think carefully about what variables can be confounders so that the parameter estimates for treatment or exposure can be causally interpreted. Automatic variable selection procedures may work for prediction, but not necessarily for causal inference. Selection algorithms that do not incorporate sufficient subject matter knowledge may select variables that introduce bias in the effect estimate, and ignoring the causal structure of the problem may lead to apparent paradoxes."

It all comes down to a question of identification....or why AI has a long way to go in the causal space...or, as Angrist and Pischke would put it....if applied econometrics were easy, theorists would do it:

"Associational inference (prediction) or causal inference (counterfactual prediction)? The answer to this question has deep implications for (1) how we design the observational analysis to emulate a particular target trial and (2) how we choose confounding adjustment variables. Each causal question corresponds to a different target trial, may require adjustment for a different set of confounders, and is amenable to different types of sensitivity analyses. It then makes sense to publish separate articles for various causal questions based on the same data."

I really liked how they framed 'prediction' as being distinctly associational or prospective vs. counterfactual. Also, what a nice way to think about 'identification' as being about how we emulate a particular trial and handle confounding/selection bias/endogeneity.


Miguel A. Hernán, “The C-Word: Scientific Euphemisms Do Not Improve Causal Inference From Observational Data”, American Journal of Public Health 108, no. 5 (May 1, 2018): pp. 616-619.

See also:

Will there be a credibility revolution in data science and AI?

To Explain or Predict?

Sunday, March 18, 2018

Will there be a credibility revolution in data science and AI?

Summary: Understanding where AI and automation will be most disruptive to data scientists in the near term requires understanding the methodological differences between explaining and predicting, between machine learning and causal inference. It will require the ability to ask a different kind of question than machine learning algorithms can answer off the shelf today.

There is a lot of enthusiasm about the disruptive role of automation and AI in data science. Products like H2O.ai and DataRobot offer tools to automate or fast track many aspects of the data science workstream. If this trajectory continues, what will the work of the future data scientist look like?

Many have already pointed out the very difficult task of automating the soft skills possessed by data scientists. In a previous LinkedIn post I discussed this in the trading space, where automation and AI could create substantial disruptions for both data scientists and traders. There I quoted Matthew Hoyle:

"Strategies have a short shelf life-what is valuable is the ability and energy to look at new and interesting things and put it all together with a sense of business development and desire to explore"

My conclusion: they are talking about bringing together a portfolio of useful and practical skills to do a better job than was possible before open source platforms and computing power became so widespread. I think that is the future.

So the future is about rebalancing the data scientist's portfolio of skills. However, in the near term I think the disruption from AI and automation in data science will do more than increase the emphasis on soft skills. In fact, a significant portion of 'hard skills' will see an increase in demand because of the difficulty of automating them.

Understanding this depends largely on the distinction between explaining and predicting. Much of what appears to be at the forefront of automation involves tasks supporting supervised and unsupervised machine learning algorithms, as well as other prediction and forecasting tools like time series analysis.

Once armed with predictions, businesses will start to ask questions about 'why'. This will transcend prediction or any of the visualizations of patterns and relationships coming out of black box algorithms. They will want to know what decisions or factors are moving the needle on revenue, customer satisfaction and engagement, or efficiency. Essentially they will want to ask questions related to causality, which requires a completely different paradigm for data analysis than questions of prediction. And they will want scientifically formulated answers that are convincing vs. mere reports about rates of change or correlations. There is a significant difference between understanding what drivers correlate with or 'predict' the outcome of interest and what is actually driving the outcome. What they will be asking for is a credibility revolution in data science.

What do we mean by a credibility revolution?

Economist Jayson Lusk puts it well:

"Fortunately economics (at least applied microeconomics) has undergone a bit of credibility revolution.  If you attend a research seminar in virtually any economi(cs) department these days, you're almost certain to hear questions like, "what is your identification strategy?" or "how did you deal with endogeneity or selection?"  In short, the question is: how do we know the effects you're reporting are causal effects and not just correlations."

Healthcare Economist Austin Frakt has a similar take:

"A “research design” is a characterization of the logic that connects the data to the causal inferences the researcher asserts they support. It is essentially an argument as to why someone ought to believe the results. It addresses all reasonable concerns pertaining to such issues as selection bias, reverse causation, and omitted variables bias. In the case of a randomized controlled trial with no significant contamination of or attrition from treatment or control group there is little room for doubt about the causal effects of treatment so there’s hardly any argument necessary. But in the case of a natural experiment or an observational study causal inferences must be supported with substantial justification of how they are identified. Essentially one must explain how a random experiment effectively exists where no one explicitly created one."

How are these questions and differences unlike your typical machine learning application? Susan Athey does a great job explaining in a Quora response about how causal inference is different from off the shelf machine learning methods (the kind being automated today):

"Sendhil Mullainathan (Harvard) and Jon Kleinberg with a number of coauthors have argued that there is a set of problems where off-the-shelf ML methods for prediction are the key part of important policy and decision problems.  They use examples like deciding whether to do a hip replacement operation for an elderly patient; if you can predict based on their individual characteristics that they will die within a year, then you should not do the operation...Despite these fascinating examples, in general ML prediction models are built on a premise that is fundamentally at odds with a lot of social science work on causal inference. The foundation of supervised ML methods is that model selection (cross-validation) is carried out to optimize goodness of fit on a test sample. A model is good if and only if it predicts well. Yet, a cornerstone of introductory econometrics is that prediction is not causal inference.....Techniques like instrumental variables seek to use only some of the information that is in the data – the “clean” or “exogenous” or “experiment-like” variation in price—sacrificing predictive accuracy in the current environment to learn about a more fundamental relationship that will help make decisions...This type of model has not received almost any attention in ML."

Developing an identification strategy, as Jayson Lusk discussed above, and all that goes along with it (finding natural experiments or valid instruments, or navigating the garden of forking paths related to propensity score matching or a number of other quasi-experimental methods) involves careful considerations and decisions that must be made and defended in ways that would be very challenging to automate. Even when humans do this, there is rarely a single best approach to these problems. They are far from routine. Just ask anyone who has been through peer review or given a talk at an economics seminar or conference.
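Athey's point about instrumental variables sacrificing predictive accuracy to recover a more fundamental relationship can be illustrated in a few lines. The simulation below is my own hypothetical sketch (not from any of the cited papers): a cost-shifter instrument z, a price variable confounded by an unobserved demand shock u, and a true causal effect of price equal to -2.0.

```python
import numpy as np

# Hypothetical demand setting: price is confounded by an unobserved
# demand shock u, while z shifts price but affects y only through price.
rng = np.random.default_rng(6)
n = 100_000
z = rng.normal(0, 1, n)                      # instrument (cost shifter)
u = rng.normal(0, 1, n)                      # unobserved demand shock
price = 0.5 * z + 0.8 * u + rng.normal(0, 1, n)
y = -2.0 * price + 2.0 * u + rng.normal(0, 1, n)

# OLS slope: fits y well in-sample, but is biased toward zero here
# because price and u are positively correlated
b_ols = np.cov(price, y)[0, 1] / np.var(price)

# IV (Wald) slope: uses only the instrument-driven variation in price
b_iv = np.cov(z, y)[0, 1] / np.cov(z, price)[0, 1]
print(b_ols, b_iv)  # b_iv should be near the true effect of -2.0
```

The OLS estimate blends the causal effect with the confounded variation; the IV ratio throws away most of the predictive information in price and keeps only the 'clean' variation that z induces, exactly the trade-off Athey describes.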

The kinds of skills required to work in this space would be similar to those of the econometrician or epidemiologist or any quantitative researcher who has been culturally immersed in the social norms and practices that have evolved out of the credibility revolution. As data science thought leader Eugene Dubossarsky puts it:

“the most elite skills…the things that I find in the most elite data scientists are the sorts of things econometricians these days have…bayesian statistics…inferring causality” 

No one has a crystal ball. This is not to say that the current advances in automation are falling short on creating value. They should no doubt create value like any other form of capital, complementing the labor and soft skills of the data scientist. And they could free up more resources to focus on causal questions that previously may not have been answered. I discussed this complementarity previously in a related post:

 "correlations or 'flags' from big data might not 'identify' causal effects, but they are useful for prediction and might point us in directions where we can more rigorously investigate causal relationships if interested" 

However, if automation in this space is possible, it will require a different approach than what we have seen so far. We might look to the pioneering work Susan Athey is doing at the convergence of machine learning and causal inference. It will require thinking in terms of potential outcomes, endogeneity, and counterfactuals, which requires the ability to ask a different kind of question than machine learning algorithms can answer off the shelf today.

Additional References:

From 'What If?' To 'What Next?' : Causal Inference and Machine Learning for Intelligent Decision Making https://sites.google.com/view/causalnips2017

Susan Athey on Machine Learning, Big Data, and Causation http://www.econtalk.org/archives/2016/09/susan_athey_on.html 

Machine Learning and Econometrics (Susan Athey, Guido Imbens) https://www.aeaweb.org/conference/cont-ed/2018-webcasts 

Related Posts:

Why Data Science Needs Economics

To Explain or Predict

Culture War: Classical Statistics vs. Machine Learning: http://econometricsense.blogspot.com/2011/01/classical-statistics-vs-machine.html 

HARK! - flawed studies in nutrition call for credibility revolution -or- HARKing in nutrition research  http://econometricsense.blogspot.com/2017/12/hark-flawed-studies-in-nutrition-call.html

Econometrics, Math, and Machine Learning

Big Data: Don't Throw the Baby Out with the Bathwater

Big Data: Causality and Local Expertise Are Key in Agronomic Applications

The Use of Knowledge in a Big Data Society II: Thick Data

The Use of Knowledge in a Big Data Society

Big Data, Deep Learning, and SQL

Economists as Data Scientists

Tuesday, February 13, 2018

Intuition for Random Effects

Previously I wrote a post based on course notes from J.Blumenstock that attempted to provide some intuition for how fixed effects estimators can account for unobserved heterogeneity (individual specific effects).

Recently someone asked if I could provide a similarly motivating and intuitive example regarding random effects. Although I was not able to come up with a new example, I can definitely discuss random effects in the same context of the previous example. But first a little (less intuitive) background.


To recap, the purpose of both fixed and random effects estimators is to model treatment effects in the face of unobserved individual specific effects.

y_it = b*x_it + α_i + u_it   (1)

In the model above, this is represented by α_i. In terms of estimation, the difference between fixed and random effects depends on how we choose to model this term. In fixed effects models it can be captured through dummy variable estimation (creating different intercepts that capture the individual specific effects) or by transforming the data, subtracting group (fixed effects) means from the individual observations within each group. In random effects models, individual specific effects are captured by a composite error term (α_i + u_it), which assumes that individual intercepts are drawn from a random distribution of possible intercepts. The random component α_i captures the individual specific effects in a different way than fixed effects models.
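To make the difference concrete, here is a small hypothetical simulation (my own toy data, not the data from the earlier post). Because α_i is built to be correlated with x, pooled OLS is biased, while the within (demeaning) transformation used by fixed effects recovers the true slope:

```python
import numpy as np

# Toy panel: 50 individuals, 10 periods; alpha_i is correlated with x,
# so pooled OLS is biased but the within transformation is not.
rng = np.random.default_rng(0)
n, t = 50, 10
alpha = rng.normal(0, 2, n)                    # individual effects
x = alpha[:, None] + rng.normal(0, 1, (n, t))  # x correlated with alpha
y = 1.5 * x + alpha[:, None] + rng.normal(0, 1, (n, t))  # true slope 1.5

# Pooled OLS slope, ignoring alpha_i (biased upward here)
xf, yf = x.ravel(), y.ravel()
b_ols = np.cov(xf, yf)[0, 1] / np.var(xf)

# Fixed effects: subtract each individual's mean, then OLS on the
# remaining 'within' variation (equivalent to the dummy variable approach)
xw = x - x.mean(axis=1, keepdims=True)
yw = y - y.mean(axis=1, keepdims=True)
b_fe = (xw * yw).sum() / (xw ** 2).sum()
print(b_ols, b_fe)  # b_fe should be close to 1.5; b_ols will not be
```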

As noted in another post, Fixed, Mixed, and Random Effects, the random effects model is estimated using Generalized Least Squares (GLS):

β_GLS = (X'Ω^-1 X)^-1 (X'Ω^-1 Y), where Ω = I ⊗ Σ   (2)

where Σ is the variance of the composite error α_i + u_it. If Σ is unknown, it is estimated, producing a feasible generalized least squares estimate β_FGLS.
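One way to build intuition for what the GLS transformation does: random effects estimation is equivalent to OLS on quasi-demeaned data, y_it - θ*ȳ_i, where θ is built from the variance components. In the hypothetical sketch below the variance components are treated as known; FGLS would estimate them from the data:

```python
import numpy as np

# Random effects via quasi-demeaning. Here x is uncorrelated with
# alpha_i (the key RE assumption), and sigma_a, sigma_u are assumed known.
rng = np.random.default_rng(1)
n, t = 200, 5
sigma_a, sigma_u = 2.0, 1.0
alpha = rng.normal(0, sigma_a, n)
x = rng.normal(0, 1, (n, t))
y = 1.5 * x + alpha[:, None] + rng.normal(0, sigma_u, (n, t))

# theta = 0 gives pooled OLS; theta = 1 gives fixed effects. RE sits
# between, weighting 'within' and 'between' variation.
theta = 1 - np.sqrt(sigma_u**2 / (sigma_u**2 + t * sigma_a**2))
xq = x - theta * x.mean(axis=1, keepdims=True)
yq = y - theta * y.mean(axis=1, keepdims=True)
b_re = (xq * yq).sum() / (xq ** 2).sum()
print(b_re)  # close to the true slope 1.5
```

The θ formula makes the weighted-average interpretation explicit: as the individual effects dominate (large σ_α), θ approaches 1 and RE behaves like FE.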

Intuition for Random Effects

In my post Intuition for Fixed Effects I noted: 

"Essentially using a dummy variable in a regression for each city (or group, or type to generalize beyond this example) holds constant or 'fixes' the effects across cities that we can't directly measure or observe. Controlling for these differences removes the 'cross-sectional' variation related to unobserved heterogeneity (like tastes, preferences, other unobserved individual specific effects). The remaining variation, or 'within' variation can then be used to 'identify' the causal relationships we are interested in."

Let's look at the toy data I used in that example.

The crude ellipses in the plots above (motivated by the example given in Kennedy, 2008) indicate the data for each city and the 'within' variation exploited by fixed effects models (which allowed us to identify the correct price/quantity relationships expected in the previous post). The differences between the ellipses represent 'between' variation. As Kennedy discusses, random effects models differ from fixed effects models in that they are able to exploit both 'within' and 'between' variation, producing an estimate that is a weighted average of both kinds of variation (via Σ in equation 2 above). OLS, on the other hand, exploits both kinds of variation as an unweighted average.

More Details 

As Kennedy discusses, both FE and RE can be viewed as running OLS on different transformations of the data.

For fixed effects: "this transformation consists of subtracting from each observation the average of the values within its ellipse"

For random effects: "the EGLS (or FGLS above) calculation is done by finding a transformation of the data that creates a spherical variance-covariance matrix and then performing OLS on the transformed data."

As Kennedy notes, the increased information used by RE makes it the more efficient estimator, but correlation between 'x' and the error term creates bias, i.e. RE assumes that α_i is uncorrelated with (orthogonal to) the regressors. Angrist and Pischke (2009) note (footnote, p. 223) that they prefer FE because the gains in efficiency are likely to be modest while the finite sample properties of RE may be worse. As noted on p. 243, an important assumption for identification in FE is that the most important unobserved confounders are time invariant (information from time-invariant regressors gets differenced out). Angrist and Pischke also have a nice discussion on pages 244-245 about the choice between FE and lagged dependent variable models.


A Guide to Econometrics. Peter Kennedy. 6th Edition. 2008
Mostly Harmless Econometrics. Angrist and Pischke. 2009

See also: 'Metrics Monday: Fixed Effects, Random Effects, and (Lack of) External Validity (Marc Bellemare).

Marc notes: 

"Nowadays, in the wake of the Credibility Revolution, what we teach students is: “You should use RE when your variable of interest is orthogonal to the error term; if there is any doubt and you think your variable of interest is not orthogonal to the error term, use FE.” And since the variable can be argued to be orthogonal pretty much only in cases where it is randomly assigned in the context of an experiment, experimental work is pretty much the only time the RE estimator should be used."

Friday, February 2, 2018

Deep Learning vs. Logistic Regression, ROC vs. Calibration, Explaining vs. Predicting

Frank Harrell writes Is Medicine Mesmerized by Machine Learning? Some time ago I wrote about predictive modeling and the differences between what the ROC curve may tell us and how well a model 'calibrates.'

There I quoted from the journal Circulation:

'When the goal of a predictive model is to categorize individuals into risk strata, the assessment of such models should be based on how well they achieve this aim...The use of a single, somewhat insensitive, measure of model fit such as the c statistic can erroneously eliminate important clinical risk predictors for consideration in scoring algorithms'

Not too long ago, Dr. Harrell shared the following tweet related to this:

I have seen hundreds of ROC curves in the past few years.  I've yet to see one that provided any insight whatsoever.  They reverse the roles of X and Y and invite dichotomization.  Authors seem to think they're obligatory.  Let's get rid of 'em. @f2harrell 8:42 AM - 1 Jan 2018

In his Statistical Thinking post above, Dr. Harrell writes:

"Like many applications of ML where few statistical principles are incorporated into the algorithm, the result is a failure to make accurate predictions on the absolute risk scale. The calibration curve is far from the line of identity as shown below...The gain in c-index from ML over simpler approaches has been more than offset by worse calibration accuracy than the other approaches achieved."

i.e. depending on the goal, better ROC scores don't necessarily mean better models.
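The distinction is easy to demonstrate: any strictly monotone transformation of a model's predicted probabilities leaves the ranking of cases, and therefore the ROC curve and c statistic, unchanged, while destroying calibration. A hypothetical simulation (my own sketch, squaring a calibrated model's probabilities):

```python
import numpy as np

# Two models with identical discrimination but very different calibration:
# squaring predicted probabilities preserves the ordering (same AUC)
# but wrecks absolute risk estimates.
rng = np.random.default_rng(2)
p_true = rng.uniform(0, 1, 100_000)                       # true event risks
y = (rng.uniform(0, 1, p_true.size) < p_true).astype(int)  # observed outcomes

def auc(scores, labels):
    # Rank-based AUC: probability a random event outranks a random non-event
    order = scores.argsort()
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    n1 = labels.sum()
    n0 = len(labels) - n1
    return (ranks[labels == 1].sum() - n1 * (n1 + 1) / 2) / (n0 * n1)

a_good = auc(p_true, y)       # calibrated model
a_bad = auc(p_true ** 2, y)   # miscalibrated model, identical ranking
# Identical AUCs, but the squared model's mean predicted risk is far
# from the observed event rate:
print(a_good, a_bad, (p_true ** 2).mean(), y.mean())
```

Judged by the c statistic alone the two models are indistinguishable; judged on the absolute risk scale, one of them is badly wrong.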

But this post was about more than discrimination and calibration. It was comparing the logistic regression approach taken in Exceptional Mortality Prediction by Risk Scores from Common Laboratory Tests vs. the deep learning approach used in Improving Palliative Care with Deep Learning.

"One additional point: the ML deep learning algorithm is a black box, not provided by Avati et al, and apparently not usable by others. And the algorithm is so complex (especially with its extreme usage of procedure codes) that one can’t be certain that it didn’t use proxies for private insurance coverage, raising a possible ethics flag. In general, any bias that exists in the health system may be represented in the EHR, and an EHR-wide ML algorithm has a chance of perpetuating that bias in future medical decisions. On a separate note, I would favor using comprehensive comorbidity indexes and severity of disease measures over doing a free-range exploration of ICD-9 codes."

This kind of pushes back against the idea that deep neural nets can effectively bypass feature engineering, or at least raises cautions in specific contexts.

Actually, he is not as critical of the authors of the paper as he is of what he considers the undue accolades it has received.

This ties back to my post on LinkedIn a couple weeks ago, Deep Learning, Regression, and SQL. 

See also:

To Explain or Predict
Big Data: Causality and Local Expertise Are Key in Agronomic Applications


Feature Engineering for Deep Learning
In Deep Learning, Architecture Engineering is the New Feature Engineering

Sunday, December 31, 2017

HARK! - flawed studies in nutrition call for credibility revolution -or- HARKing in nutrition research

There was a nice piece over at the Genetic Literacy Project that I read recently: Why so many scientific studies are flawed and poorly understood (link). It gives a fairly intuitive example of false positives in research using coin flips. I like this because I used the specific example of flipping a coin 5 times in a row to demonstrate basic probability concepts in some of the stats classes I used to teach. Their example might make a nice extension:

"In Table 1 we present ten 61-toss sequences. The sequences were computer generated using a fair 50:50 coin. We have marked where there are runs of five or more heads one after the other. In all but three of the sequences, there is a run of at least five heads. Thus, a sequence of five heads has a probability of 0.5^5 = 0.03125 (i.e., less than 0.05) of occurring. Note that there are 57 opportunities in a sequence of 61 tosses for five consecutive heads to occur. We can conclude that although a sequence of five consecutive heads is relatively rare taken alone, it is not rare to see at least one sequence of five heads in 61 tosses of a coin."
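Their Table 1 result is easy to reproduce by simulation (my own sketch, not the authors' code): count how often a fair coin produces at least one run of five or more heads somewhere in 61 tosses.

```python
import numpy as np

# How often does a fair coin show a run of 5+ heads somewhere in 61 tosses?
rng = np.random.default_rng(3)
trials = 20_000
hits = 0
for _ in range(trials):
    tosses = rng.integers(0, 2, 61)   # 1 = heads
    run = best = 0
    for side in tosses:
        run = run + 1 if side == 1 else 0
        best = max(best, run)
    hits += best >= 5
rate = hits / trials
print(rate)  # well over half the time, despite 0.5**5 ≈ 0.03 for any one window
```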

In other words, a 5-head run in a sequence of 61 tosses (as evidence against a null hypothesis of p(head) = .5, i.e. a fair coin) is their analogy for a false positive in research. In particular, they relate this to nutrition research, where it is popular to use large survey questionnaires consisting of a large number of questions:

"asking lots of questions and doing weak statistical testing is part of what is wrong with the self-reinforcing publish/grants business model. Just ask a lot of questions, get false-positives, and make a plausible story for the food causing a health effect with a p-value less than 0.05"

It is their 'hypothesis' that this approach, in conjunction with a questionable practice referred to as 'HARKing' (hypothesizing after the results are known), is one reason we see so many conflicting headlines about what we should and should not eat, or the benefits or harms of certain foods and diets. There is some damage done to people's trust in science as a result. They conclude:

"Curiously, editors and peer-reviewers of research articles have not recognized and ended this statistical malpractice, so it will fall to government funding agencies to cut off support for studies with flawed design, and to universities to stop rewarding the publication of bad research. We are not optimistic."

More on HARKing.....

A good article related to HARKing is a paper by Norbert L. Kerr. He specifically discusses HARKing as the practice of proposing one hypothesis (or set of hypotheses) but changing the research question *after* the data are examined, then presenting the results *as if* the new hypothesis were the original. He does distinguish this from a more intentional exercise in scientific induction, inferring some relation or principle post hoc from a pattern of data, which is more like exploratory data analysis.

I discussed exploratory studies and issues related to multiple testing in a previous post:  Econometrics, Multiple Testing, and Researcher Degrees of Freedom. 

To borrow a quote from this post- "At the same time, we do not want demands of statistical purity to strait-jacket our science. The most valuable statistical analyses often arise only after an iterative process involving the data" (see, e.g., Tukey, 1980, and Box, 1997).

To say the least, careful consideration of tradeoffs should be made in the way research is conducted, and as the post discusses in more detail, the garden of forking paths involved.

I am not sure to what extent the credibility revolution has impacted nutrition studies, but the lessons apply here.


HARKing: Hypothesizing After the Results are Known
Norbert L. Kerr
Personality and Social Psychology Review
Vol 2, Issue 3, pp. 196 - 217
First Published August 1, 1998

Thursday, August 24, 2017

Granger Causality

"Granger causality is a standard linear technique for determining whether one time series is useful in forecasting another." (Irwin and Sanders, 2011).

A series 'granger causes' another series if it consistently predicts it. If series X granger causes Y, we can't be certain the relationship is causal in any rigorous way, but we might be fairly certain that Y doesn't cause X.


Yt = B0 + B1*Yt-1 + ... + Bp*Yt-p + A1*Xt-1 + ... + Ap*Xt-p + Et

If we reject the hypothesis that the 'A' coefficients are jointly zero, then 'X' granger causes 'Y'.

Xt = B0 + B1*Xt-1 + ... + Bp*Xt-p + A1*Yt-1 + ... + Ap*Yt-p + Et

If we reject the hypothesis that the 'A' coefficients are jointly zero, then 'Y' granger causes 'X'.
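A minimal version of this test for one lag (p = 1) can be sketched in numpy (statsmodels' grangercausalitytests automates the general case): fit the restricted model with only lagged Y, add lagged X, and F-test the improvement. The series below are hypothetical, simulated so that X drives Y:

```python
import numpy as np

# Simulate two series where lagged X genuinely enters the Y equation.
rng = np.random.default_rng(4)
T = 500
x = np.zeros(T)
y = np.zeros(T)
for t in range(1, T):
    x[t] = 0.5 * x[t - 1] + rng.normal()
    y[t] = 0.3 * y[t - 1] + 0.4 * x[t - 1] + rng.normal()

def rss(X, z):
    # Residual sum of squares from an OLS fit of z on X
    beta, *_ = np.linalg.lstsq(X, z, rcond=None)
    resid = z - X @ beta
    return resid @ resid

ones = np.ones(T - 1)
restricted = np.column_stack([ones, y[:-1]])            # Y on lagged Y only
unrestricted = np.column_stack([ones, y[:-1], x[:-1]])  # ... plus lagged X
rss_r = rss(restricted, y[1:])
rss_u = rss(unrestricted, y[1:])
# F statistic for the single added lag of X
F = (rss_r - rss_u) / (rss_u / (T - 1 - 3))
print(F)  # a large F rejects A1 = 0, i.e. 'X granger causes Y'
```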


Below are some applications where granger causality methods were used to test the impacts of index funds on commodity market price and volatility.

The Impact of Index Funds in Commodity Futures Markets:A Systems Approach
The Journal of Alternative Investments
Summer 2011, Vol. 14, No. 1: pp. 40-49

Irwin, S. H. and D. R. Sanders (2010), “The Impact of Index and Swap Funds on Commodity Futures Markets: Preliminary Results”, OECD Food, Agriculture and Fisheries Working Papers, No. 27, OECD Publishing. doi: 10.1787/5kmd40wl1t5f-en

Index Trading and Agricultural Commodity Prices: A Panel Granger Causality Analysis
Gunther Capelle-Blancard and Dramane Coulibaly
CEPII, WP No 2011-28


Using Econometrics: A Practical Guide (6th Edition) A.H. Studenmund. 2011

Monday, August 7, 2017

Confidence Intervals: Fad or Fashion

Confidence intervals seem to be a fad among some in pop stats/data science/analytics. Whenever there is mention of p-hacking, the ills of publication standards, or the pitfalls of null hypothesis significance testing, CIs almost always seem to be the proposed solution.

There are some attractive features of CIs. This paper provides some alternative views of CIs, discusses strengths and weaknesses, and ultimately proposes that they are on balance superior to p-values and hypothesis testing. CIs can bring more information to the table in terms of effect sizes for a given sample; however, some of the statements made in the article need to be read with caution. I just wonder how much of the fascination with CIs is the result of confusing a Bayesian interpretation with a frequentist application, or just sloppy misinterpretation. I completely disagree that they are more straightforward for students (compared to interpreting hypothesis tests and p-values, as the article claims).

Dave Giles gives a very good review, starting with the basics of what a parameter vs. an estimator vs. an estimate is, sampling distributions, etc. After reviewing the concepts key to understanding CIs, he points out two very common interpretations of CIs that are clearly wrong:

1) There's a 95% probability that the true value of the regression coefficient lies in the interval [a,b].
2) This interval includes the true value of the regression coefficient 95% of the time.

"we really should talk about the (random) intervals "covering" the (fixed) value of the parameter. If, as some people do, we talk about the parameter "falling in the interval", it sounds as if it's the parameter that's random and the interval that's fixed. Not so!"
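Giles' 'coverage' language is easy to demonstrate by simulation: hold the parameter fixed, draw repeated samples, and count how often the (random) interval covers it. A quick sketch with hypothetical normal data:

```python
import numpy as np

# The parameter mu is fixed; the intervals are random. Roughly 95% of
# intervals constructed this way cover mu. Any single realized interval
# either covers it or it doesn't -- no probability statement attaches.
rng = np.random.default_rng(5)
mu, sigma, n, reps = 10.0, 2.0, 50, 10_000
covered = 0
for _ in range(reps):
    sample = rng.normal(mu, sigma, n)
    half = 1.96 * sample.std(ddof=1) / np.sqrt(n)
    covered += (sample.mean() - half) <= mu <= (sample.mean() + half)
coverage = covered / reps
print(coverage)  # close to 0.95
```

(Using 1.96 rather than the t critical value makes coverage run slightly below 95% at n = 50, which is itself a useful reminder of where these guarantees come from.)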

In Robust misinterpretation of confidence intervals, the authors take on the idea that confidence intervals offer a panacea for interpretation issues related to null hypothesis significance testing (NHST):

"Confidence intervals (CIs) have frequently been proposed as a more useful alternative to NHST, and their use is strongly encouraged in the APA Manual...Our findings suggest that many researchers do not know the correct interpretation of a CI....As is the case with p-values, CIs do not allow one to make probability statements about parameters or hypotheses."

The authors present evidence of this misunderstanding by giving subjects a number of false statements about confidence intervals (including the two above pointed out by Dave Giles) and noting how frequently the statements are incorrectly affirmed as true.

In Osteoarthritis and Cartilage, Ranstam (2012) writes:

"In spite of frequent discussions of misuse and misunderstanding of probability values (P-values) they still appear in most scientific publications, and the disadvantages of erroneous and simplistic P-value interpretations grow with the number of scientific publications."

The article raises a number of issues related to both p-values and confidence intervals (multiplicity of testing, the focus on effect sizes, etc.) and points out some informative differences between using p-values vs. using standard errors to produce 'error bars.' However, in trying to clarify the advantages of confidence intervals, it steps really close to what might be considered an erroneous and simplistic interpretation:

"the great advantage with confidence intervals is that they do show what effects are likely to exist in the population. Values excluded from the confidence interval are thus not likely to exist in the population. "

Maybe I am being picky, but if we are going to be picky about interpreting p-values, then the same goes for CIs. This sounds a lot like talk of 'a parameter falling into an interval,' or the 'probability of a parameter falling into an interval,' which Dave Giles cautions against. The language is careful enough, using the term 'likely' rather than making strong probability statements, so perhaps this is a heuristic interpretation that, while useful, is not the most correct.

In Mastering 'Metrics, Angrist and Pischke give a great interpretation of confidence intervals that, in my opinion, doesn't lend itself as easily to abusive probability interpretations:

"By describing a set of parameter values consistent with our data, confidence intervals provide a compact summary of the information these data contain about the population from which they were sampled"

I think the Osteoarthritis and Cartilage article could have stated its case better by saying:

"The great advantage of confidence intervals is that they describe what effects in the population are consistent with our sample data. Our sample data is not consistent with population effects excluded from the confidence interval."

Both hypothesis tests and confidence intervals are statements about the compatibility of our observed sample data with population characteristics of interest. The ASA released a set of clarifying statements on p-values. Number 2 states that "P-values do not measure the probability that the studied hypothesis is true." Nor does a confidence interval (again see Ranstam, 2012).

Venturing into the risky practice of making imperfect analogies, consider this loosely from the perspective of a criminal investigation. We might think of a confidence interval as narrowing the range of suspects based on observed evidence, without providing specific probabilities of guilt or innocence for any particular suspect. Better evidence narrows the list of suspects, just as better sample data (less noise) narrows the confidence interval.

I see no harm in CIs, and more good if they draw attention to the practical/clinical significance of effect sizes. But the temptation to misrepresent CIs can be just as strong as the temptation to speak boldly of 'significant' findings after an exercise in p-hacking or in the face of meaningless effect sizes. Maybe some sins are greater than others, and proponents feel more comfortable with misinterpretations/overinterpretations of CIs than with misinterpretations/overinterpretations of p-values.

Or as Briggs concludes about this issue:

"Since no frequentist can interpret a confidence interval in any but in a logical probability or Bayesian way, it would be best to admit it and abandon frequentism"

References:

Brandstätter, E. (1999). Confidence intervals as an alternative to significance testing. Methods of Psychological Research Online, 4(2). PABST Science Publishers. Johannes Kepler Universität Linz.

Hoekstra, R., Morey, R. D., Rouder, J. N., & Wagenmakers, E.-J. (2014). Robust misinterpretation of confidence intervals. Psychonomic Bulletin & Review. DOI 10.3758/s13423-013-0572-3.

Ranstam, J. (2012). Why the P-value culture is bad and confidence intervals a better alternative. Osteoarthritis and Cartilage, 20(8), 805-808. ISSN 1063-4584. http://dx.doi.org/10.1016/j.joca.2012.04.001 (http://www.sciencedirect.com/science/article/pii/S1063458412007789)