Essays Forthcoming or in Progress

(Click on a title for a link to the essay in PDF format.)

Comments are welcome.

 

Forthcoming

Testability and Ockham’s Razor: How Formal and Statistical Learning Theory Converge in the New Riddle of Induction

Nelson Goodman’s new riddle of induction forcefully illustrates a basic challenge that must be confronted by any adequate theory of inductive inference: provide some basis for choosing among alternative hypotheses that fit past data but make divergent predictions. One response to this challenge is to appeal to some version of Ockham’s razor, according to which simpler hypotheses should be preferred over more complex ones. Statistical learning theory takes this approach by showing how a concept similar to Popper’s notion of degrees of testability is linked to minimizing expected predictive error. In contrast, formal learning theory explains Ockham’s razor by reference to the goal of efficient convergence to the truth, where efficiency is understood as minimizing the maximum number of retractions of conjectures, or “mind changes.” In this essay, I show that, despite their differences, statistical and formal learning theory yield precisely the same result for a class of inductive problems that I call strongly VC ordered, of which Goodman’s riddle is just one example.
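To convey the flavor of the mind-change idea, here is a minimal sketch of my own (the code and encoding of hypotheses are illustrative assumptions, not drawn from the essay): a learner that conjectures “all emeralds are green” until the data refute it, and only then switches to the grue hypothesis indexed by the stage of the first anomaly, settles on the truth with at most one retraction.

    # Illustrative sketch only: a one-retraction learner for Goodman's riddle.
    # Observations are booleans: True means a green emerald was observed.
    # Hypotheses: "green" (all emeralds are green) or ("grue", t), meaning
    # emeralds observed before stage t are green and later ones are blue.

    def conjecture(observations):
        """Return the simplest hypothesis consistent with the data so far."""
        for stage, is_green in enumerate(observations):
            if not is_green:
                return ("grue", stage)  # the single permitted mind change
        return "green"

    print(conjecture([True, True, True]))          # "green"
    print(conjecture([True, True, False, False]))  # ("grue", 2)

On this toy picture, the simpler hypothesis is the one conjectured first, and the learner’s worst-case number of mind changes is one.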

 

Cartwright on Causality: Methods, Metaphysics, and Modularity

Nancy Cartwright’s most recent book, Hunting Causes and Using Them: Approaches in Philosophy and Economics, maintains that methodological issues having to do with inferring and applying claims about cause and effect must be considered in tandem with metaphysical questions about what causation is. With regard to the latter issue, Cartwright insists that causation is not just one kind of thing but is instead a general category for various types of processes that often differ in important ways. From these two themes, it naturally follows that one should be skeptical that there is any method of causal inference that is applicable in all cases. Moreover, for any method, one ought to be very clear about the types of causal systems for which it is suited and, of equal importance, those for which it is not. I am quite sympathetic to Cartwright’s overall perspective on causation, but I take issue with some of her characterizations of particular approaches and with several of her specific claims about their limitations. I argue that Cartwright’s discussion of Bayesian network approaches is problematic insofar as it does not pay adequate attention to the distinct projects that might be pursued within a Bayes nets approach to causation. In addition, I disagree with a number of claims Cartwright makes about the limitations of Bayes nets as a method of causal inference.

 

Naturalism and the Enlightenment Ideal: Rethinking a Central Debate in the Philosophy of Social Science

The naturalism versus interpretivism debate in the philosophy of social science is traditionally framed as the question of whether social science should attempt to emulate the methods of natural science. I show that this manner of formulating the issue is problematic insofar as it presupposes an implausibly strong unity of method among the natural sciences. I propose instead that what is at stake in this debate is the feasibility and desirability of what I call the Enlightenment ideal of social science. I argue that this characterization of the issue is preferable, since it highlights the central disagreement between advocates of naturalism and interpretivism and makes connections with recent work on causal inference and social epistemology, while avoiding unfruitful comparisons between the social and natural sciences.

 

Causality, Causal Models, and Mechanisms

One commonly drawn distinction in social science research is between quantitative and qualitative approaches, a distinction sometimes also drawn in terms of variable-oriented versus case-oriented research. A third, mechanism-oriented approach studies causal relationships by developing models, often represented by mathematical formulas, of micro-processes that could generate a macro-sociological phenomenon of interest. In this chapter, I explore the interrelationships among variable-, case-, and mechanism-oriented approaches to social science research. I agree that there is a common logic behind variable- and case-oriented approaches, but I suggest that this commonality is best formulated within an approach to causal inference that relies on Bayesian networks (Bayes nets, for short). More specifically, the types of causal models typically associated with the two approaches (linear equations for variable-oriented research, Boolean logic for case-oriented research) are two types of parameterizations of Bayes nets. The Bayes nets framework therefore identifies model-general aspects of causal inference that pertain to these two as well as other types of causal models, and it can thereby reasonably be taken to articulate an “underlying logic” of causal inference. Finally, I consider the connection of mechanism-oriented research to variable- and case-oriented approaches to causal inference. I suggest that the relationship between mechanism- and variable-oriented approaches is best understood by way of a distinction between what I call direct and indirect causal inference.
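To illustrate the parameterization point, here is a minimal sketch of my own (the particular equations and truth functions are assumptions for illustration, not taken from the chapter): the same directed graph, X → Y ← Z, can be filled in with a linear equation in the style of variable-oriented research, or with a Boolean function in the style of case-oriented research.

    # Illustrative sketch only: one DAG, X -> Y <- Z, two parameterizations.
    import random

    def linear_y(x, z, noise_sd=1.0):
        """Variable-oriented style: Y is a linear function of its parents."""
        return 2.0 * x - 0.5 * z + random.gauss(0.0, noise_sd)

    def boolean_y(x, z):
        """Case-oriented style: Y occurs iff X is present and Z is absent."""
        return x and not z

    print(linear_y(1.0, 0.0))      # a noisy quantitative effect
    print(boolean_y(True, False))  # True: a configurational pattern

In both cases the graph encodes the same parent-child structure; only the local function assigned to Y differs.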

 

Works in Progress

What if the Principle of Induction is Normative? Formal Learning Theory and Hume’s Problem

This essay argues that a successful answer to Hume’s problem of induction can be developed from a sub-genre of philosophy of science known as formal learning theory. One of the central concepts of formal learning theory is logical reliability: roughly, a method is logically reliable when it is assured of eventually settling on the truth for every sequence of data that is possible given what we know. I show that the principle of induction (PI) is necessary and sufficient for logical reliability in what I call simple enumerative induction. This answer to Hume’s problem rests on interpreting the PI as a normative claim justified by a non-empirical epistemic means-ends argument. In such an argument, a rule of inference is shown by mathematical or logical proof to promote a specified epistemic end. Since the proof concerning the PI and logical reliability is not based on inductive reasoning, this argument avoids the circularity that Hume argued was inherent in any attempt to justify the PI.
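For the core idea in miniature, here is a sketch of my own devising (illustrative only, not the essay’s formal apparatus): a learner that conjectures the universal generalization until a counterexample appears stabilizes on the truth on every possible data sequence, whether or not the generalization holds.

    # Illustrative sketch only: a logically reliable enumerative inductor.
    # Data arrive one at a time as booleans: True means the instance is G.

    def pi_learner(data_so_far):
        """Conjecture 'all F are G' unless a counterexample has been seen."""
        return all(data_so_far)

    # If every instance is G, the learner conjectures True forever; if some
    # instance is not G, it switches to False then and never switches back.
    stream = [True, True, False, True]
    for n in range(1, len(stream) + 1):
        print(n, pi_learner(stream[:n]))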

 

Inductive Rules, Background Knowledge, and Skepticism

This essay defends the view that inductive reasoning involves following inductive rules against two objections: that inductive rules are undesirable because they ignore background knowledge, and that they are unnecessary because Bayesianism is not an inductive rule. I propose that inductive rules be understood as sets of functions from data to hypotheses that are intended as solutions to inductive problems. According to this proposal, background knowledge is important in the application of inductive rules, and Bayesianism qualifies as an inductive rule. Finally, I consider a Bayesian formulation of inductive skepticism suggested by Lange. I argue that while there is no good Bayesian reason for judging this inductive skeptic irrational, the approach I advocate indicates a straightforward reason not to be an inductive skeptic.
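To make the proposal concrete, here is a minimal sketch of my own (the hypotheses, priors, and numbers are illustrative assumptions): an inductive rule can be modeled as a function from data sequences to hypotheses, and Bayesian conditionalization fits this mold, mapping the data to whichever hypothesis the posterior favors.

    # Illustrative sketch only: an inductive rule as a function from data
    # to hypotheses, with a Bayesian comparison of two coin hypotheses.

    def bayes_rule(flips, prior_fair=0.5):
        """Map a data sequence to the hypothesis favored by the posterior."""
        p_fair, p_biased = 0.5, 0.9  # assumed chance of heads on each view
        like_fair = like_biased = 1.0
        for heads in flips:
            like_fair *= p_fair if heads else 1.0 - p_fair
            like_biased *= p_biased if heads else 1.0 - p_biased
        post_fair = prior_fair * like_fair
        post_biased = (1.0 - prior_fair) * like_biased
        return "fair" if post_fair >= post_biased else "biased"

    print(bayes_rule([True, True, True, True]))    # "biased"
    print(bayes_rule([True, False, True, False]))  # "fair"

Background knowledge enters through the prior and the candidate hypotheses, which is consistent with treating the rule itself as a function from data to hypotheses.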

 

A New Approach to Argument by Analogy: Extrapolation and Chain Graphs

In order to make scientific results relevant to practical decision making, it is often necessary to transfer a result obtained in one set of circumstances—an animal model, a computer simulation, an economic experiment—to another that may differ in relevant respects—for example, to humans, the global climate, or an auction. Such inferences, which we can call extrapolations, are a type of argument by analogy. This essay sketches a new approach to analogical inference that utilizes chain graphs, which resemble directed acyclic graphs (DAGs) except in allowing that nodes may be connected by lines as well as arrows. This chain graph approach generalizes the account of extrapolation I provided in my (2008) book and leads to new insights. More specifically, this approach explicates the role of “fingerprints,” or distinctive markers, as a strategy for avoiding an underdetermination problem having to do with spurious analogies. Moreover, it shows how the extrapolator’s circle, one of the central challenges for extrapolation highlighted in my book, is closely tied to distinctive markers and the Markov condition as it applies to chain graphs. Finally, the approach suggests additional ways in which investigations of a model can provide information about a target, which I illustrate with examples concerning nanomaterials in sunscreens and fingerprints in climate science.
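As a rough indication of the formalism, here is a minimal sketch of my own (the representation is an assumption for illustration, not the essay’s machinery): a chain graph can be stored as a mixed graph whose edges are either directed arrows or undirected lines.

    # Illustrative sketch only: a chain graph as a mixed graph.

    class ChainGraph:
        def __init__(self):
            self.arrows = set()  # directed edges, stored as (tail, head)
            self.lines = set()   # undirected edges, stored as frozensets

        def add_arrow(self, tail, head):
            self.arrows.add((tail, head))

        def add_line(self, a, b):
            self.lines.add(frozenset((a, b)))

    # X -> Y with a line between Y and Z, as might represent a symmetric
    # association inside one chain component of the graph.
    g = ChainGraph()
    g.add_arrow("X", "Y")
    g.add_line("Y", "Z")
    print(sorted(g.arrows), [sorted(e) for e in g.lines])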