Philosophy of Science
I am developing a research program that uses ideas and methods from topology to formalize notions of similarity among the models used in science. These tools can be put to use in giving precise answers to a surprising variety of questions, such as the nature of intertheoretic reduction and emergence, theory change, lawhood and counterfactual reasoning in science, and the epistemology of modeling and idealization. Here, the selection of the relevant notion of similarity in a given context is crucial, as is specificity about just which models one is considering.
On modeling and idealization, one of my papers on Norton’s Dome—see below—also introduces the idea of a “Minimal Approximation,” an unholy but useful admixture of the sort of minimality for explanations one finds in minimal models, but applied only to properties of models rather than to models themselves, per John D. Norton’s distinction. I’m also co-editing a special issue of Synthese on infinite idealizations in science, forthcoming later in 2018.
There is a longstanding debate about the senses in which classical mechanics can be understood as a deterministic theory. In one paper, I examine a recent and much-discussed example of the purported failure of determinism in classical mechanics—that of Norton’s Dome—and the range of current objections to it. These objections all assume a fixed conception of classical mechanics, but I argue that there are in fact many different conceptions appropriate and useful for different purposes, none of which is intrinsically preferred in analyzing the Dome. Rather than arguing for or against determinism, I stress the wide variety of pragmatic considerations that, in a specific context, may lead one to adopt one conception over another. Besides extending these ideas to inertial motion in general relativity, I’ve also shown how certain approximations Norton used do not affect the indeterminism of the example; in fact, the approximations are “minimal” in the sense that they elide details of the properties of the classical mechanical model that don’t matter to the inference drawn from it.
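For readers unfamiliar with the example, the Dome can be sketched as follows; this is the standard presentation, in units chosen so that the physical constants drop out:

```latex
% Equation of motion for a unit-mass particle at the apex of the dome,
% where r is arc-length distance from the apex:
\[
  \frac{d^2 r}{dt^2} = \sqrt{r}, \qquad r(0) = 0, \quad \dot{r}(0) = 0.
\]
% Besides the trivial solution r(t) = 0, for any T >= 0 there is a
% solution on which the particle spontaneously begins to move at time T:
\[
  r(t) =
  \begin{cases}
    0, & t \leq T, \\[2pt]
    \tfrac{1}{144}\,(t - T)^4, & t \geq T.
  \end{cases}
\]
% One checks directly that \ddot{r} = (t-T)^2/12 = \sqrt{r} for t >= T,
% so both branches satisfy the equation: the initial data do not fix
% the solution, which is the purported indeterminism.
```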
Spacetime Theory and Gravitation
In the context of general relativity, Stephen Hawking (among others) has proposed that a necessary condition for a property of spacetime to be “physically significant” is that it be stable: all the spacetimes sufficiently similar to the one in question must also have that property. Whether a property is stable thus depends on the notion of similarity, which in physics is made precise by introducing a topology on the collection of all spacetimes. Some have accordingly suggested that one should find a canonical topology, a single “right” topology for every inquiry. In another paper, I show how the main candidates—and, to some extent, every possible choice—face the horns of a no-go result. I suggest that instead of trying to decide what the “right” topology is for all problems, one should let the details of particular types of problems guide the choice of an appropriate topology. This work forms one chapter of my dissertation.
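To fix ideas, here is one standard way to make the stability condition precise (the notation is mine; the substantive choices lie in the topology):

```latex
% Let \Lambda be the collection of spacetimes and \tau a topology on it.
% A property P is \tau-stable at a spacetime (M, g_{ab}) \in \Lambda iff
\[
  \exists\, U \in \tau \ \text{with}\ (M, g_{ab}) \in U
  \ \text{such that every}\ (M', g'_{ab}) \in U \ \text{has}\ P.
\]
% Equivalently: the set of spacetimes with P contains a \tau-open
% neighborhood of (M, g_{ab}). Whether P counts as stable—hence as
% "physically significant"—therefore depends on the choice of \tau.
```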
In another chapter, I illustrate the importance of choosing a topology with the relationship between general relativity and Newtonian gravitation. Accounts of the reduction of the former to the latter usually take one of two approaches. One considers the limit as the speed of light c → ∞, while the other focuses on the approximation of formulas for low velocities. Although the first approach treats the reduction of relativistic spacetimes globally, many have argued that ‘c → ∞’ can at best be interpreted counterfactually, which is of limited value in explaining the past empirical success of Newtonian gravitation. The second, on the other hand, while more applicable to explaining this success, only treats a small fragment of general relativity. Building on work by Jürgen Ehlers, I propose a different account of the reduction relation that offers the global applicability of the c → ∞ limit while maintaining the explanatory utility of the low velocity approximation. In doing so, I highlight the role that a topology on the collection of all spacetimes plays in defining the relation, and how the choice of topology corresponds with broader or narrower classes of observables that one demands be well-approximated in the limit.
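Schematically, and following Ehlers's framework as it is usually presented, the limit can be parametrized by λ = 1/c²:

```latex
% Consider a one-parameter family of relativistic spacetimes
% (M, g_{ab}(\lambda)) with \lambda = 1/c^2. The reductive claim is that
\[
  \lim_{\lambda \to 0}\, \big( M,\, g_{ab}(\lambda) \big)
  \;=\; \text{(classical spacetime structure)},
\]
% where the limit is taken in a chosen topology on the collection of
% models. Different topologies demand convergence of different classes
% of observables, so the same family may converge in one topology and
% fail to converge in another—this is where the topological choices of
% the previous chapter do their work.
```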
I’ve also written a paper with some technical results that justify why the clock hypothesis in general relativity makes sense—that is, why it makes sense to represent the time elapsed along a worldline as the length of that worldline as determined by the spacetime metric.
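The clock hypothesis itself, in the notation standard in the foundations literature (signature conventions vary), states:

```latex
% For a timelike worldline \gamma : [t_1, t_2] \to M with tangent \xi^a,
% the time elapsed on a clock following \gamma is its metric length:
\[
  \tau[\gamma] \;=\; \int_{t_1}^{t_2}
    \sqrt{\left|\, g_{ab}\, \xi^a \xi^b \,\right|}\; dt .
\]
% The hypothesis is that elapsed time depends only on the worldline and
% the metric, not on, e.g., the clock's acceleration history; the
% technical results concern when this representation is justified.
```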
Finally, most authors don’t consider the role of dimensionality in interpreting a spacetime theory, but in work with J. B. Manchak, Mike D. Schneider, and James Owen Weatherall I show that in two spacetime dimensions, it is far from clear what general relativity is even supposed to be, as the most obvious formulation has qualitatively different properties than in four spacetime dimensions.
Quantum Theory
I’m especially interested in structural features of quantum theory that make it (dis)similar to other theories, such as contextuality. Ben Feintzeig and I applied the Kochen-Specker theorem to derive a sort of no-go theorem for a large class of hidden variable theories that seek to avoid the hard choices of Bell’s theorem by generalizing probability theory. In a word, they can only do so on pain of admitting a finite null cover of events: a finite collection of events whose union is the trivial event (anything happens), but each of which is assigned a (generalized) probability of 0—something impossible for classical (Kolmogorovian) probability spaces.
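The classical impossibility is a one-line consequence of finite subadditivity (a standard fact, included here for clarity):

```latex
% Suppose (\Omega, \mathcal{F}, P) is a Kolmogorovian probability space
% and E_1, \dots, E_n \in \mathcal{F} form a finite null cover:
% \bigcup_i E_i = \Omega with P(E_i) = 0 for each i. Then
\[
  1 \;=\; P(\Omega) \;=\; P\Big( \bigcup_{i=1}^{n} E_i \Big)
  \;\leq\; \sum_{i=1}^{n} P(E_i) \;=\; 0,
\]
% a contradiction. So no classical probability space admits a finite
% null cover, which is what the generalized theories must give up.
```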
Philosophy of Statistics
My work in philosophy of statistics has centered on the nature of evidence; I’m currently co-editing a special issue of Synthese on evidence amalgamation in the sciences, forthcoming later in 2018.
As for my own work, in one paper I consider the likelihood principle, a constraint on any measure of evidence arising from a statistical experiment, in light of procedures for model verification—statistical tests of modeling assumptions. I argue that if model verification is to be at all feasible, and insofar as the results of the verification should bear on the evidence produced by the experiment, the likelihood principle cannot be a universal constraint on any measure of evidence. Nevertheless, I suggest that proponents of the principle may hold out for a restricted version thereof, either as a kind of idealization or as defining one among many different forms of evidence.
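For reference, the likelihood principle in its usual formulation (associated with Birnbaum, among others) can be stated as:

```latex
% If two experiments produce outcomes x and y whose likelihood functions
% are proportional as functions of the parameter \theta, i.e.,
\[
  L(\theta; x) \;=\; c \cdot L(\theta; y)
  \quad \text{for all } \theta \text{ and some constant } c > 0,
\]
% then x and y carry the same evidence about \theta. Model-checking
% procedures, however, exploit features of the data beyond the
% likelihood function of the assumed model—the source of the tension.
```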
I am also developing a formalization of Deborah G. Mayo’s theory of severe testing, and of other defenses and elaborations of the foundations of classical statistics. When it comes to statistical schools I am a pluralist, but I have found that classical statistics has received much less attention than Bayesian statistics in the philosophy of science literature.
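A rough schematic of the severity requirement being formalized—my gloss on Mayo's informal statement, not her definition verbatim:

```latex
% A claim C passes a severe test T with data x_0 just in case
% (i) x_0 accords with C, and (ii) with high probability, T would have
% yielded data according less well with C than x_0 does, were C false:
\[
  \mathrm{SEV}(T, x_0, C)
  \;=\; \Pr\big(\, d(X) > d(x_0) \;;\; C \text{ false} \,\big)
  \quad \text{is high},
\]
% where d measures accordance between data and claim. Part of the
% formal project is making the quantification over the ways C could
% be false precise.
```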
Philosophy of Computing
I’m interested in accounts of computational implementation, i.e., of how physical objects act as computers. With Mike Cuffaro, I’m editing Physical Perspectives on Computation, Computational Perspectives on Physics, due out in May 2018 with Cambridge University Press. In addition to our introduction, there are twelve commissioned chapters evenly divided amongst four thematic parts:
- The Computability of Physical Systems and Physical Systems as Computers
- The Implementation of Computation in Physical Systems
- Physical Perspectives on Computer Science
- Computational Perspectives on Physical Theory