I’ve been reading and rereading Longino’s “Feminist Epistemology as a Local Epistemology”. I originally came to it because, missing philosophy, and in particular epistemology, I wanted to go back and read influential works that curriculum or ideological influences had made impossible to encounter. Of course, neither of those constitutes something intentionally enacted.
I think my favorite part of the essay is where Longino points out problems with a central ontological commitment that drives empirical research: the idea that, in general, a theory should be as simple as possible in its commitment to the existence of types of entities. If the data can be explained in a simpler way, then it should be. This idea, one that Longino takes as a standard virtue in epistemology, and one often presumed in the natural sciences, is frequently characterized as “what closes the gap between evidence and theory”.
I understood this virtue also as justification for, as well as motivation to, invoke “Occam’s Razor”. Occam’s Razor is the normative view that, more often than not, the simplest theory is the best choice. While this description seems equivalent to the one issued above, it is not the same: Occam’s Razor belongs squarely to the area of theory selection/comparison. Thus it rears its head only in the context of a philosophical or scientific dispute over which of two or more possible theories a particular scientific community should invoke and/or “project”, to use Nelson Goodman’s terminology.
Longino’s arguments against both the standard epistemic value (simplicity) and the standard method of theory selection (Occam’s Razor) bring up important inconsistencies that, as Longino herself points out, many academic philosophers have failed to acknowledge or engage with.
In any event, here are her most concise arguments levied against these normative constraints:
i. This formulation begs the question what counts as an adequate explanation. Is an adequate explanation an account sufficient to generate predictions or an account of underlying processes, and, if explanation is just retrospective prediction, then must it be successful at individual or population levels? Either the meaning of simplicity will be relative to one’s account of explanation, thus undermining the capacity of simplicity to function as an independent epistemic value, or the insistence on simplicity will dictate what gets explained and how.
ii. We have no a priori reason to think the universe simple, i.e. composed of very few kinds of thing (as few as the kinds of elementary particles, for example) rather than of many different kinds of thing. Nor is there or could there be empirical evidence for such a view.
iii. The degree of simplicity or variety in one’s theoretical ontology may be dependent on the degree of variety one admits into one’s description of the phenomena. If one imposes uniformity on the data by rejecting anomalies, then one is making a choice for a certain kind of account. If the view that the boundaries of our descriptive categories are conventional is correct, then there is no epistemological fault in this, but neither is there virtue.
I will discuss and analyze these in greater detail in a (near) future post. In the meantime, you can find Longino’s piece in Louis P. Pojman’s The Theory of Knowledge: Classical and Contemporary Readings (Third Edition).