Now out, after a seemingly infinite delay:
Papers in Press
In this paper, we defend a traditional approach to semantics, which holds that the outputs of compositional semantics are propositional, i.e. truth conditions (or anything else appropriate to be the objects of assertions or the contents of attitudes). Though traditional, this view has been challenged on a number of fronts over the years. Since the classic work of Lewis, arguments have been offered which purport to show that semantic composition requires values that are relativized, e.g. to times, or other parameters that render them no longer propositional. Focusing on recent variants of these arguments involving quantification and binding, we argue that a correct understanding of how composition works gives no reason to relativize semantic values, and that propositional semantic values are in fact the preferred option. We take our argument to be mainly empirical, but along the way, we defend some more general theses. Simple propositional semantic values are viable in composition, we maintain, because composition is itself a complex phenomenon, involving multiple modes of composition. Furthermore, some composition principles make adjustments to the meanings of constituents in the course of composition. These adjustments are triggered by syntactic environments. We argue such small contributions of meaning from syntactic structure are acceptable.
Indirectness and Intentions in Metasemantics, to appear in The Architecture of Context and Context-Sensitivity, ed. T. Ciecierski and P. Grabarczyk, Springer. The penultimate version is available here.
In earlier work, I argued that different sorts of context-dependent expressions have different metasemantics. In particular, I distinguished what I call direct from indirect metasemantics. The model for direct metasemantics is that of demonstratives: in this case, a single factor determines the expression’s value. In contrast, I have argued that many contextual parameters have an indirect metasemantics, on which multiple factors work to fix the parameter’s value. The case I have made for indirect metasemantics highlights factors that go beyond speakers’ intentions. In this paper, I further explore these issues. My first goal is to review and expand my arguments in favor of indirect metasemantics. My second goal is to ask how much an indirect metasemantics must depart from an intention-based metasemantics. The indirect metasemantics I shall defend gives speakers’ intentions a limited role. But I also note an option for a more fully intention-based but indirect metasemantics. I argue in favor of my own version. But I also note that deciding between them requires settling not only a range of specific issues about cognition, but some fundamental ones about how communication works.
Models, Model Theory, and Modeling, to appear in The Semantic Conception of Logic: Essays on Consequence, Invariance, and Meaning, ed. G. Sagi and J. Woods, Cambridge University Press. The penultimate version is available here.
This paper explores the relations between logic and semantics of natural language. It argues that logic as used in the study of natural language – an empirical discipline – functions much like specific kinds of scientific models. Particularly, it claims that logics can function like analogical models. More provocatively, it also suggests they can function like model organisms often do in the biological sciences, providing a kind of controlled environment for observations. Along the way, the paper also addresses the role of model theory in logic. It suggests model theory can play a number of different roles. It can offer something like scientific models when it comes to empirical applications, while at the same time furthering conceptual analysis of a basic notion of logic, and of course, being a part of pure mathematics.
Sources of Context Dependence: The Case of Knowledge Ascriptions, in Oxford Studies in Philosophy of Language, volume 1, ed. E. Lepore and D. Sosa, Oxford University Press, 2019, pp. 35-72. The penultimate version is available here.
This paper has two goals. The first is to defend a form of context-dependence for knowledge ascriptions. The second is to explore the different sources of context-dependence that natural language provides. Using knowledge ascriptions as an illustration, it argues that there are two very different sorts of sources of context-dependence in language. One is highly specific, typically lexical context-dependence. The other is general. Highly general features of extremely broad categories of expressions can create context-dependence that is only minimally associated with any one expression. The case of knowledge ascriptions provides an example of this kind of context-dependence. When it comes to philosophical concerns about contextualism, the difference points to something not always fully noted: specific context-dependence can often reveal something important about the concept a given word expresses, while general context-dependence does this at best in highly limited ways.
In this paper, I explore the nature of convention in language. The common notion of convention focuses on social aspects of coordination, but I identify two others that make minimal use of social coordination. I then explore in depth one example of a feature of language that has appeared to some not to be conventional: the information-structural notion of topic. I argue that the evidence strongly supports a conventional view of topic; but I also argue that it suggests a sort of convention that relies only minimally on social aspects of coordination. From this example, I conclude that we can often extend the reach of conventions in language, but that we should be careful about what those conventions are. Not everything that looks conventional is the same, and as we expand the scope of convention in language, we uncover very different sorts of conventions.
Lexical Meaning, Concepts, and the Metasemantics of Predicates, in The Science of Meaning: Essays on the Metatheory of Natural Language Semantics, ed. D. Ball and B. Rabern, Oxford University Press, 2018, pp. 97-225. The penultimate version is available here.
In this essay, I shall examine how concepts relate to lexical meanings. My main focus will be on how we can appeal to concepts to give specific, cognitively rich contents to lexical entries, while at the same time using standard methods of compositional semantics. I shall propose a way to do this that casts concepts in a metasemantic role for certain expressions; notably verbs, but more generally, expressions that function as content-giving predicates in a sentence. Along the way, I shall also consider the broader question of how rich, and how closely tied to cognition, a lexical meaning should be. The proposal I shall offer shows how we can see lexical meanings as importantly fixed by concepts, but at the same time, not having the internal structure of concepts, and having fully extensional and compositional properties. This, I shall argue, provides a better account of how concepts, drawn from the wider range of cognition, relate to language-specific lexical meanings.
A survey of work on the concept of truth. Minor update of the Spring 2013 version.
Our survey on the Liar paradox. Substantial revision of an earlier version, with new co-authors.
Complexity and Hierarchy in Truth Predicates, in Unifying the Philosophy of Truth, ed. T. Achourioti, H. Galinon, J. Martínez Fernández, and K. Fujimoto, Springer, 2015, pp. 211-243. Available from Springer. The penultimate version is available here.
In this paper, I speak in favor of hierarchies in the theory of truth. I argue that hierarchies are more well-motivated and can provide better and more workable theories than is often assumed. Along the way, I sketch the sort of hierarchy I believe is plausible and defensible. My defense of hierarchies assumes an ‘inflationary’ view of truth that sees truth as a substantial semantic concept. I argue that if one adopts this view of truth, hierarchies arise naturally. I also show that this approach to truth makes it a very complex concept. I argue that this complexity helps motivate hierarchies. Complexity and hierarchy go together, if you adopt the right view of truth.
Logical Consequence and Natural Language, in Foundations of Logical Consequence, ed. C. R. Caret and O. T. Hjortland, Oxford University Press, 2015, pp. 71-120. Available from Oxford University Press. The penultimate version is available here.
One of the great successes in the study of language has been the application of formal methods, including those of formal logic. Even so, this paper argues against one way of accounting for this success, by arguing that the study of natural language semantics and of logical consequence relations are not the same. There is indeed a lot we can glean about logic from looking at our languages, and at our inferential practices, but the semantic properties of natural languages do not determine genuine logical consequence relations. We can get from natural language semantics to logical consequence, but only by a significant process of identification of logical constants, abstraction, and idealization. This process takes us well beyond what we find in natural language semantics. The paper also discusses different approaches to the nature of logical consequence, and examines which allow logic and natural language to come closer together.
Representation and the Modern Correspondence Theory of Truth, in Meaning Without Representation: Essays on Truth, Expression, Normativity, and Naturalism, ed. S. Gross, N. Tebben, and M. Williams, Oxford University Press, 2015, pp. 81-102. Available from Oxford University Press. The penultimate version is available here.
This paper presents an approach to a substantial theory of truth, which it dubs the ‘modern correspondence theory’. This theory builds on insights from Tarski onward, to show how truth can depend on the things we talk about and their properties, without requiring a metaphysics of facts. The paper presents this theory as a development of the intuitions that motivated the traditional correspondence theory. The modern version is an improvement, as it relieves the correspondence theory of some contentious metaphysical commitments. Where the traditional theory relied on metaphysics, the modern theory relies on semantics. The paper explores the connections between semantics and truth that the modern theory proposes. It isolates a general notion of representation, and shows how the modern theory characterizes truth as a fundamental property of certain kinds of representational systems.
Explanation and Partiality in Semantic Theory, in Metasemantics: New Essays on the Foundations of Meaning, ed. A. Burgess and B. Sherman, Oxford University Press, 2014, pp. 259-292. Available from Oxford University Press. The penultimate version is available here.
This paper argues for a form of partiality in semantics. In particular, it argues that semantics, narrowly construed as part of our linguistic competence, is only a partial determinant of truth-conditional content. Likewise, semantic theories in linguistics function as partial theories of content. It offers an account of where and how this partiality arises, which focuses on how lexical meaning combines elements of distinctively linguistic competence with elements from our broader cognitive resources. This shows how we can accommodate some partiality in semantic theories without falling into skepticism about semantics or its place in linguistic theory. The argument of this paper proceeds by examining where semantic theories provide good explanations, and the differing roles of disquotation, model theory, and other forms of mathematics in supplying them. It shows that model theory provides one illustration of where and how semantic theories can provide good explanations of semantic competence, while disquotation marks the places where they lose their explanatory force. Insofar as disquotation plays an ineliminable role in building theories of content, semantic theories can be at best partial theories of content. From there, we conclude that semantics is itself partial.
Quine on Reference and Quantification, in A Companion to W. V. O. Quine, ed. G. Harman and E. Lepore, Wiley-Blackwell, 2014, pp. 373-400. Available from Wiley-Blackwell. The penultimate version is available here.
A review of Quine on reference and quantification.
A review of Davidson on truth.
A New Puzzle about Discourse-Initial Contexts, in Brevity, ed. L. Goldstein, Oxford University Press, 2013, pp. 107-121. Available from Oxford University Press. The penultimate version is available here.
This note raises a puzzle derived from some recent observations about discourse-initial contexts. Many phenomena, loosely characterizable as anaphoric, tend to be infelicitous in discourse-initial contexts. Varieties of ellipsis provide the examples I shall focus on in this note. Though it has sometimes been argued that ellipsis requires explicit linguistic antecedents, a significant body of recent literature has shown that, with the right contextual background, discourse-initial occurrences can be acceptable. But, this note argues, these facts raise a further puzzle. Cases where ellipsis is acceptable in discourse-initial contexts typically require very heavy contextual set-up. Yet prior discourse easily supports ellipsis, without providing the same sort of information that heavy contextual set-up does. The puzzle, then, is why ellipsis is so easy in non-discourse-initial contexts. Why does simply uttering a sentence — merely creating so much noise — do the job that otherwise can be done only by heavy contextual set-up? The main goal of this note is to present this puzzle, but it concludes with some speculation about where an answer might be found, suggesting a role for sentence processing.
A survey of work on the concept of truth.
This paper explores how words relate to concepts. It argues that in many cases, words get their meanings in part by associating with concepts, but only in conjunction with substantial input from language. Language packages concepts in grammatically determined ways. This structures the meanings of words, and determines which sorts of concepts map to words. The results are linguistically modulated meanings, and the extra-linguistic concepts associated with words are often not what intuitively would be expected. The paper concludes by discussing implications of this thesis for the relation of word to sentence meaning, and for issues of linguistic determinism.
This note focuses on Cappelen and Hawthorne’s analysis of the ‘Operator Argument’. This argument seeks to establish a form of temporalism, by showing that the semantic values of sentences must be temporally neutral, i.e. be sets of world-time pairs, on the basis of the presence of temporal operators in our languages. This note agrees with Cappelen and Hawthorne that the argument fails, and shows that this happens for some fundamental syntactic and semantic reasons. In doing so, it illustrates how language opts for a non-temporalist strategy for encoding tense and time. Even so, we can also see in the syntactic and semantic details ways language could have opted for temporalist strategies. Hence, when looking at the way language encodes information, we should see non-temporalism (eternalism) as a fundamental feature of the way language works, but also as a contingent ‘choice’ our languages make, and not as a conceptual truth.
Descriptions, Negation, and Focus, in Compositionality, Context and Semantic Values, ed. R. J. Stainton and C. Viger, Springer-Verlag, 2009, pp. 193-220. Available from Springer. The penultimate version is available here.
This paper argues that some familiar cases of interaction between definite descriptions and negation are not best analyzed as scope interactions. Attention to the role of focus, and a number of related semantic and pragmatic factors, shows that the cases give no evidence of scope interaction. However, these factors can generate an illusion of scope: focus, in particular, may lead us to think sentences display scope ambiguities they do not. These conclusions offer limited support to non-quantificational treatments of definite descriptions.
This paper argues that relativity of truth to a world plays no significant role in empirical semantic theory, even as it is done in the model-theoretic tradition relying on intensional type theory. Some philosophical views of content provide an important notion of truth at a world, but they do not constrain the empirical domain of semantic theory in a way that makes this notion empirically significant. As an application of this conclusion, this paper shows that a potential motivation for relativism based on the relativity of truth to a world fails.
This paper shows that several sorts of expressions cannot be interpreted metaphorically, including determiners, tenses, etc. Generally, functional categories cannot be interpreted metaphorically, while lexical categories can. This reveals a semantic property of functional categories, and it shows that metaphor can be used as a probe for investigating them. It also reveals an important linguistic constraint on metaphor. The paper argues this constraint applies to the interface between the cognitive systems for language and metaphor. However, the constraint does not completely prevent structural elements of language from being available to the metaphor system. The paper shows that linguistic structure within the lexicon, specifically, aspectual structure, is available to the metaphor system.
Quantification and Contributing Objects to Thoughts, Philosophical Perspectives 22 (2008): 207-231. Available from Wiley-Blackwell. The penultimate version is available here.
This paper argues that the determiner ‘both’ is ambivalent as to whether it should be classified as quantificational or object-denoting. It displays scope behavior typical of quantification, but it can be interpreted as having an individual as semantic value. To show the significance of this, the paper discusses two ways of thinking about quantifiers. One is via singular thoughts, and which terms can contribute objects to them. Viewed this way, ‘both’ can appear object-denoting and non-quantificational. Another is via a range of linguistic features. Viewed this way, ‘both’ can appear quantificational. It can appear both ways, the paper argues, because the notion of quantification in natural language is the intersection of a number of features, which do not always group together precisely in accord with our intuitions about expressing singular thoughts.
Where the Paths Meet: Remarks on Truth and Paradox (with Jc Beall), in Midwest Studies in Philosophy, Volume XXXII: Truth and Its Deformities, ed. P. A. French and H. K. Wettstein, Blackwell, 2008, pp. 169-198. Available from Wiley-Blackwell. The penultimate version is available here.
The study of truth is often seen as running on two separate paths: the nature path and the logic path. The former concerns metaphysical questions about the ‘nature’, if any, of truth. The latter concerns logic and the paradoxes. It is often assumed that these two paths do not meet, and that the two concerns are independent of each other. In this paper, we argue that the paths do in fact meet; in particular, that the nature path impacts the logic path. We argue that what one can and must say about the logic of truth and the Liar paradox is influenced, or even in some cases determined, by what one says about the metaphysical nature of truth.
This paper argues against relativism, focusing on relativism based on the semantics of predicates of personal taste. It presents and defends a contextualist semantics for these predicates, derived from current work on gradable adjectives. It then considers metasemantic questions about the kinds of contextual parameters this semantics requires. It argues that they are not metasemantically different from those found with other gradable adjectives, and that contextual parameters of this sort are widespread in natural language. Furthermore, this paper shows that rejecting such parameters leads to an unacceptably rampant form of relativism, one that relativizes truth to an open-ended list of parameters.
Definite Descriptions and Quantifier Scope: Some Mates Cases Reconsidered, European Journal of Analytic Philosophy 3 (2007): 133-158 (special issue on descriptions). Available at the journal’s web site here; but the typesetting got somewhat messed up, and you are better off reading the penultimate version, available here.
This paper reexamines some examples, discussed by Mates and others, of sentences containing both definite descriptions and quantifiers. It has frequently been claimed that these sentences provide evidence for the view that definite descriptions themselves are quantifiers. The main goal of this paper is to argue this is not so. Though the examples are compatible with quantificational approaches to definite descriptions, they are also compatible with views that treat definite descriptions as basically scopeless. They thus provide no reason to see definite descriptions as quantifiers. Even so, this paper shows that the examples do raise a surprising range of complex issues about how quantifier scope works, and where it occurs. Thus, a clear picture of how these examples work will help us to understand better where definite descriptions fit into the larger picture of quantifiers and related phenomena.
Context and Unrestricted Quantification, in Absolute Generality, ed. A. Rayo and G. Uzquiano, Oxford University Press, 2006, pp. 45-74. The penultimate version is available here.
Quantifiers, in The Oxford Handbook of Philosophy of Language, ed. E. Lepore and B. Smith, Oxford University Press, 2006, pp. 794-821. The penultimate version is available here.
Focus: A Case Study on the Semantics-Pragmatics Boundary, in Semantics versus Pragmatics, ed. Z. G. Szabo, Oxford University Press, 2005, pp. 72-110. The penultimate version is available here.
Minimalism, Deflationism, and Paradoxes (revised and expanded version of “Minimalism and Paradoxes”) in Deflationism and Paradoxes, ed. Jc Beall and B. Armour-Garb, Oxford University Press, 2005, pp. 107-132. The penultimate version is available here.
Presuppositions, Truth Values, and Expressing Propositions, in Contextualism in Philosophy: Knowledge, Meaning, and Truth, ed. G. Preyer and G. Peter, Oxford University Press, 2005, pp. 349-396. The penultimate version is available here.
Against Truth-Value Gaps, in Liars and Heaps: New Essays on Paradox, ed. Jc Beall, Oxford University Press, 2004, pp. 151-194. The penultimate version is available here.
Unpublished and in Progress
A great deal of discussion in recent philosophy of language has centered on the idea that there might be hidden contextual parameters in our sentences. But relatively little attention has been paid to what those parameters themselves are like, beyond the assumption that they behave more or less like variables do in logic. My goal in this paper is to show this has been a mistake. I argue there are at least two very different sorts of contextual parameters. One is indeed basically like variables in logic, but the other is very different, and much more like overt referring expressions. Most of this paper is an in-depth study of an example where we see both classes of contextual parameters at work: the case of predicates of personal taste. I claim they have a standard parameter, like all gradable predicates, which behaves much as a variable does. But I also claim they have an experiencer parameter, which behaves strikingly like an overt referring expression, syntactically, semantically, and pragmatically. I show that the different properties of these two classes of parameters are reflected in the different sorts of evidence we can bring to bear in showing their existence, and in the different ways they interact with the contents speakers intuitively seek to convey with their utterances.