The Blackwell Companion
to Philosophy
Blackwell Companions to Philosophy
This outstanding student reference series offers a comprehensive and authoritative
survey of philosophy as a whole. Written by today’s leading philosophers, each volume
provides lucid and engaging coverage of the key figures, terms, topics, and problems of
the field. Taken together, the volumes provide the ideal basis for course use, represent-
ing an unparalleled work of reference for students and specialists alike.
Already published in the series:
1 The Blackwell Companion to
Philosophy, Second Edition
Edited by Nicholas Bunnin and
E. P. Tsui-James
2 A Companion to Ethics
Edited by Peter Singer
3 A Companion to Aesthetics
Edited by David Cooper
4 A Companion to Epistemology
Edited by Jonathan Dancy and Ernest Sosa
5 A Companion to Contemporary Political
Philosophy
Edited by Robert E. Goodin and
Philip Pettit
6 A Companion to Philosophy of Mind
Edited by Samuel Guttenplan
7 A Companion to Metaphysics
Edited by Jaegwon Kim and Ernest Sosa
8 A Companion to Philosophy of Law and
Legal Theory
Edited by Dennis Patterson
9 A Companion to Philosophy of Religion
Edited by Philip L. Quinn and
Charles Taliaferro
10 A Companion to the Philosophy of
Language
Edited by Bob Hale and Crispin Wright
11 A Companion to World Philosophies
Edited by Eliot Deutsch and Ron
Bontekoe
12 A Companion to Continental Philosophy
Edited by Simon Critchley and
William Schroeder
13 A Companion to Feminist Philosophy
Edited by Alison M. Jaggar and
Iris Marion Young
14 A Companion to Cognitive Science
Edited by William Bechtel and
George Graham
15 A Companion to Bioethics
Edited by Helga Kuhse and Peter Singer
16 A Companion to the Philosophers
Edited by Robert L. Arrington
17 A Companion to Business Ethics
Edited by Robert E. Frederick
18 A Companion to the Philosophy of
Science
Edited by W. H. Newton-Smith
19 A Companion to Environmental
Philosophy
Edited by Dale Jamieson
20 A Companion to Analytic Philosophy
Edited by A. P. Martinich and David Sosa
21 A Companion to Genethics
Edited by Justine Burley and John Harris
22 A Companion to Philosophical Logic
Edited by Dale Jacquette
23 A Companion to Early Modern
Philosophy
Edited by Steven Nadler
Forthcoming
A Companion to African American
Philosophy
Edited by Tommy Lott and John Pittman
A Companion to African Philosophy
Edited by Kwasi Wiredu
A Companion to Ancient Philosophy
Edited by Mary Louise Gill
A Companion to Medieval Philosophy
Edited by Jorge J. E. Gracia, Greg Reichberg,
and Timothy Noone
The Blackwell
Companion to
Philosophy
SECOND EDITION
Edited by
NICHOLAS BUNNIN
and
E. P. TSUI-JAMES
Copyright © 1996, 2003 Blackwell Publishers Ltd, a Blackwell Publishing company
Editorial matter, selection and arrangement copyright © Nicholas Bunnin and Eric Tsui-James
1996, 2003
First edition published 1996
Reprinted 1996 (twice), 1998, 1999, 2002
Second edition published 2003
350 Main Street, Malden, MA 02148-5018, USA
108 Cowley Road, Oxford OX4 1JF, UK
550 Swanston Street, Carlton South, Victoria 3053, Australia
Kurfürstendamm 57, 10707 Berlin, Germany
The right of Nicholas Bunnin and Eric Tsui-James to be identified as the Authors of the Editorial
Material in this work has been asserted in accordance with the UK Copyright, Designs and Patents
Act 1988.
All rights reserved. No part of this publication may be reproduced, stored in a retrieval system,
or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording
or otherwise, except as permitted by the UK Copyright, Designs and Patents Act 1988, without
the prior permission of the publisher.
Library of Congress Cataloging-in-Publication Data
The Blackwell companion to philosophy / edited by Nicholas Bunnin and
E. P. Tsui-James. –– 2nd ed.
p. cm. –– (Blackwell companions to philosophy)
Includes bibliographical references and index.
ISBN 0–631–21907–2 –– ISBN 0–631–21908–0 (pbk.)
1. Philosophy. I. Bunnin, Nicholas. II. Tsui-James, E. P. III. Series.
B21 .B56 2003
100––dc21
2002023053
A catalogue record for this title is available from the British Library.
Set in 10 on 12½ pt Photina
by SNP Best-set Typesetter Ltd, Hong Kong
Printed and bound in the United Kingdom by T. J. International, Padstow, Cornwall
For further information on
Blackwell Publishing, visit our website:
http://www.blackwellpublishing.com
For Antonia and Oliver Bunnin and Jamie Perry
Contents
Preface to the Second Edition
Preface to the First Edition
Notes on Contributors
Contemporary Philosophy in the United States – John R. Searle
Contemporary Philosophy: A Second Look – Bernard Williams
Part I Areas of Philosophy
1 Epistemology – A. C. Grayling
2 Metaphysics – Simon Blackburn, with a section on Time by Robin Le Poidevin
3 Philosophy of Language – Martin Davies
4 Philosophy of Logic – A. W. Moore
5 Philosophy of Mind – William G. Lycan
6 Ethics – John Skorupski
7 Aesthetics – Sebastian Gardner
8 Political and Social Philosophy – David Archard
9 Philosophy of Science – David Papineau
10 Philosophy of Biology – Elliott Sober
11 Philosophy of Mathematics – Mary Tiles
12 Philosophy of Social Science – Martin Hollis
13 Philosophy of Law – N. E. Simmonds
14 Philosophy of History – Leon Pompa
15 Philosophy of Religion – Charles Taliaferro
16 Applied Ethics – John Haldane
17 Bioethics, Genethics and Medical Ethics – Rebecca Bennett,
Charles A. Erin, John Harris and Søren Holm
18 Environmental Ethics – Holmes Rolston, III
19 Business Ethics – Georges Enderle
20 Philosophy and Feminism – Jean Grimshaw and Miranda Fricker
21 Ethnicity, Culture and Philosophy – Robert Bernasconi
Part II History of Philosophy
22 Ancient Greek Philosophy – Robert Wardy
23 Plato and Aristotle – Lesley Brown
24 Medieval Philosophy – Jorge J. E. Gracia
25 Bacon – Stephen Gaukroger
26 Descartes and Malebranche – Richard Francks and George Macdonald Ross
27 Spinoza and Leibniz – Richard Francks and George Macdonald Ross
28 Hobbes – Tom Sorell
29 Locke – R. S. Woolhouse
30 Berkeley – Howard Robinson
31 Hume – Peter Jones
32 Kant – David Bell
33 Hegel – Michael Inwood
34 Marx – Richard Norman
35 Bentham, Mill and Sidgwick – Ross Harrison
36 Pragmatism – Susan Haack
37 Frege and Russell – R. M. Sainsbury
38 Moore – Thomas Baldwin
39 Wittgenstein – David Pears
40 Nietzsche – David E. Cooper
41 Husserl and Heidegger – Taylor Carman
42 Sartre, Foucault and Derrida – Gary Gutting
Glossary
Appendix
Index
Preface to the Second Edition
We thank readers for their gratifying response to the first edition of the Companion.
The second edition provides new chapters on Philosophy of Biology; Bioethics,
Genethics and Medical Ethics; Environmental Ethics; Business Ethics; Ethnicity, Culture
and Philosophy; Plato and Aristotle; Francis Bacon; Nietzsche; Husserl and Heidegger;
and Sartre, Foucault and Derrida. There are significant revisions or extensions to
chapters on Metaphysics, Philosophy of Language, Philosophy of Mind, Political and
Social Philosophy, Philosophy of Religion, Philosophy and Feminism, and Hobbes. The
discussion of Descartes, Spinoza and Leibniz is now divided between two chapters, and
in a new section Malebranche is considered along with Descartes in the first of these.
A longer chapter on Medieval Philosophy replaces the chapter by C. F. J. Martin, who
was unavailable to extend his work. We welcome our new contributors and hope that
readers will continue to be challenged and delighted by the Companion as a whole.
Nicholas Bunnin
E. P. Tsui-James
Preface to the First Edition
This Companion complements the Blackwell Companions to Philosophy series by
presenting a new overview of philosophy prepared by thirty-five leading British and
American philosophers. Introductory essays by John Searle and Bernard Williams,
which assess the changes that have shaped the subject in recent decades, are followed
by chapters exploring central problems and debates in the principal subdisciplines of
philosophy and in specialized fields, chapters concerning the work of great historical
figures and chapters discussing newly developing fields within philosophy. Throughout
the course of
its chapters, the Companion examines the views of many of the
most widely influential figures of contemporary philosophy.
Although wide-ranging, the Companion is not exhaustive, and emphasis is placed
on developments in Anglo-American philosophy in the latter part of the twentieth
century. A premise underlying the Companion is that major participants in philosoph-
ical debate can provide accounts of their own fields that are stimulating, accessible,
stylish and authoritative.
In its primary use, the Companion is an innovative textbook for introductory courses
in philosophy. Teachers can use the broad coverage to select chapters in a flexible way
to support a variety of courses based on contemporary problems or the historical devel-
opment of the subject. Specialist chapters can be used selectively to augment standard
introductory topics or to prepare students individually for term papers or essays. Chap-
ters include initial summaries, boxed features, cross-references, suggestions for further
reading, references and discussion questions. In addition, terms are marked for a
common glossary. These features and the problem-setting nature of the discussions
encourage students to see the subject as a whole and to gain confidence that explo-
rations within philosophy can lead to unexpected and rewarding insights. In this
aspect, the Companion reflects the contributors’ experience of small group teaching,
in which arguments and perspectives are rigorously tested and in which no solution is
imposed.
In its secondary use, the Companion will accompany students throughout their
undergraduate careers and will also serve the general reader wishing to understand
the central concepts and debates within philosophy or its constituent disciplines.
Students are unlikely to read the whole volume in their first year of study, but those
continuing with philosophy will find their appreciation of the work deepening over time
as they gain insight into the topics of the more advanced chapters. The Companion
will help them to formulate questions and to see connections between what they
have already studied and new terrain.
In its final use, the Companion bears a special relationship to the Blackwell Com-
panions to Philosophy series. Many readers will wish to read the integrated discussions
of the chapters of the present Companion for orientation before turning to the detailed,
alphabetically arranged articles of the volumes in the Companion series. Although con-
ceived as a separate volume, the Companion to Philosophy will serve as a useful guide
to the other excellent Companions in what amounts to a comprehensive encyclopedia
of philosophy.
The general reader might begin with the introductory essays and turn to chapters
on Epistemology, Metaphysics, Ethics and Political and Social Philosophy, or to histori-
cal chapters from Ancient Greek Philosophy to Hume. Cross-references and special
interests will lead readers to other chapters.
Cross-references in the text are marked in small capitals followed by a chapter
number or page numbers in parentheses: Ethics (chapter 6) or Probability (pp.
308–11). We have used our judgement in marking terms appearing many times in the
text for cross-references, and hope that we have supplied guidance without distracting
readers. The Companion also provides a glossary of 210 terms and a comprehensive
index. Both appear at the end of the volume, and readers are advised to use them reg-
ularly for help in reading the chapters. When an author does not refer to a book by its
first edition, a recent publication is cited in the text, and the original date of publica-
tion (or in some cases of composition) will appear in square brackets in the references.
As editors, we are fully aware of our good fortune in attracting superb contributors.
The complexity of their insights and the clarity of their presentations are the chief
attractions of the Companion. We appreciate their care in making the difficult not only
accessible but delightful as well. We also wish to thank the Departments of Philosophy
at the University of Essex and the University of Hong Kong for their support through-
out the preparation of this volume. We are especially grateful to Laurence Goldstein,
Tim Moore and Frank Cioffi for their comments and advice. A version of the Com-
panion is published in Chinese by the Shandong Academy of Social Sciences, and we
appreciate the friendly co-operation of our Chinese co-editors.
Our cover illustration, R. B. Kitaj’s philosophically resonant If Not, Not, is a work by
an American artist working in London during the period that provides the main focus
of our volume.
Nicholas Bunnin
E. P. Tsui-James
Notes on Contributors
David Archard is Reader in Moral Philosophy and Director of the Centre for Ethics,
Philosophy and Public Affairs at the University of St Andrews. He is the author of
Sexual Consent (1998) and co-editor of The Moral and Political Status of Children: New
Essays (2002).
Thomas Baldwin is Professor of Philosophy at the University of York. He previously
taught at the University of Cambridge (where he was Fellow of Clare College) and at
Makerere University. He has published G. E. Moore (1990) and Contemporary Philoso-
phy: Philosophy in English since 1945 (2001) in addition to many articles on issues in
metaphysics and the philosophy of language.
David Bell is Professor of Philosophy at the University of Sheffield. He is the author of
works on Frege, Husserl and Kant. His interests include the foundations of arithmetic,
solipsism and the nature and origins of the analytic tradition.
Rebecca Bennett is Lecturer in Bioethics at the Centre for Social Ethics and Policy,
School of Law, University of Manchester. She edited (with Charles Erin) HIV and AIDS:
Testing, Screening and Confidentiality (1999).
Robert Bernasconi is Moss Professor of Philosophy at the University of Memphis.
He is the author of The Question of Language in Heidegger’s History of Being (1985) and
Heidegger in Question (1993) as well as numerous articles on Hegel and on twentieth-
century European philosophy. He has edited collections of essays on Derrida and on
Levinas and most recently Race (2001).
Simon Blackburn is Professor of Philosophy at the University of Cambridge. A former
editor of the journal Mind, he has written Ruling Passions (1998), Spreading the Word
(1984), Essays in Quasi-Realism (1993) and The Oxford Dictionary of Philosophy (1994).
His current work concerns problems of realism and its alternatives as they have
emerged in historical and contemporary work.
Lesley Brown is Tutorial Fellow in Philosophy, Somerville College, University of
Oxford. She has written on Plato, especially his metaphysics and epistemology, and on
ancient philosophy of language.
Nicholas Bunnin is Director of the Philosophy Project at the Institute for Chinese
Studies, University of Oxford and previously taught at the University of Glasgow and
the University of Essex. He compiled (with Jiyuan Yu) the Dictionary of Western Philos-
ophy: English–Chinese (2001) and edited (with Chung-ying Cheng) Contemporary
Chinese Philosophy (2002). His main interests are in metaphysics, the philosophy of
mind and political philosophy.
Taylor Carman is Assistant Professor of Philosophy at Barnard College, Columbia
University. He is co-editor of The Cambridge Companion to Merleau-Ponty (forthcoming)
and the author of Heidegger’s Analytic: Interpretation, Discourse, and Authenticity in
‘Being and Time’ (forthcoming), and of other articles on Husserl, Heidegger, and
Merleau-Ponty.
David E. Cooper is Professor of Philosophy at the University of Durham and Director
of the Durham Institute of Comparative Ethics. His books include Metaphor (1986),
Existentialism: A Reconstruction (2nd revd edn 2000), World Philosophies: An Historical
Introduction (2nd revd edn 2002) and The Measure of Things: Humanism, Humility and
Mystery (2002).
Martin Davies is Professor of Philosophy in the Research School of Social Sciences,
Australian National University. He was formerly Wilde Reader in Mental Philosophy at
the University of Oxford. He has published widely in the areas of philosophy of
language, mind and psychology.
Georges Enderle is Arthur and Mary O’Neil Professor of International Business Ethics
at the University of Notre Dame. His books include Business Students Focus on Ethics
(1993), translated into Portuguese (1997) and Chinese (2001).
Charles A. Erin is Senior Lecturer in Applied Philosophy and Fellow of the Institute
of Medicine, Law and Bioethics at the University of Manchester. He has written widely
on topics in bioethics and edited (with Rebecca Bennett) HIV and AIDS: Testing,
Screening and Confidentiality (1999).
Richard Francks is Director of Undergraduate Studies in Philosophy at the
University of Leeds. His main interests are in epistemology, the history of philosophy
and the philosophy of history.
Miranda Fricker is Lecturer in Philosophy at Birkbeck College, University of London
and was previously Lecturer in Philosophy and British Academy Postdoctoral Fellow at
Heythrop College, University of London. She has published articles in epistemology,
ethics and social philosophy, and edited (with Jennifer Hornsby) The Cambridge
Companion to Feminism in Philosophy (2000). Her current work focuses on the idea of
an ethics of epistemic practice.
Sebastian Gardner is Reader in Philosophy at University College, London. He is the
author of Irrationality and the Philosophy of Psychoanalysis (1993) and Kant and the
Critique of Pure Reason (1999). His interests lie in aesthetics, psychoanalysis and the
history of philosophy.
Stephen Gaukroger is Professor of History of Philosophy and History of Science at
the University of Sydney. He is the author of Explanatory Structures (1978), Cartesian
Logic (1989), Descartes: An Intellectual Biography (1995), Francis Bacon and the
Transformation of Early Modern Philosophy (2000) and Descartes’ System of Natural
Philosophy (2001). He has also edited four collections of essays and published
translations of Descartes and Arnauld.
Jorge J. E. Gracia is a State University of New York Distinguished Professor and holds
the Samuel F. Capon Chair in the Department of Philosophy, State University of New
York, University at Buffalo. He has written widely on medieval philosophy, metaphysics,
philosophical historiography, philosophy of language and philosophy in Latin America.
His books include Introduction to the Problem of Individuation in the Early Middle Ages
(2nd revd edn 1988), Individuality: An Essay on the Foundations of Metaphysics (1988),
Philosophy and Its History: Issues in Philosophical Historiography (1992), A Theory of
Textuality: The Logic and Epistemology (1995), Texts: Ontological Status, Identity, Author,
Audience (1996) and Metaphysics and Its Task: The Search for the Categorical Foundation
of Knowledge (1999).
A. C. Grayling is Reader in Philosophy at Birkbeck College, London, and Supernu-
merary Fellow at St Anne’s College, Oxford. Among his books are An Introduction
to Philosophical Logic (3rd edn 1992), The Refutation of Scepticism (1985), Berkeley:
The Central Arguments (1986), Wittgenstein (1988), Russell (1993), Moral Values (1998),
The Quarrel of the Age (2000) and The Meaning of Things (2001). He has edited
Philosophy: A Guide Through the Subject (1995) and Philosophy: Further Through the
Subject (1998).
Jean Grimshaw taught Philosophy and Women’s Studies at the University of the West
of England, Bristol. She is the author of Feminist Philosophers: Women’s Perspectives on
Philosophical Traditions (1986) and a number of articles, mainly on feminism and phi-
losophy. She has edited (with Jane Arthurs) Women’s Bodies: Discipline and Transgression
(1999).
Gary Gutting is Professor of Philosophy at the University of Notre Dame. He is the
author of Religious Belief and Religious Skepticism (1982), Michel Foucault’s Archaeology
of Knowledge (1989) and French Philosophy in the Twentieth Century (2001).
Susan Haack, formerly Professor of Philosophy at the University of Warwick,
currently Professor of Philosophy at the University of Miami, is the author of Deviant
Logic (1974), Philosophy of Logic (1978), Evidence and Inquiry: Towards Reconstruction
in Epistemology (1993) and Manifesto of a Passionate Moderate: Unfashionable Essays
(1998). Her main areas of interest are the philosophy of logic and language, episte-
mology and metaphysics and pragmatism. She is a past President of the Charles
Peirce Society.
John Haldane is Professor of Philosophy and formerly Director of the Centre for
Philosophy and Public Affairs at the University of St Andrews. He has published
widely in the philosophy of mind, the philosophy of value and the history of philoso-
phy. He is co-author with J. J. C. Smart of Atheism and Theism (1996) in the Blackwell
Great Debates in Philosophy series.
John Harris is Sir David Alliance Professor of Bioethics, Institute of Medicine, Law and
Bioethics, University of Manchester. He is a member of the United Kingdom Human
Genetics Commission and of the Ethics Committee of the British Medical Association.
He was a Founder Director of the International Association of Bioethics and a founder
member of the Board of the journal Bioethics. Among his books are The Value of Life
(1985) and Clones, Genes and Immortality (1998) (a revised edition of Wonderwoman and
Superman, 1992), and he is editor of Bioethics (2001) in the Oxford Readings in Phi-
losophy series.
Ross Harrison teaches philosophy at the University of Cambridge, where he is also a
Fellow of King’s College. Among his publications are Bentham (1983), Democracy
(1993) and (as editor and contributor) Henry Sidgwick (2001).
Martin Hollis was Professor of Philosophy at the University of East Anglia, Norwich.
He specialized in the philosophy of social science, especially in topics to do with ratio-
nality. Among his books are Models of Man (1977), The Cunning of Reason (1987), The
Philosophy of Social Science (1994), Reason in Action (1995), Trust Within Reason (1998)
and Pluralism and Liberal Neutrality (1999). The last two volumes were published after
his untimely death in 1998.
Søren Holm is Professor of Clinical Bioethics at the University of Manchester. He is
the author of Ethical Problems of Clinical Practice: The Ethical Reasoning of Health Care
Professionals (1997) and has edited (with Inez de Beaufort and Medard Hilhorst) In the
Eye of the Beholder: Ethics and Medical Change of Appearance (1996) and (with John
Harris) The Future Of Human Reproduction: Ethics, Choice and Regulation (1998).
Michael Inwood is Tutorial Fellow in Philosophy at Trinity College, Oxford. He has
published several books on Hegel. His other interests include ancient philosophy and
Heidegger. He is especially interested in the interconnections between Greek and
German philosophy.
Peter Jones was Professor of Philosophy and Director of the Institute for Advanced
Studies in the Humanities at the University of Edinburgh. He is the author of numer-
ous works, including Hume’s Sentiments (1982).
Robin Le Poidevin is Professor of Metaphysics at the University of Leeds, where he
was Head of the School of Philosophy 1988–2001. He is the author of Change, Cause
and Contradiction: A Defence of the Tenseless Theory of Time (1991) and Arguing for
Atheism: An Introduction to the Philosophy of Religion (1996) and has edited Questions of
Time and Tense (1998) and (with Murray MacBeath) The Philosophy of Time (1993).
William G. Lycan is William Rand Kenan, Jr Professor of Philosophy at the University
of North Carolina. He has published a number of books, including Consciousness
(1987), Judgement and Justification (1988) and Consciousness and Experience (1996). He
is the editor of Mind and Cognition (1990). His interests are in the philosophy of mind,
the philosophy of language and epistemology.
A. W. Moore is Tutorial Fellow in Philosophy at St Hugh’s College, Oxford. He is the
author of The Infinite (2nd edn 2001) and Points of View (1997). He has also edited two
collections of essays: Meaning and Reference (1993) and Infinity (1993).
Richard Norman is Professor of Moral Philosophy at the University of Kent. His pub-
lications include The Moral Philosophers (1983), Free and Equal (1987) and Ethics, Killing
and War (1995).
David Papineau is Professor of Philosophy of Science at King’s College, London. He
has published widely in epistemology, the philosophy of mind and the philosophy of
science. His books include Reality and Representation (1987), Philosophical Naturalism
(1993), Introducing Consciousness (2000) and Thinking About Consciousness (2002).
David Pears is Emeritus Professor of Philosophy at the University of Oxford. His most
recent publications are The False Prison: A Study in the Development of Wittgenstein’s Phi-
losophy (2 vols, 1987 and 1988) and Hume’s System: An Examination of Book I of the
Treatise (1991). His other interests include entomology and the visual arts.
Leon Pompa was Professor of Philosophy at the University of Birmingham. His
research interests include the history of philosophy and the philosophy of history. He
has published a number of articles on the problems of fact, value and narrative in
history and on Descartes, Vico, Kant, Hegel, Marx, Collingwood and Wittgenstein. He
co-edited with W. H. Dray Substance and Form in History: Essays in Philosophy of History
(1981), was editor and translator of Vico: A Study of the ‘New Science’ (2nd edn 1990)
and is the author of Human Nature and Historical Knowledge: Hume, Hegel and Vico
(1990).
Howard Robinson is Professor of Philosophy, Central European University, Budapest.
He was previously Soros Professor of Philosophy at the Eötvös Loránd University,
Budapest and Reader in Philosophy at the University of Liverpool. His main interests
are in the philosophy of mind and in idealism. He is the author of Matter and Sense
(1982) and Perception (1994), and co-author (with John Foster) of Essays on Berkeley
(1985). He edited Objections to Physicalism (1991) and is currently editing Berkeley’s
Principles and Three Dialogues for Oxford University Press’s World Classics series.
Holmes Rolston, III is University Distinguished Professor and Professor of Philosophy
at Colorado State University. He has written seven books, most recently Genes, Genesis
and God (1999), Philosophy Gone Wild (1986), Environmental Ethics (1988), Science and
Religion: A Critical Survey (1987) and Conserving Natural Value (1994). He gave the
Gifford Lectures, University of Edinburgh, 1997–8, has lectured on seven continents,
is featured in Joy A. Palmer’s (ed.) Fifty Key Thinkers on the Environment and is past and
founding president of the International Society for Environmental Ethics.
George MacDonald Ross is Senior Lecturer in the Department of Philosophy at the
University of Leeds and Director of the Philosophical and Religious Studies Subject
Centre of the Learning and Teaching Support Network. He has written extensively on
Leibniz and other seventeenth- and eighteenth-century philosophers, and is the author
of Leibniz (1984).
R. M. Sainsbury is Stebbing Professor of Philosophy at King’s College, London
and was editor of the journal Mind for several years until 2000. He has published
Russell (1979), Paradoxes (1995) and Logical Forms (2000). His main interests are in
philosophical logic and the philosophy of language.
John R. Searle is Mills Professor of Mind and Language at the University of California
where he has been a faculty member since 1959. Before that, he was a lecturer at Christ
Church, Oxford, and he received all his university degrees from Oxford. Most of his work
is in the philosophy of mind, the philosophy of language, and social philosophy. His
most recently published books are Rationality in Action (2001) and Mind, Language and
Society (1998). He is the author of several other important books, including Speech Acts:
An Essay in the Philosophy of Language (1969), Expression and Meaning: Studies in the
Theory of Speech Acts (1985), Intentionality (1983), Minds, Brains and Science, the 1984
Reith Lectures (1989), The Rediscovery of Mind (1992) and The Construction of Social
Reality (1995).
N. E. Simmonds is a Fellow of Corpus Christi College, Cambridge where he lectures in
law. His interests include the philosophy of law and political philosophy. He has pub-
lished The Decline of Judicial Reason (1984), Central Issues in Jurisprudence (1986) and
numerous articles on the philosophy of law.
John Skorupski is Professor of Moral Philosophy at the University of St Andrews. He
is the author of John Stuart Mill (1989) and English-Language Philosophy 1750–1945
(1993). His most recent book is Ethical Explorations (1999).
Elliott Sober is Hans Reichenbach Professor and Henry Vilas Research Professor at
the University of Wisconsin, Madison and Centennial Professor at the London School
of Economics. He is the author of The Nature of Selection (1984), Reconstructing the Past
(1988), Philosophy of Biology (1993) and (with David S. Wilson) Unto Others: Evolution
and Psychology of Unselfish Behaviour (1998).
Tom Sorell is Professor of Philosophy at the University of Essex. He is the author of
Hobbes (1986), Descartes (1987), Moral Theory and Capital Punishment (1987), Scientism
(1991), (with John Hendry) Business Ethics (1994) and Moral Theory and Anomaly
(2000). He is the editor of The Rise of Modern Philosophy (1993), The Cambridge Com-
panion to Hobbes (1995), Health Care, Ethics, and Insurance (1998), Descartes (1999), and
(with John Rogers) Hobbes and History (2000).
Charles Taliaferro is Professor of Philosophy at St Olaf College, Northfield, Minnesota
and the author of Consciousness and the Mind of God (1994), Contemporary Philosophy
of Religion (1999) and the co-editor of A Companion to Philosophy of Religion (1998).
Mary Tiles is Professor of Philosophy at the University of Hawaii at Manoa. Her inter-
ests include the history and philosophy of mathematics, science and technology and
their interactions with culture (European and Chinese). She has published Living in a
Technological Culture (with Hans Oberdiek) (1995), An Introduction to Historical Episte-
mology (with James Tiles) (1993), Mathematics and the Image of Reason (1991) and
Bachelard: Science and Objectivity (1984).
Eric P. Tsui-James studied as a postgraduate at Oriel College, Oxford. He taught phi-
losophy at St Hilda’s College, Oxford, for two years before moving to the University of
Hong Kong in 1990. He has published work on the metaphysics of mathematics, but
his research interests now centre around the work of William James, especially the
nineteenth-century psychological and physiological contexts of his radical empiricism.
Robert Wardy teaches philosophy and classics at St Catharine’s College, Cambridge.
He has published in the fields of ancient Greek philosophy and rhetoric, Latin litera-
ture, the philosophy of language and Chinese philosophy.
Bernard Williams is Monroe Deutsch Professor of Philosophy, University of Califor-
nia, Berkeley, and was White’s Professor of Moral Philosophy and a Fellow of Corpus
Christi College, Oxford. His works include Morality (1972), Problems of the Self (1973),
Descartes: The Project of Pure Enquiry (1978), Moral Luck (1981), Ethics and the Limits
of Philosophy (1985), Shame and Necessity (1993) and Making Sense of Humanity
(1995).
R. S. Woolhouse is Professor of Philosophy at the University of York. He is the author
of Locke’s Philosophy of Science and Knowledge (1971), Locke (1983), The Empiricists
(1988) and Descartes, Spinoza, Leibniz: The Concept of Substance in Seventeenth-Century
Philosophy (1993).
Contemporary Philosophy in
the United States
JOHN R. SEARLE
Philosophy as an academic discipline in America has considerably fewer practitioners
than do several other subjects in the humanities and the social sciences, such as
sociology, history, English, or economics; but it still shows enormous diversity. This
variety is made manifest in the original research published by professional philosophers,
whose differing points of view are expressed in the large number of books published
each year, as well as in the many professional philosophy journals. There are
over two thousand colleges and universities in the United States, of which nearly
all have philosophy departments, and the number of professional philosophers is
correspondingly large.
Because of this diversity, any generalizations about the discipline as a whole, which
I am about to make, are bound to be misleading. The subject is too vast and complex to
be describable in a single essay. Furthermore, anyone who is an active participant in
the current controversies, as I am, necessarily has a perspective conditioned by his or
her own interests, commitments and convictions. It would be impossible for me to give
an ‘objective’ account. I am not therefore in what follows trying to give a neutral or
disinterested account of the contemporary philosophical scene; rather I am trying to
say what in the current developments seems to me important.
In spite of its enormous variety, there are certain central themes in contemporary
American philosophy. The dominant mode of philosophizing in the United States
is called ‘analytic philosophy’. Without exception, the best philosophy departments
in the United States are dominated by analytic philosophy, and among the leading
philosophers in the United States, all but a tiny handful would be classified as analytic
philosophers. Practitioners of types of philosophizing that are not in the analytic
tradition – such as phenomenology, classical pragmatism, existentialism, or Marxism –
feel it necessary to define their position in relation to analytic philosophy. Indeed,
analytic philosophy is the dominant mode of philosophizing not only in the United
States, but throughout the entire English-speaking world, including Great Britain,
Canada, Australia and New Zealand. It is also the dominant mode of philosophizing in
Scandinavia, and it is also becoming more widespread in Germany, France, Italy and
throughout Latin America. I personally have found that I can go to all of these parts of
the world and lecture on subjects in contemporary analytic philosophy before audiences
who are both knowledgeable and well trained in the techniques of the discipline.
1 Analytic Philosophy
What, then, is analytic philosophy? The simplest way to describe it is to say that it
is primarily concerned with the analysis of meaning. In order to explain this enterprise
and its significance, we need first to say a little bit about its history. Though the
United States now leads the world in analytic philosophy, the origins of this mode of
philosophizing lie in Europe. Specifically, analytic philosophy is based on the work
of Gottlob Frege, Ludwig Wittgenstein, Bertrand Russell and G. E. Moore, as well as
the work done by the logical positivists of the Vienna Circle in the 1920s and 1930s.
Going further back in history, one can also see analytic philosophy as a natural descend-
ant of the empiricism of the great British philosophers Locke, Berkeley and Hume, and
of the transcendental philosophy of Kant. In the works of philosophers as far back as
Plato and Aristotle, one can see many of the themes and presuppositions of the
methods of analytic philosophy. We can best summarize the origins of modern analytic
philosophy by saying that it arose when the empiricist tradition in epistemology,
together with the foundationalist enterprise of Kant, were tied to the methods of
logical analysis and the philosophical theories invented by Gottlob Frege in the late
nineteenth century. In the course of his work on the foundations of mathematics,
Frege invented symbolic logic in its modern form and developed a comprehensive
and profound philosophy of language. Though many of the details of his views on
language and mathematics have been superseded, Frege’s work is crucial for at least
two reasons. Firstly, by inventing modern logic, specifically the predicate calculus,
he gave us a primary tool of philosophical analysis; and, secondly, he made the
philosophy of
language central to the entire philosophical enterprise. From the
point of view of analytic philosophy, Frege's work is the greatest single philosophical
achievement of the nineteenth century. Fregean techniques of logical analysis were later
augmented by the ordinary language analysis inspired by the work of Moore and
Wittgenstein and are best exemplified by the school of linguistic philosophy that
flourished in Oxford in the 1950s. In short, analytic philosophy attempts to combine
certain traditional philosophical themes with modern techniques.
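To give a concrete sense of what such logical analysis looks like (an editorial illustration, not an example drawn from Searle's text), consider Russell's celebrated treatment of the sentence 'The present King of France is bald'. In the predicate calculus it is rendered not as a subject–predicate sentence but as a quantified claim:

\[
\exists x \, \bigl( K(x) \wedge \forall y \, ( K(y) \rightarrow y = x ) \wedge B(x) \bigr)
\]

where K(x) abbreviates 'x is at present King of France' and B(x) abbreviates 'x is bald'. So analysed, the sentence is meaningful but false, since nothing satisfies the existential clause; the apparent reference to a non-existent king disappears under analysis.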
Analytic philosophy has never been fixed or stable, because it is intrinsically self-
critical and its practitioners are always challenging their own presuppositions and con-
clusions. However, it is possible to locate a central period in analytic philosophy – the
period comprising, roughly speaking, the logical positivist phase immediately prior to
the 1939–45 war and the postwar phase of linguistic analysis. Both the prehistory and
the subsequent history of analytic philosophy can be defined by the main doctrines of
that central period.
In the central period, analytic philosophy was defined by a belief in two linguistic
distinctions, combined with a research programme. The two distinctions are, firstly, that
between analytic and synthetic propositions, and, secondly, that between descriptive
and evaluative utterances. The research programme is the traditional philosophical
research programme of attempting to find foundations for such philosophically prob-
lematic phenomena as language, knowledge, meaning, truth, mathematics and so on.
One way to see the development of analytic philosophy over the past thirty years is to
regard it as the gradual rejection of these two distinctions, and a corresponding rejec-
tion of foundationalism as the crucial enterprise of philosophy. However, in the central
period, these two distinctions served not only to identify the main beliefs of analytic phi-
losophy, but, for those who accepted them and the research programme, they defined
the nature of philosophy itself.
1.1 Analytic versus synthetic
The distinction between analytic and synthetic propositions was supposed to be
the distinction between those propositions that are true or false as a matter of
definition or of the meanings of the terms contained in them (the analytic propo-
sitions) and those that are true or false as a matter of
fact in the world and not
solely in virtue of the meanings of the words (the synthetic propositions). Examples
of analytic truths would be such propositions as ‘Triangles are three-sided plane
figures’, ‘All bachelors are unmarried’, ‘Women are female’, ‘2 + 2 = 4’ and so on.
In each of these, the truth of the proposition is entirely determined by its meaning;
they are true by the definitions of the words that they contain. Such propositions
can be known to be true or false a priori, and in each case they express necessary | Blackwell |
truths. Indeed, it was a characteristic feature of the analytic philosophy of this central
period that terms such as ‘analytic’, ‘necessary’, ‘a priori’ and ‘tautological’ were
taken to be co-extensive. Contrasted with these were synthetic propositions, which, if
they were true, were true as a matter of empirical fact and not as a matter of definition
alone. Thus, propositions such as ‘There are more women than men in the United
States’, ‘Bachelors tend to die earlier than married men’ and ‘Bodies attract each other
according to the inverse square law’ are all said to be synthetic propositions, and, if
they are true, they express a posteriori empirical truths about the real world that are
independent of
language. Such empirical truths, according to this view, are never
necessary; rather, they are contingent. For philosophers holding these views, the terms
‘a posteriori’, ‘synthetic’, ‘contingent’ and ‘empirical’ were taken to be more or less
co-extensive.
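As an illustrative gloss in the same notation (again an editorial addition rather than part of the original essay), the sense in which 'All bachelors are unmarried' is true by definition can be made explicit. Given the definition

\[
\mathrm{Bachelor}(x) \;=_{\mathrm{df}}\; \mathrm{Man}(x) \wedge \neg \mathrm{Married}(x),
\]

the proposition

\[
\forall x \, \bigl( \mathrm{Bachelor}(x) \rightarrow \neg \mathrm{Married}(x) \bigr)
\]

follows by logic alone, without any appeal to facts about the world; by contrast, no definition of 'bachelor' settles whether bachelors tend to die earlier than married men, which is why that proposition counts as synthetic on this view.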
It was a basic assumption behind the logical positivist movement that all meaning-
ful propositions were either analytic or empirical, as defined by the conceptions that I
have just stated. The positivists wished to build a sharp boundary between meaningful
propositions of science and everyday life on the one hand, and nonsensical propositions
of metaphysics and theology on the other. They claimed that all meaningful proposi-
tions are either analytic or synthetic: disciplines such as logic and mathematics fall
within the analytic camp; the empirical sciences and much of common sense fall within
the synthetic camp. Propositions that were neither analytic nor empirical propositions,
and which were therefore in principle not verifiable, were said to be nonsensical or
meaningless. The slogan of the positivists was called the verification principle, and, in
a simple form, it can be stated as follows: all meaningful propositions are either
analytic or synthetic, and those which are synthetic are empirically verifiable. This
slogan was sometimes shortened to an even simpler battle cry: the meaning of a
proposition is just its method of verification.
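Put schematically (an editorial paraphrase of the slogan just quoted, not a formula the positivists themselves standardly wrote), the verification principle asserts that for any proposition p,

\[
\mathrm{Meaningful}(p) \;\leftrightarrow\; \bigl( \mathrm{Analytic}(p) \vee \mathrm{EmpiricallyVerifiable}(p) \bigr).
\]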
1.2 The distinction between evaluative utterances and
descriptive utterances
Another distinction, equally important in the positivist scheme of things, is the dis-
tinction between those utterances that express propositions that can be literally either
true or false and those utterances that are used not to express truths or falsehoods, but
rather, to give vent to our feelings and emotions. An example of a descriptive statement
would be, ‘The incidence of crimes of theft has increased in the past ten years’. An
instance of the evaluative class would be ‘Theft is wrong’. The positivists claimed that
many utterances that had the form of meaningful propositions were used not to state
propositions that were verifiable either analytically or synthetically, but to express emo-
tions and feelings. Propositions of ethics look as if they are cognitively meaningful,
but they are not; they have only ‘emotive’ or ‘evaluative’ meaning. The propositions of
science, mathematics, logic and much of common sense fall in the descriptive class; the
utterances of aesthetics, ethics and much of religion fall in the evaluative class. It is
important to note that on this conception evaluative propositions are not, strictly speak-
ing, either true or false, since they are not verifiable as either analytic or empirical. The
two distinctions are crucially related in that all of the statements that fall on one side
or the other of the analytic–synthetic distinction also fall within the descriptive class of
the descriptive–evaluative distinction.
The importance that these two distinctions had for defining both the character of
the philosophical enterprise and the relationships between language and reality is hard
to exaggerate. One radical consequence of the distinction between descriptive and
evaluative propositions was that certain traditional areas of philosophy, such as ethics,
aesthetics and political philosophy, were virtually abolished as realms of cognitive
meaningfulness. Propositions in these areas were, for the most part, regarded as non-
sensical expressions of feelings and emotions, because they are not utterances that can
be, strictly speaking, either true or false. Since the aim of philosophers is to state the
truth, and since evaluative utterances cannot be either true or false, it cannot be one
of the aims of philosophy to make any evaluative utterances. Philosophers might
analyse the meaning of evaluative terms, and they might examine the logical rela-
tionships among these terms, but philosophers, qua philosophers, can make no first-
order evaluations in aesthetics, ethics or politics, as these first-order evaluations are
not, strictly speaking, meaningful. They may have a sort of secondary, derivative
meaning, called ‘emotive meaning’, but they lack scientifically acceptable cognitive
meaning.
If the task of philosophy is to state the truth and not to provide evaluations, what
then is the subject matter of philosophy? Since the methods of philosophers are not
those of empirical science – since their methods are a priori rather than a posteriori – it
cannot be their aim to state empirical truths about the world. Such propositions are the
propositions of the special sciences. The aim of philosophers, therefore, is to state
analytic truths concerning logical relations among the concepts of our language. In
this period of philosophy, the task of philosophy was taken to be the task of conceptual
analysis. Indeed, for most philosophers who accepted this view, philosophy and con-
ceptual analysis were the same. Where traditional philosophers had taken their task
to be the discussion of the nature of the good, the true, the beautiful and the just, the
positivist and post-positivist analytic philosophers took their task to be the analysis of
the meaning of concepts such as ‘goodness’, ‘truth’, ‘beauty’ and ‘justice’. Ideally the
analysis of these and other philosophically interesting concepts, such as ‘knowledge’,
‘certainty’ and ‘cause’, should give necessary and sufficient conditions for the applica-
tion of these concepts. They saw this as being the legitimate heir of the traditional phi-
losophical enterprise, but an heir purged of the metaphysical nonsense and confusion
that had discredited the traditional enterprise.
If we combine the assumption that philosophy is essentially a conceptual, analytic
enterprise with the assumption that its task is foundational – that is, its task is to provide
secure foundations for such things as knowledge – then the consequence for the posi-
tivists is that philosophical analysis tends in large part to be reductive. That is, the aim
of the analysis is to show, for example, how empirical knowledge is based on, and ulti-
mately reducible to, the data of our experience, to so-called sense data. (This view is
called ‘phenomenalism’.) Similarly, statements about the mind are based on, and there-
fore ultimately reducible to, statements about external behaviour (behaviourism). Nec-
essary truth is similarly based on conventions of language as expressed in definitions
(conventionalism); and mathematics is based on logic, especially set theory (logicism).
In each case, the more philosophically puzzling phenomenon is shown to have a secure
foundation in some less puzzling phenomenon, and indeed, the ideal of such analysis
was to show that the puzzling phenomena could be entirely reduced to less puzzling
phenomena. ‘Phenomenalism’ supposedly gave science a secure foundation because
science could be shown to be founded on the data of our senses. Since the form of the
reduction was analytic or definitional, it had the consequence that statements about
empirical reality could be translated into statements about sense data. Similarly, accord-
ing to behaviourism, statements about mental phenomena could be translated into
statements about behaviour.
Within the camp of analytic philosophers who thought the aim of philosophy was
conceptual analysis, there were two broad streams. One stream thought ordinary
language was in general quite adequate, both as a tool and as a subject matter of
philosophical analysis. The other stream thought of ordinary language as hopelessly
inadequate for philosophical purposes, and irretrievably confused. The philosophers of
this latter stream thought that we should use the tools of modern mathematical logic
both for analysing traditional philosophical problems and, more importantly, for creat-
ing a logically perfect language, for scientific and philosophical purposes, in which
certain traditional confusions could not even arise. There was never a rigid distinction
between these two streams, but there were certainly two broad trends: one which
emphasized ordinary language philosophy and one which emphasized symbolic logic.
Both streams, however, accepted the central view that the aim of philosophy was con-
ceptual analysis, and that in consequence philosophy was fundamentally different from
any other discipline; they thought that it was a second-order discipline analysing the
logical structure of language in general, but not dealing with first-order truths about
the world. Philosophy was universal in subject matter precisely because it had no
special subject matter other than the discourse of all other disciplines and the discourse
of common sense.
A further consequence of this conception was that philosophy became essentially a
linguistic or conceptual enterprise. For that reason, the philosophy of language was
absolutely central to the philosophical task. In a sense, the philosophy of language was
not only ‘first philosophy’; all of philosophy became a form of philosophy of language.
Philosophy was simply the logical investigation of the structure of language as it was
used in the various sciences and in common life.
2 The Rejection of These Two Distinctions and
the Rejection of Foundationalism
Work done in the 1950s and 1960s led to the overcoming of these two distinctions;
and with the rejection of these two distinctions came a new conception of analytic phi-
losophy – a conception that emerged in the 1970s and 1980s and which is still being
developed. The rejection of these two distinctions and of the foundationalist research
programme led to an enormous upheaval in the conception of the philosophical enter-
prise and in the practice of analytic philosophers. The most obvious problem with tra-
ditional analytic philosophy was that the reductionist enterprise failed. In every case,
the attempts to provide reductionist analyses of the sort proposed by the phenomenal-
ists and behaviourists were unsuccessful, and by 1960 the lack of success was obvious.
A series of important theoretical developments also took place at this time, but for the
sake of simplicity I shall concentrate on only five of these: Quine’s rejection of the
analytic–synthetic distinction, Austin’s theory of speech acts, Wittgenstein’s criticism
of
foundationalism, Rawls’s work in political philosophy and the changes in the
philosophy of science due to Kuhn and others.
2.1 Quine’s attack on the analytic–synthetic distinction
Perhaps the most important criticism of the analytic–synthetic distinction was made
by W. V. O. Quine in a famous article entitled ‘Two dogmas of empiricism’ (Quine 1953).
In this article, Quine claimed that no adequate, non-circular definition of analyticity
had ever been given. Any attempt to define analyticity had always been made using
notions that were in the same family as analyticity, such as synonymy and definition,
and consequently, the attempts to define analyticity were invariably circular. However,
an even more important objection that emerged in Quine’s article was this: the notion
of an analytic proposition is supposed to be a notion of a proposition that is immune
to revision, that is irrefutable. Quine claimed that there were no propositions that were
immune to revision, that any proposition could be revised in the face of recalcitrant
evidence, and that any proposition could be held in the face of recalcitrant evidence,
provided that one was willing to make adjustments in other propositions originally held
to be true. Quine argued that we should think of the language of science as being like
a complex network that was impinged upon by empirical verification only at the edges.
Recalcitrant experiences at the edges of science can produce changes anywhere along
the line, but the changes are not forced on us by purely logical considerations; rather,
we make various pragmatic or practical adjustments in the network of our sentences
or beliefs to accommodate the ongoing character of our experiences. Language, on this
view, is not atomistic. It does not consist of a set of propositions, each of which can be
assessed in isolation. Rather, it consists of a holistic network, and, in this network,
propositions as groups confront experience; propositions individually are not simply
assessed as true or false. (This holism of scientific discourse was influenced by the
French philosopher of science, Duhem, and the view is frequently referred to as ‘the
Duhem–Quine thesis’.)
Most philosophers today accept some version or other of Quine’s rejection of the
analytic–synthetic distinction. Not everybody agrees with his actual argument (I, for
one, do not), but now there is general scepticism about our ability to make a strict dis-
tinction between those propositions that are true by definition and those that are true
as a matter of fact. The rejection of the analytic–synthetic distinction has profound
consequences for analytic philosophy, as we shall see in more detail later.
At this point it is important to state that if there is no well-defined class of analytic
propositions, then the philosopher’s propositions cannot themselves be clearly identi-
fied as analytic. The results of philosophical analysis cannot be sharply distinguished
from the results of scientific investigation. On the positivist picture, philosophy was not
one among other sciences; rather, it stood outside the frame of scientific discourse and
analysed the logical relations between, on the one hand, that discourse and its vocabu-
lary and, on the other, experience and reality. Philosophers, so to speak, analysed the
relation between language and reality, but only from the side. If we accept Quine’s rejec-
tion of the analytic–synthetic distinction, then philosophy is not something that can
be clearly demarcated from the special sciences. It is, rather, adjacent to, and overlaps
with, other disciplines. Although philosophy is more general than other disciplines, its
propositions do not have any special logical status or special logical priority with regard
to the other disciplines.
2.2 Austin’s theory of speech acts
The British philosopher J. L. Austin was suspicious of both the distinction between ana-
lytic and synthetic propositions, and the distinction between evaluative and descriptive
utterances. During the 1950s he developed an alternative conception of
language
(Austin 1962). His first observation was that there is a class of utterances that are obvi-
ously perfectly meaningful, but which do not even set out to be either true or false. A
man who says, for example, ‘I promise to come and see you’ or a qualified authority
who says to a couple, ‘I pronounce you man and wife’ is neither reporting on nor
describing a promise or a marriage respectively. Such utterances should be thought of
not as cases of describing or stating, but rather as doing, as acting. Austin baptized
these utterances ‘performatives’ and contrasted them with ‘constatives’. The distinction
between constatives and performatives was supposed to contain three features: con-
statives, but not performatives, could be true or false; performatives, on the other hand,
though they could not be true or false, could be felicitous or infelicitous, depending on
whether or not they were correctly, completely and successfully performed; and finally,
performatives were supposed to be actions, doings or performances, as opposed to mere
sayings or statings. But, as Austin himself saw, the distinctions so drawn did not work.
Many so-called performatives turned out to be capable of being true or false; for
example, warnings could be either true or false. And statements, as well as performa-
tives, could be infelicitous. For example, if one made a statement for which one had
insufficient evidence, one would have made an infelicitous statement. And finally,
stating is as much performing an action as promising or ordering or apologizing. The
abandonment of the performative–constative distinction led Austin to a general theory
of speech acts. Communicative utterances in general are actions of a type he called
‘illocutionary acts’.
One great merit of Austin’s theory of speech acts is that it enabled subsequent
philosophers to construe the philosophy of language as a branch of the philosophy of
action. Since speech acts are as much actions as any other actions, the philosophical
analysis of language is part of the general analysis of human behaviour. And since
intentional human behaviour is an expression of mental phenomena, it turns out that
the philosophy of language and the philosophy of action are really just different aspects
of one larger area, namely, the philosophy of mind. On this view, the philosophy of lan-
guage is not ‘first philosophy’; it is a branch of the philosophy of mind. Though Austin
did not live to carry out the research programme implicit in his initial discoveries,
subsequent work, including my own, has carried this research further.
By treating speaking as a species of intentional action we can give a new sense to a
lot of old questions. For example, the old question, ‘How many kinds of utterances are
there?’ is too vague to be answered. But if we ask ‘How many kinds of illocutionary
acts are there?’, we can give a precise answer, since the question asks, ‘How many pos-
sible ways are there for speakers to relate propositional contents to reality in the per-
formance of actions that express illocutionary intentions?’ An analysis of the structure
of those intentions reveals five basic types of illocutionary act: we tell people how things
are (Assertives), we try to get them to do things (Directives), we commit ourselves to
doing things (Commissives), we express our feelings and attitudes (Expressives) and we
bring about changes in the world through our utterances, so that the world is changed
to match the propositional content of the utterance (Declarations). (For details see
Searle 1979 and 1983.)
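The taxonomy can also be put as a small piece of bookkeeping. The following sketch is purely illustrative (the encoding and the example utterances are my own, not drawn from Searle's texts); it simply pairs sample utterances with the five categories just listed.

```python
# Illustrative only: a toy encoding of the five basic illocutionary types named above.
# The example utterances are invented for the sketch.
from enum import Enum


class IllocutionaryType(Enum):
    ASSERTIVE = "tell people how things are"
    DIRECTIVE = "try to get them to do things"
    COMMISSIVE = "commit ourselves to doing things"
    EXPRESSIVE = "express our feelings and attitudes"
    DECLARATION = "change the world to match the propositional content"


EXAMPLES = {
    "It is raining.": IllocutionaryType.ASSERTIVE,
    "Please pass the salt.": IllocutionaryType.DIRECTIVE,
    "I promise to come and see you.": IllocutionaryType.COMMISSIVE,
    "Congratulations on the new job!": IllocutionaryType.EXPRESSIVE,
    "I pronounce you man and wife.": IllocutionaryType.DECLARATION,
}

if __name__ == "__main__":
    for utterance, kind in EXAMPLES.items():
        print(f"{kind.name:<11} {utterance}")
```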
2.3 Wittgenstein’s rejection of foundationalism
The single most influential analytic philosopher of the twentieth century, and indeed,
the philosopher whom most analytic philosophers would regard as the greatest philoso-
pher of the century, is Ludwig Wittgenstein.
Wittgenstein published only one short book during his lifetime, which represents his
early work, but with the posthumous publication of his Philosophical Investigations in
1953, a series of his later writings began to become available. Now, we have a sizeable
corpus of the work he did in the last twenty years of his life. Through painstaking analy-
sis of the use of language, particularly through analysis of psychological concepts,
Wittgenstein attempted to undermine the idea that philosophy is a foundational enter-
prise. He asserted, on the contrary, that philosophy is a purely descriptive enterprise,
that the task of philosophy is neither to reform language nor to try to place the various
uses of language on a secure foundation. Rather, philosophical problems are removed
by having a correct understanding of how language actually functions.
A key notion in Wittgenstein’s conception of language is the notion of a language
game. We should think of the words in language as being like the pieces in a game. They
are not to be understood by looking for some associated idea in the mind, or by follow-
ing some procedure of verification, or even by looking at the object for which they stand.
Rather, we should think of words in terms of their use, and referring to objects in the
world is only one of many uses that words have. The meaning of a word is given by its
use, and the family of uses that a group of words has constitutes a language game.
Examples include the language game we play in describing our own sensations, or the
language game we play in identifying the causes of events. This conception of language
leads Wittgenstein to the rejection of the conception that the task of philosophical
analysis is reductionist or foundationalist. That is, Wittgenstein rejects the idea that lan-
guage games either have or need a foundation in something else, and he rejects the idea
that certain language games can be reduced to certain other kinds of language games.
The effect, Wittgenstein says, of philosophical analysis is not to alter our existing lin-
guistic practices or to challenge their validity; it is simply to describe them. Language
neither has nor needs a foundation in the traditional sense.
I said that Wittgenstein was the single most influential philosopher in the analytic
tradition, but there is a sense in which it seems to me he has still not been properly
understood, nor have his lessons been fully assimilated by analytic philosophers. I will
have more to say about his influence later.
2.4 Rawls’s theory of justice
The conception of moral philosophy in the positivist and post-positivist phases of ana-
lytic philosophy was extremely narrow. Strictly speaking, according to the positivists,
moral utterances could not be either true or false, so there was nothing that the philoso-
pher could say, qua philosopher, by way of making moral judgements. The task for the
moral philosopher was to analyse moral discourse, to analyse the meaning and use of
moral terms such as ‘good’, ‘ought’, ‘right’, ‘obligation’, etc. It is important to see that
this conception of moral philosophy was a strict logical consequence of the acceptance
of the distinction between evaluative and descriptive utterances. For if evaluative utter-
ances cannot be either true or false, and if first-order moral discourse consists in evalu-
ative utterances, and if the task of the philosopher is to state the truth, it follows that
the philosopher, qua philosopher, cannot make any first-order moral judgements. As a
philosopher, all he or she can do is the second-order task of analysing moral concepts.
Some philosophers of the positivist and post-positivist periods rejected this narrow
conception of moral philosophy, and there was a series of attacks mounted on the
distinction between evaluative and descriptive utterances, including some attacks by
myself in the mid-1960s (Searle 1964). It remained, however, for John Rawls to reopen
the traditional conception of political and moral philosophy with the publication of his
book A Theory of Justice in 1971. For the purposes of the present discussion, the im-
portant thing about Rawls’s work was not that he refuted the traditional dichotomy of
descriptive and evaluative utterances, but that he simply ignored it and proceeded to
develop a theory of political institutions of a sort that has a long philosophical tradi-
tion and which the positivists thought they had overcome. Rawls, in effect, revived the
social contract theory, which had long been assumed to be completely defunct; but he
did it by an ingenious device: he did not attempt, as some traditional theorists had done,
to show that there might have been an original social contract, nor did he try to show
that the participation of individuals in society involved a tacit contract. Rather, he used
the following thought experiment as an analytic tool: think of the sort of society that
rational beings would agree to if they did not know what sort of position they them-
selves would occupy in that society. If we imagine rational beings, hidden behind a veil
of ignorance, who are asked to select and agree on forms of social institutions that
would be fair for all, then we can develop criteria for appraising social institutions on
purely rational grounds.
The importance of Rawls for our present discussion is not whether he succeeded in
developing new foundations for political theory, but the fact that his work gave rise to
a renewed interest in political philosophy, which was soon accompanied by a renewed
interest in the traditional questions of moral philosophy. Moral and political philoso-
phy had been confined to a very small realm by the positivist philosophers, and for that
reason seemed sterile and uninteresting. Very little work was done in that area, but
since the 1970s it has grown enormously, and is now a flourishing branch of analytic
philosophy.
2.5 Post-positivist philosophy of science
Throughout the positivist period the model of empirical knowledge was provided by the
physical sciences, and the general conception was that the empirical sciences proceeded
by the gradual but cumulative growth of empirical knowledge through the systematic
application of scientific method. There were different versions of scientific method,
according to the philosophers of that period, but they all shared the idea that scientific,
empirical propositions are essentially ‘testable’. Initially a proposition was thought
testable if it could be confirmed, but the most influential version of this idea is Popper’s
claim that empirical propositions are testable if they are falsifiable in principle. That is,
in order for a proposition to tell us how the world is as opposed to how it might be or
might have been, there must be some conceivable state of affairs that would render that
proposition false. Propositions of science are, strictly speaking, never verifiable – they
simply survive repeated attempts at falsification. Science is in this sense fallible, but it
is at the same time rational and cumulative.
This picture of the history of science was very dramatically challenged in Thomas
Kuhn’s book The Structure of Scientific Revolutions (1962). According to Kuhn, the
history of science shows not a gradual and steady accumulation of knowledge but
periodic revolutionary overthrows of previous conceptions of reality. The shift from
Aristotelian physics to Newtonian physics, and the shift from Newtonian physics to
relativistic physics are both illustrations of how one ‘paradigm’ is replaced by another.
When the burden of puzzling cases within one paradigm becomes unbearable, a new
paradigm emerges, which provides not just a new set of truths but a whole new way
of looking at the subject matter. ‘Normal science’ always proceeds by puzzle-solving
within a paradigm, but revolutionary breakthroughs, rather than puzzle-solving
within a paradigm, are matters of overthrowing one paradigm and replacing it with
another.
Just as Kuhn challenged the picture of science as essentially a matter of a steady
accumulation of knowledge, so Paul Feyerabend challenged the conception of there
being a unitary rational ‘scientific method’ (Feyerabend 1975). Feyerabend tried to
show that the history of science reveals not a single rational method but rather a series
of opportunistic, chaotic, desperate (and sometimes even dishonest) attempts to cope
with immediate problems. The lesson that Feyerabend draws from this is that we should
abandon the constraining idea of there being such a thing as a single, rational method
that applies everywhere in science; rather, we should adopt an ‘anarchistic’ view,
according to which ‘anything goes’. Reactions to Kuhn and Feyerabend, not surpris-
ingly, differ enormously among analytic philosophers. Kuhn sometimes seems to be
arguing that there is not any such thing as the real world existing independently of our
scientific theories, which it is the aim of our scientific theories to represent. Kuhn, in
short, seems to be denying realism. Most philosophers do not take this denial of realism
at all seriously. Even if Kuhn were right about the structure of scientific revolutions,
this in no way shows that there is no independent reality that science is investigating.
Again, most philosophers would accept Feyerabend’s recognition of a variety of
methods used in the history of science, but very few people take seriously the idea that
there are no rational constraints on investigation whatever. Nonetheless, the effect of
these authors has been important in at least the following respect. The positivists’ con-
ception of science as a steady accumulation of factual knowledge, and of the task of
the philosopher as the conceptual analysis of scientific method, has given way to an
attitude to science that is at once more sceptical and more activist. It is more sceptical
in the sense that few philosophers are looking for the one single method that pervades
every enterprise called ‘science’, but it is more activist in the sense that philosophy of
science interacts more directly with scientific results. For example, recent philosophi-
cal discussions about quantum mechanics, or about the significance of Bell’s theorem
within quantum mechanics, reveal that it is now impossible to say exactly where the
problem in physics ends and the problem in philosophy begins. There is a steady inter-
action and collaboration between philosophy and science on such philosophically
puzzling questions.
3 Some Recent Developments
The results of the changes that I have just outlined are to make analytic philosophy on
the one hand a more interesting discipline, but on the other hand a much less well-
defined research project. In the way that the verification principle formed the core
ideology of the logical positivists and in the way that conceptual analysis formed
the core research project of the post-positivistic analytic philosopher, there is now no
ideological point of reference that is commonly agreed upon; nor is there a universally
accepted research programme. For example, conceptual analysis thirty years ago was
taken to be the heart of analytic philosophy, but now many philosophers would deny
that it is the central element in the philosophical enterprise. Some philosophers, indeed,
would say that the traditional enterprise of attempting to find logically necessary and
sufficient conditions for the applicability of a concept is misconceived in principle. They
think the possibility of such an enterprise has been undermined by Quine’s rejection of the
analytic–synthetic distinction, as well as Wittgenstein’s observation that many philo-
sophically puzzling concepts have not a central core or essence of meaning, but a
variety of different uses united only by a ‘family resemblance’. Many other philosophers
would say that conceptual analysis is still an essential part of the philosophical enter-
prise, as indeed it has been since the time of Plato’s dialogues, but it is no longer seen
to be the whole of the enterprise. Philosophy is now, I believe, a much more interesting
subject than it was a generation ago because it is no longer seen as something separate
from, and sealed off from, other disciplines. In particular, philosophy is now seen by
most analytic philosophers as being adjacent to and overlapping with the sciences.
My own view, which I feel is fairly widely shared, is that words like ‘philosophy’ and
‘science’ are in many respects misleading, if they are taken to imply the existence of
mutually exclusive forms of knowledge. Rather, it seems to me that there is just knowl-
edge and truth, and that in intellectual enterprises we are primarily aiming at knowl-
edge and truth. These may come in a variety of forms, whether in history, mathematics,
physics, psychology, literary criticism or philosophy. Philosophy tends to be more
general than other subjects, more synoptic in its vision, more conceptually or logically
oriented than other disciplines, but it is not a discipline that is hermetically sealed off
from other subjects. The result is that many areas of investigation which were largely
ignored by analytic philosophers a generation ago have now become thriving branches
of philosophy, including cognitive science, the philosophy of biology and the philoso-
phy of economics. In what follows, I will confine my discussion to five major areas of
philosophical research: cognitive science, the causal theory of reference, intentionalis-
tic theories of meaning, truth-conditional theories of meaning, and Wittgenstein’s
conception of language and mind and his response to scepticism.
3.1 Philosophy and cognitive science
Nowhere is the new period of collaboration between philosophy and other disciplines
more evident than in the new subject of cognitive science. Cognitive science from its
very beginnings has been ‘interdisciplinary’ in character, and is in effect the joint prop-
erty of psychology, linguistics, philosophy, computer science and anthropology. There
is, therefore, a great variety of different research projects within cognitive science, but
the central area of cognitive science, its hardcore ideology, rests on the assumption that
the mind is best viewed as analogous to a digital computer. The basic idea behind cog-
nitive science is that recent developments in computer science and artificial intelligence
have enormous importance for our conception of human beings. The basic inspiration
for cognitive science went something like this: human beings do information process-
ing. Computers are designed precisely to do information processing. Therefore one way
to study human cognition – perhaps the best way to study it – is to study it as a matter
of computational information processing. Some cognitive scientists think that the com-
puter is just a metaphor for the human mind; others think that the human mind is
literally a computer program. But it is fair to say that without the computational
model there would not have been a cognitive science as we now understand it.
This conception of human cognition was ideally suited to the twentieth-century
analytic tradition in philosophy of mind because of the analytic tradition’s resolute
materialism. It was anti-mentalistic and anti-dualistic. The failure of logical behav-
iourism led not to a revival of dualism but to more sophisticated versions of material-
ism. I will now briefly summarize some of the recent developments in materialistic
philosophies of mind that led to the computational theory of the mind.
The logical behaviourists’ thesis was subject to many objections, the most important
being the objection that it ignores internal mental phenomena. In science and common
sense it seems more natural to think of human behaviour as being caused by internal
mental states rather than to think of the mental states as simply consisting of the
behaviour. This weakness in behaviourism was corrected by the materialist identity
thesis, sometimes called ‘physicalism’. According to the physicalist identity theory,
mental states are identical with states of the brain. We do not know in detail what these
identities are, but the progress of the neurosciences makes it seem overwhelmingly
probable that every mental state will be discovered to be identical with some brain state.
In the early version of the identity thesis it was supposed that every type of mental state
would be discovered to be identical with some type of physical state, but after some
debate this began to seem more and more implausible. There is no reason to suppose
that only systems with neurons like ours can have mental states; indeed, there is no
reason to suppose that two human beings who have the same belief must therefore be
in the same neurophysiological state. So, ‘type–type identity theory’ naturally gave way
to ‘token–token identity theory’. The token identity theorists claimed that every par-
ticular mental state is identical with some particular neurophysiological state, even if
there is no type correlation between types of mental states and types of physical states.
But that only leaves open the question, ‘What is it that two different neurophysiologi-
cal states have in common if they are both the same mental state?’ To many analytic
philosophers it seemed obvious that the answer to our question must be that two
neurophysiological states are the same type of mental state if they serve the same
function in the overall ecology of the organism. Mental states on this view can be
defined in terms of their causal relations to input stimuli, to other mental states,
and to external behaviour. This view is called ‘functionalism’ and it is a natural
development from token–token identity theory.
However, the functionalist has to answer a further obvious question: ‘What is it
about the states that gives them the causal relations that they do have?’ If mental states
are defined in terms of their causal relations, then what is it about the structure of dif-
ferent neurophysiological configurations that can give them the same causal relations?
It is at precisely this point that the tradition of materialism in analytic philosophy con-
verges with the tradition of artificial intelligence. The computer provides an obvious
answer to the question that I have just posed. The distinction between the software and
the hardware, the program and the physical system that implements the program, pro-
vides a model for how functionally equivalent elements at a higher level can be realized
in or implemented by different physical systems at a lower level. Just as one and the
same program can be implemented by quite different physical hardware systems, so one
and the same set of mental processes can be implemented in different neurophysiologi-
cal or other forms of hardware. Indeed, on the most extreme version
of this view, the mind is to the brain as the program is to the hardware. This sort of
functionalism came to be called ‘computer functionalism’ or ‘Turing machine func-
tionalism’, and it coincides with the strong version of ‘artificial intelligence’ (Strong
AI), the version that says having a mind just is having a certain sort of program.
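The software–hardware analogy that drives computer functionalism can be illustrated with a toy sketch. Everything in it is invented for the purpose (the class names especially); the point is only that one functional description is realized by two quite different underlying mechanisms.

```python
# Illustrative sketch only: the same functional role realized by two different
# underlying mechanisms, by analogy with one program running on different hardware.
# All names here are invented for the example.


class SpikeTallyCounter:
    """Keeps count as a list of discrete 'spikes'."""

    def __init__(self):
        self._spikes = []

    def bump(self):
        self._spikes.append(1)

    def report(self):
        return len(self._spikes)


class RegisterCounter:
    """Keeps count as a single stored integer."""

    def __init__(self):
        self._value = 0

    def bump(self):
        self._value += 1

    def report(self):
        return self._value


def run_program(counter, n):
    """The 'program': bump the counter n times and report the total. It is defined
    only by the functional roles bump/report, not by how the state is realized."""
    for _ in range(n):
        counter.bump()
    return counter.report()


if __name__ == "__main__":
    # Functionally equivalent despite different 'hardware'.
    assert run_program(SpikeTallyCounter(), 5) == run_program(RegisterCounter(), 5) == 5
    print("Same functional profile, different realizations.")
```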
I have refuted Strong AI in a series of articles (Searle 1980a, 1980b). The basic idea
of that refutation can be stated quite simply. Minds cannot be equivalent to programs
because programs are defined purely formally or syntactically and minds have mental
contents. The easiest way to see the force of the refutation is to see that a system, say
oneself, could learn to manipulate the formal symbols for understanding a natural
language without actually understanding that language. I might have a program that
enables me to answer questions in Chinese simply by matching incoming symbols
with the appropriate processing and output symbols, but nonetheless I still would not
thereby understand Chinese. However, though the project of computer functionalism
is almost certainly a failure, the results of the enterprise are in many respects quite
useful. Important things can be learned about the mind by pursuing the computer
metaphor, and the research effort has not necessarily been wasted. The most exciting
recent development, in my view, has been to think of mental processes not on the model
of the conventional serial digital computer, but rather to think of brain processes on
the model of parallel distributed processing computers – the development within
cognitive science of such ‘neural net models’ of human cognition.
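The symbol-manipulation point behind the refutation described above can be made concrete with a deliberately crude sketch. The general idea is Searle's; the code and its tiny rule table are my own invention. The program pairs incoming Chinese strings with outgoing Chinese strings by pure lookup, and nothing in it involves knowing what any of the strings mean.

```python
# A deliberately crude illustration: a system that maps incoming symbol strings to
# outgoing symbol strings by rule, with no grasp of what either string is about.
# The rule table is invented for the sketch.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",
    "今天天气怎么样？": "今天天气很好。",
}


def respond(symbols: str) -> str:
    """Match the incoming string against the rule book and emit the paired output.
    Nothing in this procedure involves understanding the symbols."""
    return RULE_BOOK.get(symbols, "对不起，我不明白。")


if __name__ == "__main__":
    print(respond("你好吗？"))  # An appropriate-looking reply, produced without understanding.
```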
In concluding this section, I want to point out that in my view the chief weakness
of analytical philosophy of mind, a weakness it shares with the past 300 years of the
philosophy of mind, has been its assumption that there is somehow an inconsistency
between mentalism and materialism. Analytic philosophers, along with the rest of
the Cartesian tradition, have characteristically assumed that ‘mental’ implies ‘non-
material’ or ‘immaterial’ and that ‘material’ or ‘physical’ implies ‘non-mental’. But if
one reflects on how the brain works, it seems that both of these assumptions are
obviously false. What that shows is that our whole vocabulary, our whole terminology
of the mental and physical, needs wholesale revision.
3.2 The causal theory of reference
A central question in analytic philosophy of language, since Frege (and indeed in phi-
losophy since the time of Plato), has been: How does language relate to the world? How
do words hook on to things? In answering this question, the analytic tradition had char-
acteristically found a connection between the notion of reference and the notion of
truth. An expression, such as a proper name, refers to or stands for or designates an
object because associated with that name is some descriptive content, some concept of
the object in question, and the object in question satisfies or fits that descriptive content.
The expression refers to the object only because the description is true of the object.
This is the standard reading of Frege’s famous distinction between sense and reference,
between Sinn and Bedeutung. Expressions refer to objects in virtue of their sense and
the sense provides a description, a ‘mode of presentation’, of the object in question.
Something analogous applies to general terms: general terms are true of an object
because each general term has associated with it a cluster of features, and the term will
be true of the object if the object in question has those features.
In the 1970s this conception of the relation between language and reality was
attacked by a number of philosophers, most prominently Donnellan (1970), Kripke
(1972) and Putnam (1975). A variety of arguments were mounted against the tradi-
tional conception of meaning and reference, but the common thread running through
these arguments was that the descriptive content associated with a word provided
neither necessary nor sufficient conditions for its application. A speaker might refer to
an object even though the associated description that he or she had was not true of that
object; a speaker might have a description that was satisfied by an object even though
that was not the object to which he or she was referring. The most famous version of
this argument was Putnam’s ‘twin earth’ example. Imagine a planet in a distant galaxy
exactly like ours in every respect except that on this planet what they call ‘water’ has
a different chemical composition. It is not composed of H₂O but has an extremely com-
plicated formula that we will abbreviate as ‘XYZ’. Prior to 1750, prior to the time that
anyone knew the chemical composition of water, the people on twin earth had in their
minds exactly the same concept of water as the people on earth. Nonetheless our word
‘water’ does not refer to the stuff on twin earth. Our word ‘water’, whether or not we
knew it in 1750, refers to H₂O; and this is a matter of objective causal relations in the
world which are independent of the ideas that people have in their heads. Meanings on
this view are not concepts in people’s heads, but objective relations in the world. Well,
if associated ideas are not sufficient for meaning, what is? The answer given by the three
authors I have mentioned is that there must be some sort of causal connection between
the use of the word and the object or type of entity in the world that it applies to. Thus,
if I use the word ‘Socrates’, it refers to a certain Greek philosopher only because there
is a causal chain connecting that philosopher and my current use of the word. The
word ‘water’ is not defined by any checklist of features; rather, ‘water’ refers to what-
ever stuff in the world was causally related to certain original uses of the word ‘water’,
and these uses subsequently came to be accepted in the community and were then
passed down through a causal chain of communication.
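The shape of this picture can be modelled in a toy way. The sketch below is entirely my own illustration (the names, the 'baptism' record and the chain are invented); it shows reference being fixed by tracing a chain of borrowed uses back to an original dubbing, rather than by any descriptive checklist.

```python
# A toy model of the causal-historical picture sketched above: a term is introduced
# in an initial 'baptism' and later uses refer by being causally chained back to it.
# Everything here is invented for the illustration.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Use:
    speaker: str
    term: str
    borrowed_from: Optional["Use"]  # the earlier use this one is causally linked to


def referent(use: Use, baptisms: dict) -> str:
    """Trace the chain of borrowings back to the introducing use; whatever was picked
    out there is what every later link refers to, whatever the later speakers believe."""
    while use.borrowed_from is not None:
        use = use.borrowed_from
    return baptisms[(use.speaker, use.term)]


if __name__ == "__main__":
    baptisms = {("ancient Athenian", "Socrates"): "that philosopher, over there"}
    first = Use("ancient Athenian", "Socrates", None)
    teacher = Use("teacher", "Socrates", borrowed_from=first)
    today = Use("present-day speaker", "Socrates", borrowed_from=teacher)
    print(referent(today, baptisms))  # resolves via the chain to the original dubbing
```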
There is a very natural way of connecting the computer functionalist conception of
the mind with the causal theory of reference. If the mind were a computer program,
and if meaning were a matter of causal connections to the world, then the way the
mind acquires meanings is for the system that implements the computer program to be
involved in causal interactions with the world.
3.3 Intentionalistic theories of meaning
Much of the best work in speech act theory done after the publication of Austin’s How
to Do Things with Words in 1962, and my Speech Acts in 1969, attempted to combine
the insights of Paul Grice’s account of meaning with the framework provided by the
theory of speech acts. In a series of articles beginning in the late 1950s (Grice 1957,
1968), Grice had argued that there is a close connection between the speaker’s inten-
tions in the performance of an utterance and the meaning of that utterance. In his
original formulation of this view, Grice analysed the speaker’s meaning in terms of the
intention to produce an effect on the hearer by means of getting the hearer to recog-
nize the intention to produce that very effect. Thus, for example, according to Grice, if
a speaker intends to tell a hearer that it is raining, then in the speaker’s utterance of
the sentence, ‘It is raining’, the speaker’s meaning will consist of his or her intention
to produce in the hearer the belief that it is raining by means of getting the hearer to
recognize his or her intention to produce that very belief. Subsequent work by Grice
altered the details of this account, but the general principle remained the same:
meaning is a matter of a self-referential intention to produce an effect on a hearer
by getting the hearer to recognize the intention to produce that effect. Grice combined
this analysis of meaning with an analysis of certain principles of conversational co-
operation. In conversation, people accept certain tacit principles, which Grice calls
‘Maxims of Conversation’ – they accept the principles that the speaker’s remarks will
be truthful and sincere (the maxim of quality), that they will be relevant to the
conversational purposes at hand (the maxim of relation), that the speaker will be clear
(the maxim of manner) and that the speaker will say neither more nor less than is
necessary for the purposes of the conversation (the maxim of quantity).
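Laid out schematically, the original Gricean analysis summarized earlier in this paragraph has roughly the following shape (this is my own simplified paraphrase, with my numbering, not a quotation from Grice):

```latex
% My own simplified paraphrase of the early Gricean analysis of speaker meaning
% described above; the clause numbering is mine, not Grice's.
By uttering $x$, a speaker $S$ means that $p$ if and only if $S$ utters $x$ intending:
\begin{enumerate}
  \item that the hearer $H$ come to believe that $p$;
  \item that $H$ recognize intention (1); and
  \item that $H$'s recognition of intention (1) function as at least part of $H$'s
        reason for coming to believe that $p$.
\end{enumerate}
```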
There has been a great deal of controversy about the details of Grice’s analysis of
meaning, but the basic idea that there is a close connection between meaning and
intention has been accepted and has proved immensely useful in analysing the struc-
ture of certain typical speech act phenomena. My own view is that Grice confuses that
part of meaning which has to do with representing certain states of affairs and certain
illocutionary modes, and that part of meaning that has to do with communicating
those representations to a hearer. Grice, in short, confuses communication with
representation. However, the combination of an intentionalistic account of meaning,
together with rational principles of co-operation, is immensely fruitful in analysing
such problems as those of ‘indirect speech acts’ and figurative uses of language such
as metaphors. So, for example, in an indirect speech act, a speaker will characteristi-
cally mean something more than what he or she actually says. To take a simple
example, in a dinner table situation a speaker who says ‘Can you pass the salt?’ would
usually not just be asking a question about the salt-passing abilities of the hearer; he
or she would be requesting the hearer to pass the salt. Now the puzzle is this: how is it
that speakers and hearers communicate so effortlessly when there is a big gulf between
what the speaker means and what he or she actually says? In the case of metaphor, a
similar question arises: how does the speaker communicate so effortlessly his or her
metaphorical meaning when the literal meaning of the sentence uttered does not
encode that metaphorical meaning? A great deal of progress has been made on these
and other problems using the apparatus that Grice contributed to the theory of speech
acts.
One of the marks of progress in philosophy is that the results of philosophical analy-
sis tend to be appropriated by other disciplines, and this has certainly happened with
speech act theory. Speech act theory is now a thriving branch of the discipline of lin-
guistics, and the works of Austin and Grice, as well as my own, are as well known
among linguists as they are among philosophers.
3.4 Truth-conditional theories of meaning
Philosophers such as Quine and his former student, Donald Davidson, have always felt
that intentionalistic theories of meaning of the sort proposed by Grice and Searle were
philosophically inadequate, because the intentionalistic notions seemed as puzzling as
the notion of meaning itself and because they would inevitably involve linguistic
meaning in their ultimate analyses. So Quine and Davidson attempted to give accounts
of meaning that did not employ the usual apparatus of intentionality. The most influ-
ential version of this attempt is Davidson’s project of analysing meaning in terms of
truth conditions. The basic idea is that one knows the meaning of a sentence if one
knows under what conditions it is true or false. Thus, one knows the meaning of the
German sentence ‘Schnee ist weiss’ if one knows that it is true if and only if snow is
white. Now since a theory of meaning for a language should be able to state the
meaning of each of the sentences of the language, and since the meanings of the sen-
tences of the language are given by truth conditions, and since truth conditions can be
specified independently of the intentionalistic apparatus, it seems to Davidson that a
theory of truth (that is, a theory of the truth conditions of the sentences) of a language
would provide a theory of meaning for that language.
In order to carry out the project of explaining meaning in terms of truth, Davidson
employs the apparatus of Tarski’s semantic definition of truth, a definition that Tarski
had worked out in the 1930s. Tarski points out that it is a condition of adequacy on
any account of truth that for any sentence s and any language L, the account must
have the consequence that
s is true in L if and only if p,
where for s can be substituted the structural description of any sentence whatever, for
L, the name of the language of which s is a part, and for p, the sentence itself or a trans-
lation of it. Thus, for example, in English, the sentence ‘Snow is white’ is true if and
only if snow is white. This condition is usually called ‘convention T’ and the corre-
sponding sentences are called ‘T-sentences’.
Now Davidson notes that convention T employs the fact that the sentence named by
s has the same meaning as the sentence expressed by p, and thus Tarski is using the
notion of meaning in order to define the notion of truth. Davidson proposes to turn this
procedure around by taking the notion of truth for granted, by taking it as a primitive,
and using it to explain meaning.
Here is how it works. Davidson hopes to get a theory of meaning for a speaker of a
language that would be sufficient to interpret any of the speaker’s utterances by getting
a theory that would provide a set of axioms which would entail all true T-sentences for
that speaker’s language. Thus, if the speaker speaks German, and we use English as a
meta-language in which to state the theory of the speaker’s language, Davidson claims
we would have an adequate theory of the speaker’s language if we could get a set of
axioms which would entail a true T-sentence stated in English for any sentence that the
speaker uttered in German. Thus, for example, our theory of meaning should contain
axioms which entail that the speaker’s utterance ‘Schnee ist weiss’ is true in the
speaker’s language if and only if snow is white. Davidson further claims that we could
make this into an empirical theory of the speaker’s language by proceeding to associ-
ate the speaker’s utterances with the circumstances in which we had empirical evidence
for supposing that the speaker held those utterances to be true. Thus, if we hear the
speaker utter the sentence ‘Es regnet’, we might look around and note that it was
raining in the vicinity, and we might then form the hypothesis that the speaker holds
true the sentence ‘Es regnet’ when it is raining in his or her immediate vicinity. This
would provide the sort of empirical data on which we would begin to construct a theory
of truth for the speaker’s language.
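The overall shape of the proposal can be pictured with a toy sketch. Everything in it is invented and drastically simplified (a two-sentence 'fragment' of German, hand-written glosses, and a crude dictionary standing in for the observed circumstances); it is meant only to show how axioms pairing object-language sentences with metalanguage truth conditions would deliver T-sentences and predictions about holding-true.

```python
# A toy sketch of the picture described above, under invented, drastically simplified
# assumptions: a two-sentence fragment of German, 'axioms' pairing each sentence with
# a truth condition, and observed circumstances against which holding-true is checked.

# The 'axioms' of the toy truth theory.
TRUTH_AXIOMS = {
    "Schnee ist weiss": lambda world: world["snow_is_white"],
    "Es regnet": lambda world: world["raining_nearby"],
}

# English glosses used only to spell the T-sentences out.
GLOSSES = {
    "Schnee ist weiss": "snow is white",
    "Es regnet": "it is raining in the vicinity",
}


def t_sentence(s: str) -> str:
    """State the T-sentence the theory entails for s."""
    return f"'{s}' is true in the speaker's language if and only if {GLOSSES[s]}."


def predicted_to_hold_true(s: str, world: dict) -> bool:
    """The empirical step: given the observed circumstances, does the theory predict
    that the speaker holds s true?"""
    return TRUTH_AXIOMS[s](world)


if __name__ == "__main__":
    print(t_sentence("Schnee ist weiss"))
    observed = {"snow_is_white": True, "raining_nearby": True}
    print(predicted_to_hold_true("Es regnet", observed))  # True: the speaker should assent
```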
It is important to note that we are to think of this as a thought experiment and not
as an actual procedure that we have to employ when we try to learn German, for
example. The idea is to cash out the notion of meaning in terms of truth conditions,
and then cash out the notion of truth conditions in terms of a truth theory for a lan-
guage, which is a theory that would entail all the true T-sentences of the language. The
empirical basis on which the whole system rests is that of the evidence we could get
concerning the conditions under which a speaker holds a sentence to be true. If the
project could in principle be carried out, then we would have given an account of
meaning which employed only one intentionalistic notion, the notion of ‘holding true’
a sentence.
Over the past twenty years there has been quite an extensive literature on the
nature of this project and how it might be applied to several difficult and puzzling
sorts of sentences – for example, indexical sentences, sentences about mental states or
modal sentences. Enthusiasm for this project seems to have waned somewhat in recent
years.
In my view, the central weakness of Davidson’s enterprise is as follows: any theory
of meaning must explain not only what a speaker represents by his or her utterances,
but also how he or she represents them, under what mental aspects the speaker repre-
sents truth conditions. For this reason, a theory of meaning cannot just correlate a
speaker’s utterance with states of affairs in the world; it must explain what is going on
in the speaker’s head which enables the speaker to represent those states of affairs
under certain aspects with the utterances that the speaker makes. Thus, for example,
suppose that snow is composed of H₂O molecules in crystalline form, and suppose that the
colour white consists of light-wave emissions of all wavelengths; then the sentence
‘Schnee ist weiss’ is true if and only if H₂O molecules in crystalline form emit light of
all wavelengths. Now this second T-sentence is just as empirically substantiated as the
earlier example, ‘Schnee ist weiss’ is true if and only if snow is white. Indeed, it is a
matter of scientific necessity that the state of affairs described by the former is identi-
cal with the state of affairs described by the latter. But the former example simply does
not give the speaker’s meaning. The speaker might hold true the sentence ‘Schnee ist
weiss’ under these and only these conditions and not know the slightest thing about
H₂O molecules and wavelengths of light. The T-sentence gives the truth conditions, but
the specification of the truth conditions does not necessarily give the meaning of the
sentence, because the specification does not yet tell us how the speaker represents those
truth conditions. Does he or she represent them under the aspect of snow being white,
or, what is the same fact in the world, does he or she represent them under the aspect
of frozen H₂O crystals emitting light of all wavelengths? Any theory that cannot give
that information is not a theory of meaning.
There are various attempts to meet these sorts of objections, but, in my view, they
are not successful. In the end, all truth definitional accounts of meaning, like the be-
haviourist accounts which preceded them, end up with a certain ‘indeterminacy’ of
meaning. They cannot account in objective terms for all of the subjective details of
meaning, and both Davidson and Quine have acknowledged that their views result in
indeterminacy.
3.5 Wittgenstein’s legacy
Wittgenstein’s work covers such a vast range of topics, from aesthetics to math-
ematics, and covers these topics with so much depth and insight, that it continues to
be a source of ideas and inspiration for analytic philosophers and is likely to continue
to be so for many years to come. I will mention only three areas.
3.5.1 Philosophical psychology
One of Wittgenstein’s main areas of research was that of psychological concepts such
as belief, hope, fear, desire, want and expect, and sensation concepts such as pain and
seeing. Perhaps his single most controversial claim in this area is that concerning a
private language. He claims that it would be logically impossible for there to be a lan-
guage that was private in the sense that its words could only be understood by the
speaker because they referred to the speaker’s private inner sensations and had no
external definition. Such a language would be absurd, he said, because for the applica-
tion of such words there would be no distinction between what seemed right to the
speaker and what really was right. But unless we can make a distinction between what
seems right and what really is right, we cannot speak of right or wrong at all, and hence
we cannot speak of using a language at all. ‘An inner process’, says Wittgenstein
(1953), ‘stands in need of outward criteria’. Wittgenstein is here attacking the entire
Cartesian tradition, according to which there is a realm of inner private objects,
our inner mental phenomena, and the meanings of the words that stand for these
entities are entirely defined by private ostensive definitions. No other single claim of
Wittgenstein’s has aroused as much controversy as the ‘private language argument’.
It continues to be a source of fascination to contemporary philosophers, and many
volumes have been written about Wittgenstein’s analysis of psychological concepts.
3.5.2 Following a rule
Wittgenstein is part of a long tradition that emphasizes the distinction between the
modes of explanation of the natural sciences and the modes of explanation of human
behaviour and human cultural and psychological phenomena generally. His analysis
of this problem chiefly deals with the phenomenon of human behaviour which is influ-
enced or determined by mental contents and, most importantly, with the phenomenon
of human beings following a rule. What is it for a human being to follow a rule?
Wittgenstein’s analysis of this stresses the difference between the way that rules
guide human behaviour and the way that natural phenomena are results of causes.
Wittgenstein throughout emphasizes the difference between causes and reasons, and he
also emphasizes the roles of interpretation and rule following. On the most extreme
interpretation of Wittgenstein’s remarks about following a rule, he is the proponent of
a certain type of scepticism. According to one view of Wittgenstein, he is arguing that
rules do not determine their own application, that anything can be interpreted to accord
with a rule, and consequently that anything can be interpreted to conflict with a rule. If
taken to its extreme, this argument would have the consequence that, logically speaking,
rules do not constrain human behaviour at all. And if that is right, then mental contents,
such as knowledge of meanings of words or principles of action or even beliefs and
desires, do not constrain human behaviour, because they are everywhere subject to an
indefinite range of different interpretations. Wittgenstein’s solution to this scepticism is
to propose that interpretation comes to an end when we simply accept the cultural
practices of the community in which we are imbedded. Interpretation comes to an end,
and we just act on a rule. Acting on a rule is a practice, and it is one that we are brought
up to perform in our culture. The sceptical implications of Wittgenstein’s account of rule
following are resolved by an appeal to a naturalistic solution: we are simply the sort of
beings who follow culturally and biologically conditioned practices.
This interpretation of Wittgenstein is largely due to Saul Kripke (1982) and it has
aroused considerable controversy. My own view is that Kripke has misinterpreted
Wittgenstein in certain crucial respects, but whether or not his interpretation is correct,
it has been a source of continuing discussion in contemporary philosophy.
3.5.3 Philosophical scepticism
Important work on philosophical scepticism has been continued by philosophers who
are inspired or provoked by Wittgenstein, notably Thompson Clarke and Barry Stroud.
These philosophers point out that a really serious analysis of our use of epistemic dis-
course shows that the problem of scepticism cannot be simply overcome by the usual
analytic philosopher’s methods of pointing out that the sceptic raises the demand
for justification beyond that which is logically appropriate. Clarke and Stroud claim
that the problem of scepticism goes deeper than this solution will allow. Following
Wittgenstein in investigating the depth grammar of language, they find that any
solution to the sceptic’s predicament – that is, any justification for our claims to have
knowledge about the world – rests on a much deeper understanding of the difference
between ordinary or plain discourse and philosophical discourse. Work in this line of
research is continuing at present.
4 Overall Assessment
I have not attempted to survey all of the main areas of activity in contemporary ana-
lytic philosophy. Most importantly, I have left out contemporary work in ethics. Perhaps
of comparable importance, I have had nothing to say about purely technical work in
logic. There is, furthermore, a thriving branch of analytic philosophy called ‘action
theory’, which should be mentioned at least in passing. The general aim of analytic
action theory is to analyse the structure of human actions in terms of the causal rela-
tions between such mental states as beliefs, desires and intentions, and the bodily move-
ments which are in some sense constitutive of the actions. Finally, it is worth calling
attention to the fact that among analytic philosophers there has been a great revival of
interest in the history of philosophy. Traditional analytic philosophers thought of the
history of philosophy as mostly the history of mistakes. Some of the history of the
subject could be useful for doing real philosophy; but the overall conception was that
the history of philosophy had no more special relevance to philosophy than the history
of mathematics to mathematics, or the history of chemistry to chemistry. This attitude
has changed recently, and there is now a feeling of the historical continuity of analytic
philosophy with traditional philosophy in a way that contrasts sharply with the origi-
nal view of analytic philosophers, who thought that they marked a radical, or indeed,
revolutionary break with the philosophical tradition.
It is too early to provide an assessment of the contribution that will be made by work
done in philosophy at the present time, or even in the past few decades. My own view
is that the philosophy of mind and social philosophy will become ever more central to
the entire philosophical enterprise. The idea that the study of language could replace
the study of mind is itself being transformed into the idea that the study of language
is really a branch of the philosophy of mind. Within the philosophy of mind, perhaps
the key notion requiring analysis is that of intentionality – that property of the mind
by which it is directed at or about or of objects and states of affairs in the world inde-
pendent of itself. Most of the work done by analytic philosophers in the philosophy of
mind has tended to cluster around the traditional mind–body problem. My own view
is that we need to overthrow this problem: in its traditional version, it was based on the
assumption that mental properties and physical properties were somehow different
from each other, and that therefore, there was some special problem not like other prob-
lems in biology as to how they could both be characteristics of the human person. Once
we see that so-called mental properties really are just higher-level physical properties
of certain biological systems, I believe this problem can be dissolved. Once it is dissolved,
however, we are still left with the task of analysing what is the central problem in the
philosophy of language and in cognitive science, as well as the philosophy of mind,
namely, the way that human representational capacities relate the human organism to
the world. What are called ‘language’, ‘mind’, ‘thinking’, ‘speaking’ and ‘depicting’ are
just different aspects of this mode of relating to reality.
I believe that the causal theory of reference will be seen to be a failure once it is
recognized that all representations must occur under some aspect or other, and that
the extensionality of causal relations is inadequate to capture the aspectual character
of reference. The only kind of causation that could be adequate to the task of reference
is intentional causation or mental causation, but the causal theory of reference cannot
concede that ultimately reference is achieved by some mental device, since the whole
approach behind the causal theory was to try to eliminate the traditional mentalism of
theories of reference and meaning in favour of objective causal relations in the world.
My prediction is that the causal theory of reference, though it is at present by far the
most influential theory of reference, will prove to be a failure for these reasons.
Perhaps the single most disquieting feature of analytic philosophy in the fifty-year
period that I have been discussing is that it has passed from being a revolutionary
minority point of view held in the face of traditionalist objections to becoming itself the
conventional, establishment point of view. Analytic philosophy has become not only
dominant but intellectually respectable, and, like all successful revolutionary move-
ments, it has lost some of its vitality in virtue of its very success. Given its constant
demand for rationality, intelligence, clarity, rigour and self-criticism, it is unlikely that
it can succeed indefinitely, simply because meeting these demands is too great a cost for many
people to pay. The urge to treat philosophy as a discipline that satisfies emotional rather
than intellectual needs is always a threat to the insistence on rationality and intelli-
gence. However, in the history of philosophy, I do not believe we have seen anything to
equal the history of analytic philosophy for its rigour, clarity, intelligence and, above
all, its intellectual content. There is a sense in which it seems to me that we have been
living through one of the great eras in philosophy.
References
Austin, J. L. 1962: How to Do Things with Words. Oxford: Clarendon Press.
Donnellan, K. 1970: Proper Names and Identifying Descriptions. Synthèse, 21, 335–58.
Feyerabend, P. 1975: Against Method. London: Humanities Press.
Grice, H. P. 1957: Meaning. Philosophical Review, 66.
—— 1968: Utterer’s Meaning, Sentence-Meaning, and Word-Meaning. Foundations of Language,
4, 1–18.
Kripke, S. 1972: Naming and Necessity. In G. Harman and D. Davidson (eds) Semantics of Natural
Language, Dordrecht: Reidel.
—— 1982: Wittgenstein on Rules and Private Language. Cambridge, MA: Harvard University Press.
Kuhn, T. 1962: The Structure of Scientific Revolutions. Chicago: University of Chicago Press.
Putnam, H. 1975: The Meaning of ‘Meaning’. In his Philosophical Papers, Vol. 2: Mind, Language
and Reality, Cambridge: Cambridge University Press.
Quine, W. V. O. 1953: Two Dogmas of Empiricism. In his From a Logical Point of View, Cambridge,
MA: Harvard University Press.
Rawls, J. 1971: A Theory of Justice. Cambridge, MA: Harvard University Press.
Searle, J. R. 1964: How to Derive ‘Ought’ from ‘Is’. Philosophical Review, 73.
—— 1969: Speech Acts: An Essay in the Philosophy of Language. Cambridge: Cambridge University
Press.
—— 1979: Expression and Meaning. Cambridge: Cambridge University Press.
—— 1980a: Minds, Brains and Programs. Behavioral and Brain Sciences, 3, 417–24.
—— 1980b: Intrinsic Intentionality. Behavioral and Brain Sciences, 3, 450–6.
—— 1983: Intentionality: An Essay in the Philosophy of Mind. Cambridge: Cambridge University
Press.
Wittgenstein, L. 1953: Philosophical Investigations (translated by G. E. M. Anscombe). Oxford:
Blackwell.
Contemporary Philosophy: A Second Look
BERNARD WILLIAMS
1 The Identity of Analytical Philosophy
Given the title of John Searle’s essay, this second introduction might have been expected
to complement the first geographically, by dealing with present philosophical develop-
ments in places other than the United States, but this is not in fact what it will try to
do. Philosophy in the United States, in other English-speaking parts of the world, and
in many other countries as well, is now very largely the same. In these places, there is
one philosophical culture, and inasmuch as it contains different approaches, and some
of the philosophy that is done within that culture is distinct from ‘analytical’ philoso-
phy, that itself is not a matter of geographical region.
It is true that ‘analytical’ philosophy, the style of philosophy described in Searle’s
essay and overwhelmingly represented in this volume, is often professionally distin-
guished (in job advertisements, for instance) from ‘continental’ philosophy, and this
does represent, in a clumsy way, something which until recently was true: that the ways
in which philosophy was done in France, Germany and other countries of continental
Europe were typically different from the ‘analytical’ style. To a much more limited
extent, that remains so. (Chapters 40–2 describe the situation in continental Europe.)
However, it is absurd to mark philosophical differences with these two labels. Apart from
involving a strange cross-classification – rather as though one divided cars into front-
wheel drive and Japanese – the labels are seriously misleading, in helping one to forget
that the origins of analytical philosophy itself lay in continental Europe (notably
so, when its founding father is taken to be Frege and its greatest representative
Wittgenstein), and that the interests of ‘continental’ philosophy are not confined to
the European continent.
Moreover, it is not simply a matter of labelling. It is not that the distinction in itself
is unproblematical, and only needs more aptly chosen titles to represent it. The
distinctions involved are obscure, and the titles serve to conceal this fact. The term
‘continental’ serves to discourage thought about the possible contrasts to analytical
philosophy, and so about the identity of analytical philosophy itself. At the same time,
the vague geographical resonance of the term does carry a message, that analytical
philosophy is familiar as opposed to exotic, and perhaps – if some older stereotypes
are in play – that it is responsible as opposed to frivolous. This is indeed what many
analytical philosophers believe about it, and they believe it not so much in contrast to
activities going on in some remote continent, but, in many cases, as opposed to work
done in their local departments of literature. It is not true that work in other styles does
not exist in the heartlands of analytical philosophy; it merely does not exist in depart-
ments of philosophy. The distinctions involved are not geographical but professional,
and what is at issue is the identity of philosophy as a discipline.
In particular, what is at issue is the identity of philosophy as a subject that can sustain
ongoing, cumulative research. If it can do this, it can make a claim which the human-
ities do not always find it easy to make, except to the extent that they are branches of
history: that there is something to be found out within their disciplines, that they can
add to knowledge. It has been part of the attraction of analytical philosophy that,
without the procedures of the experimental or theoretical sciences and with a more
human subject matter, it can claim to achieve results which command, if not agree-
ment, at least objective discussion, and which represent intellectual progress. It has
achievements that are not arbitrarily personal, and they compare favourably to those of
the social sciences (at least if one leaves aside the quite peculiar case of economics).
I do not think that these claims are empty. I think that the achievements of ana-
lytical philosophy are remarkable, and I agree with Searle that the subject is in various
ways more interesting than it was forty years ago. Its virtues are indeed virtues. I think
it is hard to be in good faith a teacher of philosophy unless you believe that there is
something worth doing that you, in virtue of your experience, can do and which you
can help other people to do. I think that the virtues of good philosophy are to a con-
siderable extent workmanlike. Quite certainly, no philosophy which is to be worthwhile
should lose the sense that there is something to be got right, that it is answerable to
argument and that it is in the business of telling the truth.
These things, I believe, are represented by the best of what is called analytical phi-
losophy, and to that extent I am committed to it. My own work has largely been in its
style. Yet, having now worked in it for a long time, and having, like Searle, seen it
change, I am a great deal more puzzled about it than I once was; in particular, I am
puzzled about the ways in which it must understand itself if it is to have those virtues,
and also about the costs of sustaining those virtues. There is one understanding of
these virtues which is certainly widespread among analytical philosophers, and which
directly serves the promise of ongoing research: that these are indeed the virtues of a
science. Some philosophers who are impressed by this conception of what they are
doing ritualize it into the forms of presentation familiar from the sciences. Sometimes
this is mere scientism, but in other cases it signals the fact that their branch of phi-
losophy is near neighbour to some science, such as quantum mechanics or cognitive
science.
But the virtues of workmanlike truthfulness which analytical philosophy typically
cultivates are much more important than any attempt to make philosophy look like
a science. With many other branches of philosophy there is no plausible version of
sharing a party wall with science, and yet these virtues are still regarded as virtues. In
fact, even in the case of the more scientific areas of philosophy, it is obvious enough
that these virtues are not recommended only because they are possessed by its scien-
tific neighbours: they are taken to be intellectual virtues, good in the same way for
philosophy and for science.
But how far can philosophy cultivate just these virtues and remain true to other
aspects of the legacy of the subject, to aims that it has pursued in the past? The sci-
ences aim to make claims that can and should be conveyed in ways that are minimally
expressive; they are not meant to convey feeling, or to display much literary imagina-
tion, or to speak (at least overtly) in a persuasive mode. But if we think in particular of
moral or political philosophy, is this ambition actually true to the traditions of the
subject, even as those are embodied in the historical canon of analytical philosophy
itself? Is it true to a tradition that contains Plato, Hobbes, Hume, Kant and (come to
that) John Stuart Mill? Is it in fact true to any great figure of that tradition, except
perhaps Aristotle? And if we are to take him as our model, we are left with many ques-
tions to consider – whether, for instance, the affectless treatises that we possess do rep-
resent his voice; if so, whether the tone does not represent a quite special view of ethical
life; and whether we should not weigh rather soberly the fact that the closest previous
imitation of Aristotle was to be found in a movement called scholasticism.
Particularly in moral and political philosophy, but not only there, there is a question
of what the procedures typical of analytical philosophy mean. There are many
virtuous and valuable things that they make possible, and at the same time there are
resources of philosophy in the past that they seemingly exclude, and it is important not
to assume that this balance is simply given to us, above all by an unquestionable and
transparent interpretation of the ideals of intellectual responsibility. I do not want to
suggest that the adoption of the analytical style is a mere abdication, a cowardly refusal
to adopt a more imaginative and committed manner which (critics sometimes suggest)
is obviously to hand. Still less is it simply a matter of scientistic camouflage. It is a
feature of our time that the resources of philosophical writing typically available to
analytical philosophy should present themselves so strongly as the responsible way of
going on, the most convincing expression of a philosopher’s claim on people’s atten-
tion. But that is an historical fact, and we should try to understand it as such. I do not
think that I adequately understand it, and, for that reason, I would not like to predict
what other possibilities there may prove to be for a philosophy that preserves the merits
of analytical philosophy.
In the rest of this essay, I shall try to give an outline of some principal concerns of
analytical moral and political philosophy. This will supplement the account that John
Searle has given of the state of the art in other areas, but I hope also that in describing
some of what analytical philosophy has recently done for these subjects, it may en-
courage readers to ask what new things it might be able to do.
2 Meta-ethics
Philosophical studies have often been understood, in the analytical tradition but not
only there, as being higher-order, in the sense that natural science, for instance, will
study natural phenomena, while the philosophy of science will study, from some par-
ticular points of view, the operations of science. Some of moral philosophy (or, as I shall
also call it, ethics) is certainly a higher-order study. It discusses such things as the
nature of moral judgements, and asks whether they express genuine beliefs, whether
they can be objectively true, and so forth. Such higher-order questions are the concern
of meta-ethics. At one time (thirty to fifty years ago) it was widely thought in analyti-
cal philosophy that ethics consisted only of meta-ethics. A powerful source of this con-
ception was the belief in a firm distinction between fact and value, to which Searle has
referred. However, the idea of ethics as simply meta-ethics does not follow directly from
the distinction between fact and value, and those who used the distinction to support
that idea needed two further assumptions.
One was that philosophy itself should be in the business of ‘fact’ (which, for the pur-
poses of the distinction, included theory) and not of value. This was connected with a
certain conception of philosophy, important to the identity of analytical philosophy, in
which it is taken to derive its authority from its theoretical stance as an abstract intel-
lectual enquiry. Some earlier philosophers, such as G. E. Moore, had indeed believed in
the distinction between fact and value, but had supposed that philosophers, in one way
or another, could have quite a lot to say substantively about values. The journey from
the fact–value distinction to a view of ethics as only meta-ethics involved the assump-
tion that this was impossible or inappropriate.
The second assumption involved in the journey was that meta-ethics itself could be
value-neutral, that the study of the nature of ethical thought did not commit one to
any substantive moral conclusions. A yet further assumption, which was not necessary
to the journey but did often accompany it, was that meta-ethics should be linguistic in
style, and its subject should be ‘the language of morals’. This latter idea has now almost
entirely disappeared, as the purely linguistic conception of philosophical study has
more generally retreated. Beyond that, however, there are now more doubts about the
extent to which meta-ethics can be value-neutral, and, in addition, philosophers simply
feel freer in making their own ethical commitments clear. Meta-ethics remains a part
of ethics, but most writings in philosophical ethics now will declare substantive moral
positions, either in close association with some meta-ethical outlook or in a more
free-standing manner.
Recent meta-ethical discussions have carried on the traditional interest in the objec-
tivity of ethics. In this connection, ‘moral judgements’ are often grouped together and
compared with everyday statements of fact or with scientific statements. Some theories
claim that moral judgements lack some desirable property that factual statements can
attain, such as objectivity or truth, while other theories claim that they can have this
property. These debates, particularly those conducted under the influence of positivism,
have tended to assimilate two different issues. One concerns the prospects of rational
agreement in ethical matters. The other concerns the semantic status of moral judge-
ments: whether they are, typically, statements at all (as opposed, for instance, to
prescriptions), and whether they aim at truth.
Objectivity is best understood in terms of the first kind of question. There clearly are
substantive and systematic disagreements about ethical questions, both between dif-
ferent societies or cultures, and within one society (particularly when, as now, the
culture of one society may be highly pluralist). Some of these disagreements may turn
out to be due to misunderstanding or bad interpretation and dissolve when local prac-
tices are better understood, but this is not true of all of them. Since ancient times it has
been suggested that these disagreements have a status different from disagreements
about facts or about the explanation of natural phenomena. With the latter, if the
parties understand the question at issue, they see how after further enquiry they may
end up in one of several positions: they may come to rational agreement on one answer
or another, they may recognize that such evidence as they can obtain underdeter-
mines the answer and leaves them with intelligible room for continued disagreement,
or they may advance in understanding to a point at which they see that the question
in its original form cannot be answered, for instance because it was based on a false
presupposition.
By contrast, it is suggested that we can understand an ethical dispute perfectly well,
and yet it be clear that it need not come out in any of these ways. Disagreeing about
an ethical matter, the parties may radically disagree about the kinds of considerations
that would settle the question, and the suggestion is that at the end of the line there
may be no rational way of arriving at agreement. This is the suggestion that ethical
claims lack objectivity. Some theories have associated this position with a view about
the semantic status of moral utterances. Emotivism, a theory closely associated with
positivism, held that moral utterances were merely expressions of emotion, not far
removed from expletives, and it took this, reasonably enough, not to be an objectivist
theory. In this case, the semantic account and the denial of objectivity went closely
together. However, it is a mistake to think that the two issues are in general simply
related to one another.
A clear illustration of this is Kant’s theory. Kant supposed that moral statements, or
at any rate the most basic of them, were actually prescriptions, and he understood the
fundamental principle of morality to be an imperative. However, when the issue is
expressed in terms of rational agreement or disagreement, Kant is quite certainly an
objectivist: the Categorical Imperative, together in some cases with empirical informa-
tion, determines for any rational agent what morality requires, and all rational agents
are in a position to agree on it. Another example of objectivity which is at least non-
committal about the semantics involved comes from virtue theory. Aristotle believed
that experienced and discriminating agents who had been properly brought up would
reach rational agreement in action, feeling, judgement and interpretation. He believed,
moreover, that this possibility was grounded in the best development and expression of
human nature, and that views about what counted as the best human development
could themselves command rational agreement. This certainly offers a kind of objec-
tivity, but it does not particularly emphasize agreement in belief; no doubt some agree-
ment in belief will matter, but so equally will agreement in feeling and in practical
decision.
However, even if objectivity need not imply rational agreement in belief, it may be
argued that the converse holds: that a theory which represents moral judgements as
basically expressing beliefs must be committed to objectivity. Beliefs, this argument
goes, are true or false. If moral judgements express beliefs, then some of them are true
and there is such a thing as truth in morality. So if people disagree about what to
believe, someone must be wrong. This certainly sounds as though there must be objec-
tivity. The difficulty with this argument is that it seems to be too easy to agree that moral
judgements admit of truth or falsehood. They are certainly called ‘true’ and ‘false’, as
even the emotivists had to concede, and the claim that nevertheless they are not really
true or false needs some deciphering. Emotivism itself offered a semantic analysis in
terms of which such judgements turn out not really to be statements, which certainly
gives some content to the claim that they are not really true or false. However, such
analyses run into difficulty precisely because the air of being a statement that sur-
rounds moral judgements is not merely superficial – they behave syntactically just as
other kinds of statements do.
An alternative is to argue that moral judgements can indeed be true or false, but that
nothing interesting follows from this. On some theories of truth, sometimes called
‘redundancy’ theories, to claim that ‘P’ is true is to do no more than to assert that P.
Any theory of truth must accept the equivalence ‘“P” is true if and only if P’; the pecu-
liarity of redundancy theories is to claim that this is all there is to the nature of truth.
If this is correct, then the truth or falsehood of moral judgements will follow simply
from their taking a statemental form, which allows them to be asserted or denied. Objec-
tivity will then either be understood as something that necessarily goes along with truth
and falsehood, in which case, on the redundancy theory, it will be no more interesting
or substantive than truth; or it will be more interesting and substantive – implying for
instance the possibility of rational agreement – in which case, on the redundancy view,
it will not follow just from the fact that moral judgements can be true or false.
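The equivalence invoked here is often displayed as a schema. In a notation adopted purely for illustration (writing Tr for the truth predicate and corner-quotes for the name of the sentence), it runs:

% Tr and the corner-quotes are illustrative notation, not drawn from the text
\[
\mathrm{Tr}(\ulcorner P \urcorner) \;\leftrightarrow\; P
\]

On the redundancy view, instances of this schema are all there is to truth, so that to call a moral judgement true adds nothing beyond asserting the judgement itself.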
It is widely, though not universally, agreed that an adequate theory of truth needs
to go beyond the redundancy view, but it is disputed how far it needs to go. Some argue
that if one takes seriously the claim that a given proposition is true, then this does imply
the idea that there could be convergence in belief on the proposition under favourable
circumstances. This approach brings the idea of truth itself nearer to that of objectiv-
ity as that has been introduced here. Others hold that a properly ‘minimalist’ theory of
truth need not bring in such a strong condition.
If objectivism and the mere truth of moral statements have often been assimilated
to one another, realism, equally, is often assimilated to one or both of them. Yet we
should expect realism, if it is an issue at all, to be a further issue. Elsewhere in phi-
losophy, for instance in the philosophy of mathematics or the philosophy of science, it
can be agreed that statements of a certain kind (about numbers, or about subatomic
particles) are capable of truth, and also that they can command rational agreement,
and yet it is thought by many philosophers that this does not answer the question
whether those statements should be interpreted realistically, where this means (very
roughly indeed) that the statements are taken to refer to objects that exist independently
of our thoughts about them. Even if it is not easy to give a determinate sense to such
questions, at any rate one would not expect realism to follow trivially from the claim
that moral statements can be true or false.
Some philosophers, influenced by the late John Mackie and in the line of Hume, deny
realism by claiming that the moral properties of people, actions and so forth, are not
‘in the world’ but are ‘projected’ on to it from our feelings and reactions. According to
the most familiar version of this view, secondary qualities such as colours are also pro-
jected on to the world, and this raises the question whether the metaphor has not mis-
located the most significant issues about moral properties. The theory implies that
ethical outlooks are ‘perspectival’ or related to human experience in ways in which
physical theory (at least) is not, but this does not take us very far: it will not tell us any-
thing very distinctive about ethical realism to know only that ethical concepts are per-
spectival in a sense in which colour concepts, or perhaps psychological concepts, are
also perspectival. An anti-realism that gives moral properties much the same status as
colours will probably satisfy many moral realists. We need to ask how far the moral
concepts and outlooks of various human groups can intelligibly differ while the rest of
their ways of describing the world, in particular their psychological concepts for
describing people’s behaviour, remain the same. Again, how far can their psychologi-
cal concepts themselves intelligibly vary, and how should we understand those
variations?
In considering such questions, it is helpful to abandon a very limiting assumption
which has been made up to this point in the discussion, namely, that all ‘moral judge-
ments’ are essentially of the same kind and stand in the same relation to such matters
as truth and objectivity. In considering moral disagreement, philosophers have con-
centrated on cases in which the parties express themselves in terms of the ‘thin’ ethical
concepts such as ‘good’, ‘right’ and ‘ought’. The parties share the same moral and other
concepts, and disagree about whether a given judgement should be asserted or denied:
they disagree, for instance, about whether capital punishment is wrong. To represent
disagreement in this way may seem to isolate in a helpful way its moral focus. But a lot
of moral discussion – to differing degrees in different societies – is conducted in
terms of ‘thick’ concepts, such as ‘brutality’ or ‘betrayal’ or ‘sentimentality’, and it is a
mistake to suppose that such concepts are merely convenient devices for associating a
bunch of empirical considerations with a thin ethical concept. It has been increasingly
accepted in recent discussions that the application of such concepts is guided by their
evaluative point, and that one cannot understand them without grasping that point.
(This does not mean that anyone who understands such a concept must have adopted
it as his or her own, but it does mean that he or she needs to have imaginatively iden-
tified, as an ethnographer does, with those who use it.) At the same time, however, such
concepts apply to some empirical states of affairs and not to others, and there is room
for truth, objectivity and knowledge to be displayed in their application.
If this is correct, then it may be more helpful to consider ethical disagreement, not
at the ultimate cutting edge of the practical judgements about what ought or ought not
to happen, but further back, in the network of more substantive and thicker concepts
that back up such judgements. Such concepts will typically serve more purposes than
expressing bare ‘moral judgements’. They may play a role, for instance, in a scheme of
psychological explanation. The question will then become, rather, why and to what
extent different cultures differ in their ethical concepts, and, more broadly, in the frame-
works of understanding that go with such concepts. Seen in this light, meta-ethical
questions move further away from being questions in the philosophy of language or the
theory of justification or epistemology, and become more like questions in the theory
of cultural understanding. Indeed they may become directly questions of cultural
understanding. The most basic question about objectivity may turn out to be the ques-
tion of the extent to which different human societies share an underlying determinate
framework of ethical concepts. By turning in such a direction, philosophical discussion
becomes more empirical and historical, more richly related to other disciplines, and
more illuminating. At the same time, it means that philosophers have to know about
more things, or people in other disciplines have to take on issues in philosophy. To that
extent, philosophy tends to lose a distinctive subject matter and its identity becomes
blurred.
3 Ethical Theory
I have already said that analytical philosophers are happier than they once were to
recognize that what they say in moral philosophy is likely to have a substantive ethical
content, and that even meta-ethics is likely to have some such consequences. There is
a problem, however, about how this is related to the authority of philosophy. If philoso-
phers are going to offer moral opinions – within their subject, that is to say, and not
simply as anyone offers moral opinions – they need to have some professional claim to
attention. They are not, as philosophers, necessarily gifted with unusual insight or
imagination, and they may not have a significantly wide experience or knowledge of
the world. Their claim to attention rests on their capacity for drawing distinctions,
seeing the consequences of assumptions, and so forth: in general, on their ability to
develop and control a theoretical structure. If the authority of philosophy lies in its
status as a theoretical subject, the philosopher’s special contribution to substantive
ethical issues is likely to be found in a theoretical approach to them. One of the most
common enterprises in moral philosophy at present is the development of various
ethical theories.
The aim of ethical theory is to cast the content of an ethical outlook into a theoreti-
cal form. An ethical theory must contain some meta-ethics, since it takes one view
rather than another of what the structure and the central concepts of ethical thought
must be, though it need not have an opinion on every meta-ethical issue. It is commit-
ted to putting forward in a theoretical form a substantive ethical outlook. In doing this,
ethical theories are to different degrees revisionary. Some start with a supposedly unde-
niable basis for ethics, and reject everyday moral conclusions that conflict with it. (It is
a good question why the basis should be regarded as undeniable if it has such con-
sequences.) Others, less dogmatically, consider the moral conclusions that would be
delivered by conflicting outlooks, and decide which outlook makes the most coherent
systematic sense of those conclusions that we (that is to say, the author and those
readers who agree with him or her) find most convincing. Unsystematized but carefully
considered judgements about what we would think it right to do in a certain situation,
or would be prepared to say in approval or criticism of people and their actions, are
often called in the context of such a method ‘moral intuitions’. (The term ‘intuition’
has a purely methodological force: it means only that these judgements seem to us, after
consideration, pre-theoretically convincing, not that they are derived from a faculty of
intuiting moral truths.) A preferred method is to seek what John Rawls has called a
‘reflective equilibrium’ between theory and intuitions, modifying the theory to accom-
modate robust intuitions, and discarding some intuitions which clash with the theory,
particularly if one can see how they might be the product of prejudice or confusion.
Moral theories are standardly presented as falling into three basic types, centring
respectively on consequences, rights and virtues. The first are unsurprisingly called
‘consequentialist’, and the last ‘virtue theories’. The second are often called ‘deonto-
logical’, which means that they are centred on duty or obligation, but this is a cross-
classification, since consequentialist theories also give prominence to an obligation,
that of bringing about the best consequences. (In the case of the most familiar and
popular consequentialist theory, utilitarianism, the value of the consequences is
expressed in terms of welfare or preference-satisfaction.) In terms of obligations, the
difference is rather that pure consequentialist theories present only one basic obliga-
tion, while the second type of theory has many. A more distinctive mark of difference
is to be found in the idea of a right: the second type of theory grounds many of an
agent’s obligations in the rights of others to expect certain behaviour from that agent,
a kind of consideration that utilitarians and other consequentialists regard as being at
best derivative, and at worst totally spurious.
Another way of understanding the division into three is in terms of what each
theory sees as most basically bearing ethical value. For the first type of theory, it is good
states of affairs, and right action is understood as action tending to bring about good
states of affairs. For the second type, it is right action; sometimes what makes an action
right is a fact about its consequences, but often it is not – its rightness is determined
rather by respect for others’ rights, or by other obligations that the agent may have.
Virtue theory, finally, puts most emphasis on the idea of a good person, someone who
could be described also as an ethically admirable person. This is an important empha-
sis, and the notion of a virtue is important in ethics. However, once the types of theory
are distinguished in this way, it is hard to see them as all in the same line of business.
Consequentialist and rights theories aim to systematize our principles or rules of action
in ways that will, supposedly, help us to see what to do or recommend in particular
cases. A theory of the virtues can hardly do that: the theory itself, after all, is going to
say that what you basically need in order to do and recommend the right things are
virtues, not a theory about virtues. Moreover, virtuous people do not think always, or
usually, about the virtues. They think about such things as good consequences or
people’s rights, and this makes it clear that ‘virtue theory’ cannot be on the same level
as the other two types of theory.
4 Morality, Politics and Analytical Philosophy
Among moral concepts, that of rights is closest to law and also to politics, and phi-
losophical discussions of them often cross those boundaries. Given these relations, it is
not surprising that the kind of theory most often constructed to articulate the idea of
moral rights is contractualist, invoking the idea of an agreement that might be ratio-
nally arrived at by parties in some hypothetical situation in which they were required
to make rules by which they could live together. The inspiration of contractualist
theories goes back, in particular, to Kant. Kant’s own construction relies on some ideas
that are not shared by many modern theorists, in particular that a commitment to the
basic principle of morality (the so-called ‘Categorical Imperative’) is presupposed by the
very activity of a rational agent. It also involves a very obscure doctrine of freedom.
The modern theories inspired by Kantian ideas are less committed than Kant was to
showing that morality is ultimate rationality, and they allow also more empirical
material into the construction than Kant did.
The leading example of such a theory is that of John Rawls. His model of a set of
contracting parties reaching an agreement behind ‘a veil of ignorance’ has had an
immense influence on thinking about morality. It was designed for a purpose in politi-
cal philosophy, of constructing a theory of social justice. In Rawls’s theory, the veil of
ignorance is introduced to disguise from each contractor his own particular advantages
and disadvantages and his own eventual position in the society that is being designed.
The political theory that uses this thought experiment is liberal, giving a high priority
to liberty and at the same time emphasizing redistribution of resources in the interests
of the disadvantaged. It is significant that when Rawls first produced his theory he saw
it in universalist terms, as offering a construction of social justice which would apply
to any society that met the conditions (very roughly) of being able to think about social
ideals and having the resources to implement them. Now, however, he has moved in
the direction of seeing the theory of justice as one that expresses the aspirations of a
particular social formation, the modern pluralist state.
Much recent political philosophy has centred on this liberal project, of defining terms
of just coexistence for people living in a pluralist society. One interpretation of that aim
is to look for terms of coexistence that will not presuppose a common conception of the
good. On such an account, citizens can understand themselves as sharing a social exis-
tence although they have as individuals, or as members of communities less extensive
than the state, varying conceptions of a good life. Rawls has interpreted his own
purpose in those terms; it is expressed, for instance, in the fact that the parties in the
original position were supposed to make their decision on (broadly) self-interested
grounds, and not in the light of any antecedent conception of what a good society
would be. The values that they were taken to have were expressed only in the list of
‘primary goods’ in terms of which they made their choice, a list which made it clear
already that they set a high value on liberty, for instance, and did not assess everything
in terms of utility.
In fact, Rawls’s aim of making his theory as independent as possible from substan-
tial claims about the good does put it under some strain. At least in the first version of
the theory, the basic conception of justice included a large-scale commitment to eco-
nomic redistribution, and while this made it very welcome to many liberals (particu-
larly in the American sense of that word), it laid it open to the criticism from others
more inclined to libertarianism, such as Robert Nozick, that the theory incorporated
not only rightful terms of coexistence but a substantial and distinctive conception of
social justice as economic equality. In the later versions of Rawls’s theory, this concep-
tion is less prominent, but even more weight than before is laid on the idea that coex-
istence in a liberal pluralist society is not ‘a mere modus vivendi’ but a condition that
calls on important moral powers of toleration and respect for autonomy. This empha-
sis does seem to express a distinctive conception of the good, of a Kantian kind, to a
point at which it looks as though the condition of pluralism is not simply a contingency
of modern life, but an important vehicle of ethical self-expression.
Others, such as Ronald Dworkin, have pursued liberal theory, and in pluralist terms,
while accepting a commitment to a distinctive conception of the good. Others, again,
have claimed to reject liberal pluralism altogether and have turned in what is some-
times called a ‘communitarian’ direction. It is hard to tell in some cases where writers
of this tendency stand in relation to the politics of liberalism. While a few, notably
Alasdair MacIntyre, despairing of the whole enterprise, try only to diagnose our
condition and store some ethical goods for a better time, others seem to share in the
liberal undertaking but prefer, in opposition to Rawls’s Kantianism, a more Hegelian
type of discourse to express it.
It is a rather odd feature of communitarian theories, at least if they take a tradi-
tionalist turn, that they recommend a politics that does not sit very easily with the exis-
tence of such theories themselves, except perhaps as a kind of interim measure. They
seek a politics in which people’s relations are formed by shared understandings which
to a considerable extent must be unspoken and taken for granted, and the exchange of
abstract political theories plays no obvious role in this conception of social life. (Hegel,
many of whose concerns are re-enacted in these debates, thought he had an answer to
this, in his conception of a society that could ultimately reconcile abstract under-
standing and concrete practice, but few current disputants are happy to pay the price
of admission to the Hegelian system.)
The liberals, on the other hand, have a conception of modern political life which at
least coherently embraces the existence of their own theories, since they understand
the modern state as a formation in which authority is peculiarly vested in discursive
argument, rather than in traditional or charismatic leadership. It is true that many
political philosophers in the analytical tradition (unlike Rawls himself, and also unlike
Habermas, who comes from a sociological tradition) do not see the role of their theo-
ries in these terms, but rather as advancing trans-historical views about the demands
of a just political order. But even if liberals do not always recognize the point them-
selves, the role of theory in liberal political philosophy can be given a special justifica-
tion in terms of current political reality. Liberals coherently believe that the project
of political theory makes sense, because they are committed to thinking that in our
circumstances it makes sense to engage in a political activity of explaining the basic
principles of democratic government in such terms.
In this respect, moral philosophy is in an altogether worse situation. It typically lacks
an account of why the project of articulating moral theories makes any sense at all. As
many writers have pointed out, it bears little relation to the psychology of people’s
ethical lives, and inasmuch as it claims that turning morality into a theory makes it
more rational, there is a pressing question of what concept of rationality is being
invoked. To a limited extent, there may be an answer to that question, inasmuch as
some ethical questions, such as those raised by medical ethics, are public questions,
closely tied to politics and the law. In those cases, we need a public discourse to legiti-
mate some answers, since it is a public issue what should be permitted, and mere
appeals to ethical or professional authority will no longer do. But it would be a mistake
to suppose that in such cases we are presented with a pure concept of moral rational-
ity which we then apply to our historical circumstances. Rather, what counts as a ratio-
nal way of discussing such questions is influenced by the historical circumstances, and
above all by the need to give a discursive justification, in something like a legal style, for
procedures which increasingly are adopted in a public domain and can be challenged
in it.
Moreover, many important ethical issues are not of this kind at all. Morality has
always been connected not only with law and politics, but also with the meaning of an
individual’s life and his or her relations to other people. In these connections, the
authority of theory over the moral life remains quite opaque. Certainly, it will not do to
rely on the inference: philosophy must have something to say about the moral life;
the most responsible form of philosophy is analytical philosophy; what analytical phi-
losophy is best at is theory; so philosophy’s contribution to the moral life is theory.
In rejecting this uninviting argument, some will want to attack the second premise.
However, it may be more constructive, and offer more of a challenge to thinking about
what one wants of philosophy, if one reconsiders the third premise, that what analyti-
cal philosophy does best is theory. Analytical philosophy’s own virtues, such as its
unfanatical truthfulness, could encourage it in ethics to remind us of detail rather than
bludgeon us with theory.
Truthfulness in personal life, and even in politics, is not necessarily opposed to the
exercise of the imagination. It is relevant here that an imaginative truthfulness is a
virtue in the arts. Writers in moral philosophy sometimes urge us to extend our ethical
understanding by turning to imaginative literature, in particular the novel. To the
extent that this is good advice, it is not because novels are convenient sources of psy-
chological information, still less because some of them are morally edifying. It is
because imaginative writing can powerfully evoke the strength of ethical considera-
tions in giving sense to someone’s life or to a passage of it, and, equally, present the pos-
sible complexity, ambivalence and ultimate insecurity of those considerations. Good
literature stands against the isolation of moral considerations from the psychological
and social forces that both empower and threaten them. But this isolation of moral
considerations from the rest of experience is an illusion very much fostered by moral
philosophy itself; indeed, without that illusion some forms of moral philosophy could
not exist at all. So there are lessons here not just for philosophy’s use of other writing,
but for philosophical writing itself. The truthfulness that it properly seeks involves
imaginative honesty and not just argumentative accuracy.
Analytical philosophy, or some recognizable descendant of it, should be able to make
a richer contribution to ethics than has often been the case up to now. If it is to do so,
it will need to hold on to two truths which it tends to forget (not only in ethics, but most
damagingly there): that philosophy cannot be too pure, and must merge with other
kinds of understanding; and that being soberly truthful does not exclude, but may
actually demand, the imagination.
PART I
AREAS OF PHILOSOPHY
1
Epistemology
A. C. GRAYLING
For most of the modern period of philosophy, from Descartes to the present, epistemol-
ogy has been the central philosophical discipline. It raises questions about the scope and
limits of knowledge, its sources and justification, and it deals with sceptical arguments
concerning our claims to knowledge and justified belief. This chapter firstly considers dif-
ficulties facing attempts to define knowledge and, secondly, explores influential responses
to the challenge of scepticism. Epistemology is closely related to METAPHYSICS (chapter
2), which is the philosophical account of what kinds of entities there are. Epistemologi-
cal questions are also crucial to most of the other areas of philosophy examined in this
volume, from ETHICS (chapter 6) to PHILOSOPHY OF SCIENCE (chapter 9) and PHILOSO-
PHY OF MATHEMATICS (chapter 11) to PHILOSOPHY OF HISTORY (chapter 14). Chapters
on individuals or groups of philosophers from DESCARTES (see chapter 26) to KANT
(chapter 32) discuss classical epistemology, while several chapters about more recent
philosophers also follow epistemological themes.
Introduction
Epistemology, which is also called the theory of knowledge, is the branch of philosophy
concerned with enquiry into the nature, sources and validity of knowledge. Among the
chief questions it attempts to answer are: What is knowledge? How do we get it? Can
our means of getting it be defended against sceptical challenge? These questions are
implicitly as old as philosophy, although their first explicit treatment is to be found in
PLATO (c.427–347 BC) (see chapter 23), in particular in his Theaetetus. But it is primar-
ily in the modern era, from the seventeenth century onwards – as a result of the work
of DESCARTES (1596–1650) (chapter 26) and LOCKE (1632–1704) (chapter 29) in asso-
ciation with the rise of modern science – that epistemology has occupied centre-stage
in philosophy.
One obvious step towards answering epistemology’s first question is to attempt a
definition. The standard preliminary definition has it that knowledge is justified true
belief. This definition looks plausible because, at the very least, it seems that to know
something one must believe it, that the belief must be true, and that one’s reason for
believing it must be satisfactory in the light of some criterion – for one could not be said
to know something if one’s reasons for believing it were arbitrary or haphazard. So each
of the three parts of the definition appears to express a necessary condition for knowl-
edge, and the claim is that, taken together, they are sufficient.
There are, however, serious difficulties with this idea, particularly about the nature
of the justification required for true belief to amount to knowledge. Competing pro-
posals have been offered to meet the difficulties, either by adding further conditions or
by finding a better statement of the definition as it stands. The first part of the follow-
ing discussion considers these proposals.
In parallel with the debate about how to define knowledge is another about how
knowledge is acquired. In the history of epistemology there have been two chief schools
of thought about what constitutes the chief means to knowledge. One is the ‘rational-
ist’ school (see chapters 26 and 27), which holds that reason plays this role. The other
is the ‘empiricist’ (see chapters 29, 30 and 31), which holds that it is experience, prin-
cipally the use of the senses aided when necessary by scientific instruments, which does
so.
The paradigm of knowledge for rationalists is mathematics and logic, where neces-
sary truths are arrived at by intuition and rational inference. Questions about the
nature of reason, the justification of inference, and the nature of truth, especially nec-
essary truth, accordingly press to be answered.
The empiricists’ paradigm is natural science, where observation and experiment are
crucial to enquiry. The history of science in the modern era lends support to empiri-
cism’s case; but precisely for that reason philosophical questions about perception,
observation, evidence and experiment have acquired great importance.
But for both traditions in epistemology the central concern is whether we can trust
the routes to knowledge they respectively nominate. Sceptical arguments suggest that
we cannot simply assume them to be trustworthy; indeed, they suggest that work is
required to show that they are. The effort to respond to scepticism therefore provides a
sharp way of understanding what is crucial in epistemology. Section 2 below is accord-
ingly concerned with an analysis of scepticism and some responses to it.
There are other debates in epistemology about, among other things, memory, judge-
ment, introspection, reasoning, the ‘a priori–a posteriori’ distinction, scientific method
and the methodological differences, if any, between the natural and the social sciences;
however, the questions considered here are basic to them all.
1 Knowledge
1.1 Defining knowledge
There are different ways in which one might be said to have knowledge. One can know
people or places, in the sense of being acquainted with them. That is what is meant
when one says, ‘My father knew Lloyd George’. One can know how to do something,
in the sense of having an ability or skill. That is what is meant when one says, ‘I know
how to play chess’. And one can know that something is the case, as when one says, ‘I
know that Everest is the highest mountain’. This last is sometimes called ‘propositional
knowledge’, and it is the kind epistemologists most wish to understand.
The definition of knowledge already mentioned – knowledge as justified true belief
– is intended to be an analysis of knowledge in the propositional sense. The definition
is arrived at by asking what conditions have to be satisfied if we are correctly to describe
someone as knowing something. In giving the definition we state what we hope are the
necessary and sufficient conditions for the truth of the claim ‘S knows that p’, where
‘S’ is the epistemic subject – the putative knower – and ‘p’ a proposition.
The definition carries an air of plausibility, at least as applied to empirical knowl-
edge, because it seems to meet the minimum we can be expected to need from so con-
sequential a concept. It seems right to expect that if S knows that p, then p must at least
be true. It seems right to expect that S must not merely wonder whether or hope that
p is the case, but must have a positive epistemic attitude to it: S must believe that it is
true. And if S believes some true proposition while having no grounds, or incorrect
grounds, or merely arbitrary or fanciful grounds, for doing so, we would not say that S
knows p; which means that S must have grounds for believing p which in some sense
properly justify doing so.
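Set out schematically, in a notation chosen here purely for convenience (K, B and J abbreviating ‘knows that’, ‘believes that’ and ‘is justified in believing that’), the tripartite analysis is the claim that:

% K, B and J are illustrative abbreviations, not notation from the text
\[
K_{S}\,p \;\longleftrightarrow\; p \,\wedge\, B_{S}\,p \,\wedge\, J_{S}\,p
\]

Each conjunct on the right is offered as individually necessary, and the three as jointly sufficient, for the truth of ‘S knows that p’; the counter-examples discussed below attack joint sufficiency rather than individual necessity.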
Of these proposed conditions for knowledge, it is the third that gives most trouble.
The reason is simply illustrated by counter-examples. These take the form of cases in
which S believes a true proposition for what are in fact the wrong reasons, although
they are from his or her own point of view persuasive. For instance, suppose S has
two friends, T and U. The latter is travelling abroad, but S has no idea where. As for T,
S saw him buying and thereafter driving about in a Rolls Royce, and therefore believes
that he owns one. Now, from any proposition p one can validly infer the disjunction
‘p or q’. So S has grounds for believing ‘T owns a Rolls or U is in Paris’, even though, ex
hypothesi, he has no idea of U’s location. But suppose T in fact does not own the
Rolls – he bought it for someone else, on whose behalf he also drives it. And further
suppose that U is indeed, by chance, in Paris. Then S believes, with justification, a true
proposition: but we should not want to call his belief knowledge.
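It may help to display the structure of the example. Abbreviating ‘T owns a Rolls’ as r and ‘U is in Paris’ as u (labels introduced here only for illustration, and using J as above), S’s reasoning runs:

% r, u and J_S are illustrative labels, not drawn from the text
\[
\begin{array}{ll}
1. & J_{S}\,r \quad \text{(S saw T buy and drive the Rolls)}\\
2. & r \vdash r \vee u \quad \text{(disjunction introduction)}\\
3. & \text{so } J_{S}\,(r \vee u)\\
4. & \neg r \text{ and } u, \text{ so } r \vee u \text{ is true}
\end{array}
\]

S thus has a justified true belief in the disjunction which falls short of knowledge, since its truth owes nothing to the grounds on which it is believed.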
Examples like this are strained, but they do their work; they show that more
needs to be said about justification before we can claim to have an adequate account of
knowledge.
1.2 Justification
A preliminary question concerns whether having justification for believing some p
entails p’s truth, for, if so, counter-examples of the kind just mentioned get no purchase
and we need not seek ways of blocking them. There is indeed a view, called ‘infallibil-
ism’, which offers just such a resource. It states that if it is true that S knows p, then S
cannot be mistaken in believing p, and therefore his justification for believing p guar-
antees its truth. The claim is, in short, that one cannot be justified in believing a false
proposition.
This view is rejected by ‘fallibilists’, who claim that one can indeed have justification
for believing some p although it is false. Their counter to infallibilism turns on identi-
fying a mistake in its supporting argument. The mistake is that whereas the truth of ‘S
knows that p’ indeed rules out the possibility that S is in error, this is far from saying
that S is so placed that he cannot possibly be wrong about p. It is right to say: (1) ‘it is
impossible for S to be wrong about p if he knows p’, but it is not invariably right to say
(2) ‘if S knows p, then it is impossible for him to be wrong about p’. The mistake turns
on thinking that the correct wide scope reading (1) of ‘it is impossible’ licenses the
narrow scope reading (2) which constitutes infallibilism.
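The scope distinction can be made explicit in modal notation, introduced here only for the purpose of illustration: writing □ for ‘it is impossible that not ...’, Kp for ‘S knows that p’ and Wp for ‘S is wrong about p’, the two readings are:

% Box, K and W are illustrative notation, not drawn from the text
\[
(1)\quad \Box\,(Kp \rightarrow \neg Wp) \qquad\qquad (2)\quad Kp \rightarrow \Box\,\neg Wp
\]

Reading (1) says only that knowing rules out actually being wrong; reading (2) says that knowing confers immunity from even the possibility of being wrong, and it is (2), not (1), that infallibilism requires.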
An infallibilist account makes the definition of knowledge look simple: S knows p if
his belief in it is infallibly justified. But this definition renders the notion of knowledge
too restrictive, for it says that S can justifiably believe p only when the possibility of p’s
falsity is excluded. Yet it appears to be a commonplace of epistemic experience that
one can have the very best evidence for believing something and yet be wrong (as the
account of scepticism given below is at pains to show), which is to say that fallibilism
seems the only account of justification adequate to the facts of epistemic life. We need
therefore to see whether fallibilist theories of justification can give us an adequate
account of knowledge.
The problem for fallibilist accounts is precisely the one illustrated by the Rolls Royce
example above, and others similar to it (so-called ‘Gettier examples’, introduced in
Gettier 1963), namely, that one’s justification for believing p does not connect with the
truth of p in the right way, and perhaps not at all. What is required is an account that
will suitably connect S’s justification both with his belief that p and with p’s truth.
What is needed is a clear picture of ‘justified belief ’. If one can identify what justi-
fies a belief, one has gone all or most of the way to saying what justification is; and en
route one will have displayed the right connection between justification, on the one
hand, and belief and truth on the other. In this connection there are several standard
species of theory.
Foundationalism
One class of theories of justification employs the metaphor of an edifice. Most of our ordi-
nary beliefs require support from others; we justify a given belief by appealing to another
or others on which it rests. But if the chain of justifying beliefs were to regress without
terminating in a belief that is in some way independently secure, thereby providing a
foundation for the others, we would seem to lack justification for any belief in the chain.
It appears necessary therefore that there should be beliefs which do not need justifica-
tion, or which are in some way self-justifying, to serve as an epistemic underpinning.
On this view a justified belief is one which either is, or is supported by, a founda-
tional belief. The next steps therefore are to make clear the notion of a ‘foundation’ and
to explain how foundational beliefs ‘support’ non-foundational ones. Some way of
understanding foundationalism without reliance on constructional metaphors is
needed.
It is not enough barely to state that a foundational belief is a belief that requires no
justification, for there must be a reason why this is the case. What makes a belief inde-
pendent or self-standing in the required way? It is standardly claimed that such beliefs
justify themselves, or are self-evident, or are indefeasible or incorrigible. These are not
the same things. A belief might be self-justifying without being self-evident (it might take
hard work to see that it justifies itself). Indefeasibility means that no further evidence or
other, competing, beliefs, can render a given belief insecure. Yet this is a property that
the belief might have independently of whether or not it is self-justifying. And so on. But
what these characterizations are intended to convey is the idea that a certain immunity
from doubt, error or revision attaches to the beliefs in question.
It might even be unnecessary or mistaken to think that it is belief that provides the
foundations for the edifice of knowledge: some other state might do so. Perceptual states
have been offered as candidates, because they appear to be suitably incorrigible – if one
seems to see a red patch, say, then one cannot be wrong that one seems to see a red patch.
And it appears plausible to say that one’s belief that p needs no further justification or
foundation than that things appear to one as p describes them to be.
These suggestions bristle with difficulties. Examples of self-evident or self-justifying
beliefs tend to be drawn from logic and mathematics – they are of the ‘x is x’ or ‘one plus
one equals two’ variety, which critics are quick to point out give little help in grounding
contingent beliefs. Perceptual states likewise turn out to be unlikely candidates for foun-
dations, on the grounds that perception involves the application of beliefs which them-
selves stand in need of justification – among them beliefs about the nature of things and
the laws they obey. What is most robustly contested is the ‘myth of the given’, the idea
that there are firm, primitive and original data which experience supplies to our minds,
antecedent to and untainted by judgement, furnishing the wherewithal to secure the rest
of our beliefs.
There is a difficulty also about how justification is transmitted from foundational
beliefs to dependent beliefs. It is too strong a claim to say that the latter are deducible from
them. Most if not all contingent beliefs are not entailed by the beliefs that support them;
the evidence I have that I am now sitting at my desk is about as strong as empirical
evidence can be, yet given the standard sceptical considerations (such as, for example,
the possibility that I am now dreaming) it does not entail that I am sitting here.
If the relation is not a deductive one, what is it? Other candidate relations – inductive
or criterial – are by their nature defeasible, and therefore, unless somehow supplemented,
insufficient to the task of transmitting justification from the foundations to other beliefs.
The supplementation would have to consist of guarantees that the circumstances that
defeat non-deductive justification do not in fact obtain. But if such guarantees – under-
stood, to avoid circularity, as not being part of the putative foundations themselves – were
available to protect non-deductive grounds, then appeal to a notion of foundations looks
simply otiose.
1.3 Coherence
Dissatisfaction with foundationalism has led some epistemologists to prefer saying that
a belief is justified if it coheres with those in an already accepted set. The immediate
task is to specify what coherence is, and to find a way of dealing in a non-circular way
with the problem of how the already accepted beliefs came to be so.
Hard on the heels of this task come a number of questions. Is coherence a negative
criterion (that is, a belief lacks justification if it fails to cohere with the set) or a posi-
tive one (that is, a belief is justified when it coheres with the set)? And is it to be under-
stood strongly (by which coherence is sufficient for justification) or weakly (by which
coherence is one among other justifying features)?
The concept of coherence has its theoretical basis in the notion of a system, under-
stood as a set whose elements stand in mutual relations of both consistency and (some
kind of) interdependence. Consistency is of course a minimum requirement, and goes
without saying. Dependence is more difficult to specify suitably. It would be far too
strong – for it would give rise to assertive redundancy – to require that dependence
means mutual entailment among beliefs (this is what some have required, citing
geometry as the closest example). A more diffuse notion has it that a set of beliefs is
coherent if any one of them follows from all the rest, and if no subset of them is
logically independent of the remainder. But this is vague, and anyway seems to require
that the set be known to be complete before one can judge whether a given belief
coheres with it.
A remedy might be to say that a belief coheres with an antecedent set if it can be
inferred from it, or from some significant subset within it, as being the best explanation
in the case. To this someone might object that not all justifications take the form of
explanations. An alternative might be to say that a belief is justified if it survives com-
parison with competitors for acceptance among the antecedent set. But here an objec-
tor might ask how this can be sufficient, since by itself this does not show why the belief
merits acceptance over equally cohering rivals. Indeed, any theory of justification has
to ensure as much for candidate beliefs, so there is nothing about the proposal that
distinctively supports the coherence theory. And these thoughts leave unexamined
the question of the ‘antecedent set’ and its justification, which cannot be a matter of
coherence, for with what is it to cohere in its turn?
1.4 Internalism and externalism
Both the foundationalist and coherence theories are sometimes described as ‘internal-
ist’ because they describe justification as consisting in internal relations among beliefs,
either – as in the former case – from a vertical relation of support between supposedly
basic beliefs and others dependent upon them, or – as in the latter – from the mutual
support of beliefs in an appropriately understood system.
Generally characterized, internalist theories assert or assume that a belief cannot be
justified for an epistemic subject S unless S has access to what provides the justification,
either in fact or in principle. These theories generally involve the stronger ‘in fact’
requirement because S’s being justified in believing p is standardly cashed in terms of
his having reasons for taking p to be true, where having reasons is to be understood in | Blackwell |
an occurrent sense.
Here an objection immediately suggests itself. Any S has only finite access to what
might justify or undermine his beliefs, and that access is confined to his particular view-
point. It seems that full justification for his beliefs would rarely be available, because his
experience would be restricted to what is nearby in space and time, and he would be
entitled to hold only those beliefs which his limited experience licensed.
A related objection is that internalism seems inconsistent with the fact that many
people appear to have knowledge despite not being sophisticated enough to recognize
that thus-and-so is a reason for believing p – that is the case, for example, with
children.
A more general objection still is that relations between beliefs, whether of the foun-
dationalist or coherence type, might obtain without the beliefs in question being true
of anything beyond themselves. One could imagine a coherent fairy tale, say, which in
no point corresponds to some external reality, but in which beliefs are justified never-
theless by their mutual relations.
This uneasy reflection prompts the thought that there should be a constraint on
theories of justification, in the form of a demand that there should be some suitable
connection between belief possession and external factors – that is, something other
than the beliefs and their mutual relations – which determines their epistemic value.
This accordingly prompts the idea of an alternative: externalism.
1.5 Reliability, causality and truth-tracking
Externalism is the view that what makes S justified in believing p might not be anything
to which S has cognitive access. It might be that the facts in the world are as S believes
them to be, and that indeed they caused S to believe them to be so by stimulating his
or her sensory receptors in the right kind of way. S need not be aware that this is how
his or her belief was formed. So S could be justified in believing p without being aware of what justifies it.
One main kind of externalist theory is reliabilism, the thesis – or cluster of theses –
having it that a belief is justified if it is reliably connected with the truth. According to
one influential variant, the connection in question is supplied by reliable belief-forming
processes, ones which have a high success rate in producing true beliefs. An example of
a reliable process might be normal perception in normal conditions.
Much apparent plausibility attaches to theories based on the notion of external
linkage, especially of causal linkage, between a belief and what it is about. An example
of such a theory is Alvin Goldman’s (1986) account of knowledge as ‘appropriately
caused true belief ’, where ‘appropriate causation’ takes a number of forms, sharing
the property that they are processes which are both ‘globally’ and ‘locally’ reliable – the
former meaning that the process has a high success rate in producing true beliefs,
the latter that the process would not have produced the belief in question in some ‘rel-
evant counterfactual situation’ where the belief is false. Goldman’s view is accordingly
a paradigm of a reliabilist theory.
An elegant second-cousin of this view is offered by Robert Nozick (1981). To the
conditions
(1) p is true
and
(2) S believes p
Nozick adds
(3) if p were not true, S would not believe p
and
(4) if p were true, S would believe it.
Conditions (3) and (4) are intended to block Gettier-type counter-examples to the
justified true belief analysis by annexing S’s belief that p firmly to p’s truth. S’s belief
that p is connected to the world (to the situation described by p) by a relation Nozick
calls ‘tracking’: S’s belief tracks the truth that p. He adds refinements in an attempt to
deflect the counter-examples that philosophers are always ingenious and fertile at
devising.
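The four conditions are often displayed using the subjunctive (counterfactual) conditional, standardly written '$\Box\!\to$'; the notation is an addition here for convenience, with '$B_{S}p$' abbreviating 'S believes that p':

\[
\begin{aligned}
(1)\;\; & p\\
(2)\;\; & B_{S}p\\
(3)\;\; & \neg p \;\Box\!\to\; \neg B_{S}p\\
(4)\;\; & p \;\Box\!\to\; B_{S}p
\end{aligned}
\]

On the usual possible-worlds reading of the subjunctive conditional, (3) says that in the nearest situations in which p is false S does not believe p, and (4) that in the nearest situations in which p is true S still believes it; together they spell out the tracking relation just described.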
If these theories seem plausible it is because they accord with our pre-theoretical
views. But as one can readily see, there are plenty of things to object to in them, and a | Blackwell |
copious literature does so. Their most serious flaw, however, is that they are question-
begging. They do not address the question of how S is to be confident that a given belief
is justified; instead they help themselves to two weighty realist assumptions, one about
the domain over which belief ranges and the other about how the domain and S are
connected, so that they can assert that S is justified in believing a given p even if what
justifies him lies outside his own epistemic competence. Whatever else one thinks of
these suggestions, they do not enlighten S, and therefore do not engage the same
problem that internalist theories address.
But worst of all – so an austere critic might say – the large assumptions to which
these theories help themselves are precisely those that epistemology should be exam-
ining. Externalist and causal theories, in whatever guise and combination, are better
done by empirical psychology where the standard assumptions about the external
world and S’s connections with it are premised. Philosophy, surely, is where these
premises themselves come in for scrutiny.
1.6 Knowledge, belief and justification again
Consider this argument: ‘If anyone knows some p, then he or she can be certain that
p. But no one can be certain of anything. Therefore no one knows anything.’ This argu-
ment (advanced in this form by Unger 1975) is instructive. It repeats Descartes’s
mistake of thinking that the psychological state of feeling certain – which someone can
be in with respect to falsehoods, as when I feel certain that Arkle will win the Derby
next week and am wrong – is what we are seeking in epistemology. But
it also exemplifies the tendency in discussions of knowledge as such to make the defi-
nition of knowledge so highly restrictive that little or nothing passes muster. Should
one care if a suggested definition of knowledge is such that, as the argument just quoted
tells us, no one can know anything? Just so long as one has many well-justified beliefs
which work well in practice, can one not be quite content to know nothing? For my
part, I think one can.
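Set out schematically (a reconstruction for exposition only, not Unger's own formulation, and with person variables suppressed), the argument runs as follows, where '$Kp$' stands for 'someone knows that p' and '$Cp$' for 'someone can be certain that p':

\[
\begin{aligned}
(\mathrm{P1})\;\; & \forall p\,(Kp \rightarrow Cp)\\
(\mathrm{P2})\;\; & \forall p\,\neg Cp\\
(\mathrm{C})\;\; & \therefore\;\forall p\,\neg Kp
\end{aligned}
\]

The inference is formally valid, so the complaint above is best read as directed at the premises: at (P1), if 'certain' names a psychological state one can occupy towards falsehoods, and at the restrictive conception of knowledge that makes (P1) look compulsory in the first place.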
This suggests that in so far as the points sketched in preceding paragraphs have inter-
est, it is in connection with the justification of beliefs and not the definition of knowledge
that they do so. Justification is an important matter, not least because in the areas of
application in epistemology where the really serious interest should lie – in questions
about the PHILOSOPHY OF SCIENCE (chapter 9), the PHILOSOPHY OF HISTORY (chapter 14)
or the concepts of evidence and proof in LAW (see chapter 13) – justification is the
crucial problem. That is where epistemologists should be getting down to work. By
comparison, efforts to define ‘knowledge’ are trivial and occupy too much effort in
epistemology. The disagreeable propensity of the debate generated by Gettier's
counter-examples – anticipated beautifully in Russell's review of James (Russell 1910:
95) – to proceed on a chessboard of '-isms', as exemplified above, is a symptom.
The general problem with justification is that the procedures we adopt, across all
walks of epistemic life, appear highly permeable to difficulties posed by scepticism. The
problem of justification is therefore in large part the problem of scepticism; which is
precisely why discussion of scepticism is central to epistemology.
2 Scepticism
Introduction
The study and employment of sceptical arguments might in one sense be said to define
epistemology. A chief epistemological aim is to determine how we can be sure that our
means to knowledge (here ‘knowledge’ does duty for ‘justified belief ’) are satisfactory.
A sharp way to show what is required is to look carefully at sceptical challenges to our
epistemic efforts, challenges which suggest ways in which they can go awry. If we are
able not just to identify but to meet these challenges, a primary epistemological aim will | Blackwell |
have been realized.
Scepticism is often described as the thesis that nothing is – or, more strongly, can be –
known. But this is a bad characterization, because if we know nothing, then we do not
know that we know nothing, and so the claim is trivially self-defeating. It is more telling
to characterize scepticism in the way just suggested. It is a challenge directed against
knowledge claims, with the form and nature of the challenge varying according to the
field of epistemic activity in question. In general, scepticism takes the form of a request
for the justification of those knowledge claims, together with a statement of the reasons
motivating that request. Standardly, the reasons are that certain considerations suggest
that the proposed justification might be insufficient. To conceive of scepticism like this is
to see it as being more philosophically troubling and important than if it is described as
a positive thesis asserting our ignorance or incapacity for knowledge.
2.1 Early scepticism
Some among the thinkers of antiquity – Pyrrho of Elis (c.360–c.270 BC) and his school,
and Plato’s successors in his Academy – expressed disappointment at the fact that cen-
turies of enquiry by their philosophical predecessors seemed to have borne little fruit
either in cosmology or ethics (this latter was broadly construed to include politics).
Their disappointment prompted them to sceptical views. The Pyrrhonians argued that
because enquiry is arduous and interminable, one should give up trying to judge what
is true and false or right and wrong; for only thus will we achieve peace of mind.
A less radical form of scepticism overtook Plato’s successors in the Academy. They
agreed with Pyrrho that certainty must elude us, but they tempered their view by
accepting that the practical demands of life must be met. They did not think it a work-
able option to ‘suspend judgement’ as Pyrrho recommended, and therefore argued that
we should accept those propositions or theories which are more PROBABLE (pp. 308–11)
than their competitors. The views of these thinkers, known as Academic sceptics, are
recorded in the work of Sextus Empiricus (c.150–c.225).
In the later Renaissance – or, which is the same thing, in early modern times – with
religious certainties under attack and new ideas abroad, some of the sceptical argu-
ments of the Academics and Pyrrhonians acquired a special significance, notably as a
result of the use to which René Descartes put them in showing that they are powerful
tools for investigating the nature and sources of knowledge.
In Descartes’s day the same person could be both astronomer and astrologer, chemist
and alchemist, or physician and magician. It was hard to disentangle knowledge from
nonsense; it was even harder to disentangle those methods of enquiry which might
yield genuine knowledge from those that could only deepen ignorance. So there was an
urgent need for some sharp, clean epistemological theorizing. In his Meditations (1986)
Descartes accordingly identified epistemology as an essential preliminary to physics and
mathematics, and attempted to establish the grounds of certainty as a propaedeutic to
science. Descartes’s first step in that task was to adapt and apply some of the traditional
arguments of scepticism. (I shall comment on his use of scepticism again later.)
The Anatomy of Scepticism
Sceptical arguments exploit certain contingent facts about our ways of acquiring,
testing, remembering and reasoning about our beliefs. Any problem that infects the
acquisition and employment of beliefs about a given subject matter, and in particular
any problem that infects our confidence that we hold those beliefs justifiably, threatens
our hold on that subject matter.
The contingent facts in question relate to the nature of perception, the normal human
vulnerability to error and the existence of states of mind – for example, dreaming
and delusion – which can be subjectively indistinguishable from those that we take to be veridical.