10 Formalism and Informalism in Theories of Reasoning and Argument
In this chapter I questioned the notion that argument interpretation and evaluation are matters of science, as distinct from art. Another version of this material was printed in the Computers and Philosophy Newsletter (Stanford University, Volume 4, December 1989, submitted 1987). That context was an early discussion of artificial intelligence and its prospects. Following the reflections of critics such as Hubert Dreyfus, I was inclined to be skeptical about the matter. Dreyfus and others argued that to understand discourse, we needed considerable background knowledge and awareness of context. His skepticism was supported by that of my computer scientist husband Anton Colijn, who was fond of saying that artificial intelligence was just around the corner, and that was where it would always be. (He still says this but less frequently than before.) I argued here for the claim that strict rules cannot be provided for algorithms required by artificial intelligence. But perhaps the presumption that strict (universal, exception-less) rules would be needed was somewhat naïve. I admit that I am not in a position to know.
In the eighties skeptical persons such as myself questioned the idea that algorithmic processes could support machine translation. Now, decades later, machine translation exists and even the skeptics of earlier decades have been known to resort to ‘Google Translate’ on occasion. The results, I gather, tend to be helpful but by no means elegant or smooth.
Current discussions and concerns about artificial intelligence are wide-ranging. Some thinkers, including Stephen Hawking and the Oxford philosopher Nick Bostrom, fear AI capable of super-human intelligence and motivated to destroy humankind. How relevant are philosophical questions about the connection between consciousness, thought, and intelligence? Mind, machine and body? What is the significance of facts about artificial intelligence systems beating experts at the games of chess (Garry Kasparov) and Go (Lee Sedol)? Many worry about widespread automation eliminating needed jobs. More limited concerns involve issues of legal and moral responsibility for accidents involving self-driving cars, or a flaw in an expert system leading to a faulty diagnosis and premature death. The reflections in this chapter are, at best, marginally relevant to these current problems and anxieties.
When this chapter was written, I thought that one needed considerable background knowledge and a good sense of context and nuance to understand whether an argument was put forward and what that argument was. I also thought that there was usually one correct answer to such questions. Now I would continue to support the first presumption while being less confident about the second. The notion of general rules that hold most but not all of the time, rules that are sound other things being equal (ceteris paribus), is a reasonable one. Of late, the phrase pro tanto seems to have replaced ceteris paribus, but the message remains the same.
If I were to question a central aspect of this chapter it would concern the assumption that strict exception-less rules for interpretation of discourse are not to be found. That claim, made in the latter part of the chapter, feels over-confident to me today. Perhaps the cues and hints used by human interpreters can be coded. I still feel inclined to doubt it, but admit that my doubt is based on an imperfect understanding of the possibilities.
In our century, logic is typically identified with formal logic, and formal logic is the study of proofs and rules of inference in axiomatized formal systems. Logic is also regarded as the science of argument assessment, as a study that will teach us how to understand and appraise the justificatory reasoning that people actually use. Logic is supposed to be both scientific and practical. Some texts advise that logic ‘operates like a machine fulfilling its function no matter who is pressing the button’, that it is a ‘science for evaluating argument’, that it is too objective to depend on ‘insights, intuitions, or feelings’. There is a tension in these views of logic. We cannot have it both ways – that logic is entirely formal and yet applies to real argumentation. Either logic includes much that is nonformal or it tells us only a small amount of what we need to know to understand and evaluate arguments.
In fact, argument evaluation is more an art than a science. It is not something that can be mechanized, and that has important consequences for the development of artificial intelligence systems. Understanding and evaluating arguments is an important part of human intelligence. If this cannot be done by rules and if artificial intelligence (even in its most sophisticated forms) must proceed by the application of rules, then there will be a necessary incompleteness to artificial intelligence. There will be a range of tasks it cannot fully accomplish. Such, at least, is the implication of the analysis developed here. To be sure, we can sometimes recast natural argumentation in the symbolism of formal systems and reach a conclusion about the deductive merits of the inferences upon which it depends. Frequently this is not possible, and even when it is, accomplishing the task presupposes significant nonformal insights.
Argument interpretation and evaluation form an art, an art requiring insight and judgment. This art can be cultivated by practice and enhanced by the teaching of rules of various kinds, but it cannot be exhaustively characterized by articulated rules — formal or otherwise. This can be shown even if we ignore the aspect of premise evaluation, which would be admitted by all to require general knowledge of the world. Even when we restrict ourselves to the interpretation of language and the evaluation of inferences, arguing well and criticizing well require more than the mechanical application of rules. It is not reasonable to expect mechanical decisions for the understanding and evaluation of substantive argumentation.
The lack of formalization in informal logic neither makes that subject an intellectual sham nor shows it to be a primitive stage on the way to a full understanding of argument. To speak of informal logic is not to contradict oneself but to acknowledge what should be obvious: that the understanding of natural arguments requires substantive knowledge and insight not captured in the rules of axiomatized systems. The informal fallacies, historically a central topic in informal logic, involve mistakes in reasoning that are relatively common, but neither formal nor formally characterizable in any useful way. The fact that an account of an informal fallacy makes it out to be just that does not show that it is imprecise or lacking in rigour.
The development, understanding, and evaluation of argumentation is far from being a mechanical task. It is a misunderstanding both of argumentation and of the human intelligence that constructs it to think that what goes on must be exhaustively representable in formal rules. To put the point provocatively, though computers may derive formulae, they don’t construct, understand, or appraise substantive argumentation. The evaluation of a colloquial nonformal argument is something quite different from the appraisal of a formal derivation by applying formal rules of inference.
1. Interpretation as a Nonformal Process
Comprehension is an unformalizable process striving towards an unspecifiable achievement and is accordingly attributed to the agency of a centre seeking satisfaction in the light of its own standard.1
To understand an argument, we must first understand the language in which its constituent sentences are expressed. The understanding of natural language presupposes much background knowledge that is substantive, not merely syntactic. It also requires the ability to grasp the meaning of aberrant and odd combinations not prescribed by the ‘rules’. Difficulties in completely formalizing rules for the understanding of natural language as it is actually used are well known and have been much discussed.
The literary analyst Stanley Fish, among others, has pointed out the extent to which meaning depends on context. Fish argues convincingly that even such apparently simple expressions as ‘private members only’ can be given an amazing variety of interpretations, provided we invent a corresponding variety of contexts of use. He resists the idea that any one context is ‘normal’ in a sense that would make the meaning of an expression in that context its real or literal meaning.
A sentence neither means anything at all, nor does it always mean the same thing; it always has the meaning that has been conferred on it by the situation in which it is uttered. Listeners always know what speech act is being performed, not because there are limits to the illocutionary uses to which sentences can be put, but because in any set of circumstances the illocutionary force a sentence may have will already have been determined.2
Machine translation has bogged down, except in carefully restricted domains, because of the necessity of referring to an apparently infinite amount of background knowledge.3
Less commonly remarked are those aspects of understanding that are pertinent to the identification of sentences as comprising an argument. To see a sequence of sentences as an argument is not only to understand the meaning of those sentences but to regard some of them as put forward to offer rational support for others. This understanding requires a notion of ‘logical flow’, and the imputation of an intent to justify a claim or claims to an identified, or hypothetical, arguer. We have seen that pragmatic factors serve to distinguish argument from explanation. The same may be said about the distinction between argument and description, narration, exemplification, jokes, and so on.
The relevance of context, human purposes, and background information to the understanding of an argument makes it impossible to identify arguments solely by reference to general lists of semantic and syntactic cues. The need for argument, the presence of argument, and the direction of argument are determined on the basis of our sense of what is going on. Consider the following example from Stuart Hampshire’s book on Spinoza.
A philosopher has always been thought of as someone who tries to achieve a complete view of the universe as a whole, and of man’s place in the universe; he has traditionally been expected to answer those questions about the design and purpose of the universe, and of human life, which the various special sciences do not claim to answer; philosophers have generally been conceived as unusually wise or all-comprehending men whose systems are answers to those large, vague questions about the purposes of human existence which present themselves to most people at some period of their lives. Spinoza fulfills all these expectations.4
Few persons schooled in the history of philosophy would interpret Hampshire as arguing here. Yet many introductory philosophy students understood the passage as an argument with the unstated conclusion that Spinoza was a philosopher. If we regarded this claim as one Hampshire was trying to justify, and if we thought Hampshire was the sort of writer who would argue about philosophy by pointing out what ordinary people have often thought, we might read the passage this way. (Spinoza fulfills the common expectation of what philosophers are like; therefore Spinoza is a philosopher.) Such a reading is implausible, but it takes background knowledge to see that. This is not to deny, of course, that in some other context it might be appropriate to try to prove that Spinoza is a philosopher. For example, if one were disputing with a logical positivist one might try to prove that point. But in that context the ordinary person’s casual assumptions as to what philosophy is would hardly be a plausible starting point for the argument.
Similar points may be made about the identification of premises and conclusions. Once we have understood the sentences of a discourse and have understood that the discourse is a justificatory one, there remains the further task of determining which sentences are premises and which are conclusions. Such matters are taken for granted but are by no means purely routine. Consider, for instance, the following relatively well-ordered and simple passage, which caused many undergraduates significant difficulty.
Philosophers of science are fond of claiming that a theory or model can never be disproved by a new fact, or even a set of facts, but only by a new and more comprehensive theory. While this may be a useful rule of thumb, it suggests misleadingly that individual findings cannot have revolutionary reverberations. In fact, when a solar eclipse in 1919 showed that certain predictions by Einstein of the way light would be deflected were correct, the theory of relativity gained immeasurably in stature. Conversely, proponents of the theory that intelligence is inherited suffered a severe blow when data presented by Sir Cyril Burt were shown to be fraudulent.5
This passage is the opening paragraph of an article which describes the artistic skills of an autistic child called Nadia, whose developmental pattern in drawing was so unusual as to upset previously confirmed theories on how children’s drawing develops.6 Many students, asked to analyze the passage, identified ‘it suggests misleadingly that individual findings cannot have revolutionary reverberations’ as the conclusion. Others thought that the conclusion was to be found in the first sentence.
The first two sentences serve as background; the author describes a common view, and then points out that this view is misleading in its suggestion that individual findings cannot have revolutionary implications in science. To say that it is misleading is to suggest that the implied view is false. Indeed, that is just the point the next two sentences support. The words ‘In fact’ indicate that at this point the author turns from the ‘misleading implication’ of a common view to his own line of thought. ‘Conversely’, introducing the final sentence, marks the contrast between a confirming instance and a disconfirming instance. To understand this passage properly, we have to be sensitive to the tone of the background comments (in ‘while this may be’), see that the author is rebutting the misleading implication of the standard view, and see how the two instances described serve as the basis for this rebuttal. The passage mixes description, argument, and meta-comment; the meta-level comment is not properly part of the argument, though it helps us to identify that argument.
This passage, though not especially tricky or exciting, indicates how much understanding is involved in the extraction of an argument from natural discourse. Such extraction requires semantic knowledge, syntactic knowledge, background factual knowledge, contextual awareness, and a general sense of how things ‘hang together logically’. Logically competent listeners and readers do much to extract the ‘logic’ of an argument from natural discourse, even in a case that is fairly straightforward.
Still more subtle semantic and background knowledge is involved in the understanding of the following argument from John Locke. The argument is a valid modus tollens and would naturally be read as such by philosophically trained readers. However, considerable recasting of the original wording is required in order to set it out explicitly in this form.
Syllogism is not the great instrument of reason. For if syllogism must be taken for the only proper instrument and means of knowledge, then before Aristotle there was not one man that did or could know anything by reason; and that since the invention of syllogism there is not one of ten thousand that does. But God has not been so sparing to men as to make them barely two-legged creatures, and left it to Aristotle to make them rational.7
Understanding that Locke uses a modus tollens inference here requires understanding ‘syllogism is the great instrument of reason’ and ‘syllogism must be taken for the only proper instrument and means of knowledge’ to mean the same thing, in this context. (They would not necessarily mean the same thing in other contexts.) We must also take the last sentence to be a rhetorically emphatic and original way of saying that it is not knowledge of the syllogism that makes men rational; this claim denies the consequent of the second sentence. At this point we have an argument whose validity can be appraised mechanically. However, to cast the argument in the form in which this is possible, we must make significant and subtle interpretive moves. These presume background knowledge, a sense for rhetorical flourish, and much more.
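Once those interpretive moves are made, the recast argument can be displayed schematically. The following rendering is a sketch only; the glosses of P and Q are my paraphrases, not Locke's wording:

```latex
% Modus tollens: from (P -> Q) and (not-Q), infer (not-P).
%   P: syllogism is the only proper instrument and means of knowledge
%   Q: before Aristotle, no one did or could know anything by reason
\[
\frac{P \rightarrow Q \qquad \neg Q}{\therefore\ \neg P}
\]
```

Note that the schema itself is trivial to check mechanically; all the interpretive labour lies in arriving at the glosses of P and Q.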
Such nonformal capabilities are, of course, also called into play when argumentative discourse relies on unstated premises or conclusions. Howard Pospesel quoted the following argument from a philosophical paper by Hans Hahn, thinking it suitable for students to practice symbolization and the application of validity tests to natural material. Its proper interpretation, however, presumes a degree of sensitivity and knowledge which would seem to make it unsuitable for that role.
The old conception of logic is approximately as follows: logic is the account of the most universal properties of things, the account of those properties which are common to all things; just as ornithology is the science of birds, zoology the science of all animals, biology the science of all living beings, so logic is the science of all things, the science of being as such. If this were the case, it would remain wholly unintelligible whence logic derives its certainty. For we surely do not know all things. We have not observed everything, and hence we cannot know how everything behaves.8
To understand this passage argumentatively we must read the long first sentence as background, see that the final sentence encapsulates a subargument, see that the subargument supports the claim in the second last sentence, see that the second last sentence together with an unstated premise supports the conditional sentence, and supply a final conclusion from an interpretation of the conditional sentence. To say the least, a lot of work is involved. The result:
1. We have not observed everything.
Thus,
2. We cannot know how everything behaves.
Thus,
3. We do not know all things.
So,
4. If logic were the science of the most universal properties of all things, logic would have no certainty.
5. Missing premise: Logic has certainty.
Therefore,
6. Missing conclusion: Logic is not the science of the most universal properties of all things.
The conclusion, which is implicit, is derived from a stated premise and an implicit premise by modus tollens. There is an argument here, and we can understand it, but our understanding presumes deletion, rearranging, and addition.
The first sentence is regarded as background and deemed not to be a premise because it is a description of an old conception not endorsed by the author. The rearranging of the last two sentences, yielding two subarguments, is based on the presence of ‘hence’ as an indicator word and the logical relations we perceive between not observing everything, not knowing how everything behaves, and not knowing everything. The addition of the premise is based on background knowledge as to what philosophers in general and Hahn in particular typically assume about logic, as well as on our perception that the addition of such a premise would make sense of the stated material: it generates a deductively valid argument yielding the implicit conclusion. The conclusion is added on the basis of the wording (‘if this were the case’) and background knowledge as to what logical positivists thought about logic and metaphysics. We can see that the interpretation of this passage as expressing a deductively valid argument is a complex and intricate task. It is by no means purely mechanical. In fact, all these examples indicate how much background knowledge, subtle verbal knowledge, and sense of logical direction are involved, merely in the identification of an argument.
The primary task of logic is the appraisal of inferences. In arguments, premises are the basis for inferring conclusions, and logic proper has the task of telling us whether it is legitimate to infer the conclusions from the premises. But extensive interpretation is needed for us to see where inferences are. This interpretive process is preformal and, on many accounts, prelogical.
What is uncontroversially logical is inference appraisal. Here too insight is required. We may use rules to evaluate inferences, but we have to see the argument as one of a type to know which sorts of rules to apply. The problem has often been raised by Massey, Finocchiaro, and others.9 It may be illustrated by the following simple example.
If it’s raining, the streets are wet. The streets are awfully wet, so I guess it has been raining.
We can find here a flawed deductive inference. If we regard the reasoning as deductive we will see it as a case of the fallacy of affirming the consequent. We can also look at it as a case of inference to the best explanation. Our sense of what is going on in the argument, of what the argument ‘hinges on’, of how the premises are supposed to lead to the conclusion, is presumed by our application of rules. Not only do we have to identify the premises and conclusion; we also have to isolate the basic argumentative structure, seeing which terms are crucial for the logical workings of the argument and which are not, so as to understand how the premises are supposed to lead logically to the conclusion.
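The deductive reading can be displayed schematically. In the following sketch, R and W are my abbreviations for ‘it is raining’ and ‘the streets are wet’; the point is that the invalid pattern differs from the valid modus ponens only in which premise is affirmed:

```latex
% Modus ponens (valid):              Affirming the consequent (invalid):
%   R -> W, R; therefore W             R -> W, W; therefore R
\[
\frac{R \rightarrow W \qquad R}{\therefore\ W}
\qquad \text{vs.} \qquad
\frac{R \rightarrow W \qquad W}{\therefore\ R \ \ \text{(invalid)}}
\]
```

Nothing in either schema tells us whether the speaker intends a deduction or an inference to the best explanation; that classification precedes the application of the rule.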
Rules do not tell us how to interpret or apply themselves. To think that they did would be to commit ourselves to an infinite regress of rules. This point, associated in our day with the later writings of Wittgenstein, was in fact anticipated by Kant.
If it (general logic) sought to give instructions how we are to subsume under these rules, that is, to distinguish whether something does or does not come under them, that could only be by means of another rule. This in turn, for the very reason that it is a rule, again demands guidance from judgment. Thus it appears that, though understanding is capable of being instructed, and of being equipped with rules, judgment is a peculiar talent which can be practiced only, and cannot be taught. It is the specific quality of so-called mother-wit; and its lack no school can make good. For although an abundance of rules borrowed from the insight of others may indeed be offered to, and as it were, grafted upon, a limited understanding, the power of rightly employing them must belong to the learner himself; and in the absence of such a natural gift no rules that may be prescribed to him for this purpose can ensure against misuse.10
Before we appraise inferences using rules (formal or otherwise), both interpretive and classificatory work is required. Such work presumes substantive knowledge, sensitivity to context, appreciation of nuances of meaning in context, recognition of subarguments, addition of implicit premises and conclusions, and the classification of arguments and subarguments as being of one type or another. This work is not done by the application of formal rules. To suppose that it is not only runs counter to what we know about the relevance of context to meaning; it is also introspectively implausible. Furthermore, to require rules for every move will lead to a regress of rules.
Some who seek a formal understanding of natural language would no doubt insist that in principle rules can be articulated to handle all of this. One can insist that whatever is done must be done by the application of rules. However, there is little to be said in favour of such a view and at least an infinite regress argument working against it.11 Given the sensitivity to context and nuances of meaning, substantive background knowledge, and sense of logical direction required to identify argumentation in natural discourse, the insistence that ‘in principle’ there are formal rules that cover all of this seems a priori and unwarranted.
Human beings do not seem to themselves or to observers to be following strict rules. On the contrary, they seem to be using global perceptual organization, making pragmatic distinctions between essential and inessential operations, appealing to paradigm cases, and using a shared sense of the situation to get their meanings across. Of course all this orderly but apparently non-rule-guided activity might nonetheless be the result of unconsciously followed rules. But when one tries to understand this as a philosophical proposal that all behavior must be understood as following from a set of instructions, one finds a regress of rules for applying rules.12
2. Form, Structure, and Logical Analogies
It is only in the context of a specific argument that we can say that a sentence ought to be analyzed as, say, relational rather than categorical. In some other argument, the same sentence might properly be analyzed in opposite fashion.13 (Stephen Barker, in Elements of Logic.)
Using a logical analogy, we can on occasion refute an argument by showing that another argument, relevantly similar to it, is inadequate. The second argument must duplicate the logical structure of the first and be flawed in an obvious way. Either it must have true premises and a false conclusion, or it must in some other way be transparently ‘absurd’. The technique of refutation by logical analogy is nonformal, in the sense that it does not require translation from natural into formal language. It may be used by untrained people in a sensitive and revealing way. It is sometimes described as ‘the method of counterexample’ and thought to reveal deductive invalidity. Such a construal suggests that the technique reveals deductive relationships, but in an informal way.14
Interestingly, the technique of logical analogy is applicable to nondeductive arguments as well as to deductive ones. The suggestion that A caused B because A preceded B can be countered by logical analogy; yet causal reasoning is standardly regarded as inductive. The suggestion that homosexuality is wrong because it is unnatural can be countered by logical analogy, and the most natural way of taking that argument would be as conductive in the sense of offering a relevant but not sufficient reason for the conclusion.
The technique of logical analogy is pertinent to our present topic in two ways. First, it reveals again the web of nonformal judgments that enter into the understanding of an argument. Second, it typically serves to isolate as the structure of an argument something that is not formal in any standard sense of that term. This phenomenon raises the question of the feasibility or desirability of formally expressing rules for material inferences. The use of logical analogy illustrates how nonformal judgments are presumed in the understanding of arguments, because it requires that we distinguish those aspects of the argument that are essential to its inferential relationships from those that are incidental. Of course, this same distinction is also tacitly at work when we represent natural arguments in formal languages. What we regard as the correct logical form of an argument depends on preformal judgments about how that argument works – which are its significant and which its insignificant features. In logical analogies, the structure of an argument is identified and duplicated, and an inference is criticized by parallel argument.
Let us consider two examples. The first is taken from a book by the British doctor and child expert Penelope Leach. Discussing the issue of whether group child care arrangements are suitable for children under three years of age, Leach says:
Many people would argue that while all of the foregoing is, or may be true, toddlers who are actually committed to group-care soon grow out of being toddlers and therefore become socialized more quickly than they would have done at home. ‘The others will lick him into shape and he’ll learn by imitating them’ … People who take this line are usually those who want very much to believe that group care is acceptable for the very young, and who therefore use the observable fact that they do survive and develop, one way or another, as evidence to support it. So go back to that thirteen-year-old who finds herself in charge of the family. [Leach alludes here to the case of a thirteen-year-old girl who cooks and cares for younger siblings after the death of her mother.] She too will adapt. She too will learn ‘how to behave’, will find ways of managing and will, after a fashion, develop.
Does that prove such responsibilities are good for her? That these are the optimum conditions for adolescents and a useful way of short-circuiting its normally tumultuous path? No, of course not. Nobody would argue that, because nobody has any stake in the thirteen-year-olds running families. But it is the same argument. Just as it is more appropriate for that girl to acquire maternal and household responsibilities out of mature sexuality than tragic deprivation, so it is better for the toddler to acquire socialized behavior out of self-motivated maturity rather than sad necessity.15
As the crux of the argument she criticizes, Leach identifies the inference from the fact that children can develop and adapt in group care to the conclusion that group care is acceptable for these children. She marks the significant move in the argument as being that of inferring acceptability from de facto developmental adequacy as attested by adaptation and survival. This ‘core’ of the original argument is then paralleled in the case of the teen-age girl. In that analogous case, Leach asserts, one would not infer the conclusion from the premise; hence one should not do it in the original case (the toddler) either. The logically parallel argument is without force; the original argument – called ‘the same argument’ – is therefore also without force.
A second example comes from Stanley Cavell’s discussion of C.L. Stevenson’s noncognitivism in ethics.
For then (that is, on a noncognitivist account) we are going to have to set up a display of humorous tolerance and allow that some ‘ethical’ disagreements cannot be ‘settled’ ‘rationally’ on such grounds as this: whatever reasons are offered them, when ‘an oversexed, emotionally independent adolescent argues with an undersexed, emotionally dependent one about the desirability of free love’, their disagreement may be ‘permanently unresolved’. You might as well say that if these two went on permanently arguing about whether men do or do not descend from apes, then the science of biology would lack an ‘exhaustive’ or ‘definitive’ method of proof.16
Cavell identifies Stevenson’s line of argument as one in which de facto failure by two mismatched individuals to resolve a dispute is grounds for the permanent unresolvability of that dispute and ultimately a basis for concluding that the subject in which the dispute occurs lacks a definitive method of proof. Cavell draws a logical analogy by mirroring this line of reasoning, substituting biology for ethics. Assuming that his audience will be unwilling to infer either that biological disputes are permanently unresolvable or that biology lacks a definitive method of proof, Cavell takes himself to have refuted the original line of reasoning.17
Refutation by logical analogy is based on duplicating the ‘core’ of an argument while varying some or all non-essential features. In the toddler example, the core of the argument is: ‘x survives in C; therefore C is acceptable for x’, with ‘survives in’ and ‘acceptable for’ being the focal concepts of the inference. Cavell’s arguments have as their core: ‘x and y, who are temperamentally mismatched, disagree about z; therefore disputes about z are irresolvable; therefore subject S, in which z is located, has no definitive method of proof.’
In its natural use, the technique of logical analogy makes this logical core apparent by repetition rather than by articulation. The logical essentials of the argument are repeated in the parallel argument and we ‘see’ them as we see sameness of shape in a blue circle and a red circle. The common structure is identified without being represented as a separate item. This common structure is the core of the argument. It is the part of the argument that must be preserved in the logical analogue. It is the aspect essential to the way the premises and conclusion are to connect in the original argument. When we represent this core, substituting letters for variable elements in the argument, we have what might be called a primitive formalization of the argument.
At this point, a question arises as to whether this logical ‘core’ should be regarded as the form of the argument. On some accounts of logical form, this would be the case.18 However, the central terms (‘survives in’, ‘acceptable for’, ‘temperamentally mismatched’, ‘has no definitive method of proof’, and so on) are not in any standard sense logical words. They are not syncategorematic, as are ‘and’, ‘or’, ‘not’, ‘if then’, ‘necessary’, and ‘possible’, nor even close to being syncategorematic as are such terms as ‘know’ and ‘ought’, around which epistemic and deontic logics have been developed.19 There is no formal system that would encompass the above structures as its basic structures in any way analogous to that in which modus ponens is a basic structure in propositional logic. Furthermore, for reasons we shall explore later, any attempt to construct such a formal system would seem misguided.20 The shared structure is not a formal structure; it is a meaning structure, one shared by several arguments and shareable by more. Thus, commonality is not formality.
Successfully using the technique of logical analogy means identifying the core of an argument, the forms or meanings on which its connection of premises and conclusion depends, and reproducing this core in another argument in which some or all of the other elements are varied. This understanding requires the ability to see that some terms are essential to the way an argument is supposed to work, whereas others are incidental. We might term this the capacity for logical insight.
This capacity is presumed even when we formally represent such simple arguments as ‘Peter is sad; Joe is sad; therefore Peter and Joe are sad’. Representing this argument as ‘R; S; therefore R,S’, we indicate that it depends on the way assertion and conjunction work, and not on who Peter and Joe are, or on the fact that they are both said to be sad. Nor does it depend on the fact that they are both said to possess the same characteristic, or on the fact that this characteristic is sadness. Barker’s comment that the same sentence may have different ‘forms’ when it appears in different arguments is illustrated here. If the argument were ‘Peter is sad; Joe is sad; therefore there are at least two sad people’, we would have to formalize so as to reveal that it is one and the same property both Peter and Joe possess. Also we would need to indicate that Peter and Joe are distinct individuals.
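The contrast between the two formalizations can be displayed in standard notation (the particular symbolizations below are mine, offered as a sketch):

```latex
% First argument: only assertion and conjunction matter.
% R: Peter is sad.   S: Joe is sad.
R,\; S \;\therefore\; R \wedge S

% Second argument: the shared property and the distinctness of the
% individuals must now be represented.
% Fx: x is sad.   p: Peter.   j: Joe.
Fp,\; Fj,\; p \neq j \;\therefore\; \exists x\, \exists y\, (x \neq y \wedge Fx \wedge Fy)
```

The same two premise sentences receive quite different symbolic treatments, depending on what the inference is supposed to turn on.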
Distinguishing the essential from the incidental can require going behind surface grammatical structure. For instance, few philosophers would deem these two arguments to share the same logical form:
(1) Joe is famous; anyone who is famous is rich; therefore Joe is rich.
(2) Sherlock Holmes is fictional; anyone who is fictional is real; therefore Sherlock Holmes is real.
Even though (2) is, on one level, semantically parallel to (1), there is something ‘fishy’ about (2). We might say that in (2) the second premise is clearly false, and regard the difference between (2) and (1) as being solely due to that fact. Such a view would mean regarding (1) and (2) as having the same form, that of a deductively valid argument. However, few will wish to proceed in this way. We will hesitate, because (2) uses ‘fictional’ and ‘real’, where (1) uses standard predicates. To grant that (2) has the same form as (1) requires us to ignore this difference, and the difference is too important to gloss over in this way.
This example illustrates again how preformal judgments enter into decisions as to what the form of an argument is. Formal analyses often help us to determine whether arguments are valid or not. However, they also articulate our preformal judgments of validity and invalidity, and provide a vehicle in which we express those judgments. If the validity of an argument is pre-formally controversial, any treatment which represents it as having the form of a deductively valid argument will also be controversial.
Whether the ‘core’ of an argument is represented formally or semantically, the identification of that core depends on our sense of how the argument is supposed to work, our preanalytic beliefs as to whether it does work,
and related philosophical judgments. Thus, logical perception is not a mechanical matter. Furthermore, when we isolate the inference structure of an argument, it will not always be a formal structure, not in any conventional sense of ‘formal’ at least. As the examples from Leach and Cavell illustrate, logical analogies may reveal structures relating terms that are not logical terms and that hold little promise as central terms for a formal system. Should we construct formal systems to articulate and ‘make precise’ the nonformal judgments that logical analogies enable us to make? Or can we rest content with concrete evidence of the inadequacy of material inferences? This is the same question that arises when we consider the frequent claim that informal fallacies will not be properly understood until they are given a formal analysis.
3. Prospects for Formalizing Informal Fallacies
Finocchiaro contends that ‘there are probably no common errors in reasoning’, meaning that there is no sense of ‘the same error’ that allows the same error to occur frequently. Lambert and Ulrich, in a recent text, say that for informal fallacies ‘even when one learns to recognize alleged examples of the fallacies, it is difficult to see what common factor makes them all instances of the same fallacy.’21 Thus we see a concern for what different instances of the same informal fallacy have in common.
It appears that there is a dilemma here for the informal fallacies approach. If two different arguments share a feature, F, which characterizes the reasoning in both, then isn’t F (as shared) a formal feature? If that were the case, the analysis of informal fallacies as informal would be mistaken in principle. We have already seen, however, that the logical core of an argument, though necessarily general, is not necessarily formal. Would formalization be useful at this point?
To consider this question in the concrete, let us look at two arguments which might be said to exemplify the informal fallacy of ‘two wrongs’. The first was used by a professor in France who sought to defend philosophy programs against the accusation that they were not turning out competent graduates. He said:
Our degree is not recognized, but we have more students than ever. They come because they think they might learn something. Sure, there are idiots. And I have given credits to them. There are bigger idiots in the government. Is it up to me to be more rigorous than the electorate?22
Here we have a defense of the practices of philosophers on the grounds that the electorate has selected idiots to serve in the government. Granting that we commonly regard it as undesirable for ‘idiots’ to serve a crucial public role, and that the philosopher’s degree has been deemed inadequate, it appears that this author is defending what seems inadequate (standards among philosophy professors in France) by appealing to something else that is accepted (idiots in government), but should not be.
The second argument concerns the Canadian seal hunt, and is taken from a letter to the editor.
I am a Newfoundlander, and I cannot help but feel some animosity toward those people who approach the seal hunt issue from a purely emotional stance. Surely this is not the way they look in their butcher’s freezer, when they are looking for pork chops. Yet the slaughtering method approved by the Department of Health officials for swine is hideous, and nowhere near as humane as the dispatching of a young seal.23
In this passage the writer implicitly defends the seal hunt by pointing out that worse methods of slaughtering animals are condoned. He makes it clear that he thinks these worse methods are wrong (‘hideous’), and infers that there can be no rational basis for opposition to the seal hunt (opponents approach it from a ‘purely emotional stance’). One wrong is accepted, so another comparable one should not be criticized.
For convenient reference, let us call the first argument the French argument and the second one the Newfoundland argument. These arguments have many similarities and differences.24 So far as their reasoning is concerned, they have something in common. The following statement characterizes the reasoning in both:
W: From the existence and tacit acceptance of one wrong, it is inferred that another comparable wrong should not be criticized.
In the French argument, the inference is from the electorate’s condoning incompetence in the government to the implicit conclusion that attacks on standards in philosophy programs are inappropriate. In the Newfoundland argument, it is from the condoning of hideous slaughter methods for pigs to the implicit conclusion that the slaughter of seals should not be criticized. W describes the logical core of both arguments. If the inference described in W is a mistake, then both arguments embody this mistake. If it is a common mistake in reasoning, then both arguments commit it. If the fallacy is informal, there is no mystery as to how it is possible that two cases of ‘the same’ informal fallacy can have something in common. What they have in common is (among other things) that W characterizes both.
At this point the question is whether the fallacy is informal. It is commonly regarded as such, to be sure. In specifying W, we use some words conventionally identified as logical words: ‘and’, ‘or’, ‘not’, and so on. But clearly it is not on these terms that the inference depends. The thrust of the argument is from the acceptance of one wrong to the illegitimacy of criticizing another comparable wrong. The problem with such arguments is with relevance, the relevance of the acceptance of the first wrong to the acceptability of the second one. The relevance problem arises because the acceptance of one wrong thing seems to have no bearing on the legitimacy of criticizing a distinct thing. To show that it is inappropriate to criticize some action, we have to show something specifically about either that action (that it is right) or that criticism (that it will be counterproductive, that it is hypocritical, that it is ill-founded …). We might argue effectively by analogy if we compared the case with another, obviously similar, in which the action was right or the criticism of the action misguided. (If A is permissible and B is relevantly similar to A, then B is permissible. Or, if it would be inappropriate to criticize A and B is relevantly similar to A, it would be inappropriate to criticize B.) However, as described in W, analogy reasoning is misused. If the cited wrong action is wrong and the action under consideration is relevantly similar to the cited action, this shows that the action under consideration is wrong – a conclusion that runs contrary to the aim of the arguments.
The mistake described in W does not seem to be a formal mistake. Relevance is a nonformal matter; we have to judge what is relevant and not relevant in order to formalize. There are no formal rules for the proper use of analogies. If the mistake is one of relevance and improper use of analogy, the fallacy is informal.
However, it is trivially possible to formalize W, and it may be on this basis that many people regard the informal status of the informal fallacies as merely temporary. To formally express W, we simply make the requisite stipulations. Let ‘x’ and ‘y’ range over items for which moral appraisal and acceptance are appropriate. Given these stipulations, we can now represent the inference described in W as being based on the following conditional statement:
Quasi formal W: For all x and for all y, if x is wrong and is accepted and y is purportedly wrong, then if x is tacitly accepted and x and y are comparable actions, y should not be criticized.
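Rendered in first-order notation, Quasi formal W might look as follows (the predicate letters are my own stipulations, in the spirit of the ‘trivial’ formalization just described):

```latex
% Wx : x is wrong            Ax : x is accepted
% Py : y is purportedly wrong
% Tx : x is tacitly accepted
% Cxy: x and y are comparable actions
% Ky : y should be criticized
\forall x\, \forall y\, \bigl[ (Wx \wedge Ax \wedge Py) \rightarrow
  \bigl( (Tx \wedge Cxy) \rightarrow \neg Ky \bigr) \bigr]
```

Every substantive term enters as an uninterpreted predicate; the only logical vocabulary is the quantifiers and connectives.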
A significant anomaly here is that the pivotal terms that constitute the core of the argument will be predicates, not logical operators. With the constituent terms of the two wrongs arguments – and with most logical analogies, where material inference is under scrutiny – there is little that can be said about the merits of the argument in formal terms.
There is a sense in which one can represent anything formally; one can stipulate definitions and plug in logical symbols for the logical words used. The real question is not whether W and comparable substantive principles are in any sense formalizable, but rather whether it is useful to formalize them.
If one could construct a formal system in which the key terms appeared in the axioms and rules, and if within that system, one could prove that the inference described in W is an incorrect inference, then a formalization might be genuinely useful and revealing. However, the ‘if’ here is a big one. The requisite terms would be impossible to define with the precision a formal system would require. Several, and perhaps all, are essentially contestable or have vague perimeters. This means that one could not state axioms that would correspond fully with extra-formal judgments of meaning and truth. Also, as noted above, one cannot really operate logically with the concepts, save through more standard logical operators such as ‘and’, ‘not’, ‘or’, and so on. The proposed ‘system’ would do little with its key terms, save assert or deny the truth of conditional statements which would most reasonably be interpreted as representing material inferences.
If one were to construct such a system and apply it in order to determine the logical merits of such arguments as the French argument and the Newfoundland argument, it is virtually certain that key judgments would remain extra-formal. For instance, the judgment that two cases are ‘comparable’ will have to be made.25 The idea that the existence of one wrong is irrelevant to the criticism of another comparable wrong may well have to be put into the system as an axiom. If it is, then the sense in which the system will be able to provide a justification for that judgment, raising it above the level of intuition, will be tenuous.
A complicating factor at this point is that most formal systems are systems of deductive logic. It will be appropriate to appraise such inferences as that in ‘two wrongs’ arguments using these systems only if we have pre-formally judged that those arguments are deductive. That is, we must have interpreted the logical relations intended in the argument as being such that they would work deductively or not at all. For many natural arguments, such an interpretation is implausible. Thus, even granted that a pertinent formal system could be constructed, its applicability would be in question.
Leaving such speculations, we can say that both the French argument and the Newfoundland argument are grounded on a questionable material inference, that described in W. To say that they are thus grounded is to presume, first, that both arguments are intended to cast doubt on the legitimacy of criticizing actions or policies by citing accepted wrongs; second, that the material inference in each is describable by W; and third, that that material inference is incorrect. Quite obviously, no formal system is going to handle the first two aspects; it is only the third that is in question here. Once we have isolated the inference and described it as being of a general type, we approach the territory where logic in the classical sense could have something to say. It is there, if anywhere, that we expect formalization to be useful.
The problem is that the inference is material and substantive rather than logical. To defend the judgment that such an inference is incorrect, we will have to argue that the tacit acceptance of something by people does not show that that thing itself is right, much less that something else, however closely analogous to it, is right. We will have to argue that consistency should not be pushed to the point of demanding that existing evils be accepted because some evils are accepted. All of this connects directly to topics in moral philosophy. We will rely on principles that may be controversial; we will use terms that are hard to define with precision and are essentially contestable. Our explanation as to why the inference is mistaken, and why ‘two wrongs’ constitutes a fallacy will not be a completely straightforward descriptive one that all competent observers will accept. Obviously, there will be normative components.
The same kind of point can be made for many other informal fallacies. Consider, for instance, the ad hominem. In an ad hominem argument, an inference is made from characteristics of an arguer to the falsehood or implausibility of his claim or theory or the invalidity of his argument. Ad hominem arguments are considered to be fallacious because, in general, such a connection does not hold up. Typically, negative features of an arguer or his background or circumstances have no bearing on the substantive truth of his claims or on the merits of his arguments. However, there are complicated exceptions. For example, the arguer’s theory or claim may be about himself and may be rendered inductively unlikely by some feature of his character or background. In addition, considerations of the credibility and expertise of the arguer often bear on the acceptability of his claims to audiences, especially when the audiences cannot judge the claims independently for circumstantial reasons.26 What counts as an allegation about an arguer, what counts as the content of his conclusion, theory, or argument, what counts as evidence for or against truth, what counts as for or against acceptability, how that is related to truth – all these matters are relevant to a full and accurate account of ad hominem. Although someone’s being a liar is irrelevant to the question of whether slaves built the Pyramids, his being a liar may be relevant to the question of the acceptability, in some context, of his claim that slaves built the Pyramids. To develop these ideas we will need to make substantive epistemic judgments. Similar points can be made about the misuse of authority and the argument from ignorance, and indeed about most other traditional informal fallacies. One could, in principle, articulate all these nonformal judgments and contestable decisions in a formal theory. However, much reasoning would go on outside that theory and precious little within it.
The theory might give an impression of rigor, but serve little other purpose.
The absence of formal theory in such contexts produces in some circles an intense desire for formality and precision. But far from the potentially contentious nature of such accounts revealing the need for formal theories of the informal fallacies, it constitutes a powerful reason against their development. This is because the development and use of formal theories tends to hide controversies rather than admit or resolve them. It buries controversial assumptions in definitions and rules. Key issues come to be disguised by technical apparatus, and become immune from criticism as something that is simply part of the system. Debatable principles and decisions are not eliminated by the development and application of formal systems. They are merely relocated, so that the uninitiated have more trouble identifying them and the initiated are trained to forget about them. Formal accounts may give the impression that controversial questions have been resolved and essentially contestable terms defined with final precision. However, that impression would be misleading. A formal system purporting to represent and rationalize judgments about the fallaciousness of ‘two wrongs’, ad hominem, inappropriate authority, or straw man would not be more precise than a nonformal account. It might appear more precise, but it would be pseudo-precise, because the appearance of rigor would misrepresent the phenomena.
4. On the Necessity and Limits of Rules
Let us consider the following argument:
(1) If there is an activity that human beings do intelligently then there is a rule or set of rules describing how that activity is done or should be done.27
(2) If there is a rule describing how some activity is done or should be done, then that rule can be articulated by someone – if not by the agent simultaneous with his engagement in the activity, then by that agent at another time, or by someone else.28
(3) If a characterizing rule or set of rules can be articulated, then it can be formalized, at least in some relatively weak sense of ‘formalized’.29
(4) If a rule or set of rules can be weakly formalized, then at least in principle it is possible to set out a mechanical decision procedure for the activity it describes.
Therefore,
Any activity that human beings do intelligently can, in principle, be captured by a set of rules which set out a mechanical decision procedure.
Or, reason can be mechanized.
This argument, which seems to have wide appeal, contradicts the results of preceding sections here. How are we to get around it?
First, some clarification. We need to distinguish four types of rules. Which type is being referred to will matter very much for our appraisal of the argument.
The relevant types are:
(a) Strict formal rules. These are rules which hold universally and can be applied by observing purely typographical criteria. Such rules are found only in formal axiomatized systems. To operate within those systems is to manipulate symbols according to these rules. Example: Every occurrence of ‘-’ may be replaced by ‘v’.
(b) Strict material rules. These are rules that hold universally but are not purely typographical. Example: ‘Every living human being has a heart.’
(c) General rules. These are rules which hold most of the time, but which have a ‘ceteris paribus’ clause. Example: ‘Weakened people who have not had shots against smallpox and who are exposed to smallpox will catch smallpox.’30
(d) Rules of thumb. A rule of thumb is a rough guideline for action, a kind of policy we may follow in a case where we do not have the time or expertise to rely on rules of the other types. It lacks any theoretical rationale, or may (more typically) be based on guesses or unanalyzed experience. Example: Prospective tenants who are very anxious to show you references are not a good bet. To operate according to a rule of thumb is to be prepared for things not to go as the (rough) rule would indicate; we know the rule is at best a rough approximation.
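The difference between type (a) rules and the rest can be made vivid in code. A strict formal rule is applied by inspecting only the shapes of symbols, never their meanings; the sketch below (my illustration, not from the text) applies the example rule by literal character replacement:

```python
def apply_typographical_rule(formula: str) -> str:
    """Apply the type (a) example rule: every occurrence of '-'
    may be replaced by 'v'.

    The operation is purely typographical: the function never
    consults what '-' or 'v' mean, only their shapes."""
    return formula.replace("-", "v")

# The rule applies in exactly the same way to any string of
# symbols, meaningful or meaningless.
print(apply_typographical_rule("p - q"))   # p v q
print(apply_typographical_rule("- - -"))   # v v v
```

Rules of types (b) through (d), by contrast, cannot be applied without judging what their terms mean and whether the case at hand falls under them.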
If we look back at the argument introducing this section, and ask how ‘rule’ is being used, we can see that it cannot be in sense (a) or in sense (d). In sense (a) the first premise would be false.31 In sense (d), it would also be false, but for a different reason. The rule of thumb would not fully characterize the intelligent activity qua intelligent. It is not this sort of rule that a person who insists that intelligent activities must be subsumed under rules has in mind. A rule of thumb allows too much to be haphazard and unexplained by the rule. The idea behind the argument, expressed in the first premise, is precisely contrary to this. It is that what is done intelligently must be systematic, cannot be random, must be done for a reason, where that reason is general.
What remains are (b) and (c). If the reference in the argument is to rules in sense (b) or a sense that allows for general rules, as well as strict formal and material rules, then premises (2), (3) and (4) would clearly be true. However, premise (1) is false if the only admissible rules are strict rules. One is hard pressed even to find exceptionless rules outside the domain of formal systems. Even the counterparts of formal rules of logic have exceptions that have to be ruled out by provisos and tacit conditions of understanding, when we take them to apply to ordinary speech and thought. (As when we explain why ‘he’s attractive, but he’s not’ is not a contradiction and why ‘boys will be boys’ and ‘business is business’ are not tautologies.) It is well known that scientific laws give predictions for ‘normal circumstances’, predictions that hold ‘other things being equal’. Society’s prescriptive laws can be couched in absolute terms and thought to apply without proviso, but such a conception cannot account for actual legal judgment, where unusual circumstances are often taken into account. Here, whether we speak of the judgments that are in fact made, or of those judgments that ought to be made, the point remains. (For example, even where one is strictly liable, as in the case of storekeepers selling adulterated milk, a storekeeper who was bound and hypnotized and as a result persuaded to adulterate his milk would not be penalized in the normal way.)
Thus, to make premise (1) come out as true, we must be using ‘rule’ in sense (c), or in some further sense which allows that rules could be of type (a) or (b) or (c). The crucial point is that intelligent activity presumes either general rules or strict rules. It is not enough to extend the range from strict formal rules to strict material rules. We must move away from strict rules as a fixed requirement of intelligent activity. Thus, we must understand the first premise as follows:
(1) If there is an activity that human beings do intelligently, then there is a rule or set of rules such that those rules are either formal, material or general, or a combination of these, and such that those rules describe how that activity is done or should be done.
On this understanding, premise (1) may be true.32
However, we now have a flexible understanding of rules. This more flexible understanding of what rules involve affects the truth of the other premises. A ceteris paribus rule can be used by a person with judgment, as he or she will have a sense for those cases where things are not ‘equal’ and the rule does not apply. But can a ceteris paribus rule be used mechanically? Can it be weakly formalized? If we are to formally express the rule, we cannot just leave the ceteris paribus clause as it stands. We have to spell it out – state explicitly what other things might bear on the situation and how. We must enumerate all the unusual circumstances that would make the rule inapplicable. This is where the argument breaks down. Spelling out all these conditions is simply not possible, because the range of relevant factors, and the subtle variations in their combinations, are too great. There will always be events and circumstances we have not predicted and could not predict in advance, and features that turn out to be relevant that we would never have considered mentioning.
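The difficulty can be sketched in code (a deliberately crude illustration of mine; the condition names are invented): mechanizing the type (c) smallpox rule forces the open-ended ‘other things being equal’ clause into a finite, explicit list of defeating conditions, and any such list omits circumstances no one thought to mention.

```python
# Invented defeating conditions for the type (c) smallpox rule.
# Mechanization forces us to enumerate them in advance; the list
# is necessarily incomplete, which is precisely the difficulty.
KNOWN_EXCEPTIONS = {
    "prior natural immunity",
    "atypical non-infectious strain",
}

def predicts_infection(weakened: bool, unvaccinated: bool,
                       exposed: bool, circumstances: set) -> bool:
    """Mechanical reading of: 'Weakened people who have not had
    shots against smallpox and who are exposed to smallpox will
    catch smallpox', other things being equal."""
    if not (weakened and unvaccinated and exposed):
        return False
    # The ceteris paribus clause, reduced to a finite check: the
    # rule fires unless a *listed* exception is present. Unforeseen
    # defeating circumstances are silently ignored.
    return not (circumstances & KNOWN_EXCEPTIONS)

# A listed exception blocks the prediction, as intended:
print(predicts_infection(True, True, True, {"prior natural immunity"}))      # False
# An unanticipated circumstance slips past the mechanical check:
print(predicts_infection(True, True, True, {"quarantined before contact"}))  # True
```

A person with judgment would recognize the second case as defeating the rule; the mechanical version cannot, because that circumstance was never enumerated.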
The problem is that any interpretation of rules and rule-following which will make it plausible to see all human intelligent activity as rule-governed will make it false that such rules can be formalized and programmed. The argument as stated exploits the vagueness of ‘rule’ and seems persuasive only for this reason.
That there is a limit to rules has been shown many times and in many ways – by Carroll, by Kant, by Wittgenstein, and by Gödel. Their results are well-known but have not had the impact they should have on attempts to mechanize reason.33 Let us review the results. Kant, in the passage quoted earlier, argued that it is one thing to have a rule and another to decide when to apply it. For instance, if your rule is that one will contradict himself whenever he asserts and denies the same thing, and that one should not contradict himself, you will have to decide whether ‘it’s raining and it’s not’ amounts to assertion and denial of the same thing. The same sort of problem is well-known in arithmetical contexts. Though one and one make two, one raindrop and another raindrop do not make two raindrops. As one author put it, ‘numbers as realities misbehave.’34 This doesn’t refute arithmetic, because we do not take raindrops and their mingling to be the sort of phenomena to which the rules of arithmetic apply.
Kant said that the proper application of rules required ‘judgment’ or ‘mother wit’, something he regarded as unteachable. He argued against further rules telling you how to apply rules, on the grounds that this demand leads to an infinite regress of rules. Suppose that we had a rule telling us to apply arithmetical truths only to discrete entities. This could be helpful, but we would have to know how to apply this rule, and we would then face the necessity of making another decision about raindrops: are they discrete entities? At some point we have to decide whether the entities to which the rule might be applied do or do not instantiate the categories used to express the rule. Those categories are couched in stable, universal terms, yet reality is particular, variable, and fluctuating. A new case may differ from others, and yet seem significantly similar. Does the rule apply? The matter requires judgment and intelligent decision, not a mechanical metarule telling us how to apply the first rule.
Where Kant called for ‘mother wit’, Wittgenstein, seeing the same problem, appealed to custom, form of life, and the training of others who will not respond in the way you want if you do not go on in the appropriate way. ‘Rules must come to an end somewhere’, he famously said. The Wittgensteinian terminus is our common culture in which people mutually coordinate to delimit the possibilities. We live together in a way that permits the application of arithmetic to apples and oranges, but not to raindrops or the union of ovum and sperm.
Custom is not the only suggestion for bottoming out, of course. Douglas Hofstadter seeks to end the regress in a common neural structure in human brains, saying:
Rules get used and messages do get understood. How come?… Since they are physical entities, our brains run without being told how to run.35
However we propose to end the regress of rules, it has to end somehow. The limitation emphasized by Wittgenstein and by Kant in the passage quoted is that rules do not tell you how to apply themselves, and further rules cannot always do this, on pain of regress.
This theme about the need for rules to end is not quite the same limitation pointed out by Lewis Carroll in “What the Tortoise Said to Achilles”. There, the stubborn Tortoise refuses to use rules of deductive inference in the normal way. He wants every rule he would use as a basis for inference written into his argument explicitly as another premise. Granting ‘A and B’, and granting ‘If A and B, then Z’, is not enough to make the Tortoise infer Z by modus ponens. To do so, he would have to use modus ponens tacitly. He would have to use logic without saying what logic tells him to do. This the stubborn beast refuses to do. He will not put his trust in logic. Remarking that ‘whatever logic is good enough to tell me is worth writing down’, the Tortoise demands an articulation of the requisite conditional. Meeting this demand results in a new argument with the premises: ‘A and B; if A and B, then Z; if A and B and if A and B then Z, then Z’. Here again, to infer Z, he would have to use a rule not stated within the argument as a premise. The Tortoise demands that that rule be made explicit, and generates a fourth argument. In the nature of the case, the Tortoise cannot be satisfied. He will never infer Z from A and B – despite the fact that A and B entail Z, and that he recognizes the logical truth that if A and B, then Z.
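The regress can be mimicked mechanically (a toy sketch of mine, not part of Carroll’s dialogue): each round writes the licensing conditional into the argument as a new premise, which the Tortoise then refuses to use without yet another premise.

```python
def tortoise_expand(premises, conclusion, rounds):
    """At each round, add as an explicit premise the conditional
    'if <all current premises>, then <conclusion>' that the Tortoise
    demands. The premise list grows forever; no round ever licenses
    drawing the conclusion itself."""
    premises = list(premises)
    for _ in range(rounds):
        antecedent = " and ".join(f"({p})" for p in premises)
        premises.append(f"if {antecedent}, then {conclusion}")
    return premises

# Two rounds of the Tortoise's demand, starting from Carroll's premises:
for premise in tortoise_expand(["A and B", "if A and B, then Z"], "Z", rounds=2):
    print(premise)
```

However many rounds are run, the output is only a longer list of premises; the step from premises to Z is never among them.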
The Carroll Dialogue shows that even when the mind is operating according to clearcut rules, such rules cannot all be made explicit in the context of use. They can be made explicit in another context – an observer context. But when an observer reasons, he himself will necessarily reason according to some rules that are not explicitly stated. These rules, in turn, can be made explicit in another context. There can be no one context in which all rules of reasoning are explicit at once. To draw an inference that is in accordance with a rule of inference is in one sense to use a rule of inference. But it cannot require stating that rule of inference explicitly as a premise in one’s reasoning. Contrary to the Tortoise, there are necessarily some things logic ‘tells you’ that cannot be written down.
At this point, Carroll’s dialogue offers intimations of Gödel’s result, which states that for any formal system of interesting complexity there is a necessary incompleteness. There are always statements expressible within the system and informally provable to be true, yet not provable according to the strict formal rules of the system itself. This fact about the limits of formal rules cannot be eliminated by constructing stronger systems, for such systems will only present the same kind of problem at a higher level. New statements are constructible with the same property – informal provability in the face of formal intractability. Critics of fallacy analysis have urged that the necessary incompleteness of any list of fallacies is a serious problem. If this problem were devastating for fallacies, it would undermine positive proof theory as well.36
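For readers who want the result in its standard modern form (a textbook formulation, not a claim about Gödel’s original wording):

```latex
% First incompleteness theorem, modern statement: for any consistent,
% effectively axiomatizable theory $T$ that interprets basic arithmetic,
% there is a sentence $G_T$ in the language of $T$ such that
\[
  T \nvdash G_T
  \quad\text{and}\quad
  T \nvdash \neg G_T ,
\]
% yet, on the assumption that $T$ is consistent, $G_T$ can be
% recognized as true by informal reasoning about $T$ itself.
```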
Kant and Wittgenstein showed how something other than explicit rules is presumed by the application of rules. Gödel showed that in any given formal system of interesting richness, informally derivable truths will exceed formally derivable ones. Carroll showed that in an argument, movement from premises to conclusion presumes the tacit acceptance of rules not, in that context, made explicit. Given these well-known results, the idea that aspects of argumentation and its appraisal are not fully mechanical should not be surprising. These theoretical results coincide beautifully with indications from concrete problems that arise in argument analysis and evaluation.
That such results are so conveniently forgotten in our culture, and that the pre-Gödelian idea that reasoning must be bound by formal rules remains influential, may be due to scientistic bias. We feel such a perverse attraction for mechanistic and technologically fashionable models of human intelligence that we forget what some of our most profound and subtle reasoners have shown us about reasoning itself. Rules are not perfectly strict, and they take us only so far.
Notes
1. Michael Polanyi, Personal Knowledge (New York: Harper Torchbooks, 1959), p. 398.
2. Stanley Fish, ‘Normal Circumstances and Other Special Cases’, in Is There a Text in This Class? (Cambridge, Mass: Harvard University Press, 1980), pp. 291-292. I wish to endorse only the point that context is relevant to the determination of meaning, not the further point, sometimes implied by Fish, that discourse means whatever the reader or listener makes it mean.
3. Compare Hubert L. Dreyfus, What Computers Can’t Do: A Critique of Artificial Reason (New York: Harper and Row, 1972), Chapters 1, 5, and 6. I owe much to this discussion.
4. Stuart Hampshire, Spinoza, (Middlesex, Eng.: Penguin Books, 1951), p. 12.
5. Howard Gardner, ‘Nadia’s Challenge’, Psychology Today, November, 1980.
6. Students asked to analyze the passage had had several weeks of lectures on argument structure and were given the same background information stated here.
7. John Locke, An Essay Concerning Human Understanding, edited by A.D. Woozley (New York: New American Library, 1964), Book Four, Chapter XXVII, pp. 417-418.
8. Howard Pospesel, Arguments: Deductive Logic Exercises (Englewood Cliffs, N.J.: Prentice Hall, 1971).
9. Compare my discussion in ‘Four Reasons There are No Fallacies.’
10. Kant, Critique of Pure Reason, transl. by Norman Kemp Smith (New York: St. Martin’s Press, 1965), pp. 177-8.
11. The issue is discussed in section 4 below.
12. Hubert Dreyfus, What Computers Can’t Do, p. 198.
13. Stephen Barker, Elements of Logic, Third Edition (New York: McGraw-Hill, 1980), p. 74.
14. I have discussed this point at greater length in ‘Logical Analogies’, Informal Logic, 1986.
15. Penelope Leach, Who Cares?, (Middlesex, Eng: Penguin Books, 1979), p. 82.
16. Stanley Cavell, The Claim of Reason, (New York: Oxford University Press, 1979), p. 270. Cavell uses this type of argument frequently. Additional examples may be found on pp. 278-9 and 304-5, and indeed, throughout the book.
17. I don’t wish to commit myself to these assumptions here. Suffice it to say that if the parallel is apt, either both arguments are logically weak or both are logically strong.
18. Compare, for instance, Rolf George’s account in ‘Bolzano’s Consequence, Relevance, and Enthymemes’, Journal of Philosophical Logic, Vol. 12, (1983).
19. Of course philosophers have no non-arbitrary way of limiting this list of logical words. The ‘and so on’ clause is very important. It is sometimes said that logical words are syncategorematic. In the light of developing epistemic and deontic logics, some would accept ‘know’, ‘believe’, and ‘ought’ as logical words. These are no more syncategorematic than ‘survives’ or ‘good’ or ‘acceptable’ or ‘produces’, which are not commonly regarded as logical words. Compare the discussions in Arthur Pap, Semantics and Necessary Truth (New Haven and London: Yale University Press, 1958), pp. 130-143 and 157; and Susan Haack, Philosophy of Logics (Cambridge, Eng.: Cambridge University Press, 1978), pp. 22-27.
20. The matter is discussed in section 3 below.
21. Compare the discussion in ‘Four Reasons There are No Fallacies?’, above.
22. Cited in the Canadian Association of University Teachers Bulletin for September, 1978.
23. Letter to the editor. Toronto Globe and Mail, January 3, 1979.
24. This point is pedagogically and philosophically significant. Given an exemplar of an informal fallacy and asked to analyze a number of passages, a person is often inclined to focus his or her attention on aspects of those passages that are similar to the exemplar. Varying the exemplar can highlight different resemblances and differences. This duck-rabbit sort of phenomenon partially explains why the same argument may be seen as exemplifying several different fallacies at once and is an important objection to teaching argument analysis by teaching fallacies (formal or informal). The reader’s logical perception is distorted by his attempts to see the exemplar in the passage and his judgment is confused by the possibility of the passage exemplifying several ‘different’ mistakes at once. Compare my discussion in ‘Who Says There are No Fallacies?’, Informal Logic Newsletter, Volume V, Number 1 (December, 1982).
25. For problems that may arise about this judgment, see Leo Groarke, ‘When Two Wrongs Make a Right’, Informal Logic Newsletter, Vol. V, Number 1 (December, 1982), pp. 10-13.
26. Compare my discussion in ‘Ad Hominem: Revising the Textbooks’, Teaching Philosophy, Volume 6, Number 1 (January, 1983), pp. 13-24. See also Robert H. Ennis, ‘The Believability of People’, The Educational Forum, March, 1974, pp. 347-354.
27. In some contexts the distinction between how activities are done (performance) and how they should be done (norms of competence) is of great importance. However, it does not affect the argument here, provided one sticks to one or the other meaning consistently.
28. The qualification is inserted to avoid trivial refutation on the basis of limitations of attention.
29. Formalized in a weak sense does not require that the rule be embedded in an interesting or useful formal system.
30. In fact, it is extremely difficult to find exceptionless rules outside formal systems. To do so, one must delimit the scope of the rule in such a way as to exclude all exceptions. Mechanisms for doing this often seem ad hoc. First, the exception is noted; then a restriction on the application, scope, or meaning of appropriate terms is adopted so that this apparent exception does not count as a genuine exception. This procedure indicates that one’s sense that an exception must be made is logically prior to one’s articulation of the exception clause. Compare Vann McGee, ‘A Counterexample to Modus Ponens’, Journal of Philosophy LXXXII, no. 9 (September, 1985), pp. 462-470.
31. Reasons for this judgment should be apparent from section I above. See also Hubert Dreyfus, What Computers Can’t Do: A Critique of Artificial Reason.
32. I do not wish to commit myself to saying that it is true, because in fact, I am not sure that there is good reason even to go this far. The interpretation is plausible; the statement might be true and it gives a moderately charitable version of the argument that merits study and analysis.
33. I refer to extreme optimism in some circles – most notably those of artificial intelligence and cognitive science research – about the prospects for programming human intelligence and creative thought and also, at a less profound level, to the idea, still accepted in some circles, that there is a contradiction lurking in the very concept of informal logic.
34. Douglas Hofstadter, Gödel, Escher, Bach (New York: Basic Books, 1979), p. 56.
35. Ibid., p. 170.
36. This point has been nicely and emphatically made by Dennis Rohatyn in an unpublished paper, ‘Can Fallacy Theory be Saved?’ (1985). Rohatyn puts it this way: ‘If the best we can hope for from systems of unsurpassable rigor is to generate solvably unsolvable problems then a fortiori we should not make demands on fallacy theory which are in principle unfulfillable … Granting that the meta-theory of first-order quantification is subject to Goedelian limiting conditions, it makes no sense for Massey or anyone else to rule out pursuing fallacy theory on grounds of incompleteness.’ (p. 8 of manuscript)