Wednesday, August 26, 2009

A rant about "deductive"

Don't diss the logician

I’m on my way back from the Second Conference on Concept Types and Frames in Language, Cognition and Science in Düsseldorf. It was a nice conference that gathered linguists, cognitivists, philosophers of science and logicians interested in the functional approach to concepts.

One of the things that surprised me was that both experienced cognitivists (like Paul Thagard) and younger researchers still stick to the distinction between inductive and deductive types of reasoning and attach so much importance to it. Interestingly, “deductive” in their use has a pejorative content, and the term is sometimes used condescendingly to emphasize that whatever it is that logicians do is boring and useless, and that pretty much the only source of insight and real knowledge is “inductive inference” taking place in “the real brain”. So, here’s a short rant about this sort of attitude (Frederik is reading over my shoulder and tossing in his remarks).

To start with, I don’t think I know a logician alive who still uses the word “deductive” in any serious ahistorical context. This is because the notion is so worn out that different people associate it with many different things. Instead, more specific terms are used that separately capture different things that you might mean when you say “deductive”.

Roughly, a consequence operation is, for instance, often simply thought of as a set of pairs of sets of sentences. It is called structural if it is closed under substitution. That’s one thing you might have in mind: deductive means defined in terms of rules (and maybe axioms) which essentially make no distinction between formulas of the same syntactic form.

Another way to think about these things is to require that a deductive consequence simply be truth-preserving (vaguely: it is impossible for the consequence to be false when the premises are true). This interpretation is not syntactic, but rather model-theoretic. A truth-preserving consequence doesn’t have to be structural, and a structural consequence doesn’t have to be truth-preserving. Another sense you might associate with being deductive is being both structural and truth-preserving (in which case you still get a multitude of consequence operations, depending on what language and model theory you pick, and what you take to belong to your logical vocabulary).

Yet another interpretation you can take is to say that something is a deductive consequence of a given set of premises if it follows from them by classical logic – this notion is sometimes used by those cognitivists who think that logic just is classical logic. Although this consequence is structural, whether it is truth-preserving when it comes to natural language depends on what you think about the correctness of certain natural-language inferences. For instance, you might be a relevantist, in which case you are inclined to say that classical logic allows you to infer too much.

Another notion simply requires a deductive consequence to satisfy Tarski’s conditions, or some of them, or some of them together with other conditions of a similar type. Yet another idea is to make no reference to a formal system whatsoever and assume that a sentence A is a deductive consequence of a sentence B iff “If B, then A” is analytic (standard qualms about analyticity aside). So, in general, the logician’s conceptual framework is full of notions more precise than “deductive”, and the word “deductive” itself seems unclear and a tad outdated.
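For concreteness, the Tarski conditions mentioned above are standardly stated for a consequence operation Cn mapping sets of sentences to sets of sentences; here is a textbook formulation (notation mine, not tied to any particular logic), together with the structurality and truth-preservation requirements:

```latex
% Tarski's conditions on a consequence operation Cn,
% for all sets of sentences X and Y:
\begin{align*}
  X &\subseteq \mathrm{Cn}(X)
    &&\text{(reflexivity)}\\
  X \subseteq Y &\implies \mathrm{Cn}(X) \subseteq \mathrm{Cn}(Y)
    &&\text{(monotonicity)}\\
  \mathrm{Cn}(\mathrm{Cn}(X)) &= \mathrm{Cn}(X)
    &&\text{(idempotence)}
\end{align*}
% Structurality: Cn commutes with substitutions, i.e.
\[
  e[\mathrm{Cn}(X)] \subseteq \mathrm{Cn}(e[X])
  \qquad \text{for every substitution } e.
\]
% Truth-preservation, model-theoretically: for every model M,
\[
  M \models X \;\implies\; M \models A
  \quad \text{for every } A \in \mathrm{Cn}(X).
\]
```

Note that the first three conditions mention neither substitutions nor models, which is why structurality and truth-preservation come apart from them (and from each other) in the ways described above.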

But let us even suppose we fix on the notion of being deductive as being validated by classical logic (this seems to be the best you can do if you want to make it easy for the cognitivists to argue that deductive inferences are uninformative). Why on earth would you think that deductive reasoning can only give you boring and useless consequences that you were already aware of, unless what you take to be the most prominent example of a deduction is one of the slightly obvious syllogisms, most likely employing Socrates and his mortality?

The thing is, human beings are not logically omniscient (I myself, for instance, often feel dumb when I stare at a deductive proof I can’t grasp after half an hour). In fact, the history of mathematics is a good source of examples where prima facie well-understood premise sets led to surprising consequences. Just because the truth of a conclusion is guaranteed by the truth of the premises doesn’t mean that once we believe the premises we are actually aware that they lead to this conclusion. Take Russell’s paradox. A rather bright dude named Frege spent years without noticing a fairly simple piece of reasoning whose conclusion was to him somewhat surprising. Take Gödel’s incompleteness theorem(s). A rather well-known set of mathematical truths together with a bit of slightly complicated deductive reasoning led to one of the most important discoveries in twentieth-century logic, which stunned a bunch of other not-too-dumb mathematicians. If you still think that deductive inferences give nothing but boring and obvious conclusions, think again!
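To make the point vivid, here is essentially the whole of the reasoning Frege missed, in its standard modern presentation:

```latex
% Russell's paradox from naive comprehension.
Naive comprehension gives, for any formula $\varphi(x)$, a set
$\{x : \varphi(x)\}$. Take $\varphi(x)$ to be $x \notin x$ and let
$R = \{x : x \notin x\}$. Instantiating the defining property of $R$
to $R$ itself yields
\[
  R \in R \;\leftrightarrow\; R \notin R,
\]
a contradiction obtainable in a few lines of classical (indeed, even
intuitionistic) logic.
```

A deduction this short, from premises Frege believed, with a conclusion that wrecked his system: hardly boring or obvious in advance.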

Two points about the opposition between the deductive and the inductive. First of all, unless you define inductive as non-deductive, the distinction is not exhaustive. For instance, if inductive inferences are supposed to be those that lead to a general conclusion, we’re missing non-deductive inferences with particular conclusions (in history, for example, one uses certain general assumptions and knowledge about present facts to surmise something particular about the past). In this respect, the deductive-reductive distinction introduced by the Lvov-Warsaw school sounds a bit neater (look it up).

Another thing is that people often speak of inductive inferences as if they didn’t have anything to do with deduction (the following point was made by Frederik). Quite to the contrary, certain facts about what is deducible and what isn’t always lie in the background when you’re assessing the plausibility of an inductive inference. For instance, you want the generalization you introduce to explain certain particular data you’re generalizing from, and one of the most obvious analyses of explanation uses the notion of deducibility. Also, you don’t want your new generalization to contradict your other data and the other generalizations you have introduced before: but hey, isn’t the notion of consistency highly dependent on your notion of derivability?

Having said that, I also have to emphasize that this doesn't mean I take non-deductive inferences (whatever they are) to be uninteresting; indeed, the question of how we come to accept certain beliefs other than by deducing them (whatever that consists in) from other beliefs is a very hard and interesting problem. What I object to, rather, is drawing cut-and-dried lines between these types of reasoning and saying that only one of them is interesting.

Saturday, August 22, 2009

A book on adaptive logics in progress...

Diderik Batens is working on a book about adaptive logics. He has made drafts of the first few chapters available online and invites comments. Here.

Monday, August 17, 2009

Frames, Frames and Frames

1. The paper on dynamic frames has been accepted and is forthcoming in the Logic Journal of the IGPL. As I understand their self-archiving policy, it can't be publicly accessible for 12 months after it's published by OUP. Hence, I'm making the final version available now; it will stay up until the official publication. If you feel like grabbing it before it disappears, it's here.

2. In the same vein, in Ghent this Friday (August 21) we're having a mini-workshop on frame theory. If you're around at that time, feel free to swing by. There's gonna be an outing afterwards.

Title: Frames, Frames and Frames

Time: Friday, August 21. 17:00-19:00 (There will be three talks, 30 minutes each + discussion)

Room 2.19, Centre for Logic and Philosophy of Science, Universiteit Gent, Blandijnberg 2


1. Capturing dynamic frames. It's based on the paper I just mentioned: I explain what frames are, how certain frames can be expressed by sets of first-order formulas, and how an adaptive strategy can be applied to reasoning with a conceptual framework when faced with an anomaly.
2. Induction from a single instance and dynamic frames. It reports the content of a joint paper with Frederik Van De Putte; basically, we discuss how the background knowledge needed for a distinction between plausible and implausible cases of induction from a single instance can be formulated within frame theory, and how the theory provides a nice framework for talking about this sort of reasoning as relying on certain second-order inferences.
3. Similarity and dynamic frames. I discuss Bugajski's algebraic semantics for the similarity relation, indicate its weaknesses, and provide a relational semantics that's simpler and satisfies more of Williamson's requirements for a four-place similarity relation. Then, I discuss Bugajski's argument to the effect that interesting similarity structures can be generated by a set of properties only if those properties aren't sharp. To criticize it, I describe how non-trivial similarity structures can be generated by sets of sharp properties, if these are viewed within the framework of dynamic frame theory.

Monday, August 10, 2009

NCM 09 (part 2)

... and the postponed report on Non-Classical Mathematics 2009 continues...

The second talk was given by Giovanni Sambin. He talked about his minimalist foundation and about a way constructive topology can be developed over a minimalist foundation. It's quite interesting to see how much stuff can be done constructively. Also, Giovanni is a devoted and really charming constructivist. I was chatting with him at a pub one night, and it was only when I found myself almost converted to constructivism that I knew it was time to go home.

By the way, among the many inaccurate things that are being said about Gödel's theorem (like these) you can find a remark that Gödel's incompleteness and undefinability proofs/theorems don't work in intuitionistic mathematics. Actually, they do. And the person to talk to is Giovanni, who worked out all the details, making sure everything is constructive.

Arnon Avron talked about a new approach to Predicative Set Theory. Roughly, the underlying principles of predicative mathematics are:
I. Higher-order constructs are acceptable only when introduced through non-circular definitions referring only to constructs introduced by previous definitions.

II. The natural number sequence is a well understood concept and as a totality it constitutes a set.
It is well known that Feferman has pursued the project and has shown how a large part of classical analysis can be developed within it. The system, however, is not too popular, partially because it uses a rather complex hierarchy of types, which makes the theory more complicated than, say, ZFC.

Arnon Avron discussed an attempt to simplify predicative mathematics by getting rid of the type hierarchy and developing a type-free predicative set theory. The idea is that the comprehension schema is restricted to those formulas that satisfy a syntactically defined safety relation between formulas and variables. The relation resembles a syntactic approximation to the notion of domain-independence used in database theory, and the intuition is that acceptable formulas define a concept in an acceptable way independent of any extension of the universe.
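Schematically (my reconstruction from the talk; the exact definition of the safety relation is Avron's and is purely syntactic), the contrast is between naive comprehension and its safety-restricted version:

```latex
% Naive comprehension (inconsistent, by Russell's paradox):
\[
  \exists y\,\forall x\,\bigl(x \in y \leftrightarrow \varphi(x)\bigr)
  \qquad \text{for every formula } \varphi.
\]
% Safety-restricted comprehension: only instances in which the
% formula stands in the safety relation to the variable are admitted:
\[
  \exists y\,\forall x\,\bigl(x \in y \leftrightarrow \varphi(x)\bigr)
  \qquad \text{provided } \varphi \text{ is safe with respect to } x,
\]
% where, intuitively, safety guarantees that the collection defined by
% the formula is the same in every extension of the universe (compare
% domain-independence of queries in database theory).
```

The Russellian formula $x \notin x$ fails the safety requirement, which is how the paradox is blocked without a hierarchy of types.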

A replacement for Journal Wiki

Here is a new database (by Andrew Cullison) gathering data about experiences with philosophy journals. I mentioned that it was in preparation before. Now it seems to be up and running (although the import of the journal wiki data is yet to happen). It is certainly more user-friendly than its predecessor.