The Axiom of Determinacy, Forcing Axioms, and the Nonstationary Ideal by W. Hugh Woodin (De Gruyter Series in Logic and Its Applications, De Gruyter). The starting point for this monograph is the previously unknown connection between the Continuum Hypothesis and the saturation of the nonstationary ideal on ω₁; and the principal result of this monograph is the identification of a canonical model in which the Continuum Hypothesis is false. This is the first example of such a model and moreover the model can be characterized in terms of maximality principles concerning the universal-existential theory of all sets of countable ordinals. This model is arguably the long sought goal of the study of forcing axioms and iterated forcing but is obtained by completely different methods; for example no theory of iterated forcing whatsoever is required. The construction of the model reveals a powerful technique for obtaining independence results regarding the combinatorics of the continuum, yielding a number of results which have yet to be obtained by any other method.
This monograph is directed to researchers and advanced graduate students in Set Theory. The second edition is updated to take into account some of the developments in the decade since the first edition appeared; this includes a revised discussion of Ω-logic and related matters.
As always I suppose, when contemplating a new edition one must decide whether to rewrite the introduction or simply write an addendum to the original introduction. I have chosen the latter course and so after this paragraph the current edition begins with the original introduction and summary from the first edition (with comments inserted in italics and some other minor changes) and then continues beginning on page 18 with comments regarding this edition.
The main result of this book is the identification of a canonical model in which the Continuum Hypothesis (CH) is false. This model is canonical in the sense that Gödel's constructible universe L and its relativization to the reals, L(ℝ), are canonical models, though of course the assertion that L(ℝ) is a canonical model is made in the context of large cardinals. Our claim is vague; nevertheless the model we identify can be characterized by its absoluteness properties. This model can also be characterized by certain homogeneity properties. From the point of view of forcing axioms it is the ultimate model at least as far as the subsets of ω₁ are concerned. It is arguably a completion of P(ω₁), the powerset of ω₁.
This model is a forcing extension of L(ℝ) and the method can be varied to produce a wide class of similar models, each of which can be viewed as a reduction of this model. The methodology for producing these models is quite different from that behind the usual forcing constructions. For example the corresponding partial orders are countably closed and they are not constructed as forcing iterations. We provide evidence that this is a useful method for achieving consistency results, obtaining a number of results which seem out of reach of the current technology of iterated forcing.
The analysis of these models arises from an interesting interplay between ideas from descriptive set theory and from combinatorial set theory. More precisely it is the existence of definable scales which is ultimately the driving force behind the arguments. Boundedness arguments also play a key role. These results contribute to a curious circle of relationships between large cardinals, determinacy, and forcing axioms. Another interesting feature of these models is that although these models are generic extensions of specific inner models (L(ℝ) in most cases), these models can be characterized without reference to this. For example, as we have indicated above, our canonical model is a generic extension of L(ℝ). The corresponding partial order we denote by Pmax. In Chapter 5 we give a characterization for this model isolating an axiom (*). The formulation of (*) does not involve Pmax, nor does it obviously refer to L(ℝ). Instead it specifies properties of definable subsets of P(ω₁).
The original motivation for the definition of these models resulted from the discovery that it is possible, in the presence of the appropriate large cardinals, to force (quite by accident) the effective failure of CH. This and related results are the subject of Chapter 3. We discuss effective versions of CH below.
Gödel was the first to propose that large cardinal axioms could be used to settle questions that were otherwise unsolvable. This has been remarkably successful particularly in the area of descriptive set theory where most of the classical questions have now been answered. However after the results of Cohen it became apparent that large cardinals could not be used to settle the Continuum Hypothesis. This was first argued by Levy and Solovay.
Nevertheless large cardinals do provide some insight into the Continuum Hypothesis. One example of this is the absoluteness theorem of Woodin (1985). Roughly this theorem states that in the presence of suitable large cardinals CH "settles" all questions with the logical complexity of CH.
More precisely, if there exists a proper class of measurable Woodin cardinals then Σ²₁ sentences are absolute between all set generic extensions of V which satisfy CH.
The results of this book can be viewed collectively as a version of this absoluteness theorem for the negation of the Continuum Hypothesis (¬CH).
We begin with the following question.
Is there a family of ω₂ many stationary subsets of ω₁ such that A ∩ B is nonstationary whenever A and B are distinct members of the family?
The analysis of this question has played (perhaps coincidentally) an important role in set theory particularly in the study of forcing axioms, large cardinals and determinacy.
The nonstationary ideal on ω₁ is ω₂-saturated if there is no such family. This statement is independent of the axioms of set theory. We let I_NS denote the set of subsets of ω₁ which are not stationary. Clearly I_NS is a countably additive uniform ideal on ω₁. If the nonstationary ideal on ω₁ is ω₂-saturated then the boolean algebra P(ω₁)/I_NS is a complete boolean algebra which satisfies the ω₂ chain condition. Kanamori (2008) surveys some of the history regarding saturated ideals; the concept was introduced by Tarski.
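For reference, the saturation property just defined can be written out symbolically as follows (this is only a restatement of the definition above, using the ideal I_NS):

```latex
% I_NS is omega_2-saturated iff the quotient algebra P(omega_1)/I_NS
% has the omega_2 chain condition, i.e. there is no family of omega_2
% many stationary sets with pairwise nonstationary intersections.
\[
  \mathcal{I}_{\mathrm{NS}}\ \text{is}\ \omega_2\text{-saturated}
  \;\Longleftrightarrow\;
  \neg\,\exists\,\langle A_\alpha : \alpha<\omega_2\rangle\,
  \bigl[\ \text{each}\ A_\alpha\subseteq\omega_1\ \text{is stationary and}\
  A_\alpha\cap A_\beta\in \mathcal{I}_{\mathrm{NS}}\ \text{for}\ \alpha\neq\beta\ \bigr].
\]
```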
The first consistency proof for the saturation of the nonstationary ideal was obtained by Steel and VanWesep (1982). They used the consistency of a very strong form of the Axiom of Determinacy (AD); see (Kanamori 2008) and (Moschovakis 1980) for the history of these axioms.
Steel and VanWesep proved the consistency of "The nonstationary ideal on ω₁ is ω₂-saturated" assuming the consistency of "AD_ℝ + Θ is regular".
AD_ℝ is the assertion that all real games of length ω are determined and Θ denotes the supremum of the ordinals which are the surjective image of the reals. The hypothesis was later reduced by Woodin (1983) to the consistency of ZF + AD. The arguments of Steel and VanWesep were motivated by the problem of obtaining a model of ZFC in which ω₂ is the second uniform indiscernible. For this Steel defined a notion of forcing which forces over a suitable model of AD that ZFC holds (i.e. that the Axiom of Choice holds) and forces both that ω₂ is the second uniform indiscernible and (by arguments of VanWesep) that the nonstationary ideal on ω₁ is ω₂-saturated. The method of (Woodin 1983) uses the same notion of forcing and a finer analysis of the forcing conditions to show that things work out over L(ℝ). In these models obtained by forcing over a ground model satisfying AD not only is the nonstationary ideal saturated but the quotient algebra P(ω₁)/I_NS has a particularly simple form.
We have proved that this in turn implies AD^L(ℝ) and so the hypothesis used (the consistency of AD) is the best possible.
The next progress on the problem of the saturation of the nonstationary ideal was obtained in a series of results by Foreman, Magidor, and Shelah (1988). They proved that a generalization of Martin's Axiom which they termed Martin's Maximum actually implies that the nonstationary ideal is saturated. They also proved that if there is a supercompact cardinal then Martin's Maximum is true in a forcing extension of V. Later Shelah proved that if there exists a Woodin cardinal then in a forcing extension of V the nonstationary ideal is saturated. This latter result is most likely optimal in the sense that it seems very plausible that "The nonstationary ideal on ω₁ is ω₂-saturated" is equiconsistent with "There exists a Woodin cardinal".
There was little apparent progress on obtaining a model in which ω₂ is the second uniform indiscernible beyond the original results. Recall that assuming that for every real x, x# exists, the second uniform indiscernible is equal to δ¹₂, the supremum of the lengths of Δ¹₂ prewellorderings. Thus the problem of the size of the second uniform indiscernible is an instance of the more general problem of computing the effective size of the continuum. This problem has a variety of formulations; two natural versions are combined in the following:
The second of these formulations involves the notion of a universally Baire set of reals which originates in (Feng, Magidor, and Woodin 1992). Universally Baire sets are discussed briefly in Section 10.3. We note here that if there exists a proper class of Woodin cardinals then a set A ⊆ ℝ is universally Baire if and only if it is ∞-weakly homogeneously Suslin, which in turn holds if and only if it is ∞-homogeneously Suslin. Another relevant point is that if there exist infinitely many Woodin cardinals with a measurable cardinal above them and if A ⊆ ℝ is universally Baire, then L(A, ℝ) ⊨ AD⁺ and so A belongs to an inner model of AD. The converse can fail.
More generally one can ask for any bound, provided of course that the bound is a "specific" ω_α which can be defined without reference to 2^ℵ₀.
For example every Σ¹₂ prewellordering has length less than ω₂, and if there is a measurable cardinal then every Σ¹₃ prewellordering has length less than ω₃. A much deeper theorem of Jackson is that if every projective set is determined then every projective prewellordering has length less than ω_ω. This combined with the theorem of Martin and Steel on projective determinacy yields that if there are infinitely many Woodin cardinals then every projective prewellordering has length less than ω_ω. The point here of course is that these bounds are valid independent of the size of 2^ℵ₀.
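The bounds just cited can be tabulated as follows (hypothesis on the left, bound on the right):

```latex
% Bounds on lengths of projective prewellorderings,
% each valid independent of the size of 2^{\aleph_0}:
\[
\begin{aligned}
&\text{(ZFC)} && \text{every } \Sigma^1_2 \text{ prewellordering has length } < \omega_2,\\
&\text{(a measurable cardinal)} && \text{every } \Sigma^1_3 \text{ prewellordering has length } < \omega_3,\\
&\text{(projective determinacy)} && \text{every projective prewellordering has length } < \omega_\omega.
\end{aligned}
\]
```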
The current methods do not readily generalize to even produce a forcing extension of L(ℝ) (without adding reals) in which ZFC holds and ω₃ < Θ^L(ℝ). Thus at this point it is entirely possible that ω₃ is the bound and that this is provable in ZFC. If a large cardinal admits an inner model theory satisfying fairly general conditions then most likely the only (nontrivial) bounds provable from the existence of the large cardinal are those provable in ZFC; i.e. large cardinal combinatorics are irrelevant unless the large cardinal is beyond a reasonable inner model theory.
By the results of (Woodin 2010b), if inner model theory can be extended to the level of one supercompact cardinal then the existence of essentially all large cardinals is consistent with δ¹₂ = ω₂.
It follows from the results of (Steel and VanWesep 1982) and (Woodin 1983) that such a partial order P exists in the case of δ¹₂: more precisely, there is a partial order P ∈ L(ℝ) such that for all transitive models M of AD⁺ containing ℝ, if G ⊂ P is M-generic then M[G] ⊨ ZFC + "δ¹₂ = ω₂". Thus if a large cardinal admits a suitable inner model theory then the existence of the large cardinal is consistent with δ¹₂ = ω₂. We shall prove a much stronger result in Chapter 3, showing that if δ is a Woodin cardinal and if there is a measurable cardinal above δ then there is a semiproper partial order P of cardinality δ which forces δ¹₂ = ω₂. This result, which is a corollary of Theorem 1.1 (stated below) and Theorem 2.64 (due to Shelah), shows that this particular instance of the Effective Continuum Hypothesis is as intractable as the Continuum Hypothesis.
Foreman and Magidor initiated a program of proving that δ¹₂ < ω₂ from various combinatorial hypotheses with the goal of evolving these into large cardinal hypotheses, (Foreman and Magidor 1995). By the (initial) remarks above, their program, if successful, would have identified a critical step in the large cardinal hierarchy.
Foreman and Magidor proved among other things that if there exists a (normal) ω₃-saturated ideal on ω₂ concentrating on a specific stationary set then δ¹₂ < ω₂. In Chapter 9 we improve this result slightly, showing that this restriction is unnecessary; if there is a measurable cardinal and if there is an ω₃-saturated (uniform) ideal on ω₂ then δ¹₂ < ω₂.
An early conjecture of Martin is that δ¹ₙ = ℵₙ for all n follows from reasonable hypotheses. Here δ¹ₙ is the supremum of the lengths of Δ¹ₙ prewellorderings.
The following theorem "proves" the Martin conjecture in the case n = 2.
Theorem 1.1. Assume that the nonstationary ideal on ω₁ is ω₂-saturated and that there is a measurable cardinal. Then δ¹₂ = ω₂ and further every club in ω₁ contains a club constructible from a real. □
If one assumes in addition that ℝ# exists then Theorem 1.8 can be reformulated as follows. For each n ∈ ω let Uₙ be a set which is Σ₁ definable in the structure ⟨L(ℝ), ∈, η₀, …, ηₙ₋₁⟩, where ⟨ηᵢ : i < n⟩ is an increasing sequence of Silver indiscernibles of L(ℝ), and such that Uₙ is universal.
Theorem 1.9. Assume AD^L(ℝ) and that ℝ# exists. Suppose that φ is a Π₂ sentence in the language for the structure ⟨H(ω₂), ∈, Uₙ : n ∈ ω⟩ and that this structure satisfies φ. Then ⟨H(ω₂), ∈, Uₙ : n ∈ ω⟩^L(ℝ)[G] ⊨ φ for some G ⊆ Pmax which is L(ℝ)-generic. □
Thus in the statement of Theorem 1.9 one refers only to a structure with a countable signature.
These theorems suggest that the axiom:
(*) AD holds in L(ℝ) and L(P(ω₁)) is a Pmax-generic extension of L(ℝ);
is perhaps, arguably, the correct maximal generalization of Martin's Axiom, at least as far as the structure of P(ω₁) is concerned. However an important point is that we do not know if this axiom can always be forced to hold assuming the existence of suitable large cardinals.
Conjecture. Assume there are ω₂ many Woodin cardinals. Then the axiom (*) holds in a generic extension of V. □
Because of the intrinsic properties of the partial order Pmax, this axiom is frequently easier to use than the usual forcing axioms. We give some applications for which it is not clear that Martin's Maximum suffices. Another key point is:
There is no need in the analysis of L(ℝ)^Pmax for any machinery of iterated forcing. This includes the proofs of the absoluteness theorems.
Further:
The analysis of L(ℝ)^Pmax requires only AD^L(ℝ).
For the definition of Pmax that we shall work with, the analysis will require some iterated forcing, but only for ccc forcing and only to produce a poset which forces MA_ω₁.
In Chapter 5 we give three other presentations of Pmax based on the stationary tower forcing. The analysis of these (essentially equivalent) versions of Pmax requires no local forcing arguments whatsoever. This includes the proofs of the absoluteness theorems.
Also in Chapter 5 we shall discuss methods for exploiting (*), giving a useful reformulation of the axiom. This reformulation does not involve the definition of Pmax. We shall also prove that, assuming (*), 2^ℵ₀ = ℵ₂. This we accomplish by finding a Π₂ sentence which, if true in the structure ⟨H(ω₂), ∈, I_NS⟩, implies (in ZF + DC) that there is a surjection from ℝ onto ω₂ which is definable in the structure from parameters. This sentence is a consequence of Martin's Maximum and an analogous, but easier, argument shows that assuming AD^L(ℝ), it is true in L(ℝ)^Pmax. Thus the axiom (*) implies 2^ℵ₀ = ℵ₂. Actually we shall discuss two such sentences, φ_AC and ψ_AC. These are defined in Section 5.1 and Section 5.3 respectively.
Starting in Chapter 6, we shall define several variations of the partial order Pmax. Interestingly each variation can be defined as a suborder of a reformulation of Pmax. The reformulation is P* and it is the subject of Section 5.5. A slightly more general reformulation is given in Section 5.6, where we prove a theorem which shows that essentially any possible variation, subject to a constraint on the resulting model, is a suborder of this more general partial order.
The variations yield canonical models which can be viewed as constrained versions of the Pmax model. Generally the constrained versions will realize any Π₂ sentence in the language for the structure ⟨H(ω₂), ∈, I_NS⟩ which is (suitably) consistent with the constraint; i.e. unless one takes steps to prevent something from happening, it will happen. This is in contrast to the usual forcing constructions where nothing happens unless one works to make it happen.
One application will be to establish the consistency with ZFC that the nonstationary ideal on ω₁ is ω₁-dense. This also shows the consistency of the existence of an ω₁-dense ideal on ω₁ with ¬CH. Further, for these results only the consistency of ZF + AD is required. This is best possible, for we have proved that if there is an ω₁-dense ideal on ω₁ then AD holds in L(ℝ).
More precisely we shall define a variation of Pmax, which we denote Qmax, and assuming AD^L(ℝ) we shall prove that

L(ℝ)^Qmax ⊨ ZFC + "The nonstationary ideal on ω₁ is ω₁-dense".
Again AD^L(ℝ) suffices for the analysis of L(ℝ)^Qmax and there are absoluteness theorems which characterize the Qmax-extension.
Collectively these results suggest that the consistency of AD^L(ℝ) is an upper bound for the consistency strength of many propositions at ω₁, over the base theory,

ZFC + "For all x ∈ ℝ, x# exists" + "δ¹₂ = ω₂".
However there are two classes of counterexamples to this.
Suppose that ℝ# exists and that L(ℝ#) ⊨ AD. For each sentence φ such that

L(ℝ#) ⊨ φ,

the following assertion:

• There exists a sequence …

can be expressed by a Σ₂ sentence in ⟨H(ω₂), ∈⟩ which can be realized by forcing with a Pmax variation over L(ℝ#). There must exist a choice of φ such that this Σ₂ sentence cannot be realized in the structure ⟨H(ω₂), ∈⟩ of any set generic extension of L(ℝ). This is trivial if the extension adds no reals (take φ to be any tautology); otherwise it is subtle, in that if … then we conjecture that there is a partial order P ∈ L(ℝ#) such that …
The second class of counterexamples is a little more subtle, as the following example illustrates. If the nonstationary ideal on ω₁ is ω₁-dense and if Chang's Conjecture holds then there exists a countable transitive set, M, such that

M ⊨ ZFC + "There exist ω + 1 many Woodin cardinals",

(and so M ⊨ AD^L(ℝ) and much more). The application of Chang's Conjecture is only necessary to produce

X ≺_Σ₂ H(ω₂)

such that X ∩ ω₂ has ordertype ω₁. The subtle and interesting aspect of this example is that
L(ℝ)^Qmax ⊨ Chang's Conjecture,

but by the remarks above, this can only be proved by invoking hypotheses stronger than AD^L(ℝ).
In fact the assertion,

• L(ℝ)^Qmax ⊨ Chang's Conjecture,

is equivalent to a strong form of the consistency of AD. This is the subject of Section 9.4.
The statement that the nonstationary ideal on ω₁ is ω₁-dense is a Σ₂ sentence in

⟨H(ω₂), ∈, I_NS⟩.
This is an example of a (consistent) Σ₂ sentence (in the language for this structure) which implies ¬CH. Using the methods of Section 10.2 a variety of other examples can be identified, including examples which imply c = ω₂.
Thus in the language for the structure

⟨H(ω₂), ∈, I_NS⟩

there are (nontrivial) consistent Σ₂ sentences which are mutually inconsistent. This is in contrast to the case of Π₂ sentences.
It is interesting to note that this is not possible for the structure

⟨H(ω₂), ∈⟩,
provided the sentences are each suitably consistent. We shall discuss this in Chapter 10 (see Theorem 10.159), where we discuss problems related to the relationship between Martin's Maximum and the axiom (*).
The results we have discussed suggest that if the nonstationary ideal on ω₁ is ω₂-saturated, there are large cardinals, and some particular sentence is true in L(P(ω₁)), then it is possible to force over L(ℝ) (or some larger inner model) to make this sentence true (by a forcing notion which does not add reals). Of course one cannot obtain models of CH in this fashion. The limitations seem only to come from the following consequence of the saturation of the nonstationary ideal in the presence of a measurable cardinal:
• Suppose C ⊆ ω₁ is closed and unbounded. Then there exists x ∈ ℝ such that {α < ω₁ | L_α[x] is admissible} ⊆ C.
This is equivalent to the assertion that for every x ∈ ℝ, x# exists, together with the assertion that every closed unbounded subset of ω₁ contains a closed, cofinal subset which is constructible from a real.
Motivated by these considerations we define, in Chapter 7 and Chapter 8, a number of additional Pmax variations. The two variations considered in Chapter 7 were selected simply to illustrate the possibilities. The examples in Chapter 8 were chosen to highlight quite different approaches to the analysis of a Pmax variation; there we shall work in "L-like" models in order to prove the lemmas required for the analysis.
It seems plausible that one can in fact routinely define variations of Pmax to reproduce a wide class of consistency results where c = ω₂. The key to all of these variations is really the proof of Theorem 1.1. It shows that if the nonstationary ideal on ω₁ is ω₂-saturated then H(ω₂) is contained in the limit of a directed system of countable models via maps derived from iterating generic elementary embeddings and (the formation of) end extensions.
Here again there is no use of iterated forcing and so the arguments generally tend to be simpler than their standard counterparts. Further there is an extra degree of freedom in the construction of these models which yields consequences not obviously obtainable with the usual methods. The first example of Chapter 7 is the variation, Smax, which conditions the model on a sentence which implies the existence of a Suslin tree. The sentence asserts:
• Every subset of ω₁ belongs to a transitive model M in which ◊ holds and such that every Suslin tree in M is a Suslin tree in V.
If AD holds in L(ℝ) and if G ⊆ Smax is L(ℝ)-generic then in L(ℝ)[G] the following strengthening of the sentence holds:
• For every A ⊆ ω₁ there exists B ⊆ ω₁ such that A ∈ L[B] and such that if T ∈ L[B] is a Suslin tree in L[B], then T is a Suslin tree.
In L(ℝ)[G] every subset of ω₁ belongs to an inner model with a measurable cardinal (and more), and under these conditions this strengthening is not even obviously consistent.
The second example of Chapter 7 is motivated by the Borel Conjecture. The first consistency proof for the Borel Conjecture is presented in (Laver 1976). The Borel Conjecture can be forced in a variety of different ways; one can iterate Laver forcing or Mathias forcing, etc. In Section 7.2, we define a variation of Pmax which forces the Borel Conjecture. The definition of this forcing notion does not involve Laver forcing, Mathias forcing or any variation of these forcing notions. In the model obtained, a version of Martin's Maximum holds. Curiously, to prove that the Borel Conjecture holds in the resulting model we do use a form of Laver forcing. An interesting technical question is whether this can be avoided. It seems quite likely that it can, which could lead to the identification of other variations yielding models in which the Borel Conjecture holds and in which additional interesting combinatorial facts also hold.
1.4 Extensions of inner models beyond L(ℝ)
In Chapter 9 we again focus primarily on the Pmax-extension but now consider extensions of inner models strictly larger than L(ℝ). These yield models of (*) with rich structure for H(ω₃); i.e. with "many" subsets of ω₂.
The ground models that we shall consider are of the form L(Γ, ℝ) where Γ ⊆ P(ℝ) is a pointclass closed under Borel preimages, or more generally inner models of the form L(S, Γ, ℝ) where Γ ⊆ P(ℝ) and S ⊆ Ord. We shall require that a particular form of AD hold in the inner model; the axiom is AD⁺, which is discussed in Section 9.1. It is by exploiting more subtle aspects of the consequences of AD⁺ that we can establish a number of combinatorially interesting facts about the corresponding extensions.
Applications include obtaining extensions in which Martin's Maximum holds for partial orders of cardinality c (this is Martin's Maximum(c)) and in which ω₂ exhibits some interesting combinatorial features.
Actually, in the models obtained, Martin's Maximum++(c) holds. This is the assertion that Martin's Maximum++ holds for partial orders of cardinality c, where Martin's Maximum++ is a slight strengthening of Martin's Maximum. These forcing axioms, first formulated in (Foreman, Magidor, and Shelah 1988), are defined in Section 2.5.
Recasting the Pmax variation for the Borel Conjecture in this context we obtain, in the spirit of Martin's Maximum, a model in which the Borel Conjecture holds together with the largest fragment of Martin's Maximum(c) which is possibly consistent with the Borel Conjecture.
Another reason for considering extensions of inner models larger than L(ℝ) is that one obtains more information about extensions of L(ℝ). The point is that if it is consistent to have AD^L(ℝ) and ω₃ < Θ^L(ℝ) then presumably this can be achieved in a forcing extension of L(ℝ). This in turn would suggest there are generalizations of Pmax which produce generic extensions of L(ℝ) in which c > ω₂. There are many open questions in combinatorial set theory for which a (positive) solution requires building forcing extensions in which c > ω₂.
The potential utility of Pmax variations for obtaining models in which c > ω₂ is either enhanced or limited by the following theorem of S. Jackson. This theorem is an immediate corollary of Theorem 1.3(2) and Jackson's analysis of measures and ultrapowers in L(ℝ) under the hypothesis of AD^L(ℝ).
Theorem 1.10 (Jackson). Assume the nonstationary ideal on ω₁ is ω₂-saturated and that there exist ω many Woodin cardinals with a measurable cardinal above them all. Then …
One of the main open problems of Shelah's pcf theory is whether there can exist a set, A, of regular cardinals such that |A| < |pcf(A)| (satisfying the usual requirement that |A| < min(A)).
Common to all Pmax variations is that Theorem 1.3(2) holds in the resulting models and so the conclusion of Theorem 1.10 applies to these models as well. Though, recently, a more general class of "variations" has been identified for which Theorem 1.3(2) fails in the models obtained. These latter examples are variations only in the sense that they also yield canonical models in which CH fails, cf. Theorem 10.185.
I end with a confession. This book was written intermittently over a seven year period beginning in early 1992 when the initial results were obtained. During this time the exposition evolved considerably though the basic material did not. The exceptions are the material in Chapter 8, the material in the last three sections of Chapter 9, and much of Chapter 10, which is more recent. Earlier versions contained sections which, because of length considerations, we have been compelled to remove.
This account represents in form and substance the evolutionary process which actually took place. Further a number of proofs are omitted or simply sketched, especially in Chapter 10. Generally it seemed better to state a theorem without proof than not to state it at all. In some cases the proofs are simply beyond the scope of this book and in other cases the proofs are a routine adaptation of earlier arguments. Of course in both cases this can be quite frustrating to the reader. Nevertheless it is my hope that this book does represent a useful introduction to this material with few relics from earlier versions buried in its text.
By the time (May, 1999) of this writing a number of papers have appeared, or are in press, which deal with Pmax or variations thereof. P. Larson and D. Seabold have each obtained a number of results which are included in their respective Ph.D. theses; some of these results are discussed in this book.
Shelah and Zapletal consider several variations, recasting the absoluteness theorems in terms of "Π₂-compactness" but restricting to the case of extensions of L(ℝ), (Shelah and Zapletal 1999).
More recently Ketchersid, Larson, and Zapletal (2007) isolate a family of explicit Namba-like forcing notions which can, under suitable circumstances, change the value of δ¹₂ even in situations where CH holds. These examples are really the first to be isolated which can work in the context of CH. Other examples have been discovered and are given in (Doebler and Schindler 2009).
Finally there are some very recent developments (as of 1999) which involve a generalization of ω-logic which we denote Ω-logic. Arguably Ω-logic is the natural limit of the lineage of generalizations of classical first order logic which begins with ω-logic and continues with β-logic etc.
We (very briefly) discuss Ω-logic (updated to 2010) in Section 10.4 and Section 10.5. In some sense the entire discussion of Pmax and its variations should take place in the context of Ω-logic, and were we to rewrite the book this is how we would proceed. In particular, the absoluteness theorems associated to Pmax and its variations are more naturally stated by appealing to this logic. For example Theorem 1.4 can be reformulated as follows.
In fact, using Ω-logic one can give a reformulation of (*) which does not involve forcing at all; this is discussed briefly in Section 10.4.
Another feature of the forcing extensions given by the (homogeneous) Pmax variations (this holds for all the variations which we discuss in this book) is that each provides a finite axiomatization, over ZFC, of the theory of H(ω₂) (in Ω-logic). For Pmax, the axiom is (*) and the theorem is the following.
Theorem 1.12. Suppose that there exists a proper class of Woodin cardinals. Then for each sentence φ, either ZFC + (*) ⊢_Ω "H(ω₂) ⊨ φ" or ZFC + (*) ⊢_Ω "H(ω₂) ⊨ ¬φ". □
This particular feature underscores the fundamental difference between the method of Pmax variations and that of iterated forcing. We note that it is possible to identify finite axiomatizations over ZFC of the theory of ⟨H(ω₂), ∈⟩ which cannot be realized by any Pmax variation. Theorem 10.185 indicates such an example; the essential feature is that δ¹₂ < ω₂ but still there is an effective failure of CH. Nevertheless it is at best difficult through an iterated forcing construction to realize in ⟨H(ω₂), ∈⟩^V[G] a theory which is finitely axiomatized over ZFC in Ω-logic. The reason is simply that generally the choice of the ground model will influence, in possibly very subtle ways, the theory of the structure ⟨H(ω₂), ∈⟩^V[G]. There is at present no known example which works, say from some large cardinal assumption, independent of the choice of the ground model.
Ω-logic provides the natural setting for posing questions concerning the possibility of such generalizations of Pmax, to for example ω₂, i.e. for the structure H(ω₃), and beyond. The first singular case, H(ω_ω⁺), seems particularly interesting.
There is also the case of ω₁, but in the context of CH. One interesting result (though as of 2010 this is contingent on the AD⁺ Conjecture), with, we believe, potential implications for CH, is that there are limits to any possible generalization of the Pmax variations to the context of CH; more precisely, if CH holds then the theory of H(ω₂) cannot be finitely axiomatized over ZFC in Ω-logic.
In the ten years since the introduction above was written for the first edition of this book there have been quite a number of mathematical developments relevant to this book, and I find myself again in Germany on sabbatical from Berkeley working on this book. This edition contains revisions that reflect these developments, including the deletion of some theorems now not relevant because of these developments or simply because the proofs, sketched or otherwise, were simply not correct. Finally I stress that I make no claim that this revision is either extensive or thorough, and I regret to say that it is not. I feel that the entire subject is at a critical crossroads and, as always in such a situation, one cannot be completely confident in which direction the future lies. But it is this future that dictates which aspects of this account should be stressed.
First and most straightforward, the theorems related to ◊ω(ω₂), such as the theorem that Martin's Maximum implies ◊ω(ω₂), have all been rendered irrelevant by a remarkable theorem of (Shelah 2008) which shows that ◊ω(ω₂) is a consequence of 2^ω₁ = ω₂. Shelah's result shows that assuming Martin's Maximum(c), or simply assuming that 2^ω₁ = ω₂, the nonstationary ideal at ω₂ cannot be semi-saturated on the ordinals of countable cofinality. It does not rule out the possibility that there exists a uniform semi-saturated ideal at ω₂ on the ordinals of countable cofinality. On the other hand, the primary motivation for obtaining such consistency results for ideals at ω₂ in the first edition was the search for evidence that the consistency strength of the theory was beyond that of the existence of a superstrong cardinal. Dramatic recent results (Sargsyan 2009) have shown that this theory is not that strong, proving that its consistency follows simply from the existence of a Woodin cardinal which is a limit of Woodin cardinals. Therefore in this edition the consistency results for semi-saturated ideals at ω₂ are simply stated without proof. The proofs of these theorems were sketched at length in the first edition, but they were based upon an analysis of HOD in the context of AD⁺ which is open unless one works relative to a minimal model; and of course the sketch in the case of obtaining the consistency that the nonstationary ideal is semi-saturated is not correct. That error was due to a careless misconception regarding iterations of forcing with uncountable support. As indicated in the first edition, the analysis of HOD in the context of AD⁺ is not actually necessary for the proofs; it was used only to provide a simpler framework for the constructions.
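For orientation, the form of Shelah's theorem at issue can be displayed as follows (this is the standard formulation of the general result, supplied here as a gloss rather than quoted from the text):

```latex
% Shelah's theorem (general form), stated for orientation:
\[
\lambda \ge \aleph_1 \;\text{ and }\; 2^{\lambda} = \lambda^{+}
\;\Longrightarrow\;
\Diamond_S \text{ holds for every stationary }
S \subseteq \{\alpha < \lambda^{+} : \operatorname{cf}(\alpha) \ne \operatorname{cf}(\lambda)\}.
\]
% The instance relevant above is the case lambda = omega_1:
\[
2^{\omega_1} = \omega_2 \;\Longrightarrow\;
\Diamond_S \text{ holds for every stationary }
S \subseteq \{\alpha < \omega_2 : \operatorname{cf}(\alpha) = \omega\}.
\]
```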
Ultimately of far more significance for this book is that recent results concerning the inner model program undermine the philosophical framework for this entire work. The fundamental result of this book is the identification of a canonical axiom for ¬CH which is characterized in terms of a logical completion of the theory of H(ω₂) (in Ω-logic of course). But the validation of this axiom requires a synthesis with axioms for V itself, for otherwise it simply stands as an isolated axiom. This view is reinforced by the use of the Ω Conjecture to argue against the generic-multiverse view of truth (Woodin 2009). I remain convinced that if CH is false then the axiom (*) holds, and certainly there are now many results confirming that if the axiom (*) does hold then there is a rich structure theory for H(ω₂) in which many pathologies are eliminated. But nevertheless, for all the reasons discussed at length in (Woodin 2010b), I think the evidence now favors CH.
The picture that is emerging now, based on (Woodin 2010b) and (Woodin 2010a), is as follows. The solution to the inner model problem for one supercompact cardinal yields the ultimate enlargement of L. This enlargement of L is compatible with all stronger large cardinal axioms, and strong forms of covering hold relative to this inner model. At present there seem to be two possibilities for this enlargement: as an extender model or as a strategic extender model. There is a key distinction, however, between these two versions. An extender model in which there is a Woodin cardinal is a (nontrivial) generic extension of an inner model which is also an extender model, whereas a strategic extender model in which there is a proper class of Woodin cardinals is not a generic extension of any inner model. The most optimistic generalizations of the structure theory of L(R) in the context of AD to a structure theory of L(V_{λ+1}) in the context of an elementary embedding with critical point below λ require that V not be a generic extension of any inner model which is not countably closed within V. Therefore these generalizations cannot hold in the extender models, and this leaves the strategic extender models as essentially the only option. Thus there could be a compelling argument that V is a strategic extender model, based on natural structural principles. This of course would rule out that the axiom (*) holds, though if V is a strategic extender model (with a Woodin cardinal) then the axiom (*) holds in a homogeneous forcing extension of V, and so the axiom (*) has a special connection to V as an axiom which holds in a canonical companion to V, mediated by an intervening model of AD⁺ which is the manifestation of Ω-logic.
An appealing aspect of this scenario is that the relevant axiom for V can be explicitly stated now, and in a form which clarifies the previous claims, without knowing the detailed level-by-level inductive definition of a strategic extender model (Woodin 2010b): in its weakest form the axiom is simply the conjunction of:
(1) There is a supercompact cardinal.
(2) There exists a universally Baire set such that … for all Π₂-sentences (equivalently, for all Σ₂-sentences).
As with the previous scenarios this scenario could collapse but any scenario for such a collapse which leads back to the validation of the axiom (*) seems rather unlikely at present.
Deduction: Introductory Symbolic Logic, 2nd Edition by Daniel
Bonevac
(Blackwell) Near the end of the eighteenth century, Immanuel Kant wrote that
logic was a closed and completed subject, to which nothing significant had been
contributed since the time of Aristotle and to which nothing significant
remained to be contributed. Many logic students today receive a similar
impression from their introductory logic courses, except that Russell and
Whitehead have assumed the venerated position that Aristotle held in Kant's
time.
But logic is not cut‑and‑dried. It is open‑ended and filled
with activity, excitement, and controversy. Research on understanding reasoning
in natural language is active and growing; many critical issues remain to be
settled. Exposing students to the excitement of logical research is the chief
aim of this book.
Deduction is a comprehensive introduction to contemporary
symbolic logic. It presents classical first‑order logic as efficiently and
elegantly as possible. It uses a tree system based on the work of Gentzen, Beth,
Hintikka, and Jeffrey, as well as a natural deduction system inspired by that of
Kalish and Montague. Both are very natural and easy to learn, with minimal
complexity in rules. In particular, the definition of a formula excludes free
variables, and the deduction system uses Show lines; the combination allows
rules (e.g., universal introduction) to be stated very simply.
Deduction’s main innovation, however, is its final part, which contains
chapters on variants and extensions of classical logic: many‑valued logic
(including fuzzy and intuitionistic logic), modal logic, deontic logic,
counterfactuals, common‑sense reasoning, and quantified modal logic. These have
been areas of great logical and philosophical interest over the past 40 years,
but few other textbooks treat them in any depth. Deduction makes these areas
accessible to introductory students. All chapters have discussions of the
underlying semantics in intuitive terms. All present both truth tree and
deduction systems. The treatments of many‑valued logic and common‑sense
reasoning are new to this edition.
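The flavor of the many-valued chapters can be conveyed with a small computation. The sketch below implements Łukasiewicz's three-valued connectives, a standard many-valued system chosen here purely for illustration; the system, names, and examples are ours and are not taken from Bonevac's own presentation:

```python
# Łukasiewicz three-valued logic: truth values 0 (false), 1/2 (undetermined),
# 1 (true).  A standard many-valued system, used here only as an illustration.
from fractions import Fraction

HALF = Fraction(1, 2)
VALUES = (Fraction(0), HALF, Fraction(1))

def l_not(x):         # negation: 1 - x
    return 1 - x

def l_and(x, y):      # conjunction: minimum
    return min(x, y)

def l_or(x, y):       # disjunction: maximum
    return max(x, y)

def l_implies(x, y):  # conditional: min(1, 1 - x + y)
    return min(Fraction(1), 1 - x + y)

def is_tautology(formula):
    """Check whether a two-argument formula takes value 1 on all assignments."""
    return all(formula(x, y) == 1 for x in VALUES for y in VALUES)

# Classical tautologies can fail here: excluded middle p ∨ ¬p takes the
# value 1/2 when p does, while p → p remains valid.
print(is_tautology(lambda x, y: l_or(x, l_not(x))))   # False
print(is_tautology(lambda x, y: l_implies(x, x)))     # True
```

The point such a computation makes for students is exactly the one the chapter makes: which classical laws survive depends on the semantics chosen.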
These variants and extensions of classical logic merit
attention for several reasons. First, the resources of natural language far
outstrip those of classical logic. Many arguments in English and other natural
languages depend on nontruth‑functional connectives that have no representation
in classical systems. Second, classical logic at best roughly approximates what
is arguably the most important connective in English, if. Devising more accurate
theories of if has motivated much research in contemporary logic, including
that covered in the chapters on modal logic, counterfactuals, and common‑sense
reasoning in this book. Finally, studying variants and extensions of classical
logic helps one understand what it is like to think as a logician. It
communicates a mode of thinking, illustrating how logicians construct and
evaluate theories. For that reason, it is perhaps the best way to understand
what logic is.
Several features of
Deduction
prove especially useful in the classroom. First, the semantic and
proof‑theoretic techniques presented here are simple and powerful. The truth
tree method is very easy to teach and to learn. The natural deduction system is
as straightforward as such a system can be. The pattern of rules is easy to
understand; most connectives have introduction and exploitation rules. The
system's form greatly simplifies deductive rules and strategies. Neither the
tree system nor the deduction system uses free variables; instantiation always
involves a constant or closed function term. Indeed, formulas never contain free
variables. Formulas always correspond to sentences.
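The simplicity of the tree method is easy to demonstrate concretely. The following is a minimal propositional tableau ("truth tree") validity checker; the formula encoding and function names are ours, and the rule set is the standard textbook one rather than the specific system of Deduction:

```python
# Minimal propositional truth-tree (tableau) validity checker.
# Formulas are nested tuples: ('atom', name), ('not', f),
# ('and', f, g), ('or', f, g), ('imp', f, g).

def closes(branch):
    """Return True iff every branch of the tableau for `branch` closes."""
    for i, f in enumerate(branch):
        rest = branch[:i] + branch[i + 1:]
        op = f[0]
        if op == 'not':
            g = f[1]
            if g[0] == 'not':      # double negation: ¬¬A  gives  A
                return closes(rest + [g[1]])
            if g[0] == 'and':      # ¬(A∧B): branch into ¬A | ¬B
                return (closes(rest + [('not', g[1])]) and
                        closes(rest + [('not', g[2])]))
            if g[0] == 'or':       # ¬(A∨B): ¬A and ¬B on the same branch
                return closes(rest + [('not', g[1]), ('not', g[2])])
            if g[0] == 'imp':      # ¬(A→B): A and ¬B on the same branch
                return closes(rest + [g[1], ('not', g[2])])
        elif op == 'and':          # A∧B: A and B on the same branch
            return closes(rest + [f[1], f[2]])
        elif op == 'or':           # A∨B: branch into A | B
            return closes(rest + [f[1]]) and closes(rest + [f[2]])
        elif op == 'imp':          # A→B: branch into ¬A | B
            return closes(rest + [('not', f[1])]) and closes(rest + [f[2]])
    # Only literals remain: the branch closes iff it contains p and ¬p.
    atoms = {f[1] for f in branch if f[0] == 'atom'}
    negated = {f[1][1] for f in branch if f[0] == 'not'}
    return bool(atoms & negated)

def valid(formula):
    """A formula is valid iff the tableau for its negation closes."""
    return closes([('not', formula)])

p, q = ('atom', 'p'), ('atom', 'q')
print(valid(('imp', ('imp', ('imp', p, q), p), p)))  # True (Peirce's law)
print(valid(('imp', p, q)))                          # False
```

Each rule replaces a formula by strictly smaller ones, so the search always terminates; this is the sense in which the method is "very easy to teach and to learn."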
Second,
Deduction
takes natural language more seriously than many logic texts do. To teach
symbolization, for example, Bonevac
does more than offer examples and suggestions; he outlines strategies that
almost add up to an algorithm for a fragment of English. That helps students
learn symbolization faster and more successfully. It teaches them about the
limitations of logical systems, as they see what translates into formal notation
only with a loss of meaning and what does not translate at all. Finally, it
helps students become more careful readers and writers.
Third,
Deduction
supplies instructors and students with a large bank of problems of varying
levels of difficulty. Many are crafted to illustrate logical techniques, but
many are also drawn from real‑world texts on a wide variety of topics. The
problem sets of this edition have been carefully graded from simple to quite
difficult examples.
From Frege to Godel:
A Source Book in Mathematical Logic, 1879-1931
by Jean Van Heijenoort (Harvard University Press) This reprint of the classic
anthology is an unparalleled guide to the principal texts. Gathered together in this
book are the fundamental texts of the great classical period in modern logic. A
complete translation of Gottlob Frege's Begriffsschrift--which opened a great
epoch in the history of logic by fully presenting propositional calculus and
quantification theory--begins the volume. The texts that follow depict the
emergence of set theory and foundations of mathematics, two new fields on the
borders of logic, mathematics, and philosophy. Essays trace the trends that led
to Principia mathematica, the appearance of modern paradoxes, and topics
including proof theory, the theory of types, axiomatic set theory, and
Löwenheim's theorem. The volume concludes with papers by Herbrand and by Gödel,
including the latter's famous incompleteness paper. "There can be no doubt that
the book is a valuable contribution to the logical literature and that it will
certainly spread the knowledge of mathematical logic and its history in the
nineteenth and twentieth centuries." --Andrzej Mostowski, Synthese "It is
difficult to describe this book without praising it...[From Frege to Gödel] is,
in effect, the record of an important chapter in the history of thought. No
serious student of logic or foundations of mathematics will want to be without
it." --Review of Metaphysics
Lectures in Mathematical Logic: The Algebra of Models Volume 1 by Walter Felscher (Gordon and Breach Science Publishers) These three books of Lectures on Mathematical Logic are intended for students of mathematics or computer science in their third or fourth year at the university, as well as for their instructors. They were written as the traditional combination of textbook and monograph: while the titles of chapters and sections will sound familiar to those moderately acquainted with logic, their content and its presentation are likely to be new even to the more experienced reader. In particular, concepts have been set up so that the proofs do not act as mousetraps, from which the reader cannot escape without acknowledging the desired result, but make it obvious why they entail their result with an inner necessity. Insofar, the book expects from the reader a certain maturity; while accessible to students, it will also be helpful to lecturers who look for motivations of procedures which they may only have come to know in the form of dry prescriptions.
Lectures in Mathematical Logic: The Algebra of Models Volume 1 is a beginners' course devoted to the semantical relation of consequence, studied first for equations and then for propositional logic, open predicate logic and quantifier logic. Considering semantical satisfaction in structures, it must be set theoretical in the widest sense, and its central theorems turn out to be equivalent to a set theoretical principle, the prime ideal axiom. The simple case of the logic of equations in Chapter 6 suggests an algebraic approach, connecting semantical consequence with the generation of suitable congruence relations, and this can be carried over to quantifier logic in Chapter 15. In the same way, Birkhoff's characterization of equationally defined classes of algebras in Chapter 7 carries over to the characterization of elementary classes in Chapter 17. Still, what is being done here is not algebraic logic where (in the case of cylindric or polyadic algebras say) new algebraic structures would be studied in their own right: algebra here is pursued only to that extent which is required for a perspicuous analysis of the (semantical) concepts of logic. A useful conceptual tool for this purpose is the consideration of Boolean valued (instead of only 2‑valued) structures; a more technical, but no less important, tool is given by the substitution maps studied in Chapter 13. Outside this general theme of an algebra of models, various more combinatorial topics are developed, of which I here only mention the extensions of Menger's parser to terms without or with parentheses in Chapter 5, Whitman's solution of the word problem for free lattices in Chapter 6, the study of normal forms in Chapter 9 and of their term rewriting algorithms in Chapter 10, and a new proof of the finiteness theorem for quantifier logic, employing the notion of straight formulas and presented at the beginning of Chapter 14.
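To illustrate the kind of combinatorial topic mentioned, a Menger-style count criterion for parenthesis-free (Polish prefix) terms admits a very short implementation. The sketch below uses the standard formulation, in which each symbol is weighted 1 minus its arity; the function names and conventions are ours, not Felscher's:

```python
def is_term(word, arity):
    """Count criterion for parenthesis-free (Polish prefix) terms.

    `word` is a sequence of symbols and `arity` maps each symbol to its
    arity (variables and constants have arity 0).  The word is a
    well-formed term iff its total weight, with each symbol weighted
    1 - arity, equals 1 and every proper prefix has weight <= 0.
    """
    total = 0
    for i, symbol in enumerate(word):
        if i > 0 and total > 0:   # a proper prefix already completed a term
            return False
        total += 1 - arity[symbol]
    return total == 1

# '+' and '*' are binary, 's' is unary, 'x', 'y', 'z' are variables.
arity = {'+': 2, '*': 2, 's': 1, 'x': 0, 'y': 0, 'z': 0}
print(is_term('+x*yz', arity))   # True:  + x (* y z)
print(is_term('+xy', arity))     # True:  + x y
print(is_term('+xyz', arity))    # False: trailing symbol
print(is_term('+x', arity))      # False: missing argument
```

The single left-to-right count is what makes parenthesis-free notation parseable without backtracking, which is the point of treating such parsers at all.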
Lectures in Mathematical Logic: Calculi for Derivations and Deductions Volume 2 by Walter Felscher (Gordon and Breach Science Publishers) In this volume, logic starts from the observation that in everyday argument, as brought forward say by a lawyer, statements are transformed linguistically, connecting them in formal ways irrespective of their contents. Understanding such arguments as deductive situations, or "sequents" in the technical terminology, the transformations between them can be expressed as logical rules. This leads to Gentzen's calculi of derivations, presented first for positive logic, and then, depending on the requirements made on the behaviour of negation, for minimal, intuitionist and classical logic. Identifying interdeducible formulas, each of these calculi gives rise to a lattice-like ordered structure. Describing the generation of filters in these structures leads to corresponding modus ponens calculi, and these turn out to be semantically complete because they express the algorithms generating semantical consequences, as obtained in Volume 1 of these lectures. The operators transforming derivations from one type of calculus into the other are also studied with respect to changes of the lengths of derivations, and operators eliminating defined predicate and function symbols are described explicitly. The book ends with the algorithms producing the results of Gentzen's midsequent theorem and Herbrand's theorem for prenex formulas.
This volume will prove useful to graduates and researchers in the field of mathematical logic. It is also a reference on logic for professionals in pure mathematics and theoretical computer science.
Excerpt:
Lectures in Mathematical Logic: Calculi for Derivations and Deductions Volume 2 is a combinatorial study of derivations and deductions. Gentzen's rules of sequential logic are motivated for the positive connectives in Chapter 1 and for quantifiers in Chapter 8; the rules for negation are analysed in Chapter 4, and classical logic is discussed in Chapter 5 with particular attention to the Peirce rule. Chapter 2 is devoted to the familiar cut elimination going down from the top; Chapter 3 presents Mints' algorithm of cut elimination going upwards. In the case of classical logic, the usual superexponential bounds are improved to twofold exponential ones, but reasons of space have prevented me from including a discussion of Hudelmaier's bounds for intuitionistic logic. Chapter 6 is an excursion into algebra, associating to every sequential calculus the class of algebras defined by the equations given by interdeducible formulas; the classes of algebras arising in this manner are then characterized by generators of the unit filter of their free algebras. These are just the axioms for modus ponens calculi generating such filters, and in Chapter 7 the modus ponens calculi corresponding to the sequential calculi are set up, with sharp upper bounds for the lengths of proofs obtained under the transformations between sequential derivations and modus ponens deductions.
The premisses of the non-critical quantifier rules can be formulated in two ways: either referring to replacement instances of the formula to be quantified, in which the quantifying variable is replaced by a term t which is free for that formula, or instead referring to substitution instances which involve a built-in renaming of the bound variables which occur in t. For the first case, Chapter 8 contains the known result that cut elimination works for derivations with an endsequent in which no variable occurs both free and bound; for the second case, cut elimination always works, and the second part of Chapter 8 contains a syntactical proof of this fact. Chapter 9 develops modus ponens calculi for quantifier logic based on the algorithmic descriptions of consequence in Chapter 1.15; Chapter 10 contains various applications such as the conservativeness of equality logic, conservativeness of extensions with predicate and function symbols, the midsequent theorem and Herbrand's theorem.
The semantical approach to logic starts from truth and from mathematical structures, and it proceeds to an analysis of the relation of semantical consequence, as well as to various, and sometimes surprising, properties of models of axiom systems. It is a mathematician's approach, talking about a world of - preferably infinite - sets, as it was developed in Book 1 of this work.
The approach to be followed in the present Book 2 may be described as argumentative or linguistic in that it starts from the rhetorical and literary practice of presenting collections of (usually successive) arguments, connected by logical laws referring to their linguistic form and independent from their contents. In view of its formal linguistic character, this approach may also be called syntactical, and the treatment of linguistic objects will use techniques, which, in mathematics, are counted as combinatorial. The combinatorial rules then describe particular collections of arguments as deductions and derivations in a uniform manner, and the systems of such rules make up what are called logical calculi. In so far, that approach can also be called algorithmic or computational - although the actual design of programs for devices, performing such computations, will not be discussed here.
While there are many ways to define when a collection of arguments may be called a deduction, singling out a particular one among them can hardly avoid being arbitrary and accidental. It was the basic idea of Gentzen to begin, rather, with an abstract notion of general deductive situations, and to develop the foundation of logic from an analysis of the transformations between such deductive situations, as effected by the logical connectives and quantifiers occurring in them. The setting then is that of deductive situations, meant to say that a statement v can, in an unspecified manner, be deduced from a collection M of assumptions. Consider now a deductive situation that involves, either as its conclusion v or among its assumptions M, a logically composite statement. An analysis of the linguistic meaning of the propositional connectives or of the quantifiers governing this composition will lead to conditions, acceptable to every fair and reasonable consensus, about the form of one or several other deductive situations which (a) involve the components of that composite statement and (b) are such that the deducibility may be linguistically concluded from these deducibilities as premises. Condensing such relationships between premisses and conclusions into logical rules, I arrive at a calculus of deductive situations, built from the weakest requirements which those logical transformations between deductions are to obey; for the case of positive connectives this will be carried out immediately.
Various applications are discussed in Chapter 10, beginning with the correspondence between sequent and modus ponens calculi in the presence of quantifiers, and ending with a detailed discussion of Herbrand's theorem.
Lectures in Mathematical Logic: The Logic of Arithmetic Volume 3 by Walter Felscher (Gordon and Breach Science Publishers) is mainly about logical properties of arithmetic, namely the decidability and consistency of some of its weaker fragments, and the undecidability and the unprovability of consistency for stronger ones. For the arithmetic of successor and order in Chapter 1, and for Presburger arithmetic in Chapter 2, decidability is obtained from quantifier elimination, and this, being established syntactically, will entail consistency proofs. In Chapter 3 the Liar paradox is discussed and is used to analyze, in a nontechnical manner, the undefinability of truth and the incompleteness of provability; this chapter requires no mathematical knowledge and may be accessible to the educated layman. General incompleteness theorems are set up in Chapter 4; here encodings are assumed to be given and certain encoding functions are assumed to be representable. In Chapters 5 and 6 basic facts about recursive functions and relations are developed. In Chapter 8 undecidability of provability from axioms in an arithmetized language is connected with the representability of recursive relations; this is applied in order to find a uniformization of recursive functions, and also in order to obtain the SMN-property, needed to conclude that this uniformization is isomorphic to other uniformizations known from computability theory. Chapter 9 discusses Robinson's arithmetic and Church's result on the undecidability of pure logic.
In Chapter 10 it is shown that Peano arithmetic PA (actually, already Heyting arithmetic) can be extended conservatively by function symbols for primitive recursive functions. There follows a discussion of primitive recursive arithmetic in which it is emphasized that recursion equations and their consequences now become provable as free variable formulas. In Chapter 11 the unprovability of the consistency of PA is shown by verifying the Bernays–Löb provability conditions. In order to see that Σ₁-formulas imply their provability, the process of internalization is isolated and studied. It is hoped that the presentation of the basic results of these last two chapters, both in their strength and their limitations, will make them accessible to a wider readership.
More about the book's content will be said in its introduction and in the introductory sections of its chapters.