
What is the generic poset used in forcing, really?

MathOverflow Asked on December 1, 2021

I’m not a set theorist, but I understand the ‘pop’ version of set-theoretic forcing: in analogy with algebra, we can take a model of a set theory, and an ‘indeterminate’ (which is some poset), and add it to the theory and then complete to a model with the desired properties. I understand the category theoretic version better, which is to take sheaves (valued in a given category $Set$) over said poset with a given Grothendieck topology (the double negation topology). The resulting topos is again a model of set theory, but now has the properties you want, absent from the original $Set$.

But what is this poset, really? Is it the poset of subobjects of the set you want to append to your theory/model (say a set of a specified cardinality, or some tree with a property)? Is it related to a proof of the property you are interested in? To clarify, I’m not interested in the mechanical definition of an appropriate generic poset, but what it is morally. Bonus points for saying what it ‘is’ before and after forcing, if this even makes sense.

5 Answers

Let me add a viewpoint that hasn't really been mentioned above, namely that of Boolean-valued models. Every poset has a completion, which is a complete Boolean algebra. For each poset we can define a forcing language that essentially looks like first-order logic, except that we use so-called names (which depend on the poset and stand for elements of the forcing extension) as constants. Now the elements of the complete Boolean algebra can be considered as truth values of statements in the forcing language. Generic filters of the poset and of its completion are in one-to-one correspondence with each other. A filter in the complete Boolean algebra corresponds to a consistent theory in the forcing language.
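
For readers who like a tiny concrete example: here is a minimal sketch, purely illustrative, using the powerset of a three-element set as a stand-in for the complete Boolean algebras that actually arise in forcing. It shows elements of the algebra acting as truth values and a filter picking out a consistent collection of them.

```python
# Toy illustration: the powerset of a finite set is a complete Boolean algebra;
# frozensets play the role of truth values of statements.
from itertools import combinations

atoms = {0, 1, 2}
algebra = [frozenset(s) for r in range(len(atoms) + 1)
           for s in combinations(sorted(atoms), r)]

def meet(p, q): return p & q                  # Boolean "and"
def join(p, q): return p | q                  # Boolean "or"
def neg(p):     return frozenset(atoms) - p   # Boolean "not"

def is_filter(F):
    """Check the filter axioms: upward closed, closed under meets, proper."""
    F = set(F)
    upward = all(q in F for p in F for q in algebra if p <= q)
    meets  = all(meet(p, q) in F for p in F for q in F)
    proper = frozenset() not in F
    return upward and meets and proper

# The principal filter generated by the atom {0}: every element of the algebra
# containing 0 is declared "true" -- a consistent choice of truth values.
F = {p for p in algebra if 0 in p}
print(is_filter(F))  # True
```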

So, choosing a generic filter for the poset over a model of set theory corresponds to choosing a "generic" theory in the forcing language, the theory consisting of statements in the forcing language that are true in the corresponding generic extension.

Actually, with this approach it is not even necessary to go to a generic extension at all: if you want to show that CH is not provable in ZFC, all you need to do is construct a poset so that, in the corresponding forcing language, all ZFC axioms have truth value 1 in the corresponding complete Boolean algebra and CH does not.
Since everything that follows from ZFC has truth value 1 as well, CH does not follow from ZFC. Of course, the computations and arguments you have to do to push through this program are just as difficult as in the usual approach.

Answered by Stefan Geschke on December 1, 2021

In topos-theoretic terms, I like to describe forcing as follows. You consider a geometric theory whose models are the objects you want to adjoin. For instance, if you want to adjoin an injection from a set A to a set B, then you consider the propositional geometric theory whose basic propositions are of the form $[f(a)=b]$ for some $a\in A$ and $b\in B$, and whose axioms are

  • $\vdash \bigvee_b [f(a)=b]$ for each $a$ (i.e. "$f$ is defined everywhere")

  • $[f(a)=b_1]\wedge [f(a)=b_2] \vdash \bot$ whenever $b_1\neq b_2$ ("$f$ is single-valued")

  • $[f(a_1)=b] \wedge [f(a_2)=b] \vdash \bot$ whenever $a_1\neq a_2$ ("$f$ is injective")

If A has a larger cardinality than B, then this theory has no models in Set. But we can still consider its classifying topos, which may have no points. Like the classifying topos of any theory, this classifying topos contains a "universal model" of the theory (if you like, by the Yoneda lemma: a representing object $x$ of any functor $F$ is determined by a universal element of $F(x)$).

A convenient way to construct the classifying topos of a propositional geometric theory is to consider the poset of finite conjunctions of basic formulae (which is the free finitely-complete category generated by the theory) and equip it with a Grothendieck topology coming from the axioms. In the above example, a finite conjunction of basic propositions is a finite relation from A to B. We can discard those which are not single-valued and injective since the axioms tell us that those are covered by the empty family; thus we obtain a poset of "partial injections," looking like those that set-theorists usually talk about.
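
For concreteness, here is a minimal sketch of that poset of finite partial injections, ordered by reverse inclusion. The finite sets A and B below are made up for illustration only; the interesting case for forcing is of course when no total injection exists.

```python
# Toy sketch: the poset of finite partial injections from A to B, ordered by
# reverse inclusion -- larger conditions are stronger.
from itertools import combinations, permutations

A, B = ['a1', 'a2', 'a3'], ['b1', 'b2']   # made-up finite sets for illustration

def partial_injections(A, B):
    """All injective partial functions from A to B, as frozensets of pairs."""
    conds = []
    for k in range(min(len(A), len(B)) + 1):
        for dom in combinations(A, k):
            for ran in permutations(B, k):
                conds.append(frozenset(zip(dom, ran)))
    return conds

def stronger(p, q):
    """p is a stronger condition than q iff p contains q as a graph."""
    return q <= p

conds = partial_injections(A, B)
print(len(conds))                    # 13 conditions, including the empty (weakest) one
p = frozenset({('a1', 'b1')})
print([dict(q) for q in conds if stronger(q, p)])  # all extensions of p
```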

Finally, if you're a classical set-theorist and want to also have the law of excluded middle, you can consider a Boolean subtopos of this classifying topos. One obvious Boolean topos to look at is the topos of double-negation sheaves on the above poset. Both this and the classifying topos are subtoposes of the presheaf topos on the poset, and if you're lucky (I don't know the exact conditions necessary), then the double-negation subtopos will be contained in the classifying topos, and therefore it will also contain a model of the theory in question (since it has a map to the classifying topos of the theory).

Answered by Mike Shulman on December 1, 2021

The other answers are excellent, but let me augment them by offering an intuitive explanation of the kind you seem to seek.

In most forcing arguments, the main idea is to construct a partial order out of conditions that each consist of a tiny part of the generic object that we would like to add; each condition should make a tiny promise about what the new generic object will be like. The generic filter in effect provides a way to bundle these promises together into a coherent whole having the desired properties.

For example, with the Cohen forcing $\text{Add}(\omega,\theta)$, we want to add $\theta$ many new subsets of $\omega$, in order to violate CH, say. So we use conditions that specify finitely many bits in an $\omega\times\theta$ matrix of zeros and ones. Each condition makes a finite promise about how the entire matrix will be completed. The union of all conditions in the generic filter is a complete filling-in of the matrix. Genericity guarantees that each column of this matrix is a new real not present in the ground model and different from all other columns, since any finite condition can be extended so as to make any particular column disagree with any particular ground-model real or with any other particular column.
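
Here is a minimal sketch of such conditions, purely illustrative, representing a condition as a Python dictionary from pairs $(n,\alpha)$ to bits. It shows the density fact used above: any condition can be extended so that a given column disagrees with a given ground-model real.

```python
# Toy sketch: Cohen conditions for Add(omega, theta) as finite partial
# functions from omega x theta to {0, 1}.

def is_condition(p):
    """p maps pairs (n, alpha) to bits 0/1; finiteness is implicit."""
    return all(isinstance(k, tuple) and len(k) == 2 and v in (0, 1)
               for k, v in p.items())

def extends(q, p):
    """q is stronger than p: q decides every bit that p decides, the same way."""
    return all(k in q and q[k] == p[k] for k in p)

def extend_to_disagree(p, alpha, x):
    """Extend p so that column alpha of the generic matrix differs from the
    ground-model real x (given as a dict n -> bit): the density argument."""
    q = dict(p)
    n = 0
    while (n, alpha) in q:            # find a bit of column alpha not yet decided
        n += 1
    q[(n, alpha)] = 1 - x.get(n, 0)   # decide it so as to disagree with x
    return q

p = {(0, 0): 1, (3, 5): 0}
x = {0: 1, 1: 1, 2: 0}                # a "ground-model real" for illustration
q = extend_to_disagree(p, 0, x)
print(is_condition(q), extends(q, p), q)  # True True, and q makes column 0 differ from x at bit 1
```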

With the collapse forcing $\text{Coll}(\omega,\kappa)$, we want to add a new surjective function $f:\omega\to\kappa$. So we use conditions consisting of the finite partial functions $p:\omega\to\kappa$, ordered by extension. Each such condition is a tiny piece of the generic function we want to add, describing finitely much of it. The union of the generic filter provides a total function $g:\omega\to\kappa$, and the genericity of the filter will guarantee that $g$ is surjective, since for any $\alpha<\kappa$, any condition $p$ can be extended to a stronger condition having $\alpha$ in the range.
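
A minimal sketch of that density argument, again with conditions represented as Python dictionaries (purely illustrative):

```python
# Toy sketch: the density argument for Coll(omega, kappa).  Conditions are
# finite partial functions omega -> kappa.

def extend_to_hit(p, alpha):
    """Any condition p can be extended to one with alpha in its range, so the
    set D_alpha = {p : alpha in range(p)} is dense."""
    if alpha in p.values():
        return dict(p)
    q = dict(p)
    n = 0
    while n in q:              # pick a natural number not yet in the domain
        n += 1
    q[n] = alpha
    return q

p = {0: 7, 1: 42}              # a condition: a finite partial map omega -> kappa
q = extend_to_hit(p, 100)
print(q)                       # {0: 7, 1: 42, 2: 100}; q is stronger than p
```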

And similarly with many other forcing arguments. We design the partial order to consist of tiny pieces of the object that we are trying to add, each of which makes a small promise about the generic object. If $G$ is a generic filter for this partial order, then the union of $G$ is the joint collection of all these promises.

In many forcing arguments, it is not enough just to build a partial order consisting of tiny pieces of the desired object, since one also wants to know that the forcing preserves other features. For example, we want to know that the forcing does not inadvertently collapse cardinals or that it can be iterated without collapsing cardinals. This adds a wrinkle to the idea above, since one wants to use tiny pieces of the generic object, but impose other requirements on the conditions that will ensure that the partial order has a nice chain condition or is proper and so on. So the design of a forcing notion is often a trade-off between these requirements---one must find a balance between simple-mindedly adding pieces of the desired generic object and ensuring that the partial order has sufficiently nice properties that it doesn't destroy too much.

In this sense, I would say that the difficult part of most forcing arguments is not the mastery of the forcing technology, the construction of the generic filter and of the model---although that aspect of forcing is indeed nontrivial---but rather it is the detailed design of the partial order to achieve the desired effect.

Answered by Joel David Hamkins on December 1, 2021

First, a point of clarification: you're not adding the poset but rather forcing with it, and the poset itself remains unchanged after the forcing. The generic object that you're adding is actually a filter on this poset meeting every dense subset (or maximal antichain) of the partial order.

The elements of the partial order, often called conditions in the context of forcing, form the possible components and the generic filter is responsible for collecting them in such a way to add an object of the desired form. In this way, the poset is highly correlated with the object you want to add because if you don't have the correct building blocks, there is no way to filter out the ones you want to use to construct an object of the desired form. The key property of the generic filter is that it consists of a compatible collection of conditions simultaneously meeting every dense subset. You can think of these dense subsets as the individual properties that we are going to want this filter to have. This is all better illustrated with examples:

Forcing to add $\kappa$ many Cohen reals: Here your forcing poset consists of finite partial functions from $\omega \times \kappa$ into $\{0, 1\}$ ordered by reverse inclusion (longer is stronger). The conditions are the finite pieces of the $\kappa$ many reals (subsets of $\omega$) that you're adding. By virtue of consisting of a compatible set of conditions meeting all dense subsets, a generic filter $G$ will pick them out in such a way that $\bigcup G: \omega \times \kappa \rightarrow \{0, 1\}$ is a total function with all of its $\kappa$ columns representing newly added distinct reals. Specifically, the filter part ensures that these conditions can be put together to form a partial function, while the generic part makes sure that we've met all of the requirements imposed by the dense subsets, which include making sure the function is total, and that every column defines a distinct real that's different from every one in the ground model. Because of the simplicity of the forcing poset, the ground model and the forcing extension have the same cardinals, so if $\kappa = \omega_2$, then you've really forced $\lnot$CH to hold in the extension.
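
A minimal sketch of the compatibility requirement just mentioned, with conditions again represented as Python dictionaries (purely illustrative):

```python
# Toy sketch: compatibility of Cohen conditions.  The filter part of genericity
# is exactly the promise that any two conditions in G are compatible, so their
# union is still a partial function.

def compatible(p, q):
    """Two finite partial functions are compatible iff they agree wherever
    both are defined; then their union is again a condition."""
    return all(q[k] == v for k, v in p.items() if k in q)

p = {(0, 0): 1, (1, 0): 0}
q = {(1, 0): 0, (2, 3): 1}
r = {(0, 0): 0}
print(compatible(p, q))   # True: their union is a common strengthening
print(compatible(p, r))   # False: they disagree at coordinate (0, 0)
```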

Forcing to collapse a cardinal $\lambda$ to have size $\kappa$: Here we force with the partial order consisting of partial functions from $\kappa$ into $\lambda$, each having size strictly less than $\kappa$. Again the partial order is ordered by reverse inclusion, so the more the domain is filled in, the stronger the condition (lower in the partial order). A generic filter $G$ will again select these partial functions in such a way that $\bigcup G: \kappa \rightarrow \lambda$ is a total function. By virtue of meeting all dense subsets, $\bigcup G$ will have all elements from $\lambda$ in its range, so it will be a newly added surjection. Consequently, if $\lambda > \kappa$ for $\kappa$ and $\lambda$ cardinals in the ground model (pre-forcing), then $\lambda$ will have been collapsed to be a $\kappa$-sized ordinal in the forcing extension. Because of the poset's closure, we didn't add any ${<}\kappa$-sized subsets of objects from the ground model or collapse any cardinals at or below $\kappa$, and we sometimes use these facts or even stronger properties to argue about truth in the ground model by virtue of truth in the forcing extension.

Sacks forcing: Here we add a "minimal real" by forcing with the partial order consisting of perfect trees of finite binary sequences ordered by inclusion. A generic filter $G$ will thus consist of a collection of trees with longer and longer stems, so that $\bigcap G$ determines a new branch through the tree ${}^{<\omega}2$, which can be identified with a new real.
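
A minimal sketch of the kind of trees involved, using finite sets of binary strings as stand-ins for the (necessarily infinite) perfect trees; the splitting check below is only a finite approximation of perfectness:

```python
# Toy sketch: finite approximations to perfect trees of binary sequences.
# A tree is a set of 0/1-strings closed under initial segments.

def is_tree(T):
    return all(s[:i] in T for s in T for i in range(len(s)))

def splits(T, s):
    """s is a splitting node of T: both of its one-bit extensions lie in T."""
    return s + '0' in T and s + '1' in T

def every_node_reaches_a_split(T):
    """Finite stand-in for perfectness: every non-maximal node of T has a
    splitting node above (or equal to) it inside T."""
    maximal = {s for s in T if s + '0' not in T and s + '1' not in T}
    return all(any(t.startswith(s) and splits(T, t) for t in T)
               for s in T - maximal)

# The full binary tree of height 2, a finite piece of a perfect tree.
T = {'', '0', '1', '00', '01', '10', '11'}
print(is_tree(T), every_node_reaches_a_split(T))   # True True
```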

Prikry forcing: If we have a nonprincipal normal $\kappa$-complete measure on a cardinal $\kappa$, then we can force with the collection of conditions of the form $\langle s, A\rangle$ where $s$ is a finite sequence of ordinals from $\kappa$ and $A$ is a subset of $\kappa$ from the normal measure. $\langle t, B\rangle \leq \langle s, A\rangle$ if $B \subseteq A$ and $t$ is an end extension of $s$ adding only ordinals from $A$ to the range of $t$. Here we won't take the union or the intersection of the generic filter $G$, but only the union of the finite sequences of ordinals in $\kappa$ from the first coordinates of the conditions in $G$. Again, the filter will make sure that this constructed object is a function, and by virtue of meeting all dense subsets, this union will form a countable unbounded sequence in $\kappa$. The importance of using a normal measure is to make sure that no cardinals are collapsed, which is used for showing the relative consistency of the negation of the Singular Cardinal Hypothesis with a measurable cardinal. In this case, we wound up discarding unnecessary parts of the conditions, but they were used to guide the construction. Specifically, they were used to make sure our sequence reached unboundedly high in $\kappa$ without collapsing cardinals.
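
A minimal sketch of the Prikry ordering, with ordinary finite sets standing in for measure-one sets, so this only illustrates the bookkeeping of the order, not the measure itself:

```python
# Toy sketch: the Prikry ordering on conditions <s, A>, with s a finite
# sequence of ordinals and A a set standing in for a measure-one set.

def prikry_leq(t, B, s, A):
    """<t, B> extends <s, A>: B is a subset of A, t end-extends s, and every
    new entry of t comes from A."""
    return (B <= A
            and t[:len(s)] == s
            and all(x in A for x in t[len(s):]))

s, A = (3, 8), {10, 20, 30, 40, 50}
t, B = (3, 8, 10, 30), {40, 50}
print(prikry_leq(t, B, s, A))        # True: a legitimate strengthening
print(prikry_leq((3, 9), A, s, A))   # False: (3, 9) does not end-extend (3, 8)
```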

However, despite the fact that all of these forcing posets (notions) guide what our construction will look like, we can have much less intuitive posets accomplishing the same things. Namely, we can associate each of these partial orders with some strange ordering on the elements of a cardinal and still accomplish the same result. Specifically, all of these forcing extensions would be equivalent to forcing extensions adding a subset of some cardinal.

There are obviously a number of other known forcing notions, but since this post is already long enough and maybe a little too technical for your question, let me just conclude with the main point. Namely, we choose our posets looking ahead to the properties we want our generic object to have. The more complex the generic object that we want to add, the more complex the dense subsets we're going to need in order to make sure that the filter meeting them has all of the desired characteristics. This greater complexity often comes in terms of some combination of larger possible antichain sizes, more dense subsets, less closure, etc. While we can always define a descending sequence of conditions meeting any countable collection of dense subsets or maximal antichains, the genericity of the filter magically meets all of them simultaneously in a compatible way.
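
That last remark alludes to the standard construction of a descending sequence meeting countably many dense sets. Here is a minimal sketch of that construction on Cohen-style conditions, with each dense set given (as an illustrative simplification) by a function that extends a condition into it:

```python
# Toy sketch: given countably many dense sets, build a descending sequence of
# conditions meeting each one in turn.  Conditions here are finite partial
# functions omega -> {0, 1}, as dictionaries.

def descending_sequence(p, dense_extenders):
    """Starting from p, extend step by step into each dense set."""
    seq = [p]
    for extend_into in dense_extenders:
        seq.append(extend_into(seq[-1]))   # each step gives a stronger condition
    return seq

# Dense sets for Cohen forcing on omega: "the n-th bit is decided".
def decide_bit(n):
    def extend(p):
        q = dict(p)
        q.setdefault(n, 0)                 # decide bit n (as 0) if not yet decided
        return q
    return extend

seq = descending_sequence({}, [decide_bit(n) for n in range(5)])
print(seq[-1])   # {0: 0, 1: 0, 2: 0, 3: 0, 4: 0}: meets the first five dense sets
```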

Answered by Jason on December 1, 2021

This is easier to understand in terms of locales. The double-negation locale associated to a forcing is the space in which the generic filter lives. The trouble is that non-trivial double-negation locales do not have points, so a point has to be adjoined to form the generic extension.

In other words, if $X$ is the double-negation locale we want to force with, we would like to have a pair of geometric morphisms $Set \to Sh(X) \to Set$. The second part $Sh(X) \to Set$ is no problem, but the first part $Set \to Sh(X)$ usually doesn't exist. So we expand the universe of sets a bit by adding a generic point to $X$. Note that the double-negation locale $X$ gets destroyed in the final process of generating the generic extension.

Answered by François G. Dorais on December 1, 2021
