## Recap

So far, we have seen two algorithms for \(k\)-CNF-SAT, both of which were based on local search in the space of truth assignments:

- A randomized algorithm with growth rate \(2-2/k\).
- A deterministic algorithm with growth rate \(2-2/(k+1)\).

This time, we discuss the best known algorithm, which is based on a different idea.

## Guess & Conquer algorithm

We present an algorithm which yields the smallest known growth rate for \(k\)-CNF-SAT for each constant \(k\). The algorithm builds the truth assignment incrementally; it sets the value of variables greedily whenever it can, and otherwise it just guesses by picking a random variable and setting its value randomly.

### The Conquer algorithm

The basic idea of the greedy part of the algorithm is simple: whenever we *know* the value that a given variable must have in order for the formula to be satisfied, we can set it to the forced value. For example, if the unit clause \((\bar x)\) occurs in the formula \(F\), then every satisfying assignment \(\alpha:V\to \{0,1\}\) of \(F\) must have \(\alpha(x)=0\). In general, we say that a variable \(x\) is *frozen* in \(F\) if there is a “forced value” \(b\in\{0,1\}\) such that all satisfying assignments \(\sigma\in\text{sat}(F)\) have \(\sigma(x)=b\).

To describe the algorithm, consider a partial assignment \(\alpha:V\to\{0,1,{?}\}\) that we have constructed so far and that we want to extend. Let \(F\vert_\alpha\) be the formula \(F\) where every occurrence of a variable \(x\) with \(\alpha(x) \in\{0,1\}\) has been replaced by \(\alpha(x)\); moreover, every occurrence of \(0\) in a clause has been removed and every clause containing a \(1\) is removed as well. Then `conquer`\((F,\alpha)\) works as follows:

- While some variable \(x\) is *detected as frozen* in \(F\vert_\alpha\):
  - Set \(\alpha(x)\) to the forced value
- Return the (partial) assignment \(\alpha\)

In principle, we could use any polynomial-time algorithm to try to detect frozen variables along with their forced values. As mentioned before, the most basic idea is probably to just look for unit clauses. Unit clauses are subformulas that are induced by a single variable, and so a natural, slightly more sophisticated idea is to look at subformulas induced by a larger set of variables. For a set \(V'\subseteq V\), we define the subformula of \(F\) *induced by* \(V'\) as the formula

\[ F'=F[V'] := \{C \in F \,\vert\, V(C)\subseteq V'\} \]

That is, \(F'\) consists of all clauses \(C\) whose variables \(V(C)\) are fully contained in \(V'\).
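In code, the induced subformula is a one-line filter. The following sketch assumes a hypothetical representation in which a clause is a `frozenset` of signed integers, with `+x` standing for the literal \(x\) and `-x` for \(\bar x\):

```python
from typing import FrozenSet, List

Clause = FrozenSet[int]  # literals as nonzero ints: +x for x, -x for its negation

def variables(clause: Clause) -> FrozenSet[int]:
    """V(C): the set of variables occurring in a clause."""
    return frozenset(abs(l) for l in clause)

def induced(F: List[Clause], Vp: FrozenSet[int]) -> List[Clause]:
    """F[V']: all clauses whose variables are fully contained in V'."""
    return [C for C in F if variables(C) <= Vp]
```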

For some global constant integer \(s\) to be chosen later, we consider all formulas that are induced by a set \(V'\) of size at most \(s\). For such a subformula, we can compute all satisfying assignments using exhaustive search, and we can easily find all variables that are frozen. For the purposes of our `conquer` algorithm, we say that a variable \(x\) is *detected as frozen* in \(F|_\alpha\) if there exists a set \(V' \subseteq V(F|_\alpha)\) of size at most \(s\) such that \(x\) is frozen in \((F|_\alpha)[V']\).

We leave the following observation about the correctness of the procedure as an exercise.

**Exercise (Correctness of conquer).** We say that two partial assignments \(\alpha\) and \(\sigma\) are *consistent* if for all \(x\) we have \(\sigma(x)={?}\), \(\alpha(x)={?}\), or \(\alpha(x)=\sigma(x)\). Prove that, if \(\alpha\) is a partial assignment that is consistent with a satisfying assignment \(\sigma\) of \(F\), then \(\alpha'=\)`conquer`\((F,\alpha)\) is also consistent with \(\sigma\).

For the running time of `conquer`, note that each candidate subformula \(F'\) has at most \(s\) variables, and so our procedure to detect a variable as frozen runs in time \(\sim\binom{n}{s} \cdot 2^{s}\). This is polynomial whenever \(s\) is a constant.
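The exhaustive detection procedure can be sketched as follows. The representation (clauses as frozensets of signed integers) and the helper names `sat_assignments` and `detect_frozen` are assumptions for illustration, not part of any library:

```python
from itertools import combinations, product
from typing import Dict, FrozenSet, Iterator, List

Clause = FrozenSet[int]  # +x for the literal x, -x for its negation

def induced(F: List[Clause], Vp: FrozenSet[int]) -> List[Clause]:
    """F[V']: clauses whose variables all lie in V'."""
    return [C for C in F if frozenset(abs(l) for l in C) <= Vp]

def sat_assignments(F: List[Clause], V: FrozenSet[int]) -> Iterator[Dict[int, int]]:
    """Yield every assignment V -> {0,1} satisfying all clauses of F (brute force)."""
    order = sorted(V)
    for bits in product((0, 1), repeat=len(order)):
        a = dict(zip(order, bits))
        if all(any(a[abs(l)] == (l > 0) for l in C) for C in F):
            yield a

def detect_frozen(F: List[Clause], V: FrozenSet[int], s: int) -> Dict[int, int]:
    """Map each variable detected as frozen (via some |V'| <= s) to its forced value."""
    forced: Dict[int, int] = {}
    for r in range(1, s + 1):
        for Vp in combinations(sorted(V), r):
            Vp = frozenset(Vp)
            sats = list(sat_assignments(induced(F, Vp), Vp))
            if not sats:
                continue  # subformula unsatisfiable: nothing is meaningfully forced
            for x in Vp:
                vals = {a[x] for a in sats}
                if len(vals) == 1:  # x takes the same value in all sat. assignments
                    forced[x] = vals.pop()
    return forced
```

With \(s=1\) this detects exactly the unit clauses; larger \(s\) also catches values forced only by the interplay of several clauses.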

### The overall algorithm

After applying `conquer`, the `guess` phase selects a single random variable and sets it randomly. Then `conquer` and `guess` are performed in alternation until all variables are set.

For the analysis of the algorithm, it is convenient to make the random choices globally one time as opposed to on-the-fly. More formally, the algorithm `GC`\((F)\) gets a formula \(F\) with variable set \(V=V(F)\) as input, and works as follows:

- Pick a random assignment \(\beta:V \to \{0,1\}\)
- Pick a random permutation \(\pi\) over \(V\)
- Run `GC`\((F,\pi,\beta)\)

The subroutine `GC`\((F,\pi,\beta)\) is completely deterministic, and it constructs an assignment \(\alpha:V\to\{0,1\}\) iteratively as follows:

- We start with \(\alpha(x) := {?}\) for all \(x\).
- For all \(x\in V\) in the order of \(\pi\):
  - \(\alpha := {}\)`conquer`\((F,\alpha)\)
  - If \(\alpha(x)= {?}\), then “guess” \(x\) by setting \(\alpha(x):=\beta(x)\).
- Return \(\alpha\).
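The loop can be sketched in code. Here `conquer` is simplified to the \(s=1\) case (unit propagation only), and the representation — clauses as frozensets of signed integers, partial assignments as dicts with `None` playing the role of “?” — is an assumption for illustration:

```python
import random
from typing import Dict, FrozenSet, List, Optional

Clause = FrozenSet[int]  # +x for the literal x, -x for its negation

def restrict(F: List[Clause], alpha: Dict[int, Optional[int]]) -> List[Clause]:
    """F|_alpha: drop satisfied clauses, remove falsified literals."""
    G = []
    for C in F:
        if any(alpha.get(abs(l)) == (l > 0) for l in C):
            continue  # clause contains a literal set to 1: remove it entirely
        G.append(frozenset(l for l in C if alpha.get(abs(l)) is None))
    return G

def conquer(F: List[Clause], alpha: Dict[int, Optional[int]]) -> Dict[int, Optional[int]]:
    """Greedy phase, simplified to s = 1: repeatedly set unit-clause variables."""
    alpha = dict(alpha)
    while True:
        unit = next((C for C in restrict(F, alpha) if len(C) == 1), None)
        if unit is None:
            return alpha
        (l,) = unit
        alpha[abs(l)] = 1 if l > 0 else 0  # the forced value

def gc(F: List[Clause], beta: Dict[int, int], pi: List[int]) -> Dict[int, Optional[int]]:
    """The deterministic subroutine GC(F, pi, beta)."""
    alpha: Dict[int, Optional[int]] = {x: None for x in pi}
    for x in pi:
        alpha = conquer(F, alpha)
        if alpha[x] is None:
            alpha[x] = beta[x]  # guess x according to beta
    return alpha

def guess_and_conquer(F: List[Clause], V: List[int]) -> Dict[int, Optional[int]]:
    """GC(F): draw pi and beta at random, then run the deterministic subroutine."""
    pi = random.sample(sorted(V), len(V))
    beta = {x: random.randint(0, 1) for x in V}
    return gc(F, beta, pi)
```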

The running time of `GC` is polynomial. For the soundness, note that `GC`\((F)\) cannot find a satisfying assignment if \(F\) is unsatisfiable. Thus it remains to analyze the completeness case of the algorithm: For satisfiable formulas \(F\), we define the success probability on input \(F\) as

\[ p_{\text{success}}(F) := \Pr_{\pi,\beta} \Big({\sf GC}(F,\pi,\beta) \in \text{sat}(F)\Big)\,. \]

Here \(\text{sat}(F)\) is the set of satisfying assignments of \(F\). It remains to prove a lower bound on \(p_{\text{success}}\).

## Uniquely Satisfiable Formulas

To get a sense for how the analysis might work, let us first consider the special case in which \(F\) has exactly one satisfying assignment. So let \(\text{sat}(F) = \{\sigma\}\).

### Critical Clauses

We exploit the following special property of uniquely satisfiable formulas: For all variables \(x\in V\), there exists a “critical clause” \(C_x \in F\) so that

\[C_x = \ell_x \vee \ell_2 \vee \dots \vee \ell_k\]

where \(\ell_x \in \{x,\bar x\}\), \(\sigma(\ell_x) = 1\), and \(\sigma(\ell_2\vee\dots\vee\ell_k)=0\) hold. In other words, \(C_x\) is a clause whose only true literal under \(\sigma\) is a literal of \(x\). Such a clause must exist: if there were no such clause, then every clause in which \(x\) occurs as a true literal would also have a second true literal under \(\sigma\); this, however, means that we can flip the value of \(x\) and end up with the assignment \((\sigma\oplus \{x\}) \neq \sigma\), which also satisfies \(F\), contradicting unique satisfiability. Here we write \(\sigma'=(\sigma \oplus S)\) for \(S\subseteq V\) for the assignment

\[\sigma'(x') = \begin{cases} \neg \sigma(x') & \text{if $x' \in S$, and}\\ \sigma(x') & \text{otherwise.} \end{cases} \]

Thus, because \(F\) is uniquely satisfiable, every variable has such a critical clause.
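Written out for assignments stored as a dict from variables to bits (a hypothetical representation chosen for illustration), the flip operator is:

```python
from typing import Dict, Set

def flip(sigma: Dict[int, int], S: Set[int]) -> Dict[int, int]:
    """sigma ⊕ S: flip the value of every variable in S, keep the rest."""
    return {x: (1 - b if x in S else b) for x, b in sigma.items()}
```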

### Analysis

It remains to prove a lower bound on the success probability. Critical clauses are useful in the `conquer` phase of the algorithm, even when we choose the global constant as \(s=1\), which corresponds to merely detecting unit clauses. A critical clause \(C_x\) for \(x\) leads `conquer` to set the value of \(\alpha(x)\) to its correct value \(\sigma(x)\) if the following two events occur simultaneously:

- The random order \(\pi\) selects \(x\) as the last variable in \(C_x\), and
- When entering `conquer` just before the possible `guess` phase for \(x\), the assignment \(\alpha\) computed so far is defined on the variables of all literals \(\ell_2,\dots,\ell_k\) and coincides there with \(\sigma\).

**Observation.** If the two events occur simultaneously, the `conquer` phase of `GC` is guaranteed to set \(\alpha(x)\) to the correct value \(\sigma(x)\).

Now let \(V_{\text{Last}}(F,\pi)\) be the set of variables \(x\in V(F)\) that occur last, with respect to \(\pi\), in their critical clause \(C_x\). Then we can estimate the success probability as follows:

\[ \begin{align*} p_{\text{success}}(F) &= \mathbf E_{\pi}\Pr_\beta [{\sf GC}(F,\pi,\beta)=\sigma] \\ &\geq \mathbf E_{\pi}\Pr_\beta [\forall x\not\in V_{\text{Last}}(F,\pi). \beta(x)=\sigma(x) ] \\ & = \mathbf E_{\pi} 2^{\vert V_\text{Last}(F,\pi)\vert - n}\\ & \geq 2^{\mathbf E_{\pi} \vert V_\text{Last}(F,\pi)\vert - n}\\ & = 2^{-(1-\frac{1}{k})n}\,. \end{align*} \]

The first inequality follows from the observation we just made about critical clauses: When we condition on the fact that \(\beta(x)=\sigma(x)\) holds for all \(x\not\in V_{\text{Last}}(F,\pi)\), then for all \(x\not\in V_{\text{Last}}(F,\pi)\) we have that \(\alpha(x)\) is either “guessed” correctly as \(\alpha(x):=\beta(x)=\sigma(x)\), or else its value is determined as \(\alpha(x):=\sigma(x)\) in the `conquer` phase, which follows from the exercise about `conquer`’s correctness. On the other hand, if \(x\in V_{\text{Last}}(F,\pi)\), the observation above again gives us that \(\alpha(x):=\sigma(x)\) is determined in `conquer`.

The second inequality is **Jensen’s inequality**. For the last equality, note that the probability over \(\pi\) that \(x\) is last in \(C_x\) is \(\frac{1}{k}\). Thus the expected number of variables that occur last in their critical clauses is \(\frac{n}{k}\).
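The last step can be sanity-checked exactly: restricted to the \(k\) variables of \(C_x\), a uniform permutation of all variables induces a uniform relative order, so a fixed variable is last with probability \(1/k\). A small exhaustive check (`prob_last` is an illustrative helper, not from any library):

```python
from fractions import Fraction
from itertools import permutations

def prob_last(k: int) -> Fraction:
    """Probability, over a uniform order of k positions, that a fixed one is last."""
    orders = list(permutations(range(k)))
    hits = sum(1 for pi in orders if pi.index(0) == k - 1)  # element 0 comes last
    return Fraction(hits, len(orders))
```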

The algorithm and analysis presented here are from Paturi, Pudlák, and Zane (1999), and they lead to a bounded-error algorithm of running time \(2^{(1-\frac{1}{k})n}\). While the analysis sheds some light on how we will proceed in the general case, it is quite crude and does not lead to a smaller growth rate than the algorithm from Lecture 1, since \(2-2/k < 2^{1-1/k}\). To improve upon this running time, we simply need to increase the constant \(s\) to some large value depending on \(k\); the analysis, however, becomes much more involved.

## More general analysis

Recall that a frozen variable has the same value in all satisfying assignments. We will be able to establish a trade-off between the following two extreme situations:

- All variables are frozen. Then \(F\vert_\alpha\) is uniquely satisfiable and we win by using a tighter analysis that exploits the fact that \(s\) is a constant bigger than one.
- No variable is frozen. Then we can `guess` the next variable arbitrarily and we win because we will get a formula \(F\vert_\alpha\) that is still satisfiable, and so guessing such a variable cannot decrease the success probability.

Let us first deal with frozen variables before we establish the trade-off.

### Frozen Variables

For a variable \(x\) that is frozen in \(F\), we want to describe situations in which we can deduce its value in `conquer`. That is, we want to find subformulas \(F'\) that are induced by at most \(s\) variables and in which \(x\) is already frozen. Similar to the basic analysis for uniquely satisfiable formulas, we will then prove that these situations occur with relatively high probability.

So let \(F\) be a satisfiable formula with a satisfying assignment \(\sigma\), and let \(x\) be a variable. We consider the event \(E(\pi,\beta)\) that, at the point in time when \(x\) is considered in the loop of `GC`\((F,\pi,\beta)\), the following properties hold simultaneously:

- the partial assignment \(\alpha\) that we have computed so far is consistent with \(\sigma\), and
- there exists a set \(V'\subseteq V(F\vert_\alpha)\) of size at most \(s\) such that \(x\) is frozen in the subformula \((F\vert_\alpha)[V']\).

We make the following observation about the event \(E(\pi,\beta)\), the proof of which is an exercise.

**Exercise.** If \(E(\pi,\beta)\) holds, then `GC`\((F,\pi,\beta)\) outputs an assignment \(\alpha\) with \(\alpha(x)=\sigma(x)\).

We want to prove that \(p_{F,x,\sigma}:=\Pr_{\pi,\beta}(E(\pi,\beta))\) is quite large so that we can give a lower bound on the overall success probability \(p_{\text{success}}\).

### Implication Trees

For the analysis, we will construct an implication tree \(T\) (or “critical clause tree”) for a frozen variable \(x\) and a satisfying assignment \(\sigma\) of \(F\). Each vertex of \(T\) is labeled by a literal \(\ell\) that is satisfied by \(\sigma\), that is, \(\sigma(\ell)=1\); in particular, the root is labeled by a literal \(\ell_x \in \{x,\bar x\}\). Additionally, leaves can be marked as *closed*, and if they are not marked as closed we call them *open*. Non-leaves have at most \(k-1\) children, closed leaves are at depth less than \(d\), and open leaves are at depth exactly \(d\). Here \(d\) is a constant to be chosen later; think of \(d > k\) and \(s = k^d\). Finally, we will construct the tree in such a way that it satisfies the following property.

**Implication Tree Property.** Let \(F'=F[V(T)]\). For all partial assignments \(\alpha : V(T) \to \{0,1,?\}\) that are consistent with \(\sigma\) and that *cover all open branches*, we either have \(\alpha(\ell_x)=1\) or the variable \(x\) is frozen in the formula \(F'\vert_\alpha\).

Here, we say that \(\alpha\) *covers all open branches* if, for all paths \(P\subseteq V(T)\) in \(T\) that lead from the root \(\ell_x\) to an open leaf of \(T\), there is at least one literal \(\ell\in P\) with \(\alpha(\ell)\in\{0,1\}\). That is, all root to open leaf paths lead through at least one vertex at which \(\alpha\) is defined. Note that, since \(\alpha\) is assumed to be consistent with \(\sigma\), we even have \(\alpha(\ell)=\sigma(\ell)=1\).
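Assuming a hypothetical tree representation where each node is a dict with keys `lit`, `children`, and `closed`, and partial assignments are dicts from variables to bits (undefined variables absent or `None`), the covering condition can be checked recursively:

```python
from typing import Dict, Optional

def covers_open_branches(tree: dict, alpha: Dict[int, Optional[int]]) -> bool:
    """True iff every root-to-open-leaf path contains a vertex whose variable
    is defined under alpha (closed leaves impose no requirement)."""
    if alpha.get(abs(tree["lit"])) is not None:
        return True  # every path through this vertex is covered here
    if not tree["children"]:
        return tree["closed"]  # an uncovered open leaf fails the condition
    return all(covers_open_branches(c, alpha) for c in tree["children"])
```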

### Construction of Implication Trees

For given \(F\), \(x\), and \(\sigma\) where \(x\) is frozen in \(F\) and \(\sigma\) is a satisfying assignment of \(F\), we now construct an implication tree \(T=T(F,x,\sigma)\).

Initially, note that there must be a clause

\[C_x=\Big((\ell_2\wedge\dots\wedge\ell_k)\rightarrow\ell_x\Big)=\Big(\ell_x \vee \bar\ell_2\vee\dots\vee\bar\ell_k\Big)\]

for which \(\ell_x\in\{x,\bar x\}\) is the only literal of \(C_x\) that is satisfied by the assignment \(\sigma\). If there was no such clause, \((\sigma\oplus\{x\})\) would also be a satisfying assignment, and \(x\) would not be frozen in \(F\). We start the construction of the tree by letting \(\ell_x\) be the root of the tree and \(\bar\ell_2,\dots,\bar\ell_k\) be its children. Let us call this initial tree \(T_1\). Note that we negated the other literals because we want to maintain the property that \(\sigma(\ell)=1\) holds for all literals in the tree. Moreover, it is easy to see that the implication property holds since all leaves are open. We will “grow” the tree iteratively by adding new children to leaves.

Assume we have constructed \(T_{i}\) for some \(i\geq 1\). Let \(\ell\) be any open leaf whose depth in \(T_{i}\) is \(<d\). If there is no such leaf, the construction terminates, and we set \(T=T_i\). Otherwise let \(P \subseteq V(T_{i})\) be the unique path from \(\ell_x\) to \(\ell\) in \(T_i\), so \(\ell_x,\ell \in P\) and \(|P| < d\). Because \(x\) is frozen, the assignment \(\sigma \oplus P\) is not a satisfying assignment of \(F\). Therefore, there must be a clause \(C_P\) such that the literals in \(C_P\cap P\) are the only literals of \(C_P\) set to true under \(\sigma\); that is, \(\sigma(\ell')=0\) holds for all \(\ell'\in C_P \setminus P\) and \(C_P\cap P \neq \emptyset\). Moreover, among all such clauses \(C_P\), we select one that minimizes \(|C_P \setminus P|\). Now for each \(\ell'\in C_P\setminus P\), we add \(\overline{\ell'}\) as a child to \(\ell\). If \(C_P \setminus P\) is empty, then we do not add any children; instead, we mark the leaf as *closed*. Let us call the new tree \(T_{i+1}\).
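The construction can be sketched in code. The node representation (dicts with keys `lit`, `children`, `closed`) and the clause representation (frozensets of signed integers, `+x` for \(x\), `-x` for \(\bar x\)) are assumptions for illustration; ties among minimizing clauses are broken arbitrarily:

```python
from typing import Dict, FrozenSet, List

Clause = FrozenSet[int]

def sat_lit(sigma: Dict[int, int], l: int) -> bool:
    """Is the literal l satisfied by the total assignment sigma?"""
    return sigma[abs(l)] == (l > 0)

def build_tree(F: List[Clause], x: int, sigma: Dict[int, int], d: int) -> dict:
    """Grow an implication tree for x, assumed frozen in the satisfiable F."""
    lx = x if sigma[x] == 1 else -x  # root literal, true under sigma
    root = {"lit": lx, "children": [], "closed": False}

    def grow(node: dict, path: FrozenSet[int], depth: int) -> None:
        if depth == d:
            return  # leave the leaf open at depth d
        # Clauses whose sigma-true literals all lie on the path: exactly the
        # clauses falsified by sigma ⊕ P (each has a true literal, so C ∩ P ≠ ∅).
        candidates = [C for C in F
                      if all(l in path for l in C if sat_lit(sigma, l))]
        C = min(candidates, key=lambda C: len(C - path))  # minimize |C_P \ P|
        rest = C - path  # the literals of C_P outside P; all false under sigma
        if not rest:
            node["closed"] = True  # no children to add: mark the leaf closed
            return
        for l in rest:  # l is false under sigma, so its negation -l is true
            child = {"lit": -l, "children": [], "closed": False}
            node["children"].append(child)
            grow(child, path | {-l}, depth + 1)

    grow(root, frozenset({lx}), 0)
    return root
```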

**Exercise.** Prove that if \(T_i\) has the implication property, then so does \(T_{i+1}\).

In the next lecture, we will use the implication tree to obtain a lower bound on the success probability of the Guess & Conquer algorithm.