### Complexity.

What is a randomized algorithm?

What is an algorithm?

- Turing Machines. RAM with large ints. Log-cost RAM as TM.
- Language as decision problem (vs. optimization problems): "graphs with small min-cut." Algorithms accept/reject.
- Complexity class as a set of languages.
- $P$: polynomial time in input size.
- $NP$ as $P$ with a good advice string: **witnesses**.
- Polytime reductions; hardness, completeness.

**random**

- measure probabilities over the space of advice strings
- equivalence to flipping unbiased random bits

**Las Vegas algorithms**

- polynomial *expected* time
- $A(x)$ accepts iff $x \in L$ (never errs)

**Monte Carlo algorithms** with one-sided error: $RP$

- polytime (always)
- $x \not\in L \Rightarrow$ rejects (always)
- $x \in L \Rightarrow$ accepts with probability $>1/2$
- the precise constant is unimportant: **amplification** by independent repetition (min-cut example)
- $coRP$: same, with the roles of membership and nonmembership swapped
- What if the Las Vegas algorithm is **not** worst-case polytime? Truncate: stop when it passes the time bound and accept. By Markov's inequality, a run truncated at twice the expected time fails to finish with probability at most $1/2$, so the error is one-sided.
- $ZPP=RP \cap coRP$ (proof later)
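As a concrete instance of one-sided error and amplification (not from the notes, but standard): the Miller-Rabin primality test puts COMPOSITES in $RP$, where "accept" means "composite." The function names below are mine.

```python
import random

def composite_witness(n: int) -> bool:
    """One Miller-Rabin trial (n odd, n >= 5). Returns True only when a
    witness certifies n composite; if n is prime it ALWAYS returns
    False, so the error is one-sided."""
    d, s = n - 1, 0
    while d % 2 == 0:       # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    a = random.randrange(2, n - 1)
    x = pow(a, d, n)
    if x == 1 or x == n - 1:
        return False        # no witness this round: "probably prime"
    for _ in range(s - 1):
        x = pow(x, 2, n)
        if x == n - 1:
            return False
    return True             # a is a witness: n is certainly composite

def is_composite(n: int, k: int = 20) -> bool:
    """Amplification: k independent trials drive the one-sided error
    from <= 1/4 per trial down to <= 4**-k."""
    return any(composite_witness(n) for _ in range(k))
```

Equivalently, PRIMES is in $coRP$: a "prime" answer can be wrong, a "composite" answer never is.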

$PP$ (probabilistic polytime, two-sided Monte Carlo)

- worst-case polytime (can force)
- $x \in L \Rightarrow$ accepts with probability $>1/2$
- $x \notin L \Rightarrow$ accepts with probability $< 1/2$
- weakness: $NP \subseteq PP$ (the gap around $1/2$ may be exponentially small, so amplification fails)

$BPP$ (bounded-error probabilistic polytime)

- worst-case polytime (can force)
- $x \in L \Rightarrow$ accepts with probability $>3/4$
- $x \notin L \Rightarrow$ accepts with probability $< 1/4$
- precise numbers unimportant: amplify by majority vote
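A quick simulation of majority-vote amplification (a sketch with made-up names; the base algorithm is a stand-in answering correctly with probability $3/4$):

```python
import random

def base_run(p_correct: float = 0.75) -> bool:
    """Stand-in for one run of a BPP algorithm: answers correctly with
    probability p_correct (here 'True' is the correct answer)."""
    return random.random() < p_correct

def majority_run(trials: int = 41, p_correct: float = 0.75) -> bool:
    """Amplify by independent repetition and majority vote; by a
    Chernoff bound the error drops exponentially in `trials`."""
    correct = sum(base_run(p_correct) for _ in range(trials))
    return correct > trials // 2

def error_rate(alg, runs: int = 2000) -> float:
    return sum(not alg() for _ in range(runs)) / runs

random.seed(0)
single_err = error_rate(base_run)         # about 0.25
amplified_err = error_rate(majority_run)  # far smaller
```

This is why the $3/4$ vs. $1/4$ thresholds are arbitrary: any constant gap around $1/2$ can be amplified the same way.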

Clearly $P \subseteq RP \subseteq NP$. Open questions:

- $RP=coRP$? (equiv $RP=ZPP$)
- $BPP \subseteq NP$?

Complexity note

- the model assumes a source of random bits
- we will assume primitives: biased coins, uniform sampling
- in the homework, we saw these are equivalent
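One direction of that equivalence, sketched in code (my own illustration, not the homework solution): a coin of any bias $p$ can be simulated from fair bits by comparing a uniform real, generated bit by bit, against the binary expansion of $p$.

```python
import random

def biased_coin(p: float) -> bool:
    """Return True with probability p using only fair coin flips.
    Idea: draw a uniform U in [0,1) one bit at a time and compare it
    against the binary expansion of p; the first differing bit decides
    the outcome, so only 2 fair flips are needed in expectation."""
    while True:
        u_bit = random.getrandbits(1)   # next bit of U: one fair flip
        p *= 2                          # shift out the next bit of p
        p_bit = 1 if p >= 1 else 0
        p -= p_bit
        if u_bit < p_bit:
            return True                 # U < p
        if u_bit > p_bit:
            return False                # U > p

random.seed(1)
freq = sum(biased_coin(0.3) for _ in range(20000)) / 20000  # near 0.3
```

Note the expected number of fair flips is 2 regardless of $p$, since each round ends with probability $1/2$.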

### Adleman's Theorem.

Consider $RP$ (one sided error)

- Does randomness help?
- In practice YES
- in one theory model, no
- in another, yes!
- in another, maybe

- Size-$n$ problems ($2^n$ of them)
- matrix of advice rows by input columns
- some advice row **witnesses** half the problems (averaging: each column in $L$ is witnessed by more than half the rows)
- delete that row and all the problems it witnesses
- remaining matrix still has the $RP$ property, so the argument repeats
- halves the number of inputs; repeat $n$ times

**Result:** for an $RP$ problem at input size $n$, there exist $n$ witnesses (advice strings) that cover all inputs in $L$.

- polytime algorithm: try all $n$ witnesses
- nonuniformity: the witnesses exist but are not known
- $RP \subseteq P/poly$
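The halving step can be simulated on a toy advice-by-input matrix (all names and parameters below are illustrative, with $n=6$ and 200 advice strings):

```python
import random

def halving_advice(M):
    """Greedy form of the halving step in Adleman's argument:
    M[r][c] == 1 means advice string r witnesses input c. While any
    input is uncovered, pick the row witnessing the most remaining
    inputs (by averaging it covers at least half of them, since every
    column is witnessed by more than half the rows), delete those
    inputs, and repeat."""
    remaining = set(range(len(M[0])))
    chosen = []
    while remaining:
        best = max(range(len(M)),
                   key=lambda r: sum(M[r][c] for c in remaining))
        chosen.append(best)
        remaining = {c for c in remaining if not M[best][c]}
    return chosen

random.seed(2)
n = 6
rows, cols = 200, 2 ** n   # advice strings x inputs of size n
# each input is witnessed by ~3/4 of the advice strings (amplified RP)
M = [[int(random.random() < 0.75) for _ in range(cols)] for _ in range(rows)]
advice = halving_advice(M)  # a handful of rows cover all 2**n inputs
```

Since each round at least halves the uncovered inputs, at most $\log_2 2^n = n$ rows are chosen.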

Aside: oblivious versus nonoblivious (adaptive) adversaries and algorithms.

## Game Trees

### Tree evaluation.

Deterministic model: must examine all leaves; time $2^h=4^{h/2}=n$.

- by induction: on any tree of height $h$, as questions are
asked, can answer such that root is not determined until all
leaves checked.
- Keep both subtrees ambiguous until one has all leaves checked
- When first subtree is “finished”, make it a win (for that child)
- Which means outcome depends on other subtree

- Note: bad instance being constructed on the fly as algorithm runs.
- But, since algorithm deterministic, bad instance can be built in advance by simulating algorithm.

Nondeterministic eval:

- $W(0)=L(0)=1$
- a winning position can guess the winning move: $W(h) = L(h-1)$
- a losing position must check both children: $L(h)=2W(h-1)$
- it follows that $W(h)=2W(h-2) = 2^{h/2} = n^{1/2}$

Randomized eval:

- $W(0)=1$
- $W(T)$ is a random variable
- if $T$ is winning, $W(T)$ is the time it takes to verify that $T$ is a win; undefined if $T$ is losing
- ditto $L(T)$ for losing trees
- expectation is over the random choices of the algorithm, NOT over trees
- different trees have different expectations

- $W(h)=$ max over all height-$h$ winning trees of $E[W(T)]$
- $L(h)=$ same for losing trees.
- Consider any losing height-$h$ tree
- it is losing iff both children are winning
- must eval both.
- each takes at most $W(h-1)$ in expectation
- Thus (by linearity of expectation) we take at most $2W(h-1)$
- Deduce $L(h) \le 2W(h-1)$.

- Consider any winning height-$h$ tree
- Possibly both children are losing.
- If so, we stop after evaluating the first child we pick.
- Total time $L(h-1).$

- If exactly one child losing, two cases:
- if first choice is losing, eval it and stop: time at most $L(h-1)$.
- if first choice is winning, eval both children: $L(h-1)+W(h-1)$.
- So, $W(h) \le \frac12 L(h-1) + \frac12 (W(h-1)+L(h-1))$
- That is, $W(h) \le L(h-1)+\frac12 W(h-1)$
- this bound is worse than the $L(h-1)$ from the both-losing case,
- so it dominates the recurrence

- Loose approximation: $W(h-1) \le L(h-1)$, so $W(h) \le \frac32 L(h-1) \le 3W(h-2)$
- conclude $W(h) \le 3^{h/2} = n^{\log_4 3} = n^{0.793}$
- Tighter analysis
- $W(h) \le 2W(h-2)+\frac12W(h-1)$ (using $L(h-1) \le 2W(h-2)$)
- Characteristic equation: $x^2-\frac{x}{2}-2=0$
- Its positive root is $\alpha=\frac14(1+\sqrt{33})$, so $W(h) = O(\alpha^h)$
- Recall $h=\log_2 n$
- So $W(n)\le\alpha^{\log_2 n}=n^{\log_2 \alpha} = n^{0.753\ldots}$

- The analysis is basically pushing $E[\cdot]$ through a linear recurrence, relying on linearity of expectation.
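The algorithm itself is short. Below is a sketch (my own code): evaluate the NAND tree recursively, probing a uniformly random child first and short-circuiting when it loses. The demo uses random leaf values, not the adversarial trees from the lower bound, so the probe count is only illustrative.

```python
import random

def eval_nand(leaves, lo, hi, counter):
    """Evaluate the NAND (win/loss) tree over leaves[lo:hi]: probe a
    uniformly random child first; if it evaluates to 0 (a loss), this
    node is a win and the other child is never examined.
    counter[0] counts leaves probed."""
    if hi - lo == 1:
        counter[0] += 1
        return leaves[lo]
    mid = (lo + hi) // 2
    kids = [(lo, mid), (mid, hi)]
    random.shuffle(kids)                 # the only randomness used
    if not eval_nand(leaves, *kids[0], counter):
        return True                      # a losing child makes a win
    return not eval_nand(leaves, *kids[1], counter)

random.seed(3)
h = 10
n = 2 ** h
total = 0
for _ in range(50):
    leaves = [random.random() < 0.5 for _ in range(n)]
    counter = [0]
    eval_nand(leaves, 0, n, counter)
    total += counter[0]
avg = total / 50   # average leaves probed, well below n = 1024
```

The short-circuit on a losing child is exactly the case analysis above: a winning node needs only one (losing) child evaluated, a losing node needs both.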

Note: Changed presentation from book.

- We used a "game tree" with win/loss values
- So if a win is denoted by 1 and a loss by 0, the function at each node is NAND
- MR uses a "MIN/MAX tree" with $d$ "rounds" (one move per player)
- This corresponds to a win/loss tree of height $2d$ (the roles of 0/1 in MIN/MAX get alternately flipped on the W/L levels)

Summary:

- Deterministic eval costs $2^h = n$
- Nondeterministic eval costs $2^{h/2}\approx (1.414)^h = n^{1/2}$
- Randomized is "in between" at $\left(\frac{1+\sqrt{33}}{4}\right)^h \approx (1.686)^h \approx n^{0.753}$