Schnapsen Engine

Rules

Look up the rules here.

Basic Idea

Solve Schnapsen with all cards open (perfect information) using alpha-beta search. This can be done in a fraction of a second.
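A minimal sketch of such a solver, assuming a hypothetical `GameState` interface with `is_terminal`, `score`, `legal_moves`, `play`, and `player_to_move` (these names are illustrative, not the engine's actual API):

```python
def alpha_beta(state, alpha, beta, us):
    """Game value of `state` from player `us`'s point of view."""
    if state.is_terminal():
        return state.score(us)                 # game points for `us`, in -3..3
    maximizing = state.player_to_move() == us
    value = float("-inf") if maximizing else float("inf")
    for move in state.legal_moves():
        child_value = alpha_beta(state.play(move), alpha, beta, us)
        if maximizing:
            value = max(value, child_value)
            alpha = max(alpha, value)
        else:
            value = min(value, child_value)
            beta = min(beta, value)
        if alpha >= beta:                      # prune: the opponent avoids this branch
            break
    return value
```

The sketch keeps the maximizing player explicit rather than using the negamax convention, because in Schnapsen the trick winner leads the next trick, so the player to move does not strictly alternate.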

In the real game, make every possible assignment of the unknown cards, solve each one as above, and select the move that works best on average. We select the move with the smallest probability of losing; we could also use the greatest expected score.
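A sketch of this selection loop, reusing `alpha_beta` from above; `with_hidden_cards` is a hypothetical helper that completes the state with one concrete assignment:

```python
def best_move(root, assignments):
    """Pick the move with the smallest estimated losing probability,
    averaged over the given assignments of the unknown cards."""
    loss_counts = {move: 0 for move in root.legal_moves()}
    us = root.player_to_move()
    for cards in assignments:                  # each assignment fixes all hidden cards
        state = root.with_hidden_cards(cards)  # hypothetical helper
        for move in loss_counts:
            value = alpha_beta(state.play(move), float("-inf"), float("inf"), us)
            if value < 0:                      # negative game points: we lose
                loss_counts[move] += 1
    return min(loss_counts, key=loss_counts.get)
```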

Problem: Too many assignments:

Move 1: \(\binom{14}{5} \cdot 9! = 726\,485\,760\)
Move 2: \(\binom{13}{4} \cdot 9! = 259\,459\,200\)
Move 3: \(\binom{12}{5} \cdot 7! = 3\,991\,680\)
Move 4: \(\binom{11}{4} \cdot 7! = 1\,663\,200\)
Move 5: \(\binom{10}{5} \cdot 5! = 30\,240\)

Explanation: At the beginning we hold 5 cards and 1 card of the talon (the trump card) is face up. So the opponent holds 5 of the 14 cards we cannot see, and the face-down talon has \(9!\) possible orderings. At the second move the opponent has played a card, so 4 of the 13 unseen cards are in his hand. The talon remains unchanged.
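The table can be checked with a few lines of Python (each tuple gives the number of unseen cards, the unknown cards in the opponent's hand, and the talon size at that move):

```python
from math import comb, factorial

cases = [(14, 5, 9), (13, 4, 9), (12, 5, 7), (11, 4, 7), (10, 5, 5)]
for move, (unseen, hand, talon) in enumerate(cases, start=1):
    print(move, comb(unseen, hand) * factorial(talon))
# Prints 726485760, 259459200, 3991680, 1663200, 30240 for moves 1-5.
```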

Note that these numbers are worst case. If we locked the talon at move 4, we would only care about the cards in hand: \(\binom{10}{5} = 252.\) We can only test all assignments after move 4. But we can evaluate all locking moves at any time, since \(\binom{?}{5} \le 2002\) is small.

Bayesian Estimates

So for moves 1-4, we cannot test all assignments. Instead, we randomly choose \(N\) assignments and perform a Bayesian analysis.

We model the losing probability with a Beta distribution, which is the conjugate prior to the Bernoulli distribution (the event of losing). We start with the uninformative prior \(\text{Beta}(1,1).\) If we lose \(S\) out of \(N\) games, the posterior becomes \(\text{Beta}(1 + S, 1 + N - S).\) We then estimate the losing probability with the mean \(\frac{\alpha}{\alpha + \beta} = \frac{1+S}{N + 2} \approx \frac{S}{N}\) and can provide confidence bounds with the standard deviation \(\sqrt{\frac{\alpha\beta}{(\alpha+\beta)^2(\alpha+\beta+1)}} \approx \sqrt{\frac{\frac{S}{N}\cdot \frac{N-S}{N}}{N}} \le \frac{1}{2\sqrt{N}}.\) We choose \(N = 2500\) to get 1% precision.
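In code, the posterior estimate is a short sketch, with \(S\) and \(N\) as above:

```python
from math import sqrt

def losing_probability(S, N):
    """Posterior mean and standard deviation of the losing probability
    given S losses in N sampled games, i.e. a Beta(1 + S, 1 + N - S) posterior."""
    a, b = 1 + S, 1 + N - S
    mean = a / (a + b)                         # equals (1 + S) / (N + 2)
    std = sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return mean, std

# With N = 2500 the standard deviation is at most 1 / (2 * sqrt(2500)) = 0.01.
```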

We model the score as a categorical variable \(\in \{-3,-2,-1,0,1,2,3\}.\) The conjugate prior is a Dirichlet distribution with uninformative prior \(\text{Dirichlet}(1,1,1,1,1,1,1)\) and posterior \(\text{Dirichlet}(1 + c_1,\dots,1+c_7) = \text{Dirichlet}(\alpha),\) where \(c_i\) is the number of observations in category \(i\).

Let \(P \sim \text{Dirichlet}(\alpha)\); then \(\text{expected score} = P \cdot (-3,-2,-1,0,1,2,3) = P \cdot S.\) It holds that \(\text{var}(P \cdot S) = \sum_{i,j} S_i \cdot S_j \cdot \text{Cov}(P_i, P_j)\) with \(\text{Cov}(P_i, P_j) = \frac{\delta_{ij} p_i - p_i p_j}{N + 7 + 1} \le \frac{1}{N}\) and \(p = \frac{\alpha}{N + 7}.\) Since the sum has \(7^2\) terms, each bounded by \(\frac{3^2}{N},\) we get \(\text{var}(\text{expected score}) \le \frac{3^2 \cdot 7^2}{N}.\)
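A sketch of the posterior mean and variance of the expected score under this model; `counts` are the observed \(c_i\):

```python
SCORES = (-3, -2, -1, 0, 1, 2, 3)

def expected_score(counts):
    """Posterior mean and variance of the expected score under the
    Dirichlet(1 + c_1, ..., 1 + c_7) posterior; `counts` are the c_i."""
    alpha = [1 + c for c in counts]
    total = sum(alpha)                         # = N + 7
    p = [a / total for a in alpha]             # posterior mean of P
    mean = sum(p_i * s for p_i, s in zip(p, SCORES))
    # var(P . S) = sum_{i,j} S_i S_j Cov(P_i, P_j) with the Dirichlet covariance.
    var = sum(SCORES[i] * SCORES[j] * ((i == j) * p[i] - p[i] * p[j]) / (total + 1)
              for i in range(7) for j in range(7))
    return mean, var
```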