class: center, middle

# BCC 2022 - Poker Bot

---

# Agenda

1. Introduction
2. The Game
3. Theory
4. Hands On
5. Conclusion

---

# Introduction

---

# The Game

## Poker Complexity

## Go vs. Poker

## Mau Mau

---

# Theory

---

## Theory - Model

---

## Theory - Algorithms

The following algorithms are part of reinforcement learning.

### Theory - Algorithms - DQN: Deep Q-Network

![](img/dqn-formula.png)

- Q-Learning: a form of temporal difference learning (TD learning)
- Deep Q-Learning

---

### Theory - Algorithms - DMC: Deep Monte Carlo

- Monte Carlo simulation
- Stochastic approximation method

---

### Theory - Algorithms - NFSP: Neural Fictitious Self-Play

> It uses two independent networks Q(s, a | θ(Q)) and Π(s, a | θ(Π)) and two memory buffers Mrl and Msl assigned to each of them respectively:
>
> - Mrl is a circular buffer that stores agent experience in the form of
>   ![](img/nfsp.png)
> - Msl is a reservoir that stores the best behaviour in the form of [s(t), a(t)], state and action at time step t.
> - Q(s, a | θ(Q)) is a neural network to predict action values from data in Mrl using off-policy reinforcement learning. This network approximates the best response strategy, β = ε-greedy(Q), which selects a random action with probability ε and the action that maximizes the Q-value with probability (1 - ε).
> - Π(s, a | θ(Π)) is a neural network that maps states to action probabilities and defines the agent's average strategy, π = Π. In other words, it tries to reproduce the best response behaviour using supervised learning from its history of previous best response behaviour in Msl.

[Source](https://towardsdatascience.com/neural-fictitious-self-play-800612b4a53f)

---

# Hands On

## DeepStack

## RLCard

---

## Hands On - Torch & TensorFlow

TensorFlow and Torch are open-source scientific libraries for machine learning.

Torch was originally written in Lua; we used PyTorch. It is still quite young, but growing fast.

TensorFlow originated at Google. It is very powerful and mature.

---

## Hands On - Training

---

### Hands On - Training - DQN

Training DQN with an adjusted payoff calculation based on card values led to a better training result.
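Both payoff variants below plug into the same RLCard training loop. A minimal sketch of that loop, assuming the RLCard 1.x API (`rlcard.make`, `env.run`, `reorganize`) and a hypothetical `mau-mau` environment id registered for our custom game:

```python
import rlcard
from rlcard.agents import DQNAgent, RandomAgent
from rlcard.utils import reorganize

# 'mau-mau' is a hypothetical id -- it assumes our custom Mau Mau
# environment has been registered with RLCard.
env = rlcard.make('mau-mau')

# DQNAgent keyword arguments differ between RLCard versions.
agent = DQNAgent(num_actions=env.num_actions,
                 state_shape=env.state_shape[0],
                 mlp_layers=[64, 64])
env.set_agents([agent, RandomAgent(num_actions=env.num_actions)])

for episode in range(5000):
    # Play one game; the payoffs come from get_payoffs() shown below.
    trajectories, payoffs = env.run(is_training=True)
    # Attach the final payoffs to the per-step transitions and train on them.
    for ts in reorganize(trajectories, payoffs)[0]:
        agent.feed(ts)
```

With this loop fixed, the only difference between the runs below is the `get_payoffs()` implementation.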
![](img/dqn-formula.png)

```python
def get_payoffs(self):
    # Default payoff: the winner gets +1, the loser -1.
    winner = self.round.winner
    if winner is not None and len(winner) == 1:
        self.payoffs[winner[0]] = 1
        self.payoffs[1 - winner[0]] = -1
    return self.payoffs
```

![](img/dqn-no-specific-payoff.png)

---

### Hands On - Training - DQN (Adjusted Payoff)

```python
def get_payoffs(self):
    # Adjusted payoff: the reward is the weight of the cards left in the
    # loser's hand, positive for the winner and negative for the loser.
    winner = self.round.winner
    if winner is not None and len(winner) == 1:
        self.payoffs[winner[0]] = MauMauGame.weight_hand(self.players[1 - winner[0]].hand)
        self.payoffs[1 - winner[0]] = -MauMauGame.weight_hand(self.players[1 - winner[0]].hand)
    return self.payoffs

@staticmethod
def weight_hand(cards):
    # Sum the Mau Mau card values of a hand: 7-10 count face value,
    # queen and king count 10, ace 11, jack 20.
    count = 0
    for card in cards:
        if card.rank == '7':
            count += 7
        elif card.rank == '8':
            count += 8
        elif card.rank == '9':
            count += 9
        elif card.rank == 'T':
            count += 10
        elif card.rank == 'Q':
            count += 10
        elif card.rank == 'K':
            count += 10
        elif card.rank == 'A':
            count += 11
        elif card.rank == 'J':
            count += 20
    return count
```

---

### Hands On - Training - DQN - Learning Improvement

![DQN Custom Payoff](img/dqn-specific-payoff.png)

---

![DQN Custom Payoff - Long Run](img/dqn_custom-payoff_result_long-run.png)

---

### Hands On - Training - NFSP

Training with NFSP, on the other hand, is not as promising as what we previously saw with DQN.

![NFSP](img/nfsp.png)

![NFSP Custom Payoff](img/nfsp-custom-payoff.png)

---

## Comparison

TODO: table with different models (see the evaluation sketch in the appendix)

---

TODO: train DQN against winner

---

### Hands On - Training - General Conclusion

- The model and the reward system are important!
- The rewards / payoff define what we are optimizing for!
- Position matters! The stronger bot loses if it plays in position two.
- Being first implies being one card ahead of player two.

---

# Conclusion

---

# Literature
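
---

## Appendix - Evaluation Sketch

How the comparison table could be produced: RLCard ships a `tournament` helper that plays evaluation games without exploration and averages the payoffs per seat. A hedged sketch, again assuming the hypothetical `mau-mau` environment id; the `RandomAgent`s are placeholders for the trained DQN / NFSP models:

```python
import rlcard
from rlcard.agents import RandomAgent
from rlcard.utils import tournament

# Hypothetical id for our custom Mau Mau environment.
env = rlcard.make('mau-mau')

# RandomAgents as stand-ins; for the real comparison these would be
# the trained DQN and NFSP agents from the slides above.
agents = [RandomAgent(num_actions=env.num_actions) for _ in range(env.num_players)]
env.set_agents(agents)

# Play 1,000 evaluation games and report the average payoff per seat --
# one row of the comparison table.
avg_payoffs = tournament(env, 1000)
print(avg_payoffs)
```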