Logical Background

For the epistemic models in this game, we initially employ the axioms of the $\mathbf{S5}(m)$ system, i.e. all accessibility relations are assumed to be reflexive, transitive and euclidean [2]. Models can be updated in two ways: by removing entire worlds once they have no relations leading to other worlds in the model, i.e. only retain their reflexive relations, or by removing individual accessibility relations between worlds, either for all players or for individual ones. Which type of removal is used depends on both the model and the formula that the model should be consistent with after the update. While we designed all update functions carefully so that each resulting model still adheres to the axioms of the $\mathbf{S5}(m)$ system, we cannot fully guarantee this in theory, since a manual check of the resulting models is impossible due to their size. Any further extension of the formulas used in the rest of this chapter through the axioms and rules of the $\mathbf{S5}(m)$ system should therefore only be made with caution, especially where the axiom (A4) on the transitivity of accessibility relations is concerned.

We identified three aspects of the game that can be modelled well in epistemic logic: the agents' knowledge about the goal deck, about their own hand cards, and about the other players' hand cards. As described in the Game section, some limitations apply to these models in order to keep them computationally feasible. Additionally, we split the full game knowledge into three distinct Kripke models that capture the three aspects separately, which further increases performance. A fully formalized Kripke model containing all of the game knowledge can be restored from these simpler models by multiplication:

Let us assume as an example that the goal model contains the world $w_1 = \{peacock, rope, library\}$ and the individual hand card model for player 1 contains the two worlds $w_2 = \{scarlett, poison, dagger, white\}$ and $w_3 = \{green, pistol, axe, study\}$. A full model would then contain the two worlds $w_1^{\ast} = \{g:peacock, g:rope, g:library, h_1:scarlett, h_1:poison, h_1:dagger, h_1:white\}$ and $w_2^{\ast} = \{g:peacock, g:rope, g:library, h_1:green, h_1:pistol, h_1:axe, h_1:study\}$, where the $g:$ prefix denotes propositions coming from the goal model and the $h_1:$ prefix denotes propositions coming from the hand card model of player 1. Each of these new worlds inherits the accessibility relations of the goal model towards all other newly created combined worlds, as well as the accessibility relations of the previous hand card model between the worlds created from the combination of one goal world with all worlds of the hand card model. The same step then needs to be repeated for the hand card models of all other players and their accessibility relations. Splitting the model into several submodels in this way allows us to handle a large number of atomic propositions in a performance-effective and humanly understandable manner without losing any knowledge that would be contained in a fully formalized combined model.
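To make the multiplication step concrete, the following is a minimal sketch in plain Python, using dictionaries of true propositions instead of the actual mlsolver structures; the world and card names are taken from the example above and are purely illustrative:

```python
from itertools import product

# Worlds of the two submodels as {world_name: set of true propositions}.
goal_worlds = {"w1": {"peacock", "rope", "library"}}
hand_worlds_p1 = {"w2": {"scarlett", "poison", "dagger", "white"},
                  "w3": {"green", "pistol", "axe", "study"}}

def multiply(goal_worlds, hand_worlds, hand_prefix="h1"):
    """Combine every goal world with every hand world, prefixing each
    proposition with the submodel it originates from."""
    combined = {}
    for (g, g_props), (h, h_props) in product(goal_worlds.items(), hand_worlds.items()):
        combined[f"{g}*{h}"] = ({f"g:{p}" for p in g_props}
                                | {f"{hand_prefix}:{p}" for p in h_props})
    return combined

for name, props in multiply(goal_worlds, hand_worlds_p1).items():
    print(name, sorted(props))
```

The accessibility relations of the combined model are then inherited from the two factor models as described above; the sketch only shows how the combined worlds and their valuations are obtained.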

In the following sections, we will refer to any specific character, weapon or room by its respective name but will refer to an arbitrary character as $c$, an arbitrary weapon as $w$ and an arbitrary room as $r$. Any arbitrary card will be referred to as $k$. The formula $(\mathbf{M}) \models K_1 c$ would therefore express that, at this point in the game, player 1 knows about an arbitrary character, e.g. one that has been shown to them in the current game turn. We will denote the correct character, weapon and room as $c^\ast$, $w^\ast$ and $r^\ast$.

Formally all our models consist of the following:

  • A set of $m$ players $Pl = \{p_1, p_2, \dots, p_m\}$
  • A set of $n$ characters $C = \{c_1, c_2, \dots, c_n\}$
  • A set of $o$ weapons $W = \{w_1, w_2, \dots, w_o\}$
  • A set of $q$ rooms $R = \{r_1, r_2, \dots, r_q\}$
  • A set of game rounds, i.e. timesteps at which all models are updated, $T = \mathbb{N}$
  • A set of worlds for each individual Kripke model, which are explained in detail below

Modelling the Goal Deck

The Kripke model of the goal deck is created by assigning a world to each possible combination of a character, weapon and room card and creating reflexive, transitive and euclidean accessibility relations between them. In each world, exactly the three propositions corresponding to the three cards of this world are true, while the propositions for all other cards are false. One of those possible worlds has been shown as $w_1$ above: in this world, the character peacock, the weapon rope and the room library are in the goal deck and therefore need to be found by the players.
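As an illustration, a goal-deck model for a reduced, made-up card set could be built roughly as follows. This is only a sketch, assuming the `World(name, assignment)` and `KripkeStructure(worlds, relations)` constructors shown in mlsolver's own examples; the actual construction lives in cluedo/logic-checker.

```python
from itertools import product
from mlsolver.kripke import KripkeStructure, World

# Toy configuration, not the full Cluedo deck.
characters = ["scarlett", "green", "peacock"]
weapons = ["rope", "dagger", "pistol"]
rooms = ["library", "study", "kitchen"]

worlds = []
for c, w, r in product(characters, weapons, rooms):
    # Only the three cards of this world are listed as true;
    # all other cards are implicitly false.
    assignment = {card: True for card in (c, w, r)}
    worlds.append(World(f"{c}_{w}_{r}", assignment))

# In an S5 model every world is accessible from every world (including itself),
# so the relation is simply the full cartesian product of world names.
names = [world.name for world in worlds]
relations = {(a, b) for a in names for b in names}

goal_model = KripkeStructure(worlds, relations)
```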

Each player has their own internal representation of this Kripke model, which is updated each turn. At the beginning of the game, they can already deduce that any world containing at least one of their own hand cards cannot represent the goal deck and can therefore be removed from the model. This constitutes an update of the model with the constraint formula $(\mathbf{M}) \models \neg k$ or, by extension through the axiom (A3) of $\mathbf{S5}(m)$, $(\mathbf{M}) \models K_m \neg k$ for all cards $k$ in the player's hand. After that, there are two main update functions. First, when the player $m$ is shown a card by another player $n$, they know that any world containing this card cannot represent the goal deck. This again corresponds to the formula $(\mathbf{M}) \models K_m \neg k$. Second, all other players can deduce from this that at least one of the three cards in the suggestion made by $m$ cannot be in the goal deck, which corresponds to the formula $(\mathbf{M}) \models K_{p \in Pl | p \neq m,n} \neg (c \land w \land r)$.
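As a concrete illustration of these two update functions, here is a minimal sketch that represents the goal model as a plain `{world_name: set_of_cards}` dictionary instead of the actual mlsolver structures; the card names are made up for the example:

```python
# Goal worlds still considered possible by player m (illustrative data).
goal_worlds = {
    "w1": {"peacock", "rope", "library"},
    "w2": {"scarlett", "rope", "study"},
    "w3": {"green", "dagger", "library"},
}

def shown_card_update(worlds, card):
    """Player m was shown `card`, so no remaining goal world may contain it
    (the update with K_m(not card))."""
    return {name: cards for name, cards in worlds.items() if card not in cards}

def observed_suggestion_update(worlds, suggestion):
    """A bystander p != m, n saw that some card was shown for the suggestion,
    so at least one suggested card is not in the goal deck: drop every world
    containing all three (the update with K_p(not(c and w and r)))."""
    return {name: cards for name, cards in worlds.items()
            if not set(suggestion) <= cards}

goal_worlds = shown_card_update(goal_worlds, "dagger")                                 # removes w3
goal_worlds = observed_suggestion_update(goal_worlds, ("scarlett", "rope", "study"))   # removes w2
```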

Once a player has eliminated all worlds except one, they know that the three cards of this remaining world have to be in the goal deck, i.e. $(\mathbf{M}) \models K_m(c^\ast \land w^\ast \land r^\ast)$. Alternatively, the game might end early when a player makes a suggestion that no other player can show a card for, after which that player will make an accusation with these three cards to win the game. This final update step, however, is not explicitly modelled in the Kripke model: we found it to be computationally unnecessary, since it would always create a model with only one world, equivalent to an update with the above formula.

Modelling the Hand Cards

The Kripke models for the hand cards of each other player $ p \in Pl | p \neq m$ are created for each player $m$ by assigning each possible hand card combination a world and creating reflexive, transitive and euclidean accessibility relations between them. In each world, as many propositions as there are hand cards for this player are true, while the propositions for all other cards are false.

Again, each player $m$ has their own internal representation of these Kripke models, which are updated each turn. At the beginning of the game, the player can again deduce that any world containing at least one of their own hand cards is impossible for any other player. After that, there are also two main update functions. First, if the player is shown a card by another player $n$, they can deduce that any world not containing this card is inconsistent with the hand card model of $n$ ($(\mathbf{M}) \models K_m k$). Second, if player $n$ shows player $m$ a card privately, every other player knows that $n$ needs to have at least one of the three suggested cards in their hand, $(\mathbf{M}) \models K_{p \in Pl | p \neq m,n}(c \lor w \lor r)$.
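The two hand card updates can be sketched in the same plain-dictionary style; the data is illustrative and the real updates operate on the mlsolver models:

```python
# Worlds a player still considers possible for the hand of player n.
hand_worlds_of_n = {
    "w1": {"white", "dagger", "rope"},
    "w2": {"white", "pistol", "study"},
    "w3": {"green", "pistol", "axe"},
}

def shown_card_update(worlds, card):
    """n showed m the card, so every remaining world must contain it (K_m card)."""
    return {name: cards for name, cards in worlds.items() if card in cards}

def observed_private_show_update(worlds, suggestion):
    """n privately showed some card for the suggestion, so for every bystander
    at least one suggested card must be in n's hand (c or w or r)."""
    return {name: cards for name, cards in worlds.items()
            if cards & set(suggestion)}

after_show = shown_card_update(hand_worlds_of_n, "white")                 # m's update: removes w3
after_observed = observed_private_show_update(
    hand_worlds_of_n, ("scarlett", "rope", "library"))                    # a bystander's update: removes w2 and w3
```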

Once the proposition for a specific card is true in all remaining worlds of the hand card Kripke model that player $m$ keeps for another player $n$, $m$ knows that $n$ has to have this card in their hand ($(\mathbf{M}) \models K_m k$) and can therefore deduce that it cannot be in the goal deck ($(\mathbf{M}) \models K_m \neg k$). This is checked each turn for every player.
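A short sketch of this per-turn deduction, again on plain sets with illustrative data:

```python
# Worlds player m still considers possible for the hand of player n.
hand_worlds_of_n = {"w1": {"white", "dagger"}, "w2": {"white", "pistol"}}

# A card that is present in every remaining world is certainly in n's hand.
certainly_held = set.intersection(*hand_worlds_of_n.values())   # {"white"}

for card in certainly_held:
    # m can now update the goal model with the constraint "not card",
    # exactly as in the goal-model updates sketched above.
    print(f"{card} is in n's hand and cannot be in the goal deck")
```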

Additionally, each player also maintains a Kripke model for the knowledge the other players have about their hand cards. This model is created from the power set of possible hand card combinations, e.g. for a player with the hand cards white and dagger, four worlds with the true proposition sets $\{\}, \{white\}, \{dagger\}$ and $\{white, dagger\}$ would be created. The model is created with reflexive, transitive and euclidean accessibility relations between those worlds for each other player. Each time $m$ shows another player $n$ one of their hand cards, the model is updated to be consistent with the formula $(\mathbf{M}) \models K_n k$. Player $m$ then uses this model to determine which hand card to show another player $n$, preferably showing a card for which they know that $n$ already knows it is in the hand of $m$, i.e. according to the formula $(\mathbf{M}) \models K_m K_n k$.
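The power-set construction and the corresponding update can be sketched as follows; the hand of white and dagger is the illustrative example from above, and the real model again uses the mlsolver structures:

```python
from itertools import chain, combinations

# Illustrative hand of player m.
hand = ["white", "dagger"]

def powerset(cards):
    """All subsets of the hand, from the empty set to the full hand."""
    return chain.from_iterable(combinations(cards, size) for size in range(len(cards) + 1))

own_hand_worlds = {f"w{i}": set(subset) for i, subset in enumerate(powerset(hand))}
# -> {'w0': set(), 'w1': {'white'}, 'w2': {'dagger'}, 'w3': {'white', 'dagger'}}

def update_after_showing(worlds, shown_card):
    """After m shows n the card, only worlds containing it remain consistent
    with K_n shown_card from n's point of view."""
    return {name: cards for name, cards in worlds.items() if shown_card in cards}

own_hand_worlds = update_after_showing(own_hand_worlds, "white")   # keeps w1 and w3
```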

Decision Making

Throughout the game, the players will have to make decisions about which suggestions to make based on their internal knowledge. We decided to split the players into three distinct groups that limit what knowledge is available to them and can be used to build new suggestions:

Zero-Order Knowledge Agents: These agents possess no internal reasoning about their own knowledge and can only react to the cards that are presented to them as facts. The following propositions and limitations are true for any zero-order agent $m$:

  • They will update their goal model once a card is presented to them, using the formula $(\mathbf{M}) \models \neg k$
  • They will not update their goal model when a card is shown to another player, i.e. they do not deduce that this player needs to have at least one of the three suggested cards in their hand
  • They will make a suggestion by randomly choosing any of the worlds still possible in their goal model
  • They will move randomly across the playing board based on their suggestion in the current turn
  • They will not check their knowledge of other players' hand cards to exclude propositions from their goal model
  • If more than one of their hand cards matches the suggestion of another player, they will choose the card to show randomly

First-Order Knowledge Agents: These agents can reason about their own knowledge to decide which suggestion to make in a given turn. The following propositions and limitations are true for any first-order agent $m$:

  • They will update their goal model once a card is presented to them, using the formula $(\mathbf{M}) \models K_m \neg k$
  • They will update their goal model when a card is shown to another player, deducing that this player needs to have at least one of the three suggested cards in their hand, i.e. by using the formula $(\mathbf{M}) \models K_m \neg (c \land w \land r)$
  • They will make a suggestion by calculating which character, weapon and room occur in the most possible worlds of their goal model and combining these three values into one suggestion, meaning that their suggestion is not limited to a single world but takes the information from all possible worlds into account (see the sketch after this list)
  • They will move across the playing board based on their suggestion in the current turn
  • They will check their knowledge of other players' hand cards to exclude propositions from their goal model
  • If more than one of their hand cards matches the suggestion of another player, they will choose the card to show randomly
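The suggestion strategy of a first-order agent can be sketched as a simple frequency count over the remaining goal worlds; all card names and worlds below are illustrative:

```python
from collections import Counter

# Toy card sets and remaining goal worlds.
characters = {"scarlett", "green", "peacock"}
weapons = {"rope", "dagger", "pistol"}
rooms = {"library", "study", "kitchen"}

goal_worlds = {
    "w1": {"peacock", "rope", "library"},
    "w2": {"peacock", "dagger", "study"},
    "w3": {"green", "rope", "library"},
}

# Count in how many remaining worlds each card occurs.
counts = Counter(card for cards in goal_worlds.values() for card in cards)

def most_frequent(candidates):
    return max(candidates, key=lambda card: counts[card])

suggestion = (most_frequent(characters), most_frequent(weapons), most_frequent(rooms))
print(suggestion)   # ('peacock', 'rope', 'library')
```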

Second-Order Knowledge Agents: These agents can reason not only about their own knowledge but also about the knowledge of other agents at each given turn. They base their decisions on the same propositions and limitations as first-order agents, with one notable exception: they use their knowledge of the knowledge other players have about their hand cards to decide which card to show another player when more than one of their hand cards matches that player's suggestion. While a single change might not seem to make much of a difference, we expect that this addition gives the second-order agents a competitive advantage, since they can stall the progress of other agents by repeatedly showing them only a card they already know. The Conclusion gives a brief overview of other possible second-order knowledge actions that we were, unfortunately, unable to implement in this project due to the added computational complexity they would introduce.
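The card-selection heuristic of a second-order agent can be sketched as follows; the worlds stand for the model $m$ keeps about $n$'s knowledge of $m$'s hand, and all names and data are illustrative:

```python
import random

# m's hand and the suggestion made by n.
own_hand = {"white", "dagger", "rope"}
suggestion = {"white", "rope", "study"}

# Worlds m still considers possible for n's view of m's hand, after n has
# already been shown the card white earlier in the game.
n_view_of_m_hand = {"w1": {"white"}, "w2": {"white", "rope"}}

def n_already_knows(card):
    """n knows that m holds the card iff it is true in every remaining world."""
    return all(card in cards for cards in n_view_of_m_hand.values())

matching = own_hand & suggestion                                  # {"white", "rope"}
known = [card for card in sorted(matching) if n_already_knows(card)]
# Prefer a card n already knows about; otherwise fall back to a random choice.
card_to_show = known[0] if known else random.choice(sorted(matching))
print(card_to_show)                                               # white
```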

Implementation

The epistemic logic described on this page is implemented using the mlsolver package. In order to enable the computations needed for this project, several extensions and optimizations of the package have been developed that can hopefully be integrated into the original project later. These extensions can be found in the project repository in the folder cluedo/logic-checker.

Firstly, the original function to find a submodel of a Kripke model that satisfies a given formula (KripkeStructure.solve()) has been rewritten in a simpler and more efficient way. The original function created the power set of the set of worlds of the original model to then determine the smallest set of worlds still satisfying the given formula. Since this quickly makes the program run out of memory with the number of possible worlds in our models, we simplified the function to iterate over the full model once and remove all worlds that do not satisfy the formula. The function that actually removes these worlds and their corresponding relations has also been optimized: instead of iterating over the sets of worlds and relations multiple times to delete all unwanted worlds and relations, it saves all worlds that satisfy the formula into a set and recreates the relations from this subset. Two new functions were created for this in the file kripke_model.py. The function _remove_node_by_name() removes a single world and its corresponding relations; it is only marginally (around 10%) faster than the original function. The function _remove_nodes_by_name() removes all inconsistent worlds at once and is up to five times faster than the original function. The performance values were obtained by running these functions for a game configuration of 4 players, 6 characters, 6 weapons and 6 rooms with no manual players and averaging over the resulting execution times. Actual execution times in seconds are not given here since they vary for each individual game turn. Additionally, we created a new function _nodes_follow_formula(), which provides the complementary functionality to the original KripkeStructure.nodes_not_follow_formula() function of the mlsolver package.
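The core idea of the simplified solve step can be sketched as follows. This is not the actual implementation in cluedo/logic-checker; it assumes the single-agent mlsolver conventions of a `ks.worlds` list of World objects, a `ks.relations` set of (start, end) name pairs and a `formula.semantic(ks, world_name)` truth check, with the multi-agent relations dictionary handled analogously.

```python
def solve_by_filtering(ks, formula):
    # Keep exactly the worlds in which the formula holds.
    keep = {world.name for world in ks.worlds if formula.semantic(ks, world.name)}
    ks.worlds = [world for world in ks.worlds if world.name in keep]
    # Rebuild the relations from the kept subset in a single pass.
    ks.relations = {(a, b) for (a, b) in ks.relations if a in keep and b in keep}
    return ks
```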

Secondly, two new functions to effectively create reflexive, transitive and symmetric accessibility relations between a given set of worlds have been created. The function _create_single_relations() will create those relations for a model with only a single agent whereas the function _create_multi_relations() will create the relations for a multi-agent model.
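The construction of these relations amounts to building one equivalence (i.e. full) relation per agent over the given worlds. The following is an illustrative sketch only; the actual _create_single_relations() and _create_multi_relations() helpers in cluedo/logic-checker may differ in detail.

```python
def create_single_relations(world_names):
    """Full relation over all worlds: reflexive, symmetric and transitive."""
    return {(a, b) for a in world_names for b in world_names}

def create_multi_relations(world_names, agents):
    """One full relation per agent, as expected by a multi-agent Kripke model."""
    return {agent: create_single_relations(world_names) for agent in agents}
```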

Lastly, we created two functions to remove accessibility relations instead of full worlds from a Kripke model, a functionality not implemented in the mlsolver package. The function remove_relations() removes relations for all agents in a multi-agent model and can also be used to remove relations in a single-agent model, whereas the function remove_agent_relations() removes relations for a single agent only in a multi-agent model. Both functions accept either a specified start- and end-world to remove a single relation, or a wildcard character for either the start- or end-world, making it possible to remove all relations to or from a single world at once. It can also be specified whether symmetric relations should be removed in both directions or only in the specified direction. Reflexive relations are not removed unless both worlds are explicitly specified, in order to keep the resulting model consistent with the common axiom systems of knowledge.
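A hedged sketch of the wildcard-based removal for a single agent's relation set is shown below; the real remove_relations() and remove_agent_relations() functions additionally handle multi-agent models and the symmetric-removal option described above.

```python
WILDCARD = "*"

def remove_matching_relations(relations, start, end):
    """Drop every (a, b) pair matched by start/end, where '*' matches any world.
    Reflexive pairs are kept unless both worlds are given explicitly."""
    def matches(a, b):
        if a == b and (start == WILDCARD or end == WILDCARD):
            return False  # keep reflexive edges when a wildcard is involved
        return (start in (WILDCARD, a)) and (end in (WILDCARD, b))
    return {(a, b) for (a, b) in relations if not matches(a, b)}

# Example: remove every outgoing relation of world 'w1' except its reflexive edge.
relations = {("w1", "w1"), ("w1", "w2"), ("w2", "w1"), ("w2", "w2")}
print(remove_matching_relations(relations, "w1", WILDCARD))
# -> {('w1', 'w1'), ('w2', 'w1'), ('w2', 'w2')}
```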