Approximating the Core of Cooperative Games

The core is a long-standing solution concept in cooperative game theory, but it is known to be hard to compute even for restricted classes of coalitional games. Cooperative game theory has many applications, ranging from analyzing power in decision-making bodies and politics to predicting how human teams might decide to share jointly achieved profits. A recent application of cooperative games in machine learning is explainable AI (XAI), and in particular identifying the features or data points that most influenced the predictions made by a black-box model. Recent research has applied the Shapley value for these purposes, as it can be approximated using fast randomized algorithms. In this paper, we propose efficient algorithms for approximating the core in coalitional games that iteratively refine a payoff vector by sampling the constraints posed by randomly selected coalitions. By overcoming this computational barrier, we show how the core can provide a desirable alternative to the Shapley value for many applications: political power analysis, predicting individual payoffs in human teams, and explainable AI. Our empirical analysis shows how team stability is affected by game parameters for important classes of cooperative games (weighted voting games, graph games, and marginal contribution networks). Further, we contrast the individual impact estimates of the Shapley value and the least core in political settings and in explainable AI analysis of multiple datasets.
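The constraint-sampling idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's exact algorithm: it assumes a simple scheme in which a candidate payoff vector starts at the equal split, random coalitions are sampled, violated core constraints nudge payoff toward the dissatisfied coalition's members, and the vector is re-projected onto the efficiency constraint. The function name `sample_least_core` and all parameters here are illustrative, not from the paper.

```python
import random

def sample_least_core(n, v, num_iters=2000, lr=0.01, seed=0):
    """Approximate a stable payoff vector for an n-player cooperative game.

    n:  number of players (players are 0..n-1).
    v:  characteristic function mapping a frozenset of players to a value.

    Hypothetical sketch: repeatedly sample a coalition S; if its core
    constraint v(S) <= sum_{i in S} x_i is violated, shift payoff toward
    the members of S, then re-project onto the efficiency hyperplane
    sum(x) = v(grand coalition).
    """
    rng = random.Random(seed)
    grand = frozenset(range(n))
    total = v(grand)
    x = [total / n] * n  # start from the equal split
    for _ in range(num_iters):
        # Sample a random coalition; skip the empty and grand coalitions.
        S = frozenset(i for i in range(n) if rng.random() < 0.5)
        if not S or S == grand:
            continue
        # Excess = how much coalition S would gain by deviating.
        excess = v(S) - sum(x[i] for i in S)
        if excess > 0:
            # Move payoff toward the dissatisfied coalition's members.
            for i in S:
                x[i] += lr * excess / len(S)
            # Re-project so that the payoffs again sum to v(grand).
            shift = (sum(x) - total) / n
            x = [xi - shift for xi in x]
    return x

# Example: a symmetric 3-player majority (weighted voting) game, where any
# coalition of at least two players wins. Its core is empty, and the
# least-core payoff is the equal split (1/3, 1/3, 1/3).
def majority(S):
    return 1.0 if len(S) >= 2 else 0.0

payoffs = sample_least_core(3, majority)
```

In this symmetric example the iterates stay near the equal split, since any player who falls behind makes the coalitions containing them more violated and so gets pulled back up; the efficiency constraint `sum(x) = v(N)` is preserved exactly by the projection step.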


Ian Gemp, Marc Lanctot, Luke Marris, Yiran Mao, Edgar Duéñez-Guzmán, Sarah Perrin, Andras Gyorgy, Romuald Elie, Georgios Piliouras, Michael Kaisers, Daniel Hennes, Kalesha Bullard, Kate Larson, Yoram Bachrach


AAMAS 2024