The story is briefly told by Rousseau in A Discourse on Inequality: If it was a matter of hunting a deer, everyone well realized that he must remain faithful to his post; but if a hare happened to pass within reach of one of them, we cannot doubt that he would have gone off in pursuit. Throughout history, armed force has been a ubiquitous characteristic of the relations between independent polities, be they tribes, cities, nation-states or empires. How does the Just War Tradition position itself in relation to both Realism and Pacifism? For example, Jervis highlights the distinguishability of offensive-defensive postures as a factor in stability. The corresponding payoff matrix is displayed as Table 8. See Katja Grace, John Salvatier, Allan Dafoe, Baobao Zhang, & Owain Evans, When Will AI Exceed Human Performance? Does a more optimistic or pessimistic perception of an actor's own or opponent's capabilities affect which game model they adopt? The Stag Hunt is probably more useful since games in life have many equilibria, and it's a question of how you can get to the good ones. In the most common account of this dilemma, which is quite different from Rousseau's, two hunters must decide separately, and without the other knowing, whether to hunt a stag or a hare.
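To make the structure of the dilemma concrete, here is a minimal sketch in Python that finds the pure-strategy equilibria of a stag hunt; the payoff numbers are illustrative placeholders rather than values taken from the text.

```python
# Pure-strategy Nash equilibria of a 2x2 stag hunt (illustrative payoffs).
# Each hunter independently chooses "stag" or "hare".
ACTIONS = ["stag", "hare"]

# payoffs[(row_action, col_action)] = (row_payoff, col_payoff)
payoffs = {
    ("stag", "stag"): (4, 4),  # both keep their posts: the feast
    ("stag", "hare"): (0, 3),  # the lone stag hunter gets nothing
    ("hare", "stag"): (3, 0),
    ("hare", "hare"): (3, 3),  # both settle for hares
}

def best_responses(player, other_action):
    """Actions maximizing `player`'s payoff against the opponent's fixed action."""
    def payoff(action):
        profile = (action, other_action) if player == 0 else (other_action, action)
        return payoffs[profile][player]
    best = max(payoff(a) for a in ACTIONS)
    return {a for a in ACTIONS if payoff(a) == best}

equilibria = [
    (r, c) for r in ACTIONS for c in ACTIONS
    if r in best_responses(0, c) and c in best_responses(1, r)
]
print(equilibria)  # [('stag', 'stag'), ('hare', 'hare')] -- two pure equilibria
```

The coordination problem is that both (stag, stag) and (hare, hare) are self-enforcing; which one the hunters land on depends on what each expects the other to do.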
They will be tempted to use the prospect of negotiations with the Taliban and the upcoming election season to score quick points at their rivals' expense, forgoing the kinds of political cooperation that have held the country together until now. [5] They can, for example, work together to improve good corporate governance. The response from Kabul involved a predictable combination of derision and alarm, for fear that bargaining will commence on terms beyond the current administration's control. It is the goal of this paper to shed some light on these, particularly how the structure of preferences that results from states' understandings of the benefits and harms of AI development leads to varying prospects for coordination. Payoff matrix for simulated Deadlock. This section defines suggested payoff variables that impact the theory and simulates the theory for each representative model based on a series of hypothetical scenarios. If, by contrast, each hunter patiently keeps his or her post, everyone will be rewarded with a lavish feast. For instance, if the expected punishment is 2, then the imposition of this punishment turns the above prisoner's dilemma into the stag hunt given at the introduction.
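The prisoner's dilemma matrix being referred to is not reproduced here, so the sketch below uses an illustrative PD of its own; the mechanism is the one described above, namely subtracting an expected punishment of 2 from every defector's payoff and checking that the result has the two equilibria of a stag hunt.

```python
# Illustrative: subtracting an expected punishment of 2 from every defector's
# payoff turns this prisoner's dilemma into a stag hunt (equilibria at CC and DD).
PUNISHMENT = 2

# Actions: "C" = cooperate, "D" = defect; values are (row payoff, column payoff).
pd = {
    ("C", "C"): (4, 4),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (3, 3),
}

def punish(game, p):
    """Subtract p from a player's payoff whenever that player defects."""
    return {
        (r, c): (ur - p * (r == "D"), uc - p * (c == "D"))
        for (r, c), (ur, uc) in game.items()
    }

sh = punish(pd, PUNISHMENT)
# Against a cooperator, cooperating (4) now beats defecting (5 - 2 = 3); against a
# defector, defecting (3 - 2 = 1) still beats cooperating (0): a stag hunt.
print(sh[("C", "C")], sh[("D", "C")], sh[("D", "D")])  # (4, 4) (3, 0) (1, 1)
```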
The payoff matrix in Figure 1 illustrates a generic stag hunt. [5] Stuart Armstrong, Nick Bostrom, & Carl Shulman, Racing to the precipice: a model of artificial intelligence development, AI and Society 31, 2 (2016): 201-206. Before getting to the theory, I will briefly examine the literature on military technology/arms racing and cooperation. Here, we have the formation of a modest social contract. Moreover, each actor is more confident in their own capability to develop a beneficial AI than their opponent's. How can the security dilemma be mitigated and transcended? To be sustained, a regime of racial oppression requires cooperation. To reiterate, the primary function of this theory is to lay out a structure for identifying what game models best represent the AI Coordination Problem, and as a result, what strategies should be applied to encourage coordination and stability. The stag hunt problem originated with philosopher Jean-Jacques Rousseau in his Discourse on Inequality.
Table 4. The question becomes, why don't they always cheat?
For instance, if a = 10, b = 5, c = 0, and d = 2. In this scenario, however, both actors can also anticipate receiving additional harm from the defector pursuing their own AI development outside of the regime. What are some good examples of coordination games? Hunting stags is quite challenging and requires mutual cooperation. Each model is differentiated primarily by the payoffs to cooperating or defecting for each international actor. In biology, many circumstances that have been described as a prisoner's dilemma might also be interpreted as a stag hunt, depending on how fitness is calculated. [50] This is visually represented in Table 3 with each actor's preference order explicitly outlined. If either hunts a stag alone, the chance of success is minimal. Your application of the Prisoner's Dilemma (PD) game to international trade agreements raises a few very interesting and important questions for the application of game theory to real-life strategic situations. [29] There is a scenario where a private actor might develop AI in secret from the government, but this is unlikely to be the case as government surveillance capabilities improve. If an individual hunts a stag, he must have the cooperation of his partner in order to succeed. Nations are able to communicate with each other freely, something that is forbidden in the traditional PD game. Discuss. [55] See also Bostrom, Superintelligence, Chapter 14.
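Assuming the standard labeling for the generic stag hunt (a is the stag/stag payoff, b the payoff to hunting hare against a stag hunter, d the hare/hare payoff, and c the payoff to hunting stag alone), a quick check of the a = 10, b = 5, c = 0, d = 2 example:

```python
# Stag hunt with the payoffs named in the text: a = 10, b = 5, c = 0, d = 2.
# Labeling assumed: a = stag/stag, b = hare against a stag hunter,
# d = hare/hare, c = stag against a hare hunter.
a, b, c, d = 10, 5, 0, 2

print("stag/stag is an equilibrium:", a >= b)  # True: no gain from switching to hare
print("hare/hare is an equilibrium:", d >= c)  # True: no gain from switching to stag

# Risk dominance: an action is risk dominant if it is the best response to an
# opponent believed to mix 50/50 between stag and hare.
stag_vs_coin = 0.5 * a + 0.5 * c  # 5.0
hare_vs_coin = 0.5 * b + 0.5 * d  # 3.5
print("risk-dominant action:", "stag" if stag_vs_coin > hare_vs_coin else "hare")
```

With these numbers both pure equilibria survive, but stag rather than hare is the risk-dominant choice.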
[12] Apple Inc., Siri, https://www.apple.com/ios/siri/. [49] For example, by defecting from an arms-reduction treaty to develop more weapons, an actor can gain the upper hand on an opponent who decides to uphold the treaty by covertly continuing or increasing arms production. The most important role of the U.S. presence is to keep the Afghan state afloat, and while the negotiations may turn out to be a positive development, U.S. troops must remain in the near term to ensure the possibility of a credible deal. As discussed, there are both great benefits and harms to developing AI, and due to the relevance AI development has to national security, it is likely that governments will take over this development (specifically the US and China). What is the difference between 'negative' and 'positive' peace? As a result, this could reduce a rival actor's perceived relative benefits gained from developing AI. [7] Aumann concluded that in this game "agreement has no effect, one way or the other." A day passes. We can see through studying the Stag Hunt game that, even though we are selfish, we are still ironically aiming for mutual benefit, and thus we tend to follow such a social contract. Course blog for INFO 2040/CS 2850/Econ 2040/SOC 2090, Link: http://www.socsci.uci.edu/~bskyrms/bio/papers/StagHunt.pdf. The stag hunters are likely to interact with other stag hunters to seek mutual benefit, while hare hunters rarely care with whom they interact, since they would rather not depend on others for success. However, a hare is seen by all hunters moving along the path. This can be viewed through the lens of the stag hunt. Assurance game is a generic name for the game more commonly known as Stag Hunt. The French philosopher Jean-Jacques Rousseau presented the following dilemma. A relevant strategy to this insight would be to focus strategic resources on shifting public or elite opinion to recognize the catastrophic risks of AI. An example of the payoff matrix for the stag hunt is pictured in Figure 2. Specifically, it is especially important to understand where preferences of vital actors overlap and how game theory considerations might affect these preferences. Half a stag is better than a brace of rabbits, but the stag will only be brought down with a concerted effort. But the moral is not quite so bleak. Table 1. [35] Outlining what this Coordination Regime might look like could be the topic of future research, although potential desiderata could include legitimacy, neutrality, accountability, and technical capacity; see Allan Dafoe, Cooperation, Legitimacy, and Governance in AI Development, Working Paper (2016). In game theory, the stag hunt, sometimes referred to as the assurance game, trust dilemma or common interest game, describes a conflict between safety and social cooperation.
The Stag Hunt Theory and the Formation of Social Contracts: Networks
Other names for it or its variants include "assurance game", "coordination game", and "trust dilemma". Two, three, four hours pass, with no trace. (1) the responsibility of the state to protect its own population from genocide, war crimes, ethnic cleansing and crimes against humanity, and from their incitement; What is the difference between structural and operational conflict prevention? Robert J. Aumann, "Nash Equilibria are not Self-Enforcing," in Economic Decision Making: Games, Econometrics and Optimisation (Essays in Honor of Jacques Dreze), edited by J. J. Gabszewicz, J.-F. Richard, and L. Wolsey, Elsevier Science Publishers, Amsterdam, 1990, pp. 201-206. Moreover, the usefulness of this model requires accurately gauging or forecasting variables that are hard to work with. Here, both actors demonstrate high uncertainty about whether they will develop a beneficial or harmful AI alone (both actors see the likelihood as a 50/50 split), but they perceive the potential benefits of AI to be slightly greater than the potential harms. Explain how the 'Responsibility to Protect' norm tries to provide a compromise between the UN Charter's principle of non-interference (state sovereignty) and the UN genocide convention. [20] Will Knight, Could AI Solve the World's Biggest Problems? MIT Technology Review, January 12, 2016, https://www.technologyreview.com/s/545416/could-ai-solve-the-worlds-biggest-problems/. They can cheat on the agreement and hope to gain more than the first nation, but if they both cheat, they both do very poorly. Gray[36] defines an arms race as two or more parties perceiving themselves to be in an adversary relationship, who are increasing or improving their armaments at a rapid rate and structuring their respective military postures with a general attention to the past, current, and anticipated military and political behaviour of the other parties. Next, I outline my theory to better understand the dynamics of the AI Coordination Problem between two opposing international actors. The coincident timing of high-profile talks with a leaked report that President Trump seeks to reduce troop levels by half has already triggered a political frenzy in Kabul. This could be achieved through signaling a lack of effort to increase an actor's military capacity (perhaps by domestic bans on AI weapon development, for example). As we discussed in class, the catch is that the players involved must all work together in order to successfully hunt the stag and reap the rewards; once one person leaves the hunt for a hare, the stag hunt fails and those involved in it wind up with nothing. What is the key claim of the 'Liberal Democratic Peace' thesis? The intuition behind this is laid out in Armstrong et al.'s Racing to the precipice: a model of artificial intelligence development.[55] The authors suggest each actor would be incentivized to skimp on safety precautions in order to attain the transformative and powerful benefits of AI before an opponent. The authors of [47] look at different policy responses to arms race de-escalation and find that the model or game that underlies an arms race can affect the success of policies or strategies to mitigate or end the race. Some observers argue that a precipitous American retreat will leave the country, and even the capital, Kabul, vulnerable to an emboldened, undeterred Taliban given the limited capabilities of Afghanistan's national security forces.
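A quick arithmetic check of the 50/50 scenario just described; the benefit and harm magnitudes below are illustrative placeholders, since the text only says that the perceived benefits are slightly greater than the perceived harms.

```python
# Expected value of developing AI alone when an actor sees a 50/50 chance of a
# beneficial versus harmful outcome, with the benefit perceived as slightly
# larger than the harm (magnitudes are illustrative).
p_beneficial = 0.5
benefit, harm = 1.1, 1.0

expected_value = p_beneficial * benefit - (1 - p_beneficial) * harm
print(round(expected_value, 2))  # 0.05 > 0: going it alone still looks (barely) worthwhile
```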
International relations offers many examples of stag hunt and chicken dynamics, and is a ready example of cooperation under the security dilemma. In their paper, the authors suggest that "both the game that underlies an arms race and the conditions under which it is conducted can dramatically affect the success of any strategy designed to end it."[58] Additionally, both actors can expect a greater return if they both cooperate rather than both defect. For example, if the two international actors cooperate with one another, we can expect some reduction in individual payoffs if both sides agree to distribute benefits amongst each other.
Beding (2008), but also in international relations (Jervis 1978) and macroeconomics (Bryant 1994). It would be much better for each hunter, acting individually, to give up total autonomy and minimal risk, which brings only the small reward of the hare. This makes the risk twofold: the risk that the stag does not appear, and the risk that another hunter takes the kill. Another example is the hunting practices of orcas (known as carousel feeding).
There is a substantial relationship between the stag hunt and the prisoner's dilemma.
[23] United Nations Office for Disarmament Affairs, Pathways to Banning Fully Autonomous Weapons, United Nations, October 23, 2017, https://www.un.org/disarmament/update/pathways-to-banning-fully-autonomous-weapons/. Downs et al.[56] look at three different types of strategies governments can take to reduce the level of arms competition with a rival: (1) a unilateral strategy where an actor's individual actions impact race dynamics (for example, by focusing on shifting to defensive weapons[57]), (2) a tacit bargaining strategy that ties defensive expenditures to those of a rival, and (3) a negotiation strategy composed of formal arms talks. These strategies are not meant to be exhaustive by any means, but hopefully show how the outlined theory might provide practical use and motivate further research and analysis. In international relations, countries are the participants in the stag hunt. From that moment on, the tenuous bonds keeping together the larger band of weary, untrusting hunters will break and the stag will be lost. In this symmetric case, risk dominance of (Hare, Hare) occurs if b + d > a + c. In a case with a random group of people, most would choose not to trust strangers with their success. Payoff variables for simulated Deadlock, Table 10. [22] Julia Angwin, Jeff Larson, Surya Mattu, and Lauren Kirchner, Machine Bias, ProPublica, May 23, 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing. 16 (2019): 1. Table 4. This means that it remains in U.S. interests to stay in the hunt for now, because, if the game theorists are right, that may actually be the best path to bringing our troops home for good. to Be Made in China by 2030, The New York Times, July 20, 2017, https://www.nytimes.com/2017/07/20/business/china-artificial-intelligence.html, [33] Kania, Beyond CFIUS: The Strategic Challenge of China's Rise in Artificial Intelligence. [34] McKinsey Global Institute, Artificial Intelligence: The Next Digital Frontier. Evaluate this statement. Even doing good can run parallel with bad consequences. Table 5. In this section, I briefly argue that state governments are likely to eventually control the development of AI (either through direct development or intense monitoring and regulation of state-friendly companies)[29], and that the current landscape suggests two states in particular, China and the United States, are most likely to reach development of an advanced AI system first. Table 9. Use integration to find the indicated probabilities. For the cooperator (here, Actor B), the benefit they can expect to receive from cooperating would be the same as if both actors cooperated [P_(b|B)(AB) · b_B · d_B]. In order for human security to challenge global inequalities, there has to be cooperation between a country's foreign policy and its approach to global health. Image: The Intelligence, Surveillance and Reconnaissance Division at the Combined Air Operations Center at Al Udeid Air Base, Qatar. Deadlock is a common if little studied occurrence in international relations, although knowledge about how deadlocks are solved can be of practical and theoretical importance. [11] In our everyday lives, we store AI technology as voice assistants in our pockets[12] and as vehicle controllers in our garages. There is no certainty that the stag will arrive; the hare is present. Explain Rousseau's metaphor of the 'stag hunt'. This additional benefit is expressed here as P_(b|A)(A) · b_A.
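Collecting the benefit notation used in this section into one place; this is a reconstruction from the expressions quoted above, and the original paper's exact formula may differ.

```latex
% Benefit terms as used in this section (notation follows the text).
% Expected benefit to Actor A from developing AI alone:
E[\mathrm{benefit}_A \mid A] = P_{b|A}(A)\, b_A
% Expected benefit to Actor A under a Coordination Regime, where d_A \le 1 is
% A's share of the distributed benefits (symmetrically d_B for Actor B):
E[\mathrm{benefit}_A \mid AB] = P_{b|A}(AB)\, b_A\, d_A
```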
What is coercive bargaining and the Stag Hunt? Give an example. [56] Downs et al., Arms Races and Cooperation. [57] This is additionally explored in Jervis, Cooperation Under the Security Dilemma. As a result, concerns have been raised that such a race could create incentives to skimp on safety.
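A toy simulation of the skimping-on-safety intuition attributed to Armstrong et al.; the functional forms below (capability net of safety spending decides who wins, and the chance of a harmful outcome falls linearly with the winner's safety level) are illustrative assumptions, not the authors' actual model.

```python
import random

# Toy sketch of the racing intuition: two teams split effort between capability
# and safety; the more capable team "wins" the race, but a low-safety winner is
# more likely to produce a harmful AI. All functional forms are illustrative.
def race(safety_a, safety_b, trials=10_000, noise=0.3):
    disasters = 0
    for _ in range(trials):
        # Effort not spent on safety goes to capability, plus some luck.
        cap_a = (1 - safety_a) + random.gauss(0, noise)
        cap_b = (1 - safety_b) + random.gauss(0, noise)
        winner_safety = safety_a if cap_a >= cap_b else safety_b
        # Assume the chance of a harmful outcome falls linearly with safety.
        if random.random() < (1 - winner_safety):
            disasters += 1
    return disasters / trials

print(race(0.8, 0.8))  # both cautious: roughly a 20% chance of a bad outcome
print(race(0.8, 0.2))  # one side skimps: it usually wins, and the risk jumps
print(race(0.2, 0.2))  # race to the bottom: most runs end badly
```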
Additionally, this model accounts for an AI Coordination Regime that might result in variable distribution of benefits for each actor. It comes with colossal opportunities, but also threats that are difficult to predict. If both choose to row, they can successfully move the boat. The best response correspondences are pictured here. In international relations, examples of Chicken have included the Cuban Missile Crisis and the concept of Mutually Assured Destruction in nuclear arms development. Scholars of civil war have argued, for example, that peacekeepers can preserve lasting cease-fires by enabling warring parties to cooperate with the knowledge that their security will be guaranteed by a third party. Each player must choose an action without knowing the choice of the other. In the current Afghan context, the role of the U.S. military is not that of third-party peacekeeper, required to guarantee the peace in disinterested terms; it has the arguably less burdensome job of sticking around as one of several self-interested hunters, all of whom must stay in the game or risk its collapse.
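Since the figure with the best-response correspondences does not survive here, the following computes the threshold belief it would depict, reusing the illustrative a = 10, b = 5, c = 0, d = 2 payoffs and labeling from earlier:

```python
# Threshold belief at which hunting stag becomes a best response -- the kink in
# the best-response correspondence the missing figure would show.
a, b, c, d = 10, 5, 0, 2  # same illustrative payoffs and labeling as before

# Stag is a best response when p*a + (1 - p)*c >= p*b + (1 - p)*d,
# where p is the believed probability that the other hunter goes for the stag.
p_star = (d - c) / ((a - b) + (d - c))
print(f"hunt stag if you think the other will with probability >= {p_star:.3f}")
# ~0.286: the more credible the partner, the easier the good equilibrium is to reach
```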
Together, these elements in the arms control literature suggest that there may be potential for states as untrusting, rational actors existing in a state of international anarchy to coordinate on AI development in order to reduce future potential global harms. Hunting stag is successful only if both hunters hunt stag, while each hunter can catch a less valuable hare on his own. A sudden drop in current troop levels will likely trigger a series of responses that undermine the very peace and stability the United States hopes to achieve. In this game, "each player always prefers the other to play c, no matter what he himself plays."
A great example of chicken in IR is the Cuban Missile Crisis. These two concepts refer to how states will act in the international community. This is why international trade negotiations are often tense and difficult.
Perhaps most alarming, however, is the global catastrophic risk that the unchecked development of AI presents. Why do trade agreements even exist?
In so doing, they have maintained a kind of limited access order, drawing material and political benefits from cooperating with one another, most recently as part of the current National Unity Government. f(x) = (3/32)(4x − x²) for 0 ≤ x ≤ 4, and 0 otherwise. Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all.[26] This table contains an ordinal representation of a payoff matrix for a Chicken game. Meanwhile, the harm that each actor can expect to receive from an AI Coordination Regime consists of the actor's perceived likelihood that such a regime would create a harmful AI, expressed as P_(h|A)(AB) for Actor A and P_(h|B)(AB) for Actor B, times each actor's perceived harm, expressed as h_A and h_B.
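Putting the benefit and harm terms together gives a rough expression for Actor A's expected payoff from mutual cooperation under a Coordination Regime; this is assembled from the pieces quoted in the text, and the original paper's expression may include additional terms.

```latex
% Actor A's expected payoff from mutual cooperation under a Coordination Regime:
% shared benefit (scaled by A's distribution share d_A) minus expected harm.
E_A[\mathrm{cooperate}, \mathrm{cooperate}]
    = P_{b|A}(AB)\, b_A\, d_A \; - \; P_{h|A}(AB)\, h_A
% Symmetrically for Actor B with P_{b|B}, P_{h|B}, b_B, d_B, h_B.
```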
Can you think of any situations or scenarios in international relations? Actor A's preference order: DC > CC > CD > DD; Actor B's preference order: CD > CC > DC > DD. This variant of the game may end with the trust rewarded, and it may result in the trusting party alone receiving the full penalty, thus leading to a new game of revenge.
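These orderings are exactly what defines Chicken; writing them as an ordinal matrix (4 for the most preferred outcome down to 1 for the least) and checking best responses shows that the two pure equilibria sit on the off-diagonal:

```python
# Ordinal Chicken matrix built from the orderings above: DC > CC > CD > DD for
# the row actor (4 > 3 > 2 > 1), mirrored for the column actor.
chicken = {
    ("C", "C"): (3, 3),
    ("C", "D"): (2, 4),
    ("D", "C"): (4, 2),
    ("D", "D"): (1, 1),
}

def pure_equilibria(game):
    acts = ["C", "D"]
    found = []
    for r in acts:
        for c in acts:
            row_ok = game[(r, c)][0] == max(game[(x, c)][0] for x in acts)
            col_ok = game[(r, c)][1] == max(game[(r, y)][1] for y in acts)
            if row_ok and col_ok:
                found.append((r, c))
    return found

print(pure_equilibria(chicken))  # [('C', 'D'), ('D', 'C')] -- whoever refuses to swerve wins
```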
[52] In the context of developing an AI Coordination Regime, recognizing that two competing actors are in a state of Deadlock might drive peace-maximizing individuals to pursue de-escalation strategies that differ from other game models. In these abstractions, we assume two utility-maximizing actors with perfect information about each other's preferences and behaviors. Payoff variables for simulated Stag Hunt, Table 14. For example, can the structure of distribution impact an actor's perception of the game as cooperation or defection dominated (if so, should we focus strategic resources on developing accountability strategies that can effectively enforce distribution)? Carol M. Rose argues that the stag hunt theory is useful in 'law and humanities' theory. Due to the potential global harms developing AI can cause, it would be reasonable to assume that government actors would try to impose safety measures and regulations on actors developing AI, and perhaps even coordinate on an international scale to ensure that all actors developing AI might cooperate under an AI Coordination Regime[35] that sets, monitors, and enforces standards to maximize safety. Actor A's preference order: DC > DD > CC > CD; Actor B's preference order: CD > DD > CC > DC.
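Since this section repeatedly distinguishes game models by preference orderings, a compact way to map an ordering such as the Deadlock one above onto a named 2x2 game (the mapping matches the orderings quoted in this section):

```python
# Map a row actor's ordinal preference ordering over outcomes to the standard
# 2x2 game it defines. Outcomes are written from the row actor's point of view:
# "DC" = I defect, you cooperate; "CD" = I cooperate, you defect; and so on.
GAMES = {
    ("DC", "CC", "DD", "CD"): "Prisoner's Dilemma",
    ("CC", "DC", "DD", "CD"): "Stag Hunt",
    ("DC", "CC", "CD", "DD"): "Chicken",
    ("DC", "DD", "CC", "CD"): "Deadlock",
}

def classify(ordering):
    """Return the named game for a best-to-worst ordering of the four outcomes."""
    return GAMES.get(tuple(ordering), "not one of the four standard games")

print(classify(["DC", "DD", "CC", "CD"]))  # Deadlock -- Actor A's ordering above
print(classify(["CC", "DC", "DD", "CD"]))  # Stag Hunt
```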