Reciprocity in hybrid systems of humans and machines

Emergence and collapse of reciprocity in semiautomatic driving coordination experiments with humans

This project is about social norms in hybrid systems of humans and machines. To investigate how computer assistance affects social norms, we built a cyber-physical lab experiment system with real-time video streaming in which online participants (N = 300 in 150 dyads) remotely drive physical robotic vehicles in a coordination game.

This is a collaborative project between lead author Hirokazu Shirado (CMU), Nicholas A. Christakis (Yale University), and Sony CSL researcher Shunichi Kasahara. Dr. Kasahara conducted this project as part of his research on Cybernetic Humanity.

Abstract

Forms of both simple and complex machine intelligence are increasingly acting within human groups in order to affect collective outcomes. Considering the nature of collective action problems, however, such involvement could paradoxically and unintentionally suppress existing beneficial social norms in humans, such as those involving cooperation. Here, we test theoretical predictions about such an effect using a unique cyber-physical lab experiment where online participants (N = 300 in 150 dyads) drive robotic vehicles remotely in a coordination game. We show that autobraking assistance increases human altruism, such as giving way to others, and that communication helps people to make mutual concessions. On the other hand, autosteering assistance completely inhibits the emergence of reciprocity between people in favor of self-interest maximization. The negative social repercussions persist even after the assistance system is deactivated. Furthermore, adding communication capabilities does not relieve this inhibition of reciprocity because people rarely communicate in the presence of autosteering assistance. Our findings suggest that active safety assistance (a form of simple AI support) can alter the dynamics of social coordination between people, including by affecting the trade-off between individual safety and social reciprocity. The difference between autobraking and autosteering assistance appears to relate to whether the assistive technology supports or replaces human agency in social coordination dilemmas. Humans have developed norms of reciprocity to address collective challenges, but such tacit understandings could break down in situations where machine intelligence is involved in human decision-making without having any normative commitments.

Paper

Shirado, Hirokazu, Shunichi Kasahara, and Nicholas A. Christakis. 2023. "Emergence and Collapse of Reciprocity in Semiautomatic Driving Coordination Experiments with Humans." Proceedings of the National Academy of Sciences 120 (51): e2307804120.
https://www.pnas.org/doi/10.1073/pnas.2307804120

Experiment setup. (A) The physical coordination space. Two car robots face each other on a single road. Players remotely drive the robots over the Internet, controlling the speed and whether to drive on or off the road. (B) A sequence of unilateral turns by the yellow car. To avoid a crash, at least one of the players needs to give way to their counterpart, but this reduces their driving speed by 75% (and thus their payoff). (C) Experimental treatments for the driving system. In addition to the default (i.e., manual driving), cars with autobraking automatically stop once with a warning, while those with autosteering automatically swerve at the last moment.
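The payoff logic described above can be sketched as a chicken-style coordination game. This is an illustrative model, not the paper's actual implementation: the 75% speed (and payoff) reduction for giving way comes from the caption, while the unit payoff scale and the zero crash payoff are hypothetical placeholders.

```python
# Sketch of the driving coordination game's per-round payoffs.
# Assumptions (not from the paper): payoff is proportional to driving
# speed, full speed pays 1.0 per round, and a crash pays 0 to both.

FULL_SPEED = 1.0    # payoff rate when driving straight on the road
YIELD_SPEED = 0.25  # giving way cuts speed, and thus payoff, by 75%
CRASH_PAYOFF = 0.0  # hypothetical placeholder for a head-on collision

def round_payoffs(a_yields: bool, b_yields: bool) -> tuple[float, float]:
    """Return (player A, player B) payoffs for one round."""
    if not a_yields and not b_yields:
        return CRASH_PAYOFF, CRASH_PAYOFF  # neither gives way: crash
    pa = YIELD_SPEED if a_yields else FULL_SPEED
    pb = YIELD_SPEED if b_yields else FULL_SPEED
    return pa, pb

# Reciprocity (alternating who gives way) equalizes payoffs across rounds,
# whereas one player always yielding leaves the concession one-sided.
a_total = sum(round_payoffs(i % 2 == 0, i % 2 == 1)[0] for i in range(4))
b_total = sum(round_payoffs(i % 2 == 0, i % 2 == 1)[1] for i in range(4))
print(a_total, b_total)  # alternating turns: both players earn the same
```

Under these assumed numbers, four rounds of alternation give each player 2.5, while a player who always yields earns 1.0 against their counterpart's 4.0, which is the individual incentive that reciprocity resolves.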

We show that autobraking assistance increases human altruism, such as giving way to others and making mutual concessions. In contrast, autosteering assistance completely inhibits the emergence of reciprocity between people. The GIF below shows an example of the non-reciprocity pattern.

Bird's-eye-view footage of bilateral turns across the rounds. This example session is in the autosteering condition with communication capabilities. The parallel animation shows the recorded information used for the analysis, such as the trajectories of vehicle motion, the participants' communication messages and timing (although this example's participants never sent messages), and the assist system's activation timing (i.e., warning and autonomous emergency steering assistance). The robotic vehicles automatically returned to the start positions between rounds.

The difference between the reciprocity and non-reciprocity patterns appears to relate to whether the technology supports or replaces human agency in social coordination dilemmas, suggesting that agency in hybrid systems can shape social coordination. The GIF below shows an example of the reciprocity pattern.

Bird's-eye-view footage of reciprocal unilateral turns across the rounds. This example session is in the manual condition with communication capabilities. The parallel animation shows the recorded information used for the analysis, such as the trajectories of vehicle motion, the participants' communication messages and timing, and the assist system's activation timing (i.e., warning; although this example's participants never activated the warning system). The robotic vehicles automatically returned to the start positions between rounds.
