Political Behaviour/Sociology



F16 - The Politics and Consequences of AI

Date: Jun 14 | Time: 08:30am to 10:00am | Location:

Chair/Président/Présidente : Dietlind Stolle (McGill University)

Discussant/Commentateur/Commentatrice : Dietlind Stolle (McGill University)

The Multidimensional Structure of Risk: How Dread and Controllability Perceptions Shape Opinions About Artificial Intelligence: Mathieu Turgeon (University of Western Ontario), Tyler Romualdi (University of Western Ontario), Tyler Girard (Purdue University), Yannick Dufresne (Université Laval)
Abstract: Studies of public opinion about new and emerging technologies are gaining momentum. The rise of ChatGPT and other artificial intelligence (A.I.) programs has raised meaningful concerns about academic integrity, personal security, and the spread of misinformation. However, questions persist about how judgements of the seriousness or pervasiveness of new technologies affect public acceptance. Previous work suggests that individual risk evaluations have become increasingly multidimensional, with beliefs about familiarity and the technology's degree of danger often serving as primary concerns. Yet two overlooked dimensions with meaningful implications for opinions about the acceptance and support of new technologies are perceived dread and controllability. These refer, respectively, to beliefs about the magnitude of the risk posed by the technology (dread) and to the suspected capacity to control the technology's growth and outcomes (controllability). We leverage an original cross-national survey with an embedded experiment to examine three primary research questions. First, what is the extent of dread and controllability concerns regarding A.I. technology in Canada and Japan? Second, who is most susceptible to dread and controllability concerns posed by A.I. technology in these contexts? Lastly, how do frames conveying varying degrees of the perceived magnitude and controllability of technological risks shape public opinion about adopting A.I.-based technology in society, and does this vary by policy domain? The results demonstrate the importance of evaluating the multidimensional nature of citizens' technological risk perceptions and how these threats are communicated to the public.


Contingent Public Support for Artificial Intelligence? Evidence from Six Survey Vignette Experiments: John McAndrews (McMaster University), Ori Freiman (McMaster University), Jordan Mansell (McMaster University), Clifton van der Linden (McMaster University), Anwar Sheluchin (McMaster University)
Abstract: Citizens—who are both potential users of AI and potentially subject to public and private decisions made with AI—have an important role to play in the emerging conversation about how to regulate AI. This paper contributes to this fast-developing public conversation about regulation by exploring how public support for AI, as well as for the restrictions placed on it, may depend on three factors: domain of use, motivation, and degree of autonomous decision-making. To test these three factors, we designed six vignette experiments that were fielded in 2023 as part of an online survey to a large opt-in sample of Toronto residents. The paper extends existing research in several ways. First, it unpacks the motivations for AI adoption that the public finds most compelling, comparing support for adoption prompted by accuracy, speed, or cost-cutting. Second, it leverages the large survey sample to explore interactions between factors—specifically, whether the effects of motivation and autonomous decision-making on public support vary across a wide range of public and private domains—allowing for a more nuanced assessment of the generalizability of public attitudes across contexts. Third, it integrates individual-level psychological traits, such as optimism and openness to experience, allowing an evaluation of how effects may be moderated by individual dispositions.


Who Benefits and Who Loses? The Perceived Effects of Generative AI on Labour Markets: Sophie Borwein (University of British Columbia), Beatrice Magistro (Caltech), R. Michael Alvarez (Caltech), Peter Loewen (University of Toronto)
Abstract: The rapid diffusion of generative artificial intelligence (AI) has the potential to transform labour markets, yet it will take time to uncover the broader impacts of this technology on labour productivity and inequality. In the meantime, how governments and workplaces approach the use and regulation of generative AI will depend on how people perceive its benefits and costs. Given the broader uncertainty around these technologies, this paper asks: how do people reason about the effects of these new technologies on labour markets? Whom do people perceive as the beneficiaries of these technologies, and can providing information about the possible benefits for certain groups of workers shift attitudes? Finally, the paper asks what policies people support in response to generative AI. We answer these questions by drawing on two pre-registered survey experiments with respondents in Canada and the United States.