
Download full text (PDF, 880.1 KB)

Citation note

Please always refer to the following Persistent Identifier (PID) when citing this document:
https://nbn-resolving.org/urn:nbn:de:0168-ssoar-99521-9


What does the public think about artificial intelligence? A criticality map to understand bias in the public perception of AI

[Journal article]

Brauner, Philipp
Hick, Alexander
Philipsen, Ralf
Ziefle, Martina

Abstract

Introduction: Artificial Intelligence (AI) has become ubiquitous in medicine, business, manufacturing and transportation, and is entering our personal lives. Public perceptions of AI are often shaped either by admiration for its benefits and possibilities, or by uncertainties, potential threats and fears about this opaque technology, which many perceive as mysterious. Understanding the public perception of AI, as well as its requirements and attributions, is essential for responsible research and innovation, and enables the development and governance of future AI systems to be aligned with individual and societal needs. Methods: To contribute to this understanding, we asked 122 participants in Germany how they perceived 38 statements about artificial intelligence in different contexts (personal, economic, industrial, social, cultural, health). We assessed their personal evaluation and the perceived likelihood of these aspects becoming reality. Results: We visualized the responses in a criticality map that allows the identification of issues that require particular attention from research and policy-making. The results show that the perceived evaluation and the perceived expectations differ considerably between the domains. The aspect perceived as most critical is the fear of cybersecurity threats, which is seen as highly likely and least liked. Discussion: The diversity of users influenced the evaluation: people with lower trust rated the impact of AI as more positive but less likely. Compared to people with higher trust, they consider certain features and consequences of AI to be more desirable, but they think the impact of AI will be smaller. We conclude that AI is still a "black box" for many. Neither the opportunities nor the risks can yet be adequately assessed, which can lead to biased and irrational control beliefs in the public perception of AI. The article concludes with guidelines for promoting AI literacy to facilitate informed decision-making.
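
As an illustrative aside: the criticality map described in the abstract plots each statement by its mean perceived likelihood and its mean evaluation, so that aspects judged likely but disliked stand out. A minimal sketch of such a plot, using matplotlib and entirely hypothetical placeholder values (not the study's data), could look like this:

# Minimal sketch of a "criticality map" as described in the abstract:
# each statement about AI is plotted by its mean perceived likelihood (x)
# and its mean evaluation/valence (y). Labels and values below are
# hypothetical placeholders, not results from the study.
import matplotlib.pyplot as plt

statements = {
    # label: (mean perceived likelihood, mean evaluation), assumed -3..+3 scales
    "cybersecurity threats": (2.1, -2.0),
    "better medical diagnoses": (1.5, 2.2),
    "job displacement": (1.0, -1.2),
    "personalised education": (0.4, 1.1),
}

fig, ax = plt.subplots(figsize=(6, 5))
for label, (likelihood, evaluation) in statements.items():
    ax.scatter(likelihood, evaluation, color="tab:blue")
    ax.annotate(label, (likelihood, evaluation),
                textcoords="offset points", xytext=(5, 5))

# Quadrant lines: the region with high likelihood and negative evaluation
# marks the "critical" aspects that warrant particular attention.
ax.axhline(0, color="grey", linewidth=0.8)
ax.axvline(0, color="grey", linewidth=0.8)
ax.set_xlabel("perceived likelihood")
ax.set_ylabel("evaluation (valence)")
ax.set_title("Criticality map (illustrative sketch)")
plt.tight_layout()
plt.show()

In the study's terms, points falling in the high-likelihood, negative-evaluation region (here, the cybersecurity placeholder) are the candidates for particular attention from research and policy-making.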

Thesaurus keywords
artificial intelligence; trust; user; new technology; acceptance; research; risk assessment; innovation; perception; Federal Republic of Germany; attitude

Classification
Technology Assessment
Sociology of Science, Sociology of Technology, Research on Science and Technology

Free keywords
affect heuristic; public perception; user diversity; mental models; technology acceptance; responsible research and innovation (RRI); Collingridge dilemma; Interpersonales Vertrauen (KUSIV3) (ZIS 37)

Document language
English

Publication year
2023

Page numbers
pp. 1-12

Journal title
Frontiers in Computer Science, 5 (2023)

DOI
https://doi.org/10.3389/fcomp.2023.1113903

ISSN
2624-9898

Status
Published version; peer reviewed

License
Creative Commons - Attribution 4.0

