Citation Suggestion

Please use the following Persistent Identifier (PID) to cite this document:
https://doi.org/10.34669/wi.cp/5.6

The Problem of the Automation Bias in the Public Sector: A Legal Perspective

[conference paper]


This document is part of the following document:
Proceedings of the Weizenbaum Conference 2023: AI, Big Data, Social Media, and People on the Move

Author
Ruschemeier, Hannah

Corporate Editor
Weizenbaum Institute for the Networked Society - The German Internet Institute

Abstract

The automation bias describes the phenomenon, proven in behavioural psychology, that people place excessive trust in the decision suggestions of machines. The law currently sees a dichotomy - and covers only fully automated decisions, and not those involving human decision makers at any stage of the process. However, the widespread use of such systems, for example to inform decisions in education or benefits administration, creates a leverage effect and increases the number of people affected. Particularly in environments where people routinely have to make a large number of similar decisions, the risk of automation bias increases. As an example, automated decisions providing suggestions for job placements illustrate the particular challenges of decision support systems in the public sector. So far, the risks have not been sufficiently addressed in legislation, as the analysis of the GDPR and the draft Artificial Intelligence Act shows. I argue for the need for regulation and present initial approaches.

Keywords
discrimination; labor market; data protection; artificial intelligence; algorithm; decision making

Classification
Technology Assessment
Law

Free Keywords
AI bias; GDPR

Collection Title
Proceedings of the Weizenbaum Conference 2023: AI, Big Data, Social Media, and People on the Move

Conference
5th Weizenbaum Conference "AI, Big Data, Social Media, and People on the Move". Berlin, 2023

Document language
English

Publication Year
2023

City
Berlin

Page/Pages
p. 1-11

Status
Primary Publication; peer reviewed

Licence
Creative Commons - Attribution 4.0

