
Citation Suggestion

Please use the following Persistent Identifier (PID) to cite this document:
https://doi.org/10.17645/mac.9523

How Generative AI Went From Innovation to Risk: Discussions in the Korean Public Sphere

[journal article]

Kim, Sunghwan
Jung, Jaemin

Abstract

Technological progress breeds both innovation and potential risks, a duality exemplified by the recent debate over generative artificial intelligence (GAI). This study examines how GAI has become a perceived risk in the Korean public sphere. To explore this, we analyzed news articles (N = 56,468) and public comments (N = 68,393) from early 2023 to mid-2024, a period marked by heightened interest in GAI. Our analysis focused on articles mentioning "generative artificial intelligence." Using the social amplification of risk framework (Kasperson et al., 1988), we investigated how risks associated with GAI are amplified or attenuated. To identify key topics, we employed the bidirectional encoder representations from transformers model on news content and public comments, revealing distinct media and public agendas. The findings show a clear divergence in risk perception between news media and public discourse. While the media's amplification of risk was evident, its influence remained largely confined to specific amplification stations. Moreover, the focus of public discussion is expected to shift from AI ethics and regulatory issues to the broader consequences of industrial change.
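
The abstract mentions topic identification with a BERT-based model on news articles and public comments. As a purely illustrative sketch, and not the authors' actual pipeline (whose code, model choice, and parameters are not part of this record), the following Python snippet shows one common way to cluster short texts using BERT-family sentence embeddings; the model name, cluster count, and sample documents are assumptions.

    # Illustrative sketch only: this is NOT the study's pipeline.
    # Model name, cluster count, and sample documents are placeholder assumptions.
    from sentence_transformers import SentenceTransformer
    from sklearn.cluster import KMeans

    # Placeholder documents standing in for news articles / public comments.
    docs = [
        "Generative AI will transform the media industry.",
        "New AI regulation is being discussed by lawmakers.",
        "ChatGPT raises questions about copyright and ethics.",
        "Jobs in creative industries may be affected by AI tools.",
    ]

    # Encode the texts with a pretrained BERT-family sentence encoder (assumed model).
    encoder = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = encoder.encode(docs)

    # Group the texts into a small number of topics via k-means (cluster count is arbitrary here).
    kmeans = KMeans(n_clusters=2, random_state=0, n_init=10)
    labels = kmeans.fit_predict(embeddings)

    for doc, label in zip(docs, labels):
        print(label, doc)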

Keywords
artificial intelligence; risk communication; technical development; South Korea; public opinion; risk

Classification
Media Contents, Content Analysis
Impact Research, Recipient Research

Free Keywords
AI; ChatGPT; amplification stations; generative AI; public discourse; risk amplification; risk attenuation

Document language
English

Publication Year
2025

Journal
Media and Communication, 13 (2025)

Issue topic
AI, Media, and People: The Changing Landscape of User Experiences and Behaviors

ISSN
2183-2439

Status
Published Version; peer reviewed

Licence
Creative Commons - Attribution 4.0

