Review

How Generative AI Is Transforming Journalism: Development, Application and Ethics

School of Government and Public Affairs, Communication University of China, Beijing 100024, China
* Author to whom correspondence should be addressed.
Journal. Media 2024, 5(2), 582-594; https://doi.org/10.3390/journalmedia5020039
Submission received: 17 April 2024 / Revised: 2 May 2024 / Accepted: 6 May 2024 / Published: 10 May 2024

Abstract

Generative artificial intelligence (GAI) is a technology based on algorithms and models that creates content such as text, audio, images, videos, and code. GAI is deeply integrated into journalism in the form of tools, platforms and systems. However, GAI’s role in journalism dilutes the power of media professionals, changes traditional news production and poses ethical questions. This study attempts to systematically answer these ethical questions in specific journalistic practices from the perspectives of journalistic professionalism and epistemology. Building on a review of GAI’s development and application, this study identifies the responsibilities of news organizations, journalists and audiences, so that they can realize the potential of GAI while adhering to journalistic professionalism and universal human values and avoiding negative technological effects.

1. Introduction

Generative artificial intelligence (GAI) is a technology system based on algorithms and models designed to create text, audio, images, videos, and code. GAI encompasses various technologies and architectures that learn the underlying patterns and correlations in data and subsequently generate content based on this acquired knowledge (Jovanovic and Campbell 2022). A research report published by Accenture in March 2023 shows that GAI ushers in a bold new future for science, business and society, with a profoundly positive impact on human creativity and productivity. Among business leaders, 98% of respondents agree that AI foundation models will play an important role in their organization’s strategies over the next three to five years. Furthermore, as much as 40% of all working hours could be supported or augmented by language-based AI, such as GPT-4 (Harper et al. 2023).
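The learn-patterns-then-generate loop described above can be illustrated with a deliberately tiny sketch: a bigram model that counts which word tends to follow which in a corpus and then samples new sequences from those counts. This is a toy stand-in for illustration only, not how production systems such as GPT-4 work; the corpus and all names below are invented.

```python
import random
from collections import defaultdict

def train_bigram_model(text):
    """Record which word follows which -- a toy stand-in for the
    pattern-learning step of a generative model."""
    model = defaultdict(list)
    words = text.split()
    for current, nxt in zip(words, words[1:]):
        model[current].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Sample a new word sequence from the learned transition patterns."""
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    out = [start]
    for _ in range(length - 1):
        candidates = model.get(out[-1])
        if not candidates:
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# A hypothetical mini-corpus of financial-news fragments.
corpus = ("the market rose today . the market fell sharply . "
          "analysts said the market may rise again")
model = train_bigram_model(corpus)
print(generate(model, "the"))
```

Every word the toy emits was seen in the training data, which also hints at a limitation the article returns to later: such systems recombine learned patterns rather than verify facts.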
GAI is currently used to assist and accelerate particular procedural tasks, such as content creation, fact-checking, data processing, image generation, speech conversion and translation, reducing the burden on humans and increasing efficiency (Caswell et al. 2021; Nishal and Diakopoulos 2024). With the widespread adoption of GAI in news production, academia has begun to scrutinize GAI news production from three perspectives. The first line of study compares GAI-assisted journalism with traditional journalism (Hong and Tewksbury 2024; Wang et al. 2023). The second examines how GAI is used in the journalism of specific countries or regions. For example, Gondwe (2023) explored how journalists in sub-Saharan Africa use GAI, while Pinto (2024) investigated GAI in the Brazilian news industry. The third reflects on ethical challenges and calls for upholding journalistic professionalism and its core values. Some scholars argue that journalistic professionalism is compromised by GAI, championing the view that “without journalists, there is no journalism” (Fernández et al. 2023). Others express concern that the growing integration of GAI into news production may have a profound impact on public opinion and even influence the future of democracy (Spennemann 2023; Arguedas and Simon 2023). These three lines of study, whether emphasizing operational aspects or ethical concerns, examine the ethical challenges posed by GAI in journalism in isolation and fail to contextualize them within the specific stages of news production.
Therefore, this study attempts to discuss ethical challenges under the context of specific stages of news production. In addition to scrutinizing existing technological implementations, its goal is to offer theoretical perspectives that can guide researchers, professionals, and policymakers in leveraging the capabilities of GAI responsibly while upholding journalistic integrity and universal human values, thus mitigating adverse technological impacts.

2. Development of GAI in Journalism

AI has been used in the news industry for the past decade. The Associated Press (AP) was one of the earliest news organizations to integrate AI into news production (Radcliffe 2023). In 2014, the AP began using AI to process reports on corporate earnings. Before adopting AI, AP editors and journalists spent considerable resources creating financial reports, which drew their focus away from news of greater significance. Despite this investment, the AP could only produce 300 financial reports per quarter, leaving thousands of potential corporate earnings reports unwritten. With the help of Wordsmith, created by Automated Insights, the AP can now convert earnings data into publishable news stories within seconds, nearly 15 times more efficiently than the traditional way (Miller 2015). Other media organizations, including Bloomberg, Reuters, Forbes, The New York Times, The Washington Post, and the BBC, also use AI for news production. The primary applications of AI in these large media organizations involve news gathering, production, and dissemination. In China, Tencent was the first to use AI for news writing. In September 2015, Tencent’s financial channel used a news-writing robot known as “Dreamwriter” for a news report titled “August CPI rose 2.0% year-on-year, hitting a new high in 12 months”. The content mainly consisted of data analysis and expert commentary on the data. Dreamwriter continued to release more reports, reaching 40,000 in the first three quarters of 2016 (Zhou 2015). On 18 November 2015, ** the roles and responsibilities of journalists and other media professionals. For example, traditionally, the information gathered by journalists should possess societal significance and humanistic considerations, reflecting the intricate dynamics between individuals and society, while also adhering to principles of fairness, objectivity, and rationality.
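Wordsmith and Dreamwriter are proprietary systems, but the data-to-text idea behind earnings automation (structured figures in, templated story out) can be sketched in a few lines. The company name and figures below are hypothetical, and real systems use far richer template libraries.

```python
def earnings_story(company, quarter, revenue_m, prior_revenue_m):
    """Turn structured earnings data into a short templated story,
    choosing the verb from the direction of the year-on-year change."""
    if revenue_m == prior_revenue_m:
        return (f"{company} reported {quarter} revenue of ${revenue_m:.0f} "
                f"million, unchanged from a year earlier.")
    direction = "rose" if revenue_m > prior_revenue_m else "fell"
    pct = abs(revenue_m - prior_revenue_m) / prior_revenue_m * 100
    return (f"{company} reported {quarter} revenue of ${revenue_m:.0f} million, "
            f"which {direction} {pct:.1f}% from ${prior_revenue_m:.0f} million "
            f"a year earlier.")

# Hypothetical earnings figures for illustration.
print(earnings_story("Acme Corp", "Q2", 120, 100))
```

Because the wording is fixed in advance and only the numbers vary, a newsroom can generate thousands of such stories per quarter, which is precisely the efficiency gain the AP reported.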
If journalists are entirely replaced by machines, and it is left to machines to decide what information to gather as news material, there is a risk that machines may indiscriminately collect personal information or user data without human constraints, thus infringing on individual privacy. As a result, news production may violate basic human rights as journalism forsakes its fundamental purpose of serving society and instead takes on the role of mass surveillance (Dilmaghani et al. 2019; Andrew and Huang 2023; Guembe et al. 2022). With intelligent systems in place, users’ privacy is exposed, and the prerequisite for accessing intelligent services is the relinquishment of personal information.
On 29 June 2023, sixteen individuals in the United States filed a lawsuit against OpenAI, the developer of ChatGPT. They accused the company of collecting and revealing their personal information, such as account details, login credentials, emails, payment information, browsing history, social media interactions, chat logs, and other online activities, without adequately informing the users or obtaining their consent (Mauran 2023). Additionally, the limited accuracy of GAI, particularly in comprehending intricate emotions, results in a growing prevalence of standardized news delivered without a human touch.

4.2. Content Production: Credibility

Credibility serves not only as a foundational principle of journalism but also as a fundamental aspect of journalistic ethics. With the advent of the digital age, however, both scholars and professionals are reexamining the concept of “news credibility”, and discussions surrounding it have become increasingly prevalent alongside the digital transformation of the news industry. For instance, within the realm of digital news, discussions delve into nuanced concepts such as “experiential truth” within media technology, “perceived truth” within cognitive psychology, and “negotiated truth” concerning power dynamics. Despite these evolving definitions and discussions, the significance of credibility remains paramount in defining what qualifies as “news”.
In fact, truth is not absolute but falls within a measurable range. Thanks to technological advancements, journalism is now closer to the truth than ever, sometimes even overly detailed and accurate, appearing clearer than reality itself. The fact-checking ability of GAI is considerable: for example, ChatGPT achieves an accuracy rate of up to 68.79% (Hoes and Bermeo 2023). However, GAI’s ability to make fabrications appear convincingly authentic conceals its detrimental risk of blurring the boundaries of reality. GAI, characterized by technical rationality, stealthily integrates mechanistic cognition rooted in technical logic and preconceived judgments into news production. Extreme technology supporters become adherents of technological superstition (Shipley and Williams 2023). For example, GAI has given rise to deepfake challenges, where Generative Adversarial Networks (GANs) are used for face swapping, lip synchronization, facial re-enactment, motion transfer, and more. GAI’s ability to generate fake content of individuals expressing opinions with specific emotions and even dancing has led to an upheaval in the news industry, making it more difficult for media professionals to ensure the authenticity of their content (Marconi and Daldrup 2018).
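An accuracy figure like the 68.79% cited above is simply the match rate between a model's verdicts and fact-checkers' verdicts over a set of claims. A minimal sketch of that metric, with hypothetical verdicts standing in for real evaluation data, might look like this:

```python
def fact_check_accuracy(model_verdicts, ground_truth):
    """Fraction of claims where the model's true/false verdict matches
    the fact-checkers' verdict -- the metric behind accuracy figures
    such as the one reported for ChatGPT."""
    assert len(model_verdicts) == len(ground_truth)
    correct = sum(m == g for m, g in zip(model_verdicts, ground_truth))
    return correct / len(ground_truth)

# Hypothetical verdicts over six claims, for illustration only:
# the model agrees with fact-checkers on four of the six.
print(fact_check_accuracy(
    ["true", "false", "true", "true", "false", "true"],
    ["true", "false", "false", "true", "false", "false"]))
```

The sketch also makes the limitation concrete: an accuracy of roughly two-thirds means the model misjudges about one claim in three, which is why human verification remains indispensable.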
Moreover, the decreasing cost of producing deepfakes with GAI is exacerbated by the adoption of the Platform as a Service (PaaS) model in cloud services and cloud terminals. This trend intensifies the challenges posed by deepfake technology, amplifying its potential negative political ramifications. For example, a video featured a two-minute conversation in which the leader of Progressive Slovakia (Progresívne Slovensko), Michal Šimečka, appears to discuss buying votes from the Roma minority with a journalist. AFP fact-checkers consulted several experts who concluded that the audio was synthesized by an AI tool trained on real samples of the speakers’ voices, and several copies circulated on social media without a label marking them as misleading (Solon 2023). As Thomson et al. (2022) observe, “while the creation of synthetic media is not inherently problematic (for example, in the case of art, satire, or parody), issues emerge when those media are presented without transparency and masquerade as reality”. Indeed, outlets such as The New York Times have begun labeling AI-generated images to highlight their provenance.

4.3. Customization and Dissemination: Value Orientation

News values encompass the qualities that enhance the significance or appeal of a story to the public. Consistent factors of news values include credibility and timeliness, while the variables are prominence, impact, proximity, and interest. The depth and diversity of the values embedded within news facts directly correlate with the overall value of the news report. Traditional media pursue truth and fulfill social responsibilities. Their news values include timeliness, relevance, and prominence. Caring about what the audience wants, they also produce interesting news or news that matters to a given audience. With the development of GAI, power replaces news values, which means that whoever possesses the technology and controls the resources decides what is newsworthy. Deploying GAI requires substantial resources, such as digital infrastructure and training corpora, that are controlled by the countries and enterprises with the most resources and power, leading to their monopoly. Consequently, these AI models prioritize their creators’ worldviews and perspectives, their technological regulations and standards establish global norms, and their value systems become universal, ultimately causing a shift in news values towards power (McIntosh 2018; Acemoglu 2021). The New Zealand-based data scientist David Rozado created a GAI model called RightWingGPT to express US right-wing political views. Research indicates that language models can subtly influence users’ values, highlighting the potentially serious consequences of political biases in GAI models (Knight 2023). Other researchers have used DALL-E and Stable Diffusion to convert text into images, revealing that these models perpetuate common stereotypes. For instance, prompts related to cleaners consistently generate images depicting women (Bianchi et al. 2023).
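Audits of the kind Bianchi et al. conducted boil down to counting how often the generations for a prompt carry a stereotyped attribute. The sketch below assumes human-assigned labels for a batch of generated images; the prompts, labels, and counts are invented for illustration and do not reproduce the study's data.

```python
from collections import Counter

def stereotype_rate(labeled_outputs, prompt, attribute):
    """Share of generations for a prompt that annotators tagged with a
    given attribute. `labeled_outputs` maps each prompt to the list of
    labels assigned to its generated images (hypothetical data, standing
    in for annotations of text-to-image model outputs)."""
    labels = labeled_outputs[prompt]
    return Counter(labels)[attribute] / len(labels)

# Hypothetical annotation results for illustration only.
annotations = {
    "a photo of a cleaner": ["woman", "woman", "woman", "man"],
    "a photo of a CEO": ["man", "man", "man", "man"],
}
print(stereotype_rate(annotations, "a photo of a cleaner", "woman"))  # 0.75
```

Comparing such rates against a neutral baseline (for example, real occupational demographics, or an even split) is what lets researchers claim a model amplifies rather than merely reflects a stereotype.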
The inherent flaws in large language models directly affect the output of GAI, which relies on machine learning to mimic human behavior and generate content. This situation leads to systemic biases, value conflicts, cultural hegemony, stereotypes, and misinformation (Zhou et al. 2024). In addition, GAI models lack common sense reasoning, making it difficult to comprehend complex issues and distinguish nuances in tone and subtle emotions. Issues like AI hallucinations emerge, where generated content may seem plausible but contradicts real-world knowledge or existing data (Salvagno et al. 2023; Alkaissi and McFarlane 2023). As a result, GAI values shape the customization and dissemination of news, hindering comprehension of events, narrowing perspectives, intensifying polarized discourse, and eroding public discourse.
To facilitate an understanding of the study, a figure is shown here to depict the application of GAI in news production and the associated ethical challenges (Figure 1).
Lastly, it is imperative to discuss the legal ramifications stemming from the integration of GAI in news production, particularly concerning potential violations of individual privacy rights and copyrights. On 15 February 2023, Francesco Marconi, a Wall Street Journal reporter, publicly accused OpenAI of the unauthorized use of content from prominent foreign media outlets such as Reuters, The New York Times, The Guardian, and the BBC to train its ChatGPT model, without compensating the original sources. Presently, prevailing legal frameworks predominantly attribute responsibility to individuals, which may prove insufficient in addressing GAI as a digital entity. Legal accountability demands the establishment of a direct causal link between the AI operator and any infringement, with penalties commensurate with the actual harm inflicted. However, existing legal standards lack the necessary mechanisms to precisely identify and attribute damages, as well as to engage in ongoing assessment and risk mitigation protocols to minimize adverse consequences. Moreover, the incorporation of GAI poses significant challenges to the protection of intellectual property. The question of whether AI-generated works are entitled to intellectual property rights remains unresolved, although disputes over such rights have already arisen. Notably, in China, a defendant faced litigation for employing AI-generated images without the plaintiff’s authorization as illustrations for articles. The Beijing Internet Court held that AI-generated images demonstrating human-like “originality” and intellectual input could be protected under copyright law as manifestations of creative intellectual endeavors.

5. Conclusions and Recommendations

This research extensively explores how GAI could potentially disrupt the conventional methods of news production. It delves into the integration of GAI within the news production workflow, thoroughly examining its current state and the ethical quandaries it introduces to the field of journalism. Through a meticulous examination of these ethical challenges, this study aims to offer valuable insights into how GAI can play a constructive role in shaping the future of news production. Ultimately, its goal is to equip professionals and researchers with a comprehensive understanding to effectively navigate the continuous technological transformations occurring within the realm of journalism.
The advantages of GAI in news production are multifaceted. Firstly, GAI significantly enhances efficiency across various tasks such as collecting information, generating content, tailoring it to a specific audience, and disseminating it, thus optimizing the overall news production process. This relieves human reporters and editors of routine tasks, allowing them to focus on critical endeavors like fact-checking and closely monitoring unfolding news events. Moreover, GAI enriches the depth and breadth of news content, capturing the audience’s attention and catering to their information and emotional needs. Lastly, GAI fosters interaction between the audience and media entities, empowering the audience and prompting news organizations to incorporate a spectrum of viewpoints into their reporting. By embracing diversity and minimizing biases, GAI contributes to nurturing a more robust and inclusive media landscape.
The ethical challenges presented by GAI in journalism are significant and cannot be overlooked. The integration and progression of GAI disrupt the traditional roles of human journalists and editors, offering convenience while also raising valid concerns about news credibility. Of particular concern is the subtle influence that GAI may have on audience values, which may be less overt compared to the cognitive effects of conventional news reporting. This subtle influence threatens to undermine the integrity and professionalism of journalists, potentially resulting in a deterioration in the overall quality of media content. In response to these challenges, numerous media outlets have taken steps to develop guidelines for the responsible use of GAI, providing practical examples and guidance for newsrooms to effectively navigate these ethical dilemmas. For example, the Partnership on AI (PAI), a collaborative effort involving academic, civil society, industry, and media organizations, issued the AI Procurement and Use Guidebook for Newsrooms in August 2023. This guidebook aims to offer comprehensive measures for editorial departments to address the challenges posed by AI in a proactive and responsible manner (PAI Staff 2023).
To address these challenges, it is recommended that three key stakeholders—news organizations, journalists, and the audience—proactively embrace their roles in upholding journalistic ethics. For news organizations, responsible use of GAI is paramount. This involves ensuring that the benefits of GAI are harnessed for both the organization and the audience while mitigating the risks of amplifying biases and spreading misinformation with potential societal harm. Industry associations should establish guidelines or ethical standards for GAI application in journalism to facilitate this.
Journalists need to undergo a fundamental shift in their role, moving beyond simply producing information to serving as diligent fact-checkers. They must meticulously verify the credibility of news generated by GAI across a wide range of topics and domains, ensuring that only accurate information is disseminated to the public. Furthermore, journalists have a responsibility to actively inform the public of the truth and to critique any content that fails to meet the rigorous standards of journalism. In addition to these responsibilities, journalists should focus on enhancing their cross-disciplinary integration skills. This involves not only discovering news but also effectively integrating and organizing content from various sources and perspectives. Moreover, journalists must develop strong planning, organizing, and coordinating abilities through practical experience. By leveraging their innate human qualities such as intuition, adaptability, creativity, critical thinking, and a sense of humanistic care, journalists can maintain their agency and integrity in collaboration with GAI.
Audience members using news platforms driven by GAI should undergo thorough media literacy training to effectively navigate this evolving landscape. They must understand the nuances of GAI application to discern between true and false information generated by these systems. Additionally, it is vital to encourage users to minimize their reliance on GAI whenever possible, fostering critical thinking and independent judgment. Collaborative efforts from various stakeholders are essential to enhance audience media literacy. News organizations and journalists should seamlessly integrate media literacy education into their news practices, actively encouraging the audience to detect false information by facilitating the sharing of pertinent educational materials. Strengthening media literacy skills empowers the audience to adeptly identify misinformation and engage more meaningfully in the news ecosystem.

Author Contributions

Conceptualization, Y.S.; writing—original draft preparation, Y.S.; writing—review and editing, L.S.; supervision, Y.S.; project administration, L.S.; funding acquisition, L.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Tianjin Education Science Planning Project “Theoretical and Practical Research on Public Art Education as a Path for Building a Learning Society”, grant number EGE210268, and the Special Funding Project for Basic Scientific Research Business Expenses of Chinese Central Universities “Research on the Index System of the Influence of Chinese Civilization Communication Power”, grant number CUC230D046.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Acemoglu, Daron. 2021. Harms of AI. Working Paper No: 29247. Cambridge: National Bureau of Economic Research. [Google Scholar]
  2. Alim, Arjun Neil. 2023. Daily Mirror Publisher Explores Using ChatGPT to Help Write Local News. Financial Times. February 20. Available online: https://www.ft.com/content/4fae2380-d7a7-410c-9eed-91fd1411f977 (accessed on 7 April 2024).
  3. Alkaissi, Hussam, and Samy I. McFarlane. 2023. Artificial hallucinations in ChatGPT: Implications in scientific writing. Cureus 15: e35179. [Google Scholar] [CrossRef] [PubMed]
  4. Andrew, Baker, and Casey Huang. 2023. Data breaches in the age of surveillance capitalism: Do disclosures have a new role to play? Critical Perspectives on Accounting 90: 102396. [Google Scholar] [CrossRef]
  5. Arguedas, Amy Ross, and Felix M. Simon. 2023. Automating Democracy: Generative AI, Journalism, and the Future of Democracy. Oxford: Balliol Interdisciplinary Institute, University of Oxford. [Google Scholar]
  6. Bianchi, Federico, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, and Aylin Caliskan. 2023. Easily accessible text-to-image generation amplifies demographic stereotypes at large scale. Paper presented at the 2023 ACM Conference on Fairness, Accountability, and Transparency, Chicago, IL, USA, June 12–15; pp. 1493–504. [Google Scholar]
  7. Bloomberg. 2023. Introducing BloombergGPT, Bloomberg’s 50-Billion Parameter Large Language Model, Purpose-Built from Scratch for Finance. March 23. Available online: https://www.bloomberg.com/company/press/bloomberggpt-50-billion-parameter-llm-tuned-finance/ (accessed on 7 April 2024).
  8. Caswell, Gurevych, Fink, and Konstantin Dörr. 2021. Automated Journalism and Professional Identity: A Professional Community Analysis of News Writers and News Consumers. Digital Journalism 9: 850–69. [Google Scholar]
  9. Chua, Gina. 2023. Semafor. How Chatbots Can Change Journalism. Or Not. February 20. Available online: https://www.semafor.com/article/02/17/2023/how-chatbots-can-change-journalism-or-not (accessed on 11 March 2024).
  10. Cools, Hannes, Baldwin Van Gorp, and Michaël Opgenhaffen. 2023. The levels of automation and autonomy in the AI-augmented newsroom: Toward a multi-level typology of computational journalism. In Research Handbook on Artificial Intelligence and Communication. Cheltenham: Edward Elgar Publishing, pp. 284–99. [Google Scholar]
  11. David, Emilia. 2023. Forbes Now Has Its Own AI Search Engine. October 27. Available online: https://www.theverge.com/2023/10/26/23933799/forbes-generative-ai-search-adelaide (accessed on 26 March 2024).
  12. Diakopoulos, Nicholas. 2019. Automating the News: How Algorithms Are Rewriting the Media. Cambridge: Harvard University Press, p. 16. [Google Scholar]
  13. Dilmaghani, Saharnaz, Matthias R. Brust, Grégoire Danoy, Natalia Cassagnes, and Johnatan Pecero. 2019. Privacy and security of big data in AI systems: A research and standards perspective. Paper presented at 2019 IEEE International Conference on Big Data, Los Angeles, CA, USA, December 9–12; pp. 5737–43. [Google Scholar]
  14. Doherty, Skye, and Stephen Viller. 2020. Prototyping interaction: Designing technology for communication. In Reimagining Communication: Experience. Edited by Michael Filimowicz and Veronika Tzankova. New York: Routledge, pp. 80–96. [Google Scholar]
  15. Peña Fernández, Simón, Koldobika Meso Ayerdi, Ainara Larrondo Ureta, and Javier Díaz Noci. 2023. Without journalists, there is no journalism: The social dimension of generative artificial intelligence in the media. El Profesional de la Información 32: 2. [Google Scholar] [CrossRef]
  16. Gondwe, Gregory. 2023. CHATGPT and the Global South: How are journalists in sub-Saharan Africa engaging with generative AI? Online Media and Global Communication 2: 228–49. [Google Scholar] [CrossRef]
  17. Guembe, Blessing, Ambrose Azeta, Sanjay Misra, Victor Chukwudi Osamor, Luis Fernandez-Sanz, and Vera Pospelova. 2022. The Emerging Threat of Ai-Driven Cyber Attacks: A Review. Applied Artificial Intelligence 36: 2037254. [Google Scholar] [CrossRef]
  18. Harcup, Tony. 2023. The Struggle for News Value in the Digital Era. Journalism and Media 4: 902–17. [Google Scholar] [CrossRef]
  19. Harper, Christian, Jenn Francis, and Julie Bennink. 2023. Accenture Technology Vision 2023: Generative AI to Usher in a Bold New Future for Business, Merging Physical and Digital Worlds. March 30. Available online: https://newsroom.accenture.com/news/2023/accenture-technology-vision-2023-generative-ai-to-usher-in-a-bold-new-future-for-business-merging-physical-and-digital-worlds (accessed on 7 April 2024).
  20. Hoes, Emma, Sacha Altay, and Juan Bermeo. 2023. Leveraging ChatGPT for Efficient Fact-Checking. PsyArXiv 3. Available online: https://osf.io/qnjkf (accessed on 7 April 2024).
  21. Hong, Chang, and David Tewksbury. 2024. Can AI Become Walter Cronkite? Testing the Machine Heuristic, the Hostile Media Effect, and Political News Written by Artificial Intelligence. Digital Journalism 12: 1–24. [Google Scholar] [CrossRef]
  22. IANS. 2023. Punjab News Express. World’s First AI-Generated News Channel Called NewsGPT Launched. March 16. Available online: https://www.punjabnewsexpress.com/technology/news/worlds-first-ai-generated-news-channel-called-newsgpt-launched-203017 (accessed on 9 April 2024).
  23. Jovanovic, Mladan, and Mark Campbell. 2022. Generative artificial intelligence: Trends and prospects. Computer 55: 107–12. [Google Scholar] [CrossRef]
  24. Knight, Will. 2023. Meet ChatGPT’s Right-Wing Alter Ego. April 27. Available online: https://www.wired.com/story/fast-forward-meet-chatgpts-right-wing-alter-ego/ (accessed on 11 April 2024).
  25. Korn, Jennifer. 2023. CNN Business, How Companies Are Embracing Generative AI for Employees… or Not. September 22. Available online: https://edition.cnn.com/2023/09/22/tech/generative-ai-corporate-policy/index.html (accessed on 15 March 2024).
  26. Lamri, Jeremy. 2023. How Do Generative Artificial Intelligences (GAI) Actually Work? Medium. January 21. Available online: https://jeremy-lamri.medium.com/how-do-generative-artificial-intelligences-gai-actually-work-42670d1ca19 (accessed on 6 April 2024).
  27. Liu, Xiaomo, Armineh Nourbakhsh, Quanzhi Li, Sameena Shah, Robert Martin, and John Duprey. 2017. Reuters tracer: Toward automated news production using large scale social media data. Paper presented at 2017 IEEE International Conference on Big Data (Big Data), Boston, MA, USA, December 11–14; pp. 1483–93. [Google Scholar]
  28. Mara, Andrew, and Byron Hawk. 2009. Posthuman rhetorics and technical communication. Technical Communication Quarterly 19: 1–10. [Google Scholar] [CrossRef]
  29. Marconi, Francesco, and Till Daldrup. 2018. How the Wall Street Journal Is Preparing Its Journalists to Detect Deepfakes. November 15. Available online: https://www.niemanlab.org/2018/11/how-the-wall-street-journal-is-preparing-its-journalists-to-detect-deepfakes/ (accessed on 11 April 2024).
  30. Marr, Bernard. 2024. How Generative AI Will Change the Jobs of Journalists. Forbes. March 22. Available online: https://www.forbes.com/sites/bernardmarr/2024/03/22/how-generative-ai-will-change-the-jobs-of-journalists/?sh=64319f1c2847 (accessed on 4 April 2024).
  31. Mauran, Cecily. 2023. OpenAI Is Being Sued for Training ChatGPT with ‘Stolen’ Personal Data. Mashable. June 30. Available online: https://sea.mashable.com/tech/24637/openai-is-being-sued-for-training-chatgpt-with-stolen-personal-data (accessed on 7 April 2024).
  32. McIntosh, Daniel. 2018. We need to talk about data: How digital monopolies arise and why they have power and influence. Journal of Technology Law & Policy 23: 185–213. [Google Scholar]
  33. Miller, Ross. 2015. AP’s ‘Robot Journalists’ Are Writing Their Own Stories Now. January 30. Available online: https://www.theverge.com/2015/1/29/7939067/ap-journalism-automation-robots-financial-reporting (accessed on 9 March 2024).
  34. Moravec, Václav, Veronika Macková, Jakub Sido, and Kamil Ekštein. 2020. The robotic reporter in the Czech news agency: Automated journalism and augmentation in the newsroom. Communication Today 11: 36. [Google Scholar]
  35. Newman, Nic. 2024. Journalism, Media, and Technology Trends and Predictions. Available online: https://reutersinstitute.politics.ox.ac.uk/journalism-media-and-technology-trends-and-predictions-2024 (accessed on 7 April 2024).
  36. Nishal, Sachita, and Nicholas Diakopoulos. 2024. Envisioning the Applications and Implications of Generative AI for News Media. arXiv preprint arXiv:2402.18835. [Google Scholar]
  37. Ouchchy, Leila, Allen Coin, and Veljko Dubljević. 2020. AI in the headlines: The portrayal of the ethical issues of artificial intelligence in the media. AI & Society 35: 927–36. [Google Scholar]
  38. PAI Staff. 2023. PAI Seeks Public Comment on the AI Procurement and Use Guidebook for Newsrooms. August 3. Available online: https://partnershiponai.org/pai-seeks-public-comment-on-the-ai-procurement-guidebook-for-newsrooms/ (accessed on 7 April 2024).
  39. Patrick, Joseph. 2023. How Does the Conversation between a Journalist and Bard, the AI Chatbot at Google Occur? Heart of Hollywood Magazine. June 2. Available online: https://www.heartofhollywoodmagazine.com/post/how-does-the-conversation-between-a-journalist-and-bard-the-ai-chatbot-at-google-occur (accessed on 14 March 2024).
  40. Pinto, Barbosa. 2024. Artificial Intelligence (AI) in Brazilian Digital Journalism: Historical Context and Innovative Processes. Journalism and Media 5: 325–41. [Google Scholar] [CrossRef]
  41. Radcliffe, Damian. 2023. Mediamakersmeet. Unlocking the Power of AI: 6 Lessons from AP for Publishers. March 3. Available online: https://mediamakersmeet.com/unlocking-the-power-of-ai-6-lessons-from-ap-for-publishers/ (accessed on 10 March 2024).
  42. Reuters. 2017. Reuters News Tracer: Filtering through the Noise of Social Media. May 15. Available online: https://www.reutersagency.com/en/reuters-community/reuters-news-tracer-filtering-through-the-noise-of-social-media/ (accessed on 15 March 2024).
  43. Reuters Institute. 2023. Trends and Predictions 2024. Available online: https://reutersinstitute.politics.ox.ac.uk/journalism-media-and-technology-trends-and-predictions (accessed on 18 March 2024).
  44. Salvagno, Michele, Fabio Silvio Taccone, and Alberto Giovanni Gerli. 2023. Artificial intelligence hallucinations. Critical Care 27: 180. [Google Scholar] [CrossRef] [PubMed]
  45. Shipley, Gerhard P., and Deborah H. Williams. 2023. Critical AI Theory: The Ontological Problem. Open Journal of Social Sciences 11: 618–35. [Google Scholar] [CrossRef]
  46. Simon, Felix M. 2022. Uneasy bedfellows: AI in the news, platform companies and the issue of journalistic autonomy. Digital Journalism 10: 1832–54. [Google Scholar] [CrossRef]
  47. Solon, Olivia. 2023. Bloomberg Law. Trolls in Slovakian Election Tap AI Deepfakes to Spread Disinfo. September 29. Available online: https://news.bloomberglaw.com/artificial-intelligence/trolls-in-slovakian-election-tap-ai-deepfakes-to-spread-disinfo (accessed on 7 April 2024).
  48. Spennemann, Dirk H. R. 2023. Will the Age of Generative Artificial Intelligence Become an Age of Public Ignorance? Preprints 2023091528. [Google Scholar]
  49. The New York Times. 2023. A Valentine, from A.I. to You. February 13. Available online: https://www.nytimes.com/interactive/2023/02/13/opinion/valentines-day-chatgpt.html (accessed on 4 April 2024).
  50. Thomson, T. J., Daniel Angus, Paula Dootson, Edward Hurcombe, and Adam Smith. 2022. Visual Mis/Disinformation in Journalism and Public Communications: Current Verification Practices, Challenges, and Future Opportunities. Journalism Practice 16: 938–62. [Google Scholar] [CrossRef]
  51. Twipe. 2023. The “Wild West” of Generative AI Experiments, for News Publishers. Mediamakersmeet. May 12. Available online: https://mediamakersmeet.com/the-wild-west-of-generative-ai-experiments-for-news-publishers/ (accessed on 6 April 2024).
  52. Wang, Sitong, Samia Menon, Tao Long, Keren Henderson, Dingzeyu Li, Kevin Crowston, Mark Hansen, Jeffrey V. Nickerson, and Lydia B. Chilton. 2023. ReelFramer: Co-creating news reels on social media with generative AI. arXiv arXiv:2304.09653. [Google Scholar]
  53. Zagorulko, Dmytro I. 2023. ChatGPT in newsrooms: Adherence of AI-generated content to journalism standards and prospects for its implementation in digital media. Vcheni Zapysky TNU Imeni VI Vernadskoho 34: 319–25. [Google Scholar] [CrossRef]
  54. Zhong, Y., and Han Zhang. 2019. “Kuaibi Xiaoxin”: Xinhua News Agency’s First Robot Reporter. Xinwen Zhanxian. February 27. Available online: http://media.people.com.cn/GB/n1/2019/0227/c425664-30905230.html (accessed on 6 April 2024).
  55. Zhou, Tong. 2015. This Is a Manuscript Written by a Robot in One Second. September 11. Available online: https://www.im2maker.com/news/20150911/475.html (accessed on 9 March 2024).
  56. Zhou, Mi, Vibhanshu Abhishek, Timothy Derdenger, Jaymo Kim, and Kannan Srinivasan. 2024. Bias in Generative AI. arXiv arXiv:2403.02726. [Google Scholar]
Figure 1. Application of GAI in news production and the associated ethical challenges.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Shi, Y.; Sun, L. How Generative AI Is Transforming Journalism: Development, Application and Ethics. Journal. Media 2024, 5, 582-594. https://doi.org/10.3390/journalmedia5020039

