In our last letter, we summarized the opportunities AI presents for news media, the first part of our report "Responsibility and Mission of News Media in AI Era". This time, we continue with the other side of the coin: the risks and challenges that AI brings to news media.
The think tank report titled "Responsibility and Mission of News Media in AI Era" was released worldwide on October 14 during the 6th World Media Summit in Urumqi.
Part 2
Challenges: AI Creates Multiple Risks
All technologies possess a dual nature, offering both potential benefits and harms. Given the inherent uncertainties and extensive applications of AI, its evolution not only empowers the news media, but also introduces a multitude of new risks.
Ⅰ. Misinformation Triggers a Crisis of Trust
The misuse and abuse of AI have led to the widespread and viral production and dissemination of misinformation. This phenomenon undermines the social trust essential to news organizations and has sparked a global crisis in the credibility of the information environment.
(1) Unprecedented Scale of Misinformation Production
The technical intervention of AI has lowered the barriers and costs associated with creating and spreading false information, leading to its rapid proliferation and generating multiple layers of “informational fog” that distort the perceptions of reality.
The study found that generative AI has already triggered a revolution in the media industry, with 67.6% of respondents observing significant transformations underway. The top three areas of concern are:
Production Process Needs Overhaul: The work patterns of editors and journalists are undergoing rapid changes.
Heightened Competition from Other Media Outlets: Particularly from self-media and other independent content creators.
Increased Effort Needed to Combat Misinformation: More resources are needed to distinguish genuine news from fabricated content, including fake images.
The survey also found that most media organizations are adopting a cautious approach to generative AI, primarily concerned that it could undermine their credibility.
76.4% of respondents expressed concern about the “distortion and inaccuracy of news leads and materials”.
Additionally, regarding the question of whether the integration of generative AI into the media industry will enhance the credibility and reliability of the information environment over the next 3 to 5 years:
36.4% of respondents held a pessimistic outlook;
24.1% held an optimistic view;
39.5% remained neutral.
(2) “Deepfakes”: A More Deceptive Threat
AI’s multimodal capabilities have significantly diversified the forms of misinformation, making it more difficult for the general public to detect. One prominent example is the rise of “deepfakes”. Deepfakes can manipulate or fabricate images, voices, and videos to produce highly realistic yet misleading multimedia content.
The survey found that among media organizations still hesitant to fully embrace generative AI, the top three hindrances were:
AI has its own shortcomings, such as failing to meet expectations for accuracy and reliability in content generation.
Human-machine collaboration faces challenges, particularly due to a shortage of comprehensive talent for effective integration.
The high costs associated with technical investments.
(3) “Simulated Dissemination” Enhances Concealment
Represented by "social bots," a new generation of “internet trolls” has infiltrated major social media platforms worldwide, becoming omnipresent “invisible viruses” on the web.
Unlike traditional “internet trolls”, which rely on human manipulation through anonymous accounts, “bot armies” composed of social bots can generate personalized viewpoints and operate continuously 24/7, consistently building and reinforcing their “personas.” This makes the spread of misinformation more covert and difficult to detect.
Ⅱ. Technology Misuse Disrupts Public Opinion
The pervasive application of AI in information dissemination has introduced significant variables into the global public opinion landscape.
(1) Algorithmic Bias and Its Influence on Individual Cognition
Due to the characteristics of deep learning, large language models inevitably inherit stereotypes and value biases present in their training data and the designs of their human creators.
While artificial intelligence cannot yet directly influence the human brain, it has already permeated many aspects of social information flow. Through the sheer volume of data it processes and its accumulated effect over time, AI can gradually shape social consciousness, transmitting values to users and even eroding their cognitive frameworks.
Even when AI systems operate under the principle of serving humanity, their reliance on personalized content delivery can limit users’ cognitive horizons, exacerbating issues such as cognitive narrowing, rigid thinking, and group polarization.
(2) Machine-Generated Posts Manipulate Social Opinion
The emergence of advanced artificial intelligence has facilitated the influence or even control of public opinion, casting shadows over the necessary transparency and fairness of social discourse.
In the political realm, artificial intelligence has long been used to influence the value judgments and political stances of target individuals or to undermine the public opinion environment and social image of adversarial forces.
In the social sphere, malicious actors have exploited AI tools to release large amounts of emotional content, exacerbating social conflicts around sensitive topics such as race, immigration, and wealth disparity.
In the commercial realm, AI is also widely used for “data, rating and sales manipulation” to distort and obscure real evaluations, promote products, or discredit competitors, severely damaging the market order.
(3) Intelligent Weapons Intensify Information Warfare
In a context of frequent social conflicts and tense geopolitical situations, artificial intelligence is widely used in “intelligence warfare,” “public opinion warfare,” and “cognitive warfare,” worsening tensions in international public opinion and significantly increasing the risks of escalation and conflict.
Through differentiated data delivery, AI can instantaneously create waves of public opinion that influence group cognition. By tracking data and employing algorithmic strategies, AI can predict the cognitive dynamics of different regions and groups, assisting in planning and promoting core narratives and topics.
Amidst the fog of war information disseminated by AI, truth and falsehood have become blurred, and suspicion and division have grown, fundamentally altering the nature, means, and methods of modern warfare.
Ⅲ. Rapid Development Exacerbates Governance Concerns
The uncertainties surrounding AI far exceed what is currently known, fueling a widespread debate over AI’s development trajectory.
(1) The Debate over Development Paths
Should AI development be "accelerated" or "aligned"? For the foreseeable future, AI’s progress will likely be influenced by this ideological tug-of-war.
Proponents of acceleration argue that societal progress depends on technological innovation. Thus, pushing AI forward should be an ongoing pursuit for mankind.
The alignment camp, however, advocates prioritizing the ethical impacts and social consequences of AI to ensure that technology advances in line with human values.
The survey reveals that most media organizations currently on the fence do not reject or underestimate generative artificial intelligence; rather, they plan to adopt it once the key conditions are met. Their top three priorities are:
Clearly identifying areas where AI can significantly enhance productivity and reduce labor costs;
Achieving significant improvements in AI performance, particularly in accuracy and reliability;
Ensuring the absence of ethical controversies, regulatory challenges, and legal disputes related to journalism.
(2) The Dilemma of Value Alignment
The goal of value alignment is to ensure that AI operates in accordance with human ethical principles, moral norms, and values so that it functions in a socially benign way. However, in the context of global cultural diversity, questions like "Whose values should AI be aligned with?" and "How should AI be aligned?" remain difficult to answer.
In the media sector, discrepancies in news value judgments, professional identities, and editorial practices across countries make it challenging to establish consistent standards for AI’s application.
As artificial intelligence becomes increasingly integrated into news production, its operational standards and logic will significantly impact how editors and journalists define what constitutes news, good news, and valuable news, leading to a necessary re-evaluation and evolution of the news value system.
(3) Regulatory Gaps
A fundamental conflict exists between privacy protection and AI development. Because AI relies on extensive human behavioral data and knowledge to improve, develop, and be applied across different scenarios, this heavy dependence on data leaves privacy with "nowhere to hide".
With the decentralization of rights related to information production, processing, publication, and redistribution, copyright disputes arising from the use of AI are gaining prominence.
High-quality, professional news content serves as a crucial training dataset for AI, and the content generated by these systems often closely resembles the raw data, potentially violating the copyrights of some media organizations.
Furthermore, issues around the copyright ownership and profit-sharing of AI-assisted news products remain unresolved.
Ⅳ. Intelligent Applications Widen the Development Gap
Like other transformative technological breakthroughs, the widespread application of next-generation AI will inevitably lead to shifts in social wealth and power, triggering a range of socio-political and economic issues.
(1) Individual Differences and Vulnerable Groups
There is no doubt that those with access to AI technology will be more competitive and advantaged in the future. However, due to disparities in cognitive ability, resource availability, and AI literacy, certain groups—such as the elderly, the poorly educated, and low-income populations—will be increasingly marginalized as AI applications continue to evolve, becoming the new disadvantaged groups of the AI era.
(2) Urban-Rural Disparities and Digital Deserts
The "intelligence gap" between urban and rural areas is a widespread issue globally, not only as a technological challenge, but also as a social and economic one. This disparity exists across various dimensions, including social development, resource allocation, education levels, and infrastructure, as well as in individual income, cultural literacy, information technology skills, and notions of knowledge.
(3) The North-South Divide and the "AI Divide"
As the global economy shifts toward AI-driven growth, underdeveloped nations risk falling further behind, deepening the economic and social divide between them and developed countries. The uneven adoption of AI technology has become a pressing concern, widening faster than existing gaps in overall economic development.
Additionally, some developed countries have been found to leverage their first-mover advantage in AI technology to seek technological hegemony, forming exclusive groups that hinder other countries' progress and deliberately erecting technological barriers that disrupt the global AI supply chain. This “small courtyard with high walls” phenomenon will deepen the AI development gap between the Global North and Global South, exacerbating the situation where “the strong get stronger, and the weak get weaker”.
Stay tuned for our updates.
Full text: Responsibility and Mission of News Media in AI Era