Carlos Diaz Ruiz spoke at a webinar on “Market-Oriented Disinformation and the Threat of Agentic Disinformation”
In a webinar hosted by ISEAS – Yusof Ishak Institute, Dr Carlos Diaz Ruiz presented an analysis of market-driven informational disorder and the emerging threat of agentic AI in digital advertising ecosystems. Thereafter, Mr Beltsazar Krisetya presented a Southeast Asian perspective on the extent to which governments in the region regulate advertising supply chains and the other structural incentives that sustain informational disorder beyond market logic. The event was moderated by Dr Maria Monica Wihardja, Co-Coordinator of the Media, Technology and Society (MTS) Programme.
MEDIA, TECHNOLOGY & SOCIETY PROGRAMME WEBINAR
Clockwise from top left: Dr Maria Monica Wihardja, Dr Carlos Diaz Ruiz, and Mr Beltsazar Krisetya. (Credit: ISEAS – Yusof Ishak Institute)
Dr Diaz Ruiz began his presentation by introducing the concept of “agentic disinformation”, warning that emerging technological developments are moving the information ecosystem toward a more complex and troubling future. Although such systems are not yet fully realised, he argued that the tools enabling them are already emerging. He urged audiences to consider both the current information environment and its trajectory. Contextualising the nature of his work, he noted that, unlike conventional security- or military-focused approaches, his research on disinformation emphasises civil society concerns such as digital democracy, journalism, and fact-checking. From this perspective, he highlighted the role of market dynamics and monetisation in shaping the spread of disinformation.
Dr Diaz Ruiz reframed disinformation not merely as false or misleading content with malicious intent, but as coordinated, provocative narratives designed for amplification within the attention economy. Social media, he explained, functions as an attention, pleasure and surveillance machine, rewarding incendiary content that generates clicks and engagement. The core problem, therefore, is not simply the existence of falsehoods, but the amplification systems that echo, stabilise and monetise misleading narratives over time.
Dr Diaz Ruiz then argued that disinformation is fundamentally a business problem rooted in digital advertising and the monetisation structures of the content-creation economy. Platforms profit from surveillance and traffic distribution, while advertising revenue often inadvertently funds misleading or harmful content. Dr Diaz Ruiz warned that the shift towards AI-mediated “zero-click internet” environments could intensify these dynamics. He described how AI agents and advertising technologies can enable “agentic disinformation”, in which autonomous systems are designed to steer beliefs and behaviours on demand. These agents would operate within AI-mediated environments through mechanisms such as sponsored prompts, curated responses, and manipulated training data, allowing advertisers or political actors to subtly shape information outputs before they reach users. He argued that in such an ecosystem, persuasion could be embedded directly into AI systems that appear authoritative and impartial, reducing users’ incentives to verify sources and making disinformation harder to detect.
In his closing remarks, Dr Diaz Ruiz cautioned that agentic disinformation represents a significant emerging risk, even if its full implications are still unfolding. He recalled how the founders of Google had once warned that advertising-funded search engines would be inherently biased, underscoring the irony of similar assurances now being made by AI companies.
Mr Krisetya contextualised Dr Diaz Ruiz’s analysis within Southeast Asia, highlighting how regulatory and civil society responses often overlook the structural incentives driving disinformation. Citing data compiled by the University of St. Gallen, he noted that most policies focus on content moderation rather than on reforming existing advertising infrastructure. He added that civil society efforts similarly prioritise enforcement, profit-sharing and platform accountability, with limited attention to transforming the underlying ad-tech ecosystem. Among the reasons he cited were the invisibility of the advertising supply chain and the limited bandwidth of civil society organisations, which are already overstretched by surveillance, press freedom and content moderation issues.
Drawing on an analysis of 138 national-level policy entries in the Digital Policy Alert database, Mr Krisetya observed that regulation across Southeast Asia is extensive but concentrated in areas that align with state interests, such as content moderation and AI governance. Measures addressing the advertising supply chain, such as ad verification, competition and market reforms, and programmatic transparency, remain comparatively underdeveloped. In many cases, existing transparency rules are designed primarily to extend state oversight over platforms rather than to restructure the commercial advertising infrastructure that funds online content.
Mr Krisetya then compared regulatory priorities across countries in the region. He used Singapore and Malaysia as examples of countries that often prioritise maintaining their attractiveness as technology and investment hubs, which can lead to competition and market reforms in advertising markets being deprioritised. He cited Vietnam and Thailand as examples of countries introducing transparency rules that expand state monitoring of digital platforms. Notably, Vietnam is currently the only country in Southeast Asia with intermediary obligations in ad governance. Indonesia and the Philippines, by contrast, have focused mainly on enforcing rules governing what content may be posted online.
When Southeast Asian regulators target advertising providers, structural market reforms feature more prominently on the agenda than when they target user-generated content platforms; however, only one in five regulatory activities in the region is directed at advertising providers. As a result, structural reforms targeting advertising intermediaries and programmatic supply chains remain largely nascent on member states’ domestic policy agendas, and Southeast Asian countries have yet to address the underlying market incentives that sustain the dissemination of false and harmful online content.
Lastly, Mr Krisetya identified the broader technical, political, cultural and institutional enablers of disinformation in the region. These include patronage funding, socio-cultural hierarchies, regulatory capture, and moderation language asymmetries, creating conditions for misleading narratives to thrive. He concluded his presentation by urging policymakers to consider three key areas: whether interventions target market, political, cultural or platform failures; who benefits from the status quo; and whether proposed reforms risk capture by the very actors they seek to constrain.
The presentations were followed by a moderated discussion between Dr Diaz Ruiz, Mr Krisetya, and Dr Wihardja. They first discussed the structural challenges of data protection and market-driven disinformation in Southeast Asia. Dr Diaz Ruiz explained that tools such as HTTP cookies, originally intended for legitimate website functionality, have been repurposed for extensive surveillance and commercial targeting. Attempts to replace or restrict these tools, such as Google’s ID tracking system, often exacerbate the concentration of control in the hands of a few corporations, rather than resolving privacy concerns.
The discussion then shifted to the rise of the zero-click internet and its implications for news and media. Dr Diaz Ruiz warned that AI-generated summaries and curated content may undermine investigative journalism by diverting users away from original sources, eroding the viability of critical perspectives. Users are exposed to highly personalised information, often without the ability to verify sources or context, weakening critical thinking and decision-making skills. This environment also magnifies agentic disinformation, in which AI-driven agents steer user behaviour to achieve commercial or political objectives, exploiting fear and misinformation. Particularly worrying is the convergence of AI agents and AdTech, as advertisements are already being tested on large language model applications such as ChatGPT, where users frequently seek legal advice, therapy, and companionship.
Thereafter, the panel addressed the need for regulatory and competition measures. Dr Diaz Ruiz proposed introducing “Know Your Customer” rules for intermediaries, ensuring transparency about who is paying to influence online content, alongside due diligence requirements for advertisers to verify the legitimacy of the platforms and creators they finance. Together, these measures aim to increase accountability, mitigate monopolistic control in the digital advertising landscape, and safeguard both human rights and democratic processes.
The event concluded with a Q&A session. Dr Diaz Ruiz and Mr Krisetya answered questions from the audience on topics such as improving public understanding of market-oriented disinformation, the role of individual and collective action against it, and practical steps to navigate and counter the disinformation economy. They also addressed questions on the implications of open-source AI agents and legal liability for AI-driven actions. Lastly, they tackled queries on incentivising “trusted” AI tools and platform economies.
The webinar drew an audience of 69 participants.
The text above was originally posted by ISEAS – Yusof Ishak Institute.