Keynotes

Kai Shu (Department of Computer Science, Illinois Institute of Technology, Chicago, USA)

“Combating Disinformation on Social Media and Its Challenges”

Short bio: Dr. Kai Shu has been a Gladwin Development Chair Assistant Professor in the Department of Computer Science at the Illinois Institute of Technology since Fall 2020. He obtained his Ph.D. in Computer Science at Arizona State University. He was the recipient of the 2020 ASU Engineering Dean’s Dissertation Award, a 2021 finalist for the Meta Research Faculty Award, the 2022 Cisco Research Faculty Award, a 2022 AMiner AI-2000 Most Influential Scholar Honorable Mention, the 2022 Baidu AI Global High-Potential Young Scholar Award, and the 2023 AAAI New Faculty Highlights. His research addresses challenges ranging from big data and social media to trustworthy AI, on issues such as fake news detection, social network analysis, cybersecurity, and health informatics. He has published innovative work in highly ranked journals and top conference proceedings such as ACM KDD, SIGIR, WSDM, WWW, EMNLP, NAACL, CIKM, IEEE ICDM, IJCAI, and AAAI.

Talk Abstract: The global proliferation of disinformation has become increasingly prominent in recent years, particularly during the COVID-19 pandemic. The dissemination of false information can have significant negative impacts on individuals and on society as a whole. Social media platforms have proven particularly susceptible to the spread of fake news, which can sow division, polarization, and confusion, and can even be exploited by nation-states to further their own interests. Despite recent advances in identifying fake news, detecting and mitigating disinformation remains a challenging task due to its complexity, diversity, multi-modality, and speed, the costs associated with fact-checking and annotation, and the influence of social and psychological factors. In this talk, we look at lessons learned from exploring strategies for detecting disinformation and fake news, and discuss challenges faced in disinformation research and the pressing need for interdisciplinary research.


Colin Porlezza (Institute of Media and Journalism, Università della Svizzera italiana, Lugano, Switzerland)

“Towards a Responsible Future of AI in Journalism”

Short bio: Dr. Colin Porlezza is a Senior Assistant Professor at the Institute of Media and Journalism at the Università della Svizzera italiana in Lugano, Switzerland, and a Senior Honorary Research Fellow in the Department of Journalism at City, University of London. As a journalism scholar, his central focus is the transformation of journalism in today’s datafied and networked world, with a particular interest in how AI and automation shape journalism and in the ethical challenges these transformations entail. He was recently a Knight News Innovation Fellow with the Tow Center for Digital Journalism at Columbia University, and he directs the European Journalism Observatory (EJO), a knowledge transfer platform that bridges journalism research and practice in Europe and seeks to foster journalistic professionalism and press freedom.

Talk Abstract: Artificial Intelligence (AI) is transforming many fields of society and challenging existing practices and professions. In journalism, AI and automation have become increasingly pervasive, influencing almost every aspect of newswork, from news gathering to news production and distribution. Although news organizations regard these tools as helpful for editorial production, they change the nature, role, and workflows of journalism. As algorithms increasingly determine editorial decisions, the question of how to build responsible AI is becoming paramount, in particular because it raises specific ethical challenges: AI has the potential to perpetuate biases and thereby undermine the integrity of journalistic practice. This talk will thus focus on the challenges of designing responsible AI systems for news production and on the importance of incorporating ethical considerations and journalistic values into the development of responsible AI technology. This includes ensuring human oversight, establishing accountability mechanisms, promoting transparency in decision-making processes, and making sure that AI systems are secure, fair, and in line with editorial values. However, designing responsible AI and embedding it in journalistic workflows requires not only a sociotechnical design that blends work routines and values, but also close collaboration between journalists and technologists, in particular during conceptual design. The talk therefore aims to offer valuable insight into the importance of developing clear principles for (co-)design and machine learning, allowing for an “ethics by design” approach that guides the responsible development of AI technology for journalism.