Publications
You can also find an up-to-date list of my articles on my Google Scholar profile (opens in a new tab).
2024
A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities
Andrea Tocchetti*, Lorenzo Corti*, Agathe Balayn*, Mireia Yurrita, Philip Lippmann, Marco Brambilla, Jie Yang — in ACM Computing Surveys, 2024
* These authors contributed equally to this research.
Recommended Citation
@article{10.1145/3665926,
  author    = {Tocchetti, Andrea and Corti, Lorenzo and Balayn, Agathe and Yurrita, Mireia and Lippmann, Philip and Brambilla, Marco and Yang, Jie},
  title     = {A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities},
  journal   = {ACM Comput. Surv.},
  year      = {2024},
  month     = {may},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  issn      = {0360-0300},
  url       = {https://doi.org/10.1145/3665926},
  doi       = {10.1145/3665926},
  note      = {Just Accepted},
  keywords  = {Artificial Intelligence, Robustness, Human-Centered AI, Trustworthy AI},
  abstract  = {Despite the impressive performance of Artificial Intelligence (AI) systems, their robustness remains elusive and constitutes a key issue that impedes large-scale adoption. Besides, robustness is interpreted differently across domains and contexts of AI. In this work, we systematically survey recent progress to provide a reconciled terminology of concepts around AI robustness. We introduce three taxonomies to organize and describe the literature both from a fundamental and applied point of view: 1) methods and approaches that address robustness in different phases of the machine learning pipeline; 2) methods improving robustness in specific model architectures, tasks, and systems; and in addition, 3) methodologies and insights around evaluating the robustness of AI systems, particularly the trade-offs with other trustworthiness properties. Finally, we identify and discuss research gaps and opportunities and give an outlook on the field. We highlight the central role of humans in evaluating and enhancing AI robustness, considering the necessary knowledge they can provide, and discuss the need for better understanding practices and developing supportive tools in the future.}
}
Understanding Stakeholders’ Perceptions and Needs Across the LLM Supply Chain
Agathe Balayn, Lorenzo Corti, Fanny Rancourt, Fabio Casati, Ujwal Gadiraju — in Human-Centered Explainable AI (HCXAI) (CHI 2024 workshop), 2024
Recommended Citation
@misc{balayn2024understanding,
  title         = {Understanding Stakeholders' Perceptions and Needs Across the LLM Supply Chain},
  author        = {Agathe Balayn and Lorenzo Corti and Fanny Rancourt and Fabio Casati and Ujwal Gadiraju},
  year          = {2024},
  eprint        = {2405.16311},
  archivePrefix = {arXiv},
  primaryClass  = {cs.HC}
}
“It Is a Moving Process”: Understanding the Evolution of Explainability Needs of Clinicians in Pulmonary Medicine
Lorenzo Corti, Rembrandt Oltmans, Jiwon Jung, Agathe Balayn, Marlies Wijsenbeek, Jie Yang — in CHI '24: Proceedings of the CHI Conference on Human Factors in Computing Systems, 2024
Recommended Citation
@inproceedings{Corti2024XAIIPF,
  author    = {Corti, Lorenzo and Oltmans, Rembrandt and Jung, Jiwon and Balayn, Agathe and Wijsenbeek, Marlies and Yang, Jie},
  title     = {``It Is a Moving Process'': Understanding the Evolution of Explainability Needs of Clinicians in Pulmonary Medicine},
  booktitle = {Proceedings of the CHI Conference on Human Factors in Computing Systems},
  series    = {CHI '24},
  year      = {2024},
  isbn      = {9798400703300},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3613904.3642551},
  doi       = {10.1145/3613904.3642551},
  articleno = {441},
  numpages  = {21},
  location  = {Honolulu, HI, USA},
  keywords  = {Explainable AI, Healthcare, User Needs},
  abstract  = {Clinicians increasingly pay attention to Artificial Intelligence (AI) to improve the quality and timeliness of their services. There are converging opinions on the need for Explainable AI (XAI) in healthcare. However, prior work considers explanations as stationary entities with no account for the temporal dynamics of patient care. In this work, we involve 16 Idiopathic Pulmonary Fibrosis (IPF) clinicians from a European university medical centre and investigate their evolving uses and purposes for explainability throughout patient care. By applying a patient journey map for IPF, we elucidate clinicians' informational needs, how human agency and patient-specific conditions can influence the interaction with XAI systems, and the content, delivery, and relevance of explanations over time. We discuss implications for integrating XAI in clinical contexts and more broadly how explainability is defined and evaluated. Furthermore, we reflect on the role of medical education in addressing epistemic challenges related to AI literacy.}
}
2023
ARTIST: ARTificial Intelligence for Simplified Text
Lorenzo Corti, Jie Yang — in Generative AI and HCI (CHI 2023 workshop), 2023
Recommended Citation
@misc{Corti2023Artist,
  title         = {ARTIST: ARTificial Intelligence for Simplified Text},
  author        = {Lorenzo Corti and Jie Yang},
  year          = {2023},
  eprint        = {2308.13458},
  archivePrefix = {arXiv},
  primaryClass  = {cs.CL}
}
2022
CHIME: Causal Human-in-the-Loop Model Explanations
Shreyan Biswas, Lorenzo Corti, Stefan Buijsman, Jie Yang — in AAAI Conference on Human Computation and Crowdsourcing, 2022
Recommended Citation
@article{Biswas2022CHIME,
  author   = {Biswas, Shreyan and Corti, Lorenzo and Buijsman, Stefan and Yang, Jie},
  title    = {CHIME: Causal Human-in-the-Loop Model Explanations},
  journal  = {Proceedings of the AAAI Conference on Human Computation and Crowdsourcing},
  volume   = {10},
  number   = {1},
  year     = {2022},
  month    = {Oct.},
  pages    = {27-39},
  url      = {https://ojs.aaai.org/index.php/HCOMP/article/view/21985},
  doi      = {10.1609/hcomp.v10i1.21985},
  abstract = {Explaining the behaviour of Artificial Intelligence models has become a necessity. Their opaqueness and fragility are not tolerable in high-stakes domains especially. Although considerable progress is being made in the field of Explainable Artificial Intelligence, scholars have demonstrated limits and flaws of existing approaches: explanations requiring further interpretation, non-standardised explanatory format, and overall fragility. In light of this fragmentation, we turn to the field of philosophy of science to understand what constitutes a good explanation, that is, a generalisation that covers both the actual outcome and, possibly multiple, counterfactual outcomes. Inspired by this, we propose CHIME: a human-in-the-loop, post-hoc approach focused on creating such explanations by establishing the causal features in the input. We first elicit people's cognitive abilities to understand what parts of the input the model might be attending to. Then, through Causal Discovery we uncover the underlying causal graph relating the different concepts. Finally, with such a structure, we compute the causal effects different concepts have towards a model's outcome. We evaluate the Fidelity, Coherence, and Accuracy of the explanations obtained with CHIME with respect to two state-of-the-art Computer Vision models trained on real-world image data sets. We found evidence that the explanations reflect the causal concepts tied to a model's prediction, both in terms of causal strength and accuracy.}
}
COCTEAU: an Empathy-Based Tool for Decision-Making
Andrea Mauri, Andrea Tocchetti, Lorenzo Corti, Yen-Chia Hsu, Himanshu Verma, Marco Brambilla — in International Conference on Web Engineering, 2022
Recommended Citation
@inproceedings{Mauri2022Cocteau,
  author    = {Mauri, Andrea and Tocchetti, Andrea and Corti, Lorenzo and Hsu, Yen-Chia and Verma, Himanshu and Brambilla, Marco},
  title     = {COCTEAU: an Empathy-Based Tool for Decision-Making},
  booktitle = {Companion Proceedings of the Web Conference 2022},
  series    = {WWW '22},
  year      = {2022},
  isbn      = {9781450391306},
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  url       = {https://doi.org/10.1145/3487553.3524233},
  doi       = {10.1145/3487553.3524233},
  pages     = {219–222},
  numpages  = {4},
  location  = {Virtual Event, Lyon, France},
  keywords  = {Crowdsourcing, decision-making, empathy, gamification, human-centered computing},
  abstract  = {Traditional approaches to data-informed policymaking are often tailored to specific contexts and lack strong citizen involvement and collaboration, which are required to design sustainable policies. We argue the importance of empathy-based methods in the policymaking domain given the successes in diverse settings, such as healthcare and education. In this paper, we introduce COCTEAU (Co-Creating The European Union), a novel framework built on the combination of empathy and gamification to create a tool aimed at strengthening interactions between citizens and policy-makers. We describe our design process and our concrete implementation, which has already undergone preliminary assessments with different stakeholders. Moreover, we briefly report pilot results from the assessment. Finally, we describe the structure and goals of our demonstration regarding the newfound formats and organizational aspects of academic conferences.}
}
2021
Scaling collaborative policymaking: how to leverage on digital co-creation to engage citizens
Diletta Di Marco, Andrea Tocchetti, Lorenzo Corti, Marco Brambilla — in Data for Policy, 2021
Recommended Citation
@misc{DiMarco2021PolicyMaking,
  author    = {Di Marco, Diletta and Tocchetti, Andrea and Corti, Lorenzo and Brambilla, Marco},
  title     = {Scaling collaborative policymaking: how to leverage on digital co-creation to engage citizens},
  month     = aug,
  year      = {2021},
  publisher = {Zenodo},
  doi       = {10.5281/zenodo.5227881},
  url       = {https://doi.org/10.5281/zenodo.5227881}
}
VaccinItaly: monitoring Italian conversations around vaccines on Twitter and Facebook
Francesco Pierri, Andrea Tocchetti, Lorenzo Corti, Marco Di Giovanni, Silvio Pavanetto, Marco Brambilla, Stefano Ceri — in International Workshop on Cyber Social Threats (ICWSM 2021 workshop), 2021
Recommended Citation
@inproceedings{Pierri2021VaccinItaly,
  author    = {Pierri, Francesco and Tocchetti, Andrea and Corti, Lorenzo and Di Giovanni, Marco and Pavanetto, Silvio and Brambilla, Marco and Ceri, Stefano},
  title     = {VaccinItaly: monitoring Italian conversations around vaccines on Twitter and Facebook},
  booktitle = {Workshop Proceedings of the 15th International AAAI Conference on Web and Social Media},
  year      = {2021}
}
A content-based approach for the analysis and classification of vaccine-related stances on Twitter: the Italian scenario
Marco Di Giovanni, Lorenzo Corti, Silvio Pavanetto, Francesco Pierri, Andrea Tocchetti, Marco Brambilla — in Information Credibility and Alternative Realities in Troubled Democracies (ICWSM 2021 workshop), 2021
Recommended Citation
@inproceedings{DiMarco2021Content,
  author    = {Di Giovanni, Marco and Corti, Lorenzo and Pavanetto, Silvio and Pierri, Francesco and Tocchetti, Andrea and Brambilla, Marco},
  title     = {A content-based approach for the analysis and classification of vaccine-related stances on Twitter: the Italian scenario},
  booktitle = {Workshop Proceedings of the 15th International AAAI Conference on Web and Social Media},
  pages     = {1--6},
  year      = {2021}
}
A Web-Based Co-Creation and User Engagement Method and Platform
Andrea Tocchetti, Lorenzo Corti, Marco Brambilla, Diletta Di Marco — in International Conference on Web Engineering, 2021
Recommended Citation
@inproceedings{Tocchetti2021COCTEAU,
  author    = {Tocchetti, Andrea and Corti, Lorenzo and Brambilla, Marco and Di Marco, Diletta},
  editor    = {Brambilla, Marco and Chbeir, Richard and Frasincar, Flavius and Manolescu, Ioana},
  title     = {A Web-Based Co-Creation and User Engagement Method and Platform},
  booktitle = {Web Engineering},
  year      = {2021},
  publisher = {Springer International Publishing},
  address   = {Cham},
  pages     = {496--501},
  isbn      = {978-3-030-74296-6},
  abstract  = {In recent years, new methods to engage citizens in deliberative processes of governments and institutions have been studied. Such methodologies have become a necessity to assure the efficacy and longevity of policies. Several tools and solutions have been proposed while trying to achieve such a goal. The dual problem to citizen engagement is how to provide policy-makers with useful and actionable insights stemming from those processes. In this paper, we propose a research featuring a method and implementation of a crowdsourcing and co-creation technique that can provide value to both citizens and policy-makers engaged in the policy-making process. Thanks to our methodology, policy-makers can design challenges for citizens to partake, cooperate and provide their input. We also propose a web-based tool that allows citizens to participate and produce content to support the policy-making processes through a gamified interface that focuses on emotional and vision-oriented content.}
}
EXP-Crowd: A gamified crowdsourcing framework for explainability
Andrea Tocchetti, Lorenzo Corti, Marco Brambilla, Irene Celino — in Frontiers in Artificial Intelligence, 2021
Recommended Citation
@article{Tocchetti2022EXPCrowd,
  author   = {Tocchetti, Andrea and Corti, Lorenzo and Brambilla, Marco and Celino, Irene},
  title    = {EXP-Crowd: A Gamified Crowdsourcing Framework for Explainability},
  journal  = {Frontiers in Artificial Intelligence},
  volume   = {5},
  year     = {2022},
  url      = {https://www.frontiersin.org/articles/10.3389/frai.2022.826499},
  doi      = {10.3389/frai.2022.826499},
  issn     = {2624-8212},
  abstract = {The spread of AI and black-box machine learning models made it necessary to explain their behavior. Consequently, the research field of Explainable AI was born. The main objective of an Explainable AI system is to be understood by a human as the final beneficiary of the model. In our research, we frame the explainability problem from the crowd's point of view and engage both users and AI researchers through a gamified crowdsourcing framework. We research whether it's possible to improve the crowd's understanding of black-box models and the quality of the crowdsourced content by engaging users in a set of gamified activities through a gamified crowdsourcing framework named EXP-Crowd. While users engage in such activities, AI researchers organize and share AI- and explainability-related knowledge to educate users. We present the preliminary design of a game with a purpose (G.W.A.P.) to collect features describing real-world entities which can be used for explainability purposes. Future works will concretise and improve the current design of the framework to cover specific explainability-related needs.}
}