Posts by Collection

publications

EXP-Crowd: A gamified crowdsourcing framework for explainability

Andrea Tocchetti, Lorenzo Corti, Marco Brambilla, Irene Celino

@ARTICLE{Tocchetti2022EXPCrowd, AUTHOR={Tocchetti, Andrea and Corti, Lorenzo and Brambilla, Marco and Celino, Irene}, TITLE={EXP-Crowd: A Gamified Crowdsourcing Framework for Explainability}, JOURNAL={Frontiers in Artificial Intelligence}, VOLUME={5}, YEAR={2022}, URL={https://www.frontiersin.org/articles/10.3389/frai.2022.826499}, DOI={10.3389/frai.2022.826499}, ISSN={2624-8212}, ABSTRACT={The spread of AI and black-box machine learning models made it necessary to explain their behavior. Consequently, the research field of Explainable AI was born. The main objective of an Explainable AI system is to be understood by a human as the final beneficiary of the model. In our research, we frame the explainability problem from the crowds point of view and engage both users and AI researchers through a gamified crowdsourcing framework. We research whether it's possible to improve the crowds understanding of black-box models and the quality of the crowdsourced content by engaging users in a set of gamified activities through a gamified crowdsourcing framework named EXP-Crowd. While users engage in such activities, AI researchers organize and share AI- and explainability-related knowledge to educate users. We present the preliminary design of a game with a purpose (G.W.A.P.) to collect features describing real-world entities which can be used for explainability purposes. Future works will concretise and improve the current design of the framework to cover specific explainability-related needs.} }

Recommended citation: https://www.frontiersin.org/articles/10.3389/frai.2022.826499/full

A Web-Based Co-Creation and User Engagement Method and Platform

Andrea Tocchetti, Lorenzo Corti, Marco Brambilla, Diletta Di Marco

@InProceedings{Tocchetti2021COCTEAU, author="Tocchetti, Andrea and Corti, Lorenzo and Brambilla, Marco and Di Marco, Diletta", editor="Brambilla, Marco and Chbeir, Richard and Frasincar, Flavius and Manolescu, Ioana", title="A Web-Based Co-Creation and User Engagement Method and Platform", booktitle="Web Engineering", year="2021", publisher="Springer International Publishing", address="Cham", pages="496--501", abstract="In recent years, new methods to engage citizens in deliberative processes of governments and institutions have been studied. Such methodologies have become a necessity to assure the efficacy and longevity of policies. Several tools and solutions have been proposed while trying to achieve such a goal. The dual problem to citizen engagement is how to provide policy-makers with useful and actionable insights stemming from those processes. In this paper, we propose a research featuring a method and implementation of a crowdsourcing and co-creation technique that can provide value to both citizens and policy-makers engaged in the policy-making process. Thanks to our methodology, policy-makers can design challenges for citizens to partake, cooperate and provide their input. We also propose a web-based tool that allow citizens to participate and produce content to support the policy-making processes through a gamified interface that focuses on emotional and vision-oriented content.", isbn="978-3-030-74296-6" }

Recommended citation: https://link.springer.com/chapter/10.1007/978-3-030-74296-6_38

A content-based approach for the analysis and classification of vaccine-related stances on Twitter: the Italian scenario

Marco Di Giovanni, Lorenzo Corti, Silvio Pavanetto, Francesco Pierri, Andrea Tocchetti, Marco Brambilla

@inproceedings{DiMarco2021Content, title={A content-based approach for the analysis and classification of vaccine-related stances on Twitter: the Italian scenario}, author={Di Giovanni, Marco and Corti, Lorenzo and Pavanetto, Silvio and Pierri, Francesco and Tocchetti, Andrea and Brambilla, Marco}, booktitle={Workshop Proceedings of the 15th International AAAI Conference on Web and Social Media}, pages={1--6}, year={2021} }

Recommended citation: https://workshop-proceedings.icwsm.org/abstract.php?id=2021_52

VaccinItaly: monitoring Italian conversations around vaccines on Twitter and Facebook

Francesco Pierri, Andrea Tocchetti, Lorenzo Corti, Marco Di Giovanni, Silvio Pavanetto, Marco Brambilla, Stefano Ceri

@inproceedings{Pierri2021VaccinItaly, title={VaccinItaly: monitoring Italian conversations around vaccines on Twitter and Facebook}, author={Pierri, Francesco and Tocchetti, Andrea and Corti, Lorenzo and Di Giovanni, Marco and Pavanetto, Silvio and Brambilla, Marco and Ceri, Stefano}, booktitle={Workshop Proceedings of the 15th International AAAI Conference on Web and Social Media}, year={2021} }

Recommended citation: http://workshop-proceedings.icwsm.org/abstract.php?id=2021_11

Scaling collaborative policymaking: how to leverage on digital co-creation to engage citizens

Diletta Di Marco, Andrea Tocchetti, Lorenzo Corti, Marco Brambilla

@misc{DiMarco2021PolicyMaking, author = {Di Marco, Diletta and Tocchetti, Andrea and Corti, Lorenzo and Brambilla, Marco}, title = {Scaling collaborative policymaking: how to leverage on digital co-creation to engage citizens}, month = aug, year = 2021, publisher = {Zenodo}, doi = {10.5281/zenodo.5227881}, url = {https://doi.org/10.5281/zenodo.5227881} }

Recommended citation: https://zenodo.org/records/5227881

COCTEAU: an Empathy-Based Tool for Decision-Making

Andrea Mauri, Andrea Tocchetti, Lorenzo Corti, Yen-Chia Hsu, Himanshu Verma, Marco Brambilla

@inproceedings{Mauri2022Cocteau, author = {Mauri, Andrea and Tocchetti, Andrea and Corti, Lorenzo and Hsu, Yen-Chia and Verma, Himanshu and Brambilla, Marco}, title = {COCTEAU: an Empathy-Based Tool for Decision-Making}, year = {2022}, isbn = {9781450391306}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3487553.3524233}, doi = {10.1145/3487553.3524233}, abstract = {Traditional approaches to data-informed policymaking are often tailored to specific contexts and lack strong citizen involvement and collaboration, which are required to design sustainable policies. We argue the importance of empathy-based methods in the policymaking domain given the successes in diverse settings, such as healthcare and education. In this paper, we introduce COCTEAU (Co-Creating The European Union), a novel framework built on the combination of empathy and gamification to create a tool aimed at strengthening interactions between citizens and policy-makers. We describe our design process and our concrete implementation, which has already undergone preliminary assessments with different stakeholders. Moreover, we briefly report pilot results from the assessment. Finally, we describe the structure and goals of our demonstration regarding the newfound formats and organizational aspects of academic conferences.}, booktitle = {Companion Proceedings of the Web Conference 2022}, pages = {219–222}, numpages = {4}, keywords = {Crowdsourcing, decision-making, empathy, gamification, human-centered computing}, location = {Virtual Event, Lyon, France}, series = {WWW '22} }

Recommended citation: https://dl.acm.org/doi/abs/10.1145/3487553.3524233

CHIME: Causal Human-in-the-Loop Model Explanations

Shreyan Biswas, Lorenzo Corti, Stefan Buijsman, Jie Yang

@article{Biswas2022CHIME, title={CHIME: Causal Human-in-the-Loop Model Explanations}, volume={10}, url={https://ojs.aaai.org/index.php/HCOMP/article/view/21985}, DOI={10.1609/hcomp.v10i1.21985}, abstractNote={Explaining the behaviour of Artificial Intelligence models has become a necessity. Their opaqueness and fragility are not tolerable in high-stakes domains especially. Although considerable progress is being made in the field of Explainable Artificial Intelligence, scholars have demonstrated limits and flaws of existing approaches: explanations requiring further interpretation, non-standardised explanatory format, and overall fragility. In light of this fragmentation, we turn to the field of philosophy of science to understand what constitutes a good explanation, that is, a generalisation that covers both the actual outcome and, possibly multiple, counterfactual outcomes. Inspired by this, we propose CHIME: a human-in-the-loop, post-hoc approach focused on creating such explanations by establishing the causal features in the input. We first elicit people’s cognitive abilities to understand what parts of the input the model might be attending to. Then, through Causal Discovery we uncover the underlying causal graph relating the different concepts. Finally, with such a structure, we compute the causal effects different concepts have towards a model’s outcome. We evaluate the Fidelity, Coherence, and Accuracy of the explanations obtained with CHIME with respect to two state-of-the-art Computer Vision models trained on real-world image data sets. We found evidence that the explanations reflect the causal concepts tied to a model’s prediction, both in terms of causal strength and accuracy.}, number={1}, journal={Proceedings of the AAAI Conference on Human Computation and Crowdsourcing}, author={Biswas, Shreyan and Corti, Lorenzo and Buijsman, Stefan and Yang, Jie}, year={2022}, month={Oct.}, pages={27-39} }

Recommended citation: https://ojs.aaai.org/index.php/HCOMP/article/view/21985

ARTIST: ARTificial Intelligence for Simplified Text

Lorenzo Corti, Jie Yang

@misc{Corti2023Artist, title={ARTIST: ARTificial Intelligence for Simplified Text}, author={Lorenzo Corti and Jie Yang}, year={2023}, eprint={2308.13458}, archivePrefix={arXiv}, primaryClass={cs.CL} }

Recommended citation: https://arxiv.org/abs/2308.13458

“It Is a Moving Process”: Understanding the Evolution of Explainability Needs of Clinicians in Pulmonary Medicine

Lorenzo Corti, Rembrandt Oltmans, Jiwon Jung, Agathe Balayn, Marlies Wijsenbeek, Jie Yang

@inproceedings{Corti2024XAIIPF, author = {Corti, Lorenzo and Oltmans, Rembrandt and Jung, Jiwon and Balayn, Agathe and Wijsenbeek, Marlies and Yang, Jie}, title = {``It Is a Moving Process'': Understanding the Evolution of Explainability Needs of Clinicians in Pulmonary Medicine}, year = {2024}, isbn = {9798400703300}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, url = {https://doi.org/10.1145/3613904.3642551}, doi = {10.1145/3613904.3642551}, abstract = {Clinicians increasingly pay attention to Artificial Intelligence (AI) to improve the quality and timeliness of their services. There are converging opinions on the need for Explainable AI (XAI) in healthcare. However, prior work considers explanations as stationary entities with no account for the temporal dynamics of patient care. In this work, we involve 16 Idiopathic Pulmonary Fibrosis (IPF) clinicians from a European university medical centre and investigate their evolving uses and purposes for explainability throughout patient care. By applying a patient journey map for IPF, we elucidate clinicians’ informational needs, how human agency and patient-specific conditions can influence the interaction with XAI systems, and the content, delivery, and relevance of explanations over time. We discuss implications for integrating XAI in clinical contexts and more broadly how explainability is defined and evaluated. Furthermore, we reflect on the role of medical education in addressing epistemic challenges related to AI literacy.}, booktitle = {Proceedings of the CHI Conference on Human Factors in Computing Systems}, articleno = {441}, numpages = {21}, keywords = {Explainable AI, Healthcare, User Needs}, location = {Honolulu, HI, USA}, series = {CHI '24} }

Recommended citation: https://dl.acm.org/doi/full/10.1145/3613904.3642551

Understanding Stakeholders’ Perceptions and Needs Across the LLM Supply Chain

Agathe Balayn, Lorenzo Corti, Fanny Rancourt, Fabio Casati, Ujwal Gadiraju

@misc{balayn2024understanding, title={Understanding Stakeholders' Perceptions and Needs Across the LLM Supply Chain}, author={Agathe Balayn and Lorenzo Corti and Fanny Rancourt and Fabio Casati and Ujwal Gadiraju}, year={2024}, eprint={2405.16311}, archivePrefix={arXiv}, primaryClass={cs.HC} }

Recommended citation: https://arxiv.org/abs/2405.16311

A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities

Andrea Tocchetti*, Lorenzo Corti*, Agathe Balayn*, Mireia Yurrita, Philip Lippmann, Marco Brambilla, Jie Yang

@article{10.1145/3665926, author = {Tocchetti, Andrea and Corti, Lorenzo and Balayn, Agathe and Yurrita, Mireia and Lippmann, Philip and Brambilla, Marco and Yang, Jie}, title = {A.I. Robustness: a Human-Centered Perspective on Technological Challenges and Opportunities}, year = {2025}, issue_date = {June 2025}, publisher = {Association for Computing Machinery}, address = {New York, NY, USA}, volume = {57}, number = {6}, issn = {0360-0300}, url = {https://doi.org/10.1145/3665926}, doi = {10.1145/3665926}, abstract = {Despite the impressive performance of Artificial Intelligence (AI) systems, their robustness remains elusive and constitutes a key issue that impedes large-scale adoption. Besides, robustness is interpreted differently across domains and contexts of AI. In this work, we systematically survey recent progress to provide a reconciled terminology of concepts around AI robustness. We introduce three taxonomies to organize and describe the literature both from a fundamental and applied point of view: (1) methods and approaches that address robustness in different phases of the machine learning pipeline; (2) methods improving robustness in specific model architectures, tasks, and systems; and in addition, (3) methodologies and insights around evaluating the robustness of AI systems, particularly the tradeoffs with other trustworthiness properties. Finally, we identify and discuss research gaps and opportunities and give an outlook on the field. We highlight the central role of humans in evaluating and enhancing AI robustness, considering the necessary knowledge they can provide, and discuss the need for better understanding practices and developing supportive tools in the future.}, journal = {ACM Comput. Surv.}, month = feb, articleno = {141}, numpages = {38}, keywords = {Artificial intelligence, robustness, human-centered AI, trustworthy AI} }

Recommended citation: https://dl.acm.org/doi/abs/10.1145/3665926

teaching

Master Cloud Data Architect

Corporate training, Quantia Consulting and Cefriel, 2021

Full-day training on Information Retrieval and the ELK (Elasticsearch, Logstash, and Kibana) stack, provided for BIP Consulting as part of the Master Cloud Data Architect.
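At the heart of the Information Retrieval material sits the inverted index, the same idea that underpins Elasticsearch. A minimal pure-Python sketch, with made-up toy documents and a simple tf-idf ranking (illustrative only, not Elasticsearch's actual implementation):

```python
from collections import defaultdict
import math

# Toy corpus: document id -> text (hypothetical, for illustration).
docs = {
    1: "elastic stores and indexes documents",
    2: "logstash ships logs to elastic",
    3: "kibana visualises documents stored in elastic",
}

# Inverted index: term -> set of ids of documents containing that term.
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.split():
        index[term].add(doc_id)

def tf_idf(term, doc_id):
    """Score one term in one document: term frequency x inverse document frequency."""
    tf = docs[doc_id].split().count(term)
    idf = math.log(len(docs) / len(index[term]))
    return tf * idf

def search(query):
    """Return ids of documents containing every query term, best-scoring first."""
    terms = [t for t in query.split() if t in index]
    if not terms:
        return []
    candidates = set.intersection(*(index[t] for t in terms))
    return sorted(candidates, key=lambda d: -sum(tf_idf(t, d) for t in terms))

print(search("elastic documents"))  # documents 1 and 3 contain both terms
```

Production engines layer text analysis (tokenisation, stemming), BM25 scoring, and distributed storage on top of this basic structure.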

Master Cloud Data Engineering

Corporate training, Quantia Consulting and Cefriel, 2021

Two full-day training sessions on

  • Information Retrieval and the ELK (Elasticsearch, Logstash, and Kibana) stack
  • NoSQL technologies (MongoDB, Neo4j, Cassandra)

Provided for BIP Consulting as part of the Master Cloud Data Engineering.

Computer Science Fundamentals

BSc course, Politecnico di Milano, 2021

Introductory course for 1st-year students in Computer Science and Engineering. Topics covered include: problem analysis, design of algorithmic solutions, and coding (in C++). Particular emphasis is placed on abstraction, data types, control structures, functions, dynamic data structures, and recursion.

Master Data Science

Corporate training, Quantia Consulting and Cefriel, 2021

Half-day session on Generative Adversarial Networks provided as part of the Master Data Science.
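The Generative Adversarial Network objective covered in this session, V(D, G) = E_x[log D(x)] + E_z[log(1 − D(G(z)))], can be illustrated numerically without any training loop. The discriminator scores below are hand-picked, hypothetical values:

```python
import math

def value(d_real, d_fake):
    """GAN value V(D, G) over sample batches: the discriminator maximises it,
    the generator minimises it. Inputs are D's probabilities that a sample is real."""
    real_term = sum(math.log(p) for p in d_real) / len(d_real)
    fake_term = sum(math.log(1 - p) for p in d_fake) / len(d_fake)
    return real_term + fake_term

# A confident discriminator: high D(x) on real samples, low D(G(z)) on fakes...
strong = value([0.9, 0.8, 0.95], [0.2, 0.1, 0.15])
# ...versus one fooled by the generator, with D(G(z)) pushed towards 0.5.
fooled = value([0.9, 0.8, 0.95], [0.5, 0.5, 0.5])

print(strong > fooled)  # True: fooling D lowers V, which is exactly the generator's goal
```

Training alternates gradient steps on this objective: the discriminator ascends it while the generator descends it, until (ideally) fakes are indistinguishable from real data.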

Master Big Data Science

Corporate training, Quantia Consulting and Cefriel, 2021

Two half-day sessions on

  • Generative Adversarial Networks
  • Working with Cassandra in Python

Provided for Allianz as part of the Master Big Data Science.
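The "Working with Cassandra in Python" session can be hinted at with a short sketch. The keyspace, table, and host below are hypothetical, and `demo()` assumes a reachable cluster plus the DataStax `cassandra-driver` package; only the CQL-building helper runs standalone:

```python
def insert_cql(table, columns):
    """Build a parameterised CQL INSERT suitable for a prepared statement
    (prepared statements in CQL use `?` placeholders)."""
    placeholders = ", ".join("?" for _ in columns)
    return f"INSERT INTO {table} ({', '.join(columns)}) VALUES ({placeholders})"

def demo():
    """Round-trip against a live cluster; host and keyspace are made up."""
    from cassandra.cluster import Cluster  # pip install cassandra-driver
    cluster = Cluster(["127.0.0.1"])
    session = cluster.connect("training_ks")
    insert = session.prepare(insert_cql("users", ["id", "name"]))
    session.execute(insert, (1, "Ada"))
    cluster.shutdown()

print(insert_cql("users", ["id", "name"]))
# INSERT INTO users (id, name) VALUES (?, ?)
```

Preparing statements once and binding values per execution is the idiomatic driver pattern: it avoids CQL injection and lets Cassandra cache the parsed query.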

Master Advanced Data Science

Corporate training, Quantia Consulting and Cefriel, 2022

Half-day session on Generative Adversarial Networks provided for Nestlé as part of the Master Advanced Data Science.