Research project
Hybrid Explainable Workflows for Security and Threat Intelligence (HEWSTI)
How can humans and machines collaborate in a meaningful way in a restrictive environment?
- Duration
- 2023 - 2027
- Contact
- Brecht Weerheijm
- Funding
- NWO
In this project, researchers from computer science, law, psychology, and public administration investigate in practice how artificial intelligence (AI) can be leveraged to make decision-making in the security domain more effective, while keeping it safe and accountable.
Leiden University’s contribution to this project consists of the empirical study of how AI is implemented in decision-making in the security domain. AI offers the potential to analyse far larger volumes of data, making its implementation unavoidable. At the same time, implementing AI comes with its own unique challenges, ranging from organizational structure and culture to regulatory compliance, technological complexity, and responsible decision-making. Whereas many of these factors have been studied in isolation, this project provides an in-depth understanding of real-world collaboration between AI and humans.