AI Lab launched for effective and responsible supervision
How can responsible artificial intelligence (AI) make inspectors more effective? That is the question the Innovation Center for Artificial Intelligence (ICAI) Lab AI4Oversight is tackling. By developing algorithms and methods, the lab aims to provide optimal support to inspectors at government agencies, among others.
The ICAI Lab AI4Oversight is a collaboration between several government inspection agencies and two universities. Rather than reinventing the wheel, these parties are joining forces. Cor Veenman of LIACS is the scientific lead and a co-initiator together with Jasper van Vliet (Human Environment and Transport Inspectorate). The project brings LIACS two PhD students, both supervised by Thomas Bäck and Catholijn Jonker.
The ICAI Lab AI4Oversight consists of five partners and two universities: the Human Environment and Transport Inspectorate (ILT), the Dutch Labour Inspectorate, the Inspectorate of Education, the Dutch Food and Consumer Product Safety Authority, the Dutch Organisation for Applied Scientific Research (TNO), Utrecht University and Leiden University.
Joining forces not only ensures responsible and transparent AI, but also that inspectors and AI systems cooperate as well as possible and draw on each other's strengths.
Supervision does not mean inspecting everything all the time - that is simply impossible. Choices have to be made, and the art is to inspect exactly where the societal benefit is greatest. But how do you achieve a risk-based approach in which inspectors are deployed at the right time and in the right place? That is the challenge the partners are jointly working to solve, and one in which increasingly advanced AI plays a major role.
Optimal support by algorithms
'Where possible, we already make use of AI for the responsible, selective and effective deployment of our inspectors. But there are even more opportunities ahead,' says ILT Inspector General Mattheus Wassenaar. 'Together with the two universities, we are going to develop methods that ensure our people are optimally supported by algorithms. At the same time, we want to prevent undesirable selection bias. We are doing everything in our power to arrive at AI that we can deploy responsibly in the inspection domain.'
Developing and testing new methods
Practice has made it clear that new AI methods are needed within the supervision domain. Extensive testing therefore always takes place first, to determine to what extent an AI application can be used effectively and responsibly in practice. The lab's research agenda focuses on three themes: cooperation between humans and machines, faster and fairer learning algorithms, and the contribution of AI to improving inspector behavior.
More information about the AI4Oversight Lab.