Ethical Considerations from Child-Robot Interactions in Under-Resourced Communities
Dr. Eduard Fosch-Villaronga from eLaw collaborates with researchers from the Indraprastha Institute of Information Technology Delhi (IIIT-Delhi) and University of Delhi (DU) in an effort to explore and reflect upon the potential legal, ethical and pedagogical challenges of deploying a social robot in under-resourced communities.
Recent advances in socially assistive robotics (SAR) have shown significant potential for improving cognitive and affective learning outcomes in education. However, deploying SAR technologies also brings ethical challenges to the fore, especially in under-resourced contexts.
While previous research has highlighted various ethical challenges arising from SAR deployment in real-world settings, most of it has centered on resource-rich contexts, mainly developed countries in the ‘Global North,’ and work specifically in educational settings remains limited.
Together with Divyanshu Kumar Singh, Dr. Manohar Kumar and Dr. Jainendra Shukla from the Indraprastha Institute of Information Technology Delhi (IIIT-Delhi), Deepa Singh from the University of Delhi and Eduard Fosch-Villaronga from eLaw - Center for Law and Digital Technologies at Leiden University evaluated and reflected upon the potential ethical and pedagogical challenges of deploying a social robot in an under-resourced context.
The researchers based their findings on a 5-week in-the-wild user study conducted with 12 kindergarten students at an under-resourced community school in New Delhi, India, in which children engaged in a language-learning task using the 'learning-by-teaching' methodology. They collected ethnographic data during the deployment and analyzed the video recordings using interaction analysis within the context of learning, education, and ethics.
Their findings highlight four key ethical considerations:
- Language (Accent) & Context: Current assistive technology design does not cater to diverse linguistic and non-linguistic socio-cultural features, which impacts a child's learning process. The researchers suggest integrating and training intermediaries to facilitate such interactions.
- Trust: While guidelines exist to help researchers conduct Wizard-of-Oz studies, the researchers call for re-engaging with this method and scrutinizing it critically and ethically, especially when working with children, who tend to form emotional bonds with such robots. They argue for upholding children's agency throughout the knowledge-making process, allowing children to comprehend the intelligibility and actions of such artifacts. They believe this may require a paradigm shift in research methods.
- (Un)Intended Harms, Safety, Regulatory Framework: Current regulatory frameworks across the globe mainly address the physical harms caused by such tech artifacts but largely disregard their psychological dimension. Hence, there is a need to consider such technology's psychological and social harms. Unfulfilled aspirations, failure to access expensive technology, and emotional harm are some of the (un)intended harms of using such artifacts. This becomes more critical for children in under-resourced communities, who could face further marginalization.
- Balancing Innovation with the Right to Education: Ecological viability is a significant challenge in adopting any technological artifact. The researchers argue that any given context should be approached as a 'site of ethical inquiry.' More broadly, society needs to look beyond mere deployments to understand the challenges of context: finance (including the state's role), training, accessibility, and usability. Overlooking such issues risks reinforcing the digital divide.
This project contributes to advancing knowledge in the field of Diversity & AI, a research line Dr. Eduard Fosch-Villaronga started at the eLaw Center for Law and Digital Technologies. Within that topic, he also chairs the Gendering Algorithms initiative at Leiden University, a project exploring the functioning, effects, and governance policies of AI-based gender classification systems.