Operationalizing a Responsible AI System in the Public Sector
Abstract:
Responsible AI (RAI) describes a set of practices, processes, and technologies aimed at improving trust in the development and deployment of AI systems. Numerous studies examine the ethical principles that underpin RAI systems; however, relatively few case studies explore how practical it is to implement an RAI system. The first goal of this study was to evaluate the feasibility of integrating an RAI system with an existing recruitment system for digital talent roles in the public sector. The research assesses organizational readiness to employ RAI tools, methods, and processes that translate ethical principles into system requirements and design. The second goal was to evaluate barriers to the adoption of RAI. A systematic review of the extant literature identified 25 articles, which were then synthesized by mapping them to the research questions. For the first goal, the case study relied on a participatory action research methodology and took an abductive approach to assess the feasibility of operationalizing RAI in the public sector. A responsible AI maturity model was used to compare a group’s experiences with base practices and to determine levels of proficiency. For the second goal, a combination of theoretical frameworks, including Technology-Organization-Environment (TOE), the Technology Acceptance Model (TAM), and Diffusion of Innovations (DOI), was applied to generate survey questions and evaluate barriers to RAI adoption. The findings indicate that RAI is not currently feasible, largely because it is not being prioritized. Organizational readiness is low, with a government team demonstrating low proficiency in the capabilities outlined by responsible AI maturity models. Barriers to adoption include factors in the ‘Leadership Characteristics’ and ‘IT Expertise’ categories.
Recommendations are for the public sector to prioritize RAI, shift its organizational structure, incorporate AI training for staff, create a risk-based AI policy, and adopt a modified team composition that adds RAI-specific roles to traditional, cross-functional software development teams.