Authors: Okolo CT; Agarwal D; Dell N; Vashistha A
Title: "If it is easy to understand then it will have value": Examining Perceptions of Explainable AI with Community Health Workers in Rural India
Publisher: Association for Computing Machinery (ACM)
Year: 2024
Volume: 8
Pages: 1-28
URL: https://dl.acm.org/doi/pdf/10.1145/3637348
Keywords: Artificial Intelligence; Machine Learning; Community Health Workers; Mobile Health; Explainability; Rural India

Abstract: AI-driven tools are increasingly deployed to support low-skilled community health workers (CHWs) in hard-to-reach communities in the Global South. This paper examines how CHWs in rural India engage with and perceive AI explanations, and how we might design explainable AI (XAI) interfaces that are more understandable to them. We conducted semi-structured interviews with CHWs who interacted with a design probe for predicting neonatal jaundice, in which AI recommendations are accompanied by explanations. We (1) identify how CHWs interpreted AI predictions and the associated explanations, (2) unpack the benefits and pitfalls they perceived in the explanations, and (3) detail how different design elements of the explanations impacted their AI understanding. Our findings demonstrate that while CHWs struggled to understand the AI explanations, they nevertheless expressed a strong preference for the explanations to be integrated into AI-driven tools, and perceived several benefits of the explanations, such as helping CHWs learn new skills and improving patient trust in AI tools and in CHWs. We conclude by discussing which elements of AI need to be made explainable to novice AI users like CHWs, and outline concrete design recommendations to improve the utility of XAI for novice AI users in non-Western contexts.

ISSN: 2573-0142