AI models are increasingly deployed to intervene in high-stakes domains, such as cancer detection and microloan allocation. What view of the world guides AI development in these high-risk areas, and how does this view regard the complexity of the real world? In this talk, I will present results from my multi-year inquiry into how the fundamentals of AI systems (data, expertise, and fairness) are viewed in AI development. I pay particular attention to developer practices in AI systems intended for low-resource communities, especially in the Global South, where people are enrolled as labourers or as untapped daily active users (DAUs). Despite the outsized role these fundamentals play in model outcomes, data work is undervalued; domain experts are reduced to data-entry operators; and fairness and accountability assumptions do not scale beyond the West. Instead, model development is glamourised, and model performance is treated as the indicator of success. This overt emphasis on models, at the expense of the fundamentals, leads to brittle and reductive interventions that ultimately displace functional, complex real-world systems in low-resource contexts. I put forth practical implications for AI research and practice to shift away from model centrism towards enabling human ecosystems; in effect, building safer and more robust systems for all.
Nithya Sambasivan leads the human-computer interaction (HCI) group at Google Research India. Her current research focuses on designing responsible AI systems by attending to the humans in the AI/ML pipeline, specifically in the non-West; her sub-areas include data, fairness, privacy, abuse, and consent. Her research is a core contributor to Google's products and strategy for Next Billion Users, and has won numerous best paper awards and nominations at CHI, SOUPS, and ICTD. Nithya has a PhD in Information and Computer Science from UC Irvine and a Master's in HCI from Georgia Tech.