Values of fairness, antidiscrimination, and inclusion occupy a central place in the emerging ethics of data and algorithms. Their importance is underscored by the reality that data-intensive, algorithmically mediated decision systems—as represented by artificial intelligence and machine learning (AI/ML)—can exacerbate existing injustices or generate new ones, worsening already problematic distributions of rights, opportunities, and wealth. At the same time, critics of certain “fair” or “inclusive” approaches to the design and implementation of these systems have illustrated their limits, pointing to problems with reductive or overly technical definitions of fairness and a general inability to appropriately address representational or dignitary harms.
In this talk, Anna Lauren Hoffmann extends these critiques by focusing on problems of cultural and discursive violence. She begins by discussing trends in AI/ML fairness and inclusion discussions that mirror problematic tendencies from legal antidiscrimination discourses. From there, she introduces “data violence” as a response to these trends. In particular, she lays out the discursive bases of data-based violence—that is, the discursive forms by which certain voices and certain “fair” or “inclusive” solutions become legible while others are marginalized or ignored. In doing so, she undermines any neat or easy distinction between the presence of violence and its absence—rather, our sense of fair or inclusive conditions contains and feeds the possibility of violent ones.
She concludes by echoing feminist political philosopher Serene Khader’s call to move away from justice-imposing solutions toward justice-enhancing ones. Importantly, justice-enhancing efforts cannot simply be a matter of protecting or “including” vulnerable others, but must also attend to discourses and norms that generate asymmetrical vulnerabilities to violence in the first place.
I am Anna Lauren Hoffmann, a scholar and writer working at the intersections of data, technology, culture, and ethics. I am currently an Assistant Professor with The Information School at the University of Washington.
My work centers on issues in information, data, and ethics, paying specific attention to the ways the discourse, design, and use of information technology work to promote or hinder the pursuit of important human values like respect and justice. I am concerned with the ways data, information, and technological systems (or the ways we talk about them) discriminate by undermining some people’s development of self-respect, especially through the infliction of symbolic and discursive violence. In addition, I work on issues around ethics education for data professionals and computer scientists, as well as the possibilities (and limits) of research ethics and professional codes of ethics.