Call for Papers: Computer-Based Learning in Context

Special Issue on The Long Tail of Algorithmic Bias in Education: Intersectionality and Less-Studied Categories of Identity


Recent years have seen an explosion of interest in algorithmic bias in education, matching greater societal awareness of algorithmic bias in general and of discrimination and social justice more broadly.

However, most work on algorithmic bias in education (and algorithmic bias in general) has focused on easily identified and well-known demographic categories. A recent review by Baker and Hawn (2021) finds that in the relatively rare cases when researchers have looked for algorithmic bias in terms of other categories, they have often found evidence of its existence. This suggests that bias involving categories not yet examined may also be affecting algorithmic effectiveness. Furthermore, algorithmic bias is often framed in terms of single categories, ignoring the possibility that bias may also emerge at the intersection of categories.
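As a purely illustrative aside (not part of the call itself), the toy sketch below shows why intersectional audits matter: a model can look unbiased when each category is audited separately, yet show a large gap for an intersectional subgroup. All data, column names, and group labels here are hypothetical.

```python
import pandas as pd

# Hypothetical audit table: accuracy of some model in each intersectional cell
cells = pd.DataFrame({
    "gender":   ["F", "F", "M", "M"],
    "rural":    ["yes", "no", "yes", "no"],
    "n":        [100, 100, 100, 100],      # students per cell (made up)
    "accuracy": [0.60, 0.90, 0.90, 0.60],  # model accuracy in that cell (made up)
})

def weighted_accuracy(g):
    # Accuracy for a group, weighting each cell by its number of students
    return (g["accuracy"] * g["n"]).sum() / g["n"].sum()

print(cells.groupby("gender").apply(weighted_accuracy))             # 0.75 and 0.75
print(cells.groupby("rural").apply(weighted_accuracy))              # 0.75 and 0.75
print(cells.groupby(["gender", "rural"]).apply(weighted_accuracy))  # ranges 0.60 to 0.90
```

In this toy case, every single-category audit reports the same 0.75 accuracy, while the intersectional audit reveals a 0.30 gap between subgroups.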

This special issue seeks to promote research and practice that investigates and attempts to resolve less-studied algorithmic biases in education. Work on biases beyond widely-studied demographic categories is welcome, including work that spans both widely-studied and less-studied categories. Work on intersectional biases is also welcome. We welcome theoretical papers, conceptual and position papers, empirical papers, methodological papers, and papers of practice.

Sample topics may include:

  • Empirical research investigating whether algorithmic bias involving less-studied categories is present in a specific application
  • Including but not limited to work involving indigenous populations, sub-categories of widely-studied demographic categories, learners with specific disabilities, neurodiversity, military-connected children, migrant workers and their families, non-binary and transgender learners, religious minorities, refugees, rural learners, learners in small or remote cities or communities, non-WEIRD countries, speakers of less common dialects or non-prestige dialects, second-language speakers, and international students or students of specific national backgrounds
  • Empirical research on intersectional algorithmic biases
  • Empirical work to address and resolve less-studied algorithmic bias and intersectional algorithmic biases
  • Mathematical and methodological work related to studying less-studied and/or intersectional algorithmic biases, including but not limited to power analyses and sample size calculations
  • Conceptual, theoretical, and position pieces related to journal special issue themes
  • Work around data systems and methods that enable research on less-studied groups
  • Case studies around efforts to reduce algorithmic bias (of the type this special issue focuses on) in practice

Submission and Inquiries

Please see the journal's submission information. We welcome manuscripts of any length, and we welcome dual publication in both English and other languages.

When you submit your paper, please note in your cover letter that it is intended for this special issue.

All submissions will go through the journal’s usual peer review process.

Important Dates

  • Email inquiry of interest or abstract for the special issue: any date before August 1, 2022 (optional)
  • Paper submission: September 1, 2022
  • All articles will be published online as soon as they are fully accepted, and as part of the special issue once all submissions have completed the review process

Guest Editors

  • Nigel Bosch
    University of Illinois Urbana-Champaign, pnb@illinois.edu
  • Ibrahim Dahlstrom-Hakki
    TERC, idahlstromhakki@terc.edu
  • Ryan S. Baker
    University of Pennsylvania, rybaker@upenn.edu
