personalized pricing algorithms [17] as well as YouTube's ineffective user controls, [18] which demonstrate the opaque and fallible nature of automated decision-making systems (ADMS). Risks associated with automated systems can be rooted in a variety of different issues, [19] all of which need to be addressed: from the data used to train and test machine learning-based systems and the way those systems are designed, to the unsubstantiated capabilities of some systems, [20] or their very purpose and the context in which they are deployed.
Importantly, ADMS are often trained or "taught" using historical data sets, making them susceptible to replicating, and potentially perpetuating, biases found within our society, in some cases at great scale. Even without discriminatory intent, these systems can still produce disparate impact because of this training data. Research by Mozilla fellows Abeba Birhane, Deborah Raji, and others has repeatedly pointed to harmful and toxic data, as well as privacy risks, in datasets widely used for machine learning. For instance, they have uncovered misogynistic and racist imagery in large computer vision datasets [21] and issues with obtaining genuine consent in the construction of such datasets.
Equity harms caused by ADMS are of particularly grave consequence when such systems are deployed in critical areas and where they affect people's livelihoods, safety, or liberties, be it a rejected loan, the wrongful termination of a job, or discriminatory pricing of goods and services. Additionally, certain categories of data pose greater risks when used as input for ADMS. Like other biometric information, reproductive health data [22] has
17. Mozilla. "New Research: Tinder's Opaque, Unfair Pricing Algorithm Can Charge Users Up to Five-Times More For Same Service." February 8, 2022. https://foundation.mozilla.org/en/blog/new-research-tinders-opaque-unfair-pricing-algorithm-can-charge-users-up-to-five-times-more-for-same-service/
18. Ricks, Becca and McCrosky, Jesse. "Does This Button Work? Investigating YouTube's ineffective user controls." September 2022. https://assets.mofoprod.net/network/documents/Mozilla-Report-YouTube-User-Controls.pdf
19. See Mozilla's "Movement Building Landscape". https://movementbuilding.mozillafoundation.org/category/ai-impacts-in-the-consumer-space-social-justice/
20. See Narayanan, Arvind. "How to recognize AI snake oil." https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf
21. See, for example, Birhane et al., "Multimodal datasets: misogyny, pornography, and malignant stereotypes", 2022, https://arxiv.org/pdf/2110.01963.pdf; Prabhu & Birhane, "Large Datasets: A Pyrrhic Win for Computer Vision?", 2020, https://arxiv.org/pdf/2006.16923.pdf
22. See footnote 10.