Biased technology

Most of us do not think of “things” as having bias. Almost by definition, “things” are not alive. They do not think, or feel, or have families, or religious traditions, or culture. Even when we talk about certain technologies as being “good” or “bad”, we generally separate out the object from the user. Think about the familiar argument that “guns don’t kill people; people kill people.” This is a common-sense expression of the idea that technologies, in and of themselves, do not embody values.

At the same time, it is easy to think of ways in which particular designs might reflect the values of the designers or users of a technology. The decision to purchase an electric vehicle might reflect an individual driver’s attitudes about the environment, for example. (Although it might also be that they love high-tech gadgets, or prefer the performance characteristics of electric motors, or want to be seen as progressive or cutting-edge, or even that they want to flaunt their ability to afford an expensive vehicle.) But even when we talk about the “values” reflected in particular designs, we generally still separate the values of the designer or user from any inherent “bias” in the underlying technology.

Bias is defined as a prejudice in favor of or against one thing, person, or group compared with another. While bias can be neutral or harmless (for example, “I am biased towards chocolate-flavored desserts”), the word is generally used to describe prejudices that are unfair or harmful, such as racial bias or bias against people with disabilities.

It is easy to see how people might be biased. We can even think about ways in which we might evaluate people’s biases (Is this a harmless or harmful form of bias? Is it a bias that is protected by free speech, or religious exemptions? Is it a bias from which we need to protect other people?). But again, what does it mean to think of a technology as being biased?

The most obvious way in which a technology can be biased is that it is designed specifically to enable or exacerbate the bias of its designers. Consider the famous case of the overpasses that the influential New York City planner Robert Moses constructed over the parkways of Long Island. These bridges were allegedly designed to be lower than normal, with the desired effect of preventing large vehicles like buses from passing under them. This made it difficult for New Yorkers who did not have access to automobiles --- and specifically, people of color --- to reach the beautiful (and publicly funded) parks and beaches that Moses had designed and constructed. There is some question about whether Moses was being intentionally racist (although his other construction projects and policies strongly suggest that he was) or inadvertently racist, but the end result is the same: the technologies that comprised the parkways, and the public recreation areas they provided access to, seem to be biased against certain kinds of people. The bias was not enacted through law, or through economics (for example, by charging high fees for access), or through the choices of individual consumers; it was embedded in the technology. The technology itself was biased, and it continued (and continues) to be biased long after Moses himself was gone.

So it is possible to imagine a technology designed to be inherently biased by people with the deliberate goal of enacting their own personal prejudices and imposing them on others. But there are other, more subtle ways in which technologies can be biased.

Additional Resources:

For an overview of the three basic categories of bias in computer systems (preexisting, technical, and emergent), see Batya Friedman and Helen Nissenbaum, “Bias in Computer Systems,” ACM Transactions on Information Systems 14, no. 3 (1996): 330–347.