week 17: algorithms as ideology

Janus’s talk was so intense and thought-provoking. Here are some questions I’m thinking about for this week:

  • how did this week’s lecture expand or change your understanding of algorithms?
  • what are some examples of algorithm-driven automated systems?
  • what opportunities do we have to bring this information to the public or make changes to our systems or policies?

I thought of Janus’s presentation as soon as I came across this on the website of a face detection company based in Paris, France (https://sightengine.com/docs/face-attribute-model#gender). It’s a description of some of the capabilities of the company’s computer vision products, which the company claims can determine ‘gender’ (emphasis and highlighting mine hereafter):

"Gender properties are determined solely using the face. So other signs such as clothes or context will not influence the result. Males dressing up as females or females dressing up as males should therefore be correctly classified.

Create profile data based on profile pictures
Check that users correctly entered their gender
Group or classify your images"

Notice that function follows function. The function being to encode a gender binary into whatever happens next in the system.
  1. "Gender properties are determined solely using the face." First, let’s go ahead and negate personal agency.

  2. "Males dressing up as females or females dressing up as males should therefore be correctly classified." Excuse my French but fuck this shit.

  3. "correctly classified": THEY know better! See no. 2.

  4. "Check that users correctly entered their gender" We do not believe you when you tell us who you are; you are wrong, Sir or Ma’am. The last word on your gender will be determined by this greasy point-of-sale-mounted mini cam and our shitty computer vision coding project.

From the perspective of the developers, the person on camera being scrutinized by this product is the unreliable narrator of their own existence. ‘Gender’ is a trick, potentially a presentation attack, or it’s true: the developers say which. It’s wicked transphobic and potentially very dangerous. That’s the point, the worldview, the ideology.
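To make the critique concrete: here is a minimal sketch, in Python, of the pipeline the marketing copy describes, where the classifier’s forced binary guess is allowed to overrule what a person says about themselves. The function names and response shape are my own invention for illustration, not Sightengine’s actual API.

```python
# Hypothetical sketch of "Check that users correctly entered their gender."
# Everything here is invented for illustration; it is NOT Sightengine's API.

def classify_face(image_bytes):
    # Stand-in for a computer vision API call. Note the model can only
    # ever return a binary guess -- that constraint is the ideology.
    return {"gender": "male", "confidence": 0.97}

def verify_profile(user_entered_gender, image_bytes):
    """The model's output wins whenever it disagrees with the person:
    the user is treated as the unreliable narrator of their own gender."""
    guess = classify_face(image_bytes)
    if guess["gender"] != user_entered_gender:
        return {"flagged": True, "overridden_to": guess["gender"]}
    return {"flagged": False, "overridden_to": None}

# A nonbinary patron can never pass this check, because "nonbinary"
# is not a value the classifier is capable of outputting.
print(verify_profile("nonbinary", b"..."))
```

The point of the sketch is that the harm is structural: no matter how “accurate” the model gets, the downstream check can only ever confirm or override people into two boxes.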

New understanding: That some algorithms, like some tools, are designed to replicate oppressions across sectors and systems.
Opportunity: Record less data about our patrons - especially gender. Let patrons hack some of the fields on patron sign-up forms: give them a choice to record their preferred name or legal name or a nickname. There’s no reason our patron databases shouldn’t reflect what patrons know to be true about themselves.
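One concrete shape this could take: make everything beyond a patron-chosen display name optional, and leave gender out of the record entirely. A minimal sketch, with field names that are my own assumptions rather than any real ILS schema:

```python
# Sketch of a data-minimized patron record: the patron decides which
# name we keep, and there is no gender field at all. Field names are
# hypothetical, not drawn from any real library system.

from dataclasses import dataclass
from typing import Optional

@dataclass
class PatronRecord:
    display_name: str                   # whatever the patron wants to be called
    legal_name: Optional[str] = None    # recorded only if the patron opts in
    contact_email: Optional[str] = None
    # Note what is absent: no gender, no birth date, no borrowing history.

p = PatronRecord(display_name="Sam")
```

The design choice is that absence is the feature: a field the database never has is a field that can never be used against a patron.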


This is the literal narrative about trans women in particular, that they’re out to “trick” men, the subtext being that they deserve to be excluded from bathrooms or physically harmed or killed for it. As you say, replicating oppressions, but making them seem objective with math!

These are really two of the fundamental things I keep coming back to in all this work: a whole lot less data, and more human control/consent.


I think this week’s lecture expanded my understanding of algorithms, especially of the data sets that are used. It seems like developers use anything. YouTube as a data set seems problematic for a variety of reasons. Like, what if you’re feeding a machine a data set full of deep fakes?

Harvesting a data set of human behavior is also straight up disturbing. I feel like a person can’t truly “learn” from second-hand human experience, so how can a machine?

As for algorithm-driven automated systems, I think of streaming media providers. I think this is driving the entertainment industry. It seems like a lot of algorithm-driven design is a shortcut past actual research on human behavior, one that can be used to (purposefully?) ignore outliers or anything that doesn’t fit into the mold of developers’ conceptions when designing AI.

As for bringing this information to the public, I think it’s important to tell people that you can’t completely trust machines, AI, or algorithms. They are inherently flawed systems that can do both a lot of help and harm, but it really depends on who’s controlling them, who has access to their data, and what they are used for.

The biggest question I think of after Janus’s presentation is “who is harmed by it, and who benefits from it?” I feel like framing algorithms, or systems built with algorithms, this way is a great way to think critically about their impact. Maybe this would be another way to inform patrons: use these questions to contextualize algorithms when they’re using library resources?


Yes, I love this framing. The “who benefits” also includes the more passive benefit of “developers who are too lazy or ignorant or privileged to think about the unintended consequences of their flawed designs.”

I think this would make for a cool interactive activity in a library. You could have some basic info about the issues with algorithmic decision-making, and then you could have a whiteboard with a “who is harmed” and “who benefits” heading on each side, and then have people write their thoughts about it on post-it notes.


That’s a great idea. Perhaps it could be incorporated into an advanced privacy workshop or even into a staff training exercise.


Very interesting. Snowden was discussing algorithms on 11th Hour as he was promoting his new book. Algorithms are dangerous and can be biased; imagine when your entire life and what you have access to is determined by algorithms.

I keep thinking about the older lady being put in a men’s facility because she was on hormones. And how the government is building algorithms into systems to be inhumane to trans people on purpose. And while government systems can be inhumane without algorithms, this reliance on building harmful ideologies into them seems to be automating cruelty.

I have been thinking since the lecture, hoping to come up with some better answers to the questions posed.

What is the answer? I was hopeful when Janus told us about tech workers refusing to work on a Muslim registry.

Janus’s lecture expanded my thinking about how fascists use algorithms to “automate” horrible ideologically-based “projects.” Personally, I’m trying to be more cognizant of techno-sanitized language and how it is weaponized, whether by private companies working for federal government agencies or by the government and its various agencies.

Some examples of algorithmically automated systems: some academics are doing a really problematic study about YouTube and autism. I believe they are training an algorithm to recognize the facial gestures or bodily movements of autistic children… I have no idea what their end goal is, but they are using the videos without the creators’ consent.

In NY, I see an opportunity in doing a display with fliers about local automated, algorithmically driven systems. If anything, creating awareness about what is being automated or shaped by algorithms in everyday life is a good start.


It’s hard when the problems are so big and systemic, but I think the answer is multifaceted and is so much of what we’re trying to do here. Here are some of those things:

  • Educate our communities about these issues and introduce them to avenues where they can advocate for change (like how to contact legislators etc)
  • Educate the same set of people about harm reduction strategies (how to use technology a little more safely)
  • Make our libraries known as privacy-protective spaces, but more than that, spaces where we care about each other and look out for each other and share resources for resisting the worst parts of the surveillance economy.

We can’t necessarily stop these algorithms from making decisions about our lives (or at least, it’s gonna take a while to stop them) but we can be aware of them and vigilant and ready to fight back when we see the results of their discriminatory decision-making.

Yes, exactly. Let’s start by understanding this tech. Right now, the power differential between us and the technology is heightened by the fact that we collectively understand so little about how it all works. Shining some light on that is the most necessary first step to taking our power back.