LFI.4: Week 10, AI, algorithms, and privacy

  • Discuss the unique impacts AI has on labor, data privacy, and planetary resources.
  • What do you make of the multiple connections between the AI industry and individuals with reactionary politics?
  • How can we begin to address these issues? What are the implications for library programs and services? For example, we often assist our patrons who’ve just received holiday gadgets. How can we incorporate these critiques into those services? How can we broaden the critique so that we’re not just telling people not to plug in their Echo device?

Today I’m thinking a lot about how ~The Algorithm~ influences politics. Whenever I search for something about politics or recent events on Twitter or Facebook (rather than looking at my own home timeline), the algorithm, despite my own preferences which it knows, will always prioritize showing me far right-wing journalists like Andy Ngo and Ben Shapiro. The disconnect between what I saw at the West Philly Rosh Hashanah rally against fascism and what showed up on Twitter when I searched “clark park” was glaring. You would think it was a riot, and not essentially a regular Saturday at the park but with more people in Gritty masks and slightly more people holding signs or playing the accordion. Tech workers say even they don’t know, or can’t explain, how these algorithms/AI decide what people see, but we know that far-right people work in these companies, and the algorithms seem to have some opinions of their own about what lens people should be seeing society through.

One way I’d want to address the issue of AI taking over is by developing information literacy workshops for patrons to help them find reliable news and assess whatever bias and framing is being placed on a story. If we can educate patrons on how they are being manipulated by these algorithms, and by the language used in the content being promoted by skeevy journalists, then they can develop their own security mindset for information and manipulation.


I think about this same thing a lot with regard to YouTube autoplay: if you let videos autoplay, you’re basically only ever 2-3 videos away from Jordan Peterson. I think demonstrating any of these things in real time would be a great way to kick off an info literacy program about algorithms and political bias.


Sorry I missed seeing this earlier. I think what you just said is that Skynet is already running things, just with no physical robots chasing you (“Tech workers say even they don’t know, or can’t explain, how these algorithms/AI decide what people see”).
I’ve been mentioning Skynet a lot recently; that tells me something.

Just a bit of horrifying irony for your morning(-ish):

Jason Griffey has posted the talk he gave at the virtual Computers in Libraries conference on AI and his thoughts on its place in libraries. He briefly covers some of the positive uses of AI and machine learning we are seeing in the world, but then goes into the ways that AI systems are horribly broken, especially regarding facial recognition.
He has the video of his presentation and his presentation slides here: http://jasongriffey.net/2020/09/30/facial-recognition-is-broken-and-racist/

One AI idea that exemplifies the many bad ideas floating around right now, besides facial recognition: the COVID-19 cough detector app.

AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app

