LFI.2 week 6 discussion: facial recognition (the corporate side)

#1
  • How can we bring information about this technology into our communities and how are those conversations going to be different from the conversations we have about police and government use of facial recognition?
  • How can we use this week’s lecture to influence our efforts around getting library orgs to sign on to facial recognition bans?
  • How can librarians get involved in efforts to control the corporate use of this technology?
0 Likes

#2

Not necessarily related to facial recognition specifically, but policing in general… I am exploring other options for “security” during Drag Queen Storytime programs. We’ve had protestors shout down the performer, so I am looking into community groups that offer protestor training / de-escalation training.

4 Likes

#3

omg, Varoon’s talk was incredible. I am so motivated to get these library association bans happening!!!

4 Likes

#4

Just bumping the week six discussion thread. What are people’s thoughts about the lecture from Varoon?

0 Likes

#5

I think it was pretty fantastic; he’s a great speaker and obviously knows his stuff. I love anyone who cites their sources, so I was extremely thankful for that, since being able to follow up on what he said is a boon. I just wish we had more time with him, because I think he had a lot more to say. Nothing was particularly surprising, but the way he put everything together flowed fantastically. I loved the discussion of socio-economic extraction and how it’s evolved over the past decade. I wish we had spent a little more time on the built-in racism behind the algorithms rather than the imperfections of the software (though I tried it and it got…some stuff right about me).

plus posters!

1 Like

#6

I worry about how many companies think of the algorithmic bias as a “feature and not a bug.” Since this technology will eventually flow down to us via library vendors, I’m glad to have data so we can keep our guard up on how data sets are being created.

I was thinking about how you might teach someone about this if you only have a few seconds to get their attention. People ignore the mundane dangers of AI, focusing on the Hollywood things like “the robots are coming to take our jobs” or “the machines are going to rise up and kill us.” What we might want to stress is that the minor annoyances of technology not working properly (Siri opening the wrong app, the books we downloaded disappearing overnight) get proportionally worse when they’re in the hands of law enforcement or a greedy corporation.

1 Like

#7

Oh! And this related story just popped into my news feed: a surveillance camera was found outside the New Orleans home of a person who owns a surveillance company. Even though it had a police logo on the outside, the police don’t have the feed.

Cool. Private individuals enjoying their little piece of the surveillance state. Or as they would say, “no comment.”

1 Like

#8

I too would like to hear more about the racism behind algorithms. Algorithms of Oppression is currently in my office from our collection, but I haven’t had a chance to read it yet.

I worry about the automated decision systems that were covered in the longer reading from the AI report, coupled with the affect systems that Varoon mentioned. I’m pretty sure someone asked about it in chat re: the inclusion of a data set of expressions from non-western people. Couple that with an ADS, and I feel like you’re going to have a really faulty system when analyzing people.

I’m especially worried about how it would be used by government, such as for vetting immigrants or refugees. Like, I can easily imagine a dystopian future where an ADS is in place and someone cannot enter the country because they didn’t convey the appropriate emotion as dictated by AI.

0 Likes

#9
  1. I read Algorithms of Oppression and it was pretty alright; they didn’t go into too much detail on how the algorithms actually work, mostly just the ramifications and damage they cause. Still a really important read.

  2. It’s sort of like Black Mirror (and the actual Chinese rollout) where you get social scores. I think your point is well taken. But I think a lot of work is going to come out on how to defeat the software. I know there are flashback jackets and facial paint that can do a decent job, though I’m sure the engineers will try to figure out how they work. We have a cold war over this tech coming up, and we just need to make sure we put everything we can into defeating it in the avenues where we can legislate and disrupting it where we cannot.

2 Likes

#10

Right, or imagine that same buggy technology making decisions about your health care or whether or not you qualify for a mortgage. I mean, that’s already happening.

I also liked Automating Inequality by Virginia Eubanks, which takes a broader view of predictive technologies and their use in the social sector.

2 Likes

#11

+1 on Automating Inequality. I usually use the Allegheny County Office of Children, Youth and Families case study when I talk with colleagues or patrons about algorithmic decision-making.

5 Likes

#12

For the last question - how can librarians get involved in efforts to control the corporate use of facial recognition tech - we first have to start with education. This institution is teaching me SO much that I didn’t know (or assumed but didn’t have details of), and I am sure that many librarians just don’t know. Mass education efforts for our own staff are needed. And we need to practice what we preach. If I am correct, the ALA Privacy Toolkit site, for example, is not secure or encrypted (i.e., not served over HTTPS - a quick check like the sketch below would confirm it). There aren’t many examples of libraries doing EVERYTHING they could to protect patron privacy.
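Here’s a minimal sketch of how anyone could check this for themselves, assuming Python and the requests library; the hostname in the example is just a placeholder, not the toolkit’s actual address:

```python
# Sketch: check whether a plain-HTTP request to a site ends up on HTTPS.
# Assumes Python 3 and the third-party `requests` library (pip install requests).
import requests

def served_over_https(hostname: str) -> bool:
    """Return True if http://<hostname> redirects to an https:// URL."""
    response = requests.get(f"http://{hostname}", allow_redirects=True, timeout=10)
    return response.url.startswith("https://")

if __name__ == "__main__":
    # Placeholder hostname - swap in the site you actually want to check.
    host = "www.example.org"
    print(f"{host} served over HTTPS: {served_over_https(host)}")
```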

And then from there, informed librarians have to make sure they have a seat at the table in making decisions about the products, software, hardware, etc. that are purchased and brought into the library. Since libraries are trusted by the public, if the library doesn’t trust something, others - like education systems and institutions - might be wary of it too.

2 Likes

#13

I like Varoon’s suggestion to foster diversity in the STEM fields in order to bring more awareness to bias in these AI systems. I know for certain that there are, or have been, efforts at my college to promote more diversity in STEM. That is not a new idea, but it could be possible to provide workshops and presentations, or create partnerships at the college, to educate faculty on bias in AI. This could lead to further conversations about the importance of continuing efforts to provide opportunities to diversify the field. I’d really have to do more research to see the current efforts in this area. I’d also be interested in delving into the reading lists around gender, diversity and race that Varoon references in the video to learn more.

2 Likes

#14

This is not about facial recognition, but it is definitely corporate surveillance. Two pieces about how school districts are using software to monitor students’ social media posts. Not sure how many of you have kids (or other family members) in school, but these are interesting conversations to have with them, school administrators, and other parties. School districts do not do enough to explain to parents how children are monitored. In the meantime, tech companies are benefiting from the fear of school shootings.

1 Like

#15

Educating our staff is absolutely essential. I would like to create some sort of formal training around privacy issues, even if it means only being able to share or discuss a fraction of what we have learned so far and will continue to learn. I think making staff aware of the bigger issues and creating a space where they can continue to learn on their own can start to build interest/momentum on privacy issues.

3 Likes

#16

This just popped up on Twitter, and I saw some of the images displaying this way this morning when posting for our branch. Suuuuper creepy, and I’m sure a wake-up call for a lot of people.

0 Likes

#17

Just watched the recording of the lecture - what a great talk from Varoon!
I think there’s a lot of ‘aw, cool!’ surrounding some of this technology, which is definitely useful in sweeping criticism of these technologies aside. All the promises, the sales pitches for AI and facial recognition, are a sleight of hand.
Where I think the conversation may differ is in describing how the technology proliferates: on the one side we have companies pitching and selling these things to municipalities, largely behind a wall of secrecy, and on the other they are right in the marketplace, for sale in the form of camera doorbells, AI assistants and other consumer products that make up an ad hoc surveillant assemblage (jinx). I think the differences highlight the fact that there are small-scale personal things we can do to mitigate the harm of surveillance, but also that some of these threats require political action at the community level.
Varoon really hammered home the inherent bias in this stuff that Kade also talked about - I think highlighting those biases will really bring home to people the fact that we need to regulate as soon as possible, meaning now.

1 Like

#18

I agree, Symphony – I am learning a lot here that I was not aware of! I already thought facial recognition software was dangerous, but the implicit racial bias blew my mind. I keep thinking about the movie “Minority Report” and that we’re headed towards people getting arrested because they were “thinking about committing a crime” according to their micro-expressions.

1 Like

#19

I also think of the difference between We/1984 and Brave New World. I think less about how Big Brother is watching and more about how we willingly give up our rights for perceived happiness. The eugenics aspect of Brave New World has me a little more concerned, because it takes facial recognition software and brings it to a logical extreme by making everyone look the same. Once we’re able to follow through with things like CRISPR, facial recognition may no longer be useful. As long as we get those soma pills.

1 Like

#20

I think, for me, the biggest takeaway was awareness of affective bias – the idea of ascribing emotions or motivations from a person’s face, when we know these systems are embedded with racial and LGBTQ bias. Humans, of course, can often be adept at registering another person’s emotions, but this is a nuanced reaction that can sometimes be wrong. I am sure we have all had the experience of someone asking us what is wrong/why are you mad/etc. when we were not, in fact, experiencing those emotions. (Not to mention those of us who are afflicted with RBF!)

2 Likes