- Which stakeholders do you need the most help influencing? What do they value? What has happened in conversations with these stakeholders in the past?
- Which anti-privacy arguments have you encountered before?
- Which pro-privacy arguments have helped you, or which do you think would be most helpful?
Not to say our library doesn’t do much in terms of privacy, because we do, but there is definitely a lot we could work on. There is a lot of “us vs. them” mentality at my library, especially between the board members/administration and the branch/main staff, and even within those groups there are splits. That makes it very difficult to have a conversation about anything, let alone privacy issues. It’s not uncommon to get stonewalled when bringing concerns up, even with the help of our union. So change is very slow unless it’s something the administration or board sees as very important; and to be considered important, it has to be cost-effective, boost their reputation quickly or carry major significance over time, be new and up-and-coming, and/or address a major patron complaint.
When we do have the ability to give feedback, the general consensus is often that privacy is too complicated, takes too much time, and just isn’t as important as outreach. The attitude is, “Why do all these extra steps when we can just do this?” It’s frustrating. I found that by telling patrons how things work and how they could improve, they would be willing to contact the library administrators and board members. Patrons have way more leverage than staff.
Ugh, so many libraries are plagued with some level of organizational dysfunction like what you describe here.
This is so important and something I would lean on HEAVILY. If you run a program or offer some new privacy service, make sure to have some feedback mechanism that patrons can use. You’ll get primarily positive results, and you can also include a question on the form like “would you like to see more privacy programming at the library and if so what” and then that’s some really strong feedback to share with admin.
I’m not sure if this could be seen as an anti-privacy argument, but I feel like librarians at my institution have to work harder to convince our students that it would be wise to check in on their Google settings (we use the Google suite for just about everything).
Last semester, my coworkers and I led a privacy workshop and discussed how to change various settings and why it’s crucial to be up-to-date on what those settings actually mean for the user. Some students seemed confused, one even saying, “Campus wouldn’t set something up for us to use if it’s not safe…” I’m struggling with convincing students that this is something to at least be aware of, rather than leaving it up to campus to protect them.
Appreciate the points in this thread about the leverage that patrons have, and how to turn that into getting evidence for more programming!
Does anyone have any talking points or tips for the whole home listening device thing?
I struggle so much with this one, their very existence freaks me out, and people are so casual about it! Or even start talking about how they can be accessibility tools for helping with executive function and stuff and I just !!!
Here’s a perfect example of the way that academic institutions fail to live up to the level of trust that students place in them by default.
And it’s a very tricky problem to handle, because it’s not like you want to encourage students to distrust the school. What I would do is look for examples of how Google has violated student privacy in the past (I just did a quick search and this came up, and I know it’s a K-12 example, but there are many of these: Google secretly monitors millions of schoolkids, lawsuit alleges - CBS News)
I hope that we’ll be able to get deeper into the listening device stuff when we cover AI, but usually in these convos I at least try to complicate the narrative with examples of privacy issues with these devices (e.g. Study Reveals Extent of Privacy Vulnerabilities With Amazon’s Alexa | NC State News). I think that this AI Now Institute study is an excellent in-depth exposure of some of the privacy/data, labor, and environmental issues with these devices: https://anatomyof.ai/
But I definitely have encountered people who are very defensive about their beloved listening devices and won’t listen to any of this, so your mileage may vary. Though I have also talked to many people who use these devices who just assumed that they had better privacy defaults, and were horrified to learn the truth. It gets back to that assumption of trust that many people have in these big tech companies.
Thanks, I didn’t even know about “skills” so that’s interesting/terrifying, especially since they could target specific groups depending on what they’re “helping” with (e.g. elder care, health stuff…) Although that article does sort of imply that you can trust Amazon and it’s the third parties you need to worry about!
With some things I’m more readily able to empathize with people’s feelings, but I get so flaily around these, since my internal model is just completely aghast that people would even consider it. Might have to memorize a talking point or two for when my baseline response is just internal screaming lol.
Yeah that article is just one example of what’s wrong with Alexa. There are many others that get at Amazon specifically, and not just the third parties. I totally agree about the value of having a few talking points ready to go for these conversations…not just to combat your own flailing, but because people who use these devices can get pretty defensive about them! I’ll be sure to incorporate some talking points strategies into our weekly discussion during the AI week.
I’m just going to add this here for anyone following along:
Amazing to have it encourage you to disclose your personal preferences and identity… and all the talk about fostering trust with the AI
“That means, being able to ask the device to let you know… when your kids are playing video games for too long.”
First off I just have to laugh at how “smart” this AI is – it’s literally just checking for playlist titles with words like “happy” or “sad” in them
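For anyone curious just how thin that kind of “mood detection” signal is, here’s a hypothetical sketch of what keyword matching against playlist titles amounts to. To be clear, the keyword lists and function here are invented for illustration, not Spotify’s or Amazon’s actual code:

```python
# Hypothetical illustration of naive "mood detection" by keyword matching.
# The mood lexicon and function are invented for this example; they are NOT
# any company's actual implementation.

MOOD_KEYWORDS = {
    "happy": ["happy", "upbeat", "feel good", "party"],
    "sad": ["sad", "melancholy", "heartbreak", "rainy day"],
}

def guess_mood(playlist_title: str) -> str:
    """Return the first mood whose keyword appears in the title, else 'unknown'."""
    title = playlist_title.lower()
    for mood, keywords in MOOD_KEYWORDS.items():
        if any(kw in title for kw in keywords):
            return mood
    return "unknown"

print(guess_mood("Happy Summer Vibes"))        # happy
print(guess_mood("Sad Songs for Rainy Days"))  # sad
print(guess_mood("Deep Focus"))                # unknown
```

That’s the whole trick: a substring lookup dressed up as emotional insight, which is exactly why it’s worth being skeptical when a product claims to “know” how you feel.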
Second, this piece reminds me of one of my favorite pieces in the last few years, Liz Pelly’s analysis of Spotify and its emotional surveillance game: Big Mood Machine | Liz Pelly
Thanks for the link! And yes the emotion detection thing is a whole mess, it’s such a popular topic in music information retrieval and I’m always surprised that people aren’t more critical of the very premise of it all…
This is generally the push-back I receive too. Also, I think we (staff collectively) shy away from anything that will add to the tome of policies and procedures that are already in place.
I definitely became too relaxed in my use of Spotify. I “accepted” the link to Waze feature so that Spotify continues to play while I get directions to Walmart. Just occurred to me that Spotify probably has a list of the routes taken and locations of all the spots I searched for.