What threat models do you know exist in your community? Which ones (either the examples we used in class, or any you can imagine) resonate with you the most? How could you use threat modeling to make arguments in favor of privacy in your library? In what ways is the library an adversary or a potential adversary?
I’ve pushed folks at my library to take a threat modeling approach to research questions at our desk–especially to avoid recording identifying data in our data-tracking systems. For a while, we worked off the threat model of academic espionage/“scooping”–that we needed to make sure an unscrupulous person couldn’t read over our shoulders to find out what classes/projects a researcher was using.
That mostly failed. Then one of my colleagues suggested we shouldn’t be complacent in helping stalkers find their victims, in class or at their dorm, based on what the stalker may have heard/seen. That got some traction, but only of the “yes, this is bad, so I’ll play along” variety. There’s mostly a sense that the information we get from students isn’t ever of the sort that could harm anyone, therefore it can be public, and I’d love to hear how others have grappled with that.
For me, there’s an ethical responsibility to safeguard the data we’re handling, and that’s shared by most of my colleagues–I’m just not sure there’s much enthusiasm for actual action based on this.
Within the academic library where I work, we have a (new-ish) Assessment Committee where we look at PII collected across departments: how long we keep the information, if the information is necessary, how it might harm the individual, and how we might mitigate the harm. So basically, a sort of threat model where the library is a potential adversary of our users, but also capable of preventing potential harm. What we don’t look at is how the library might put employees at risk. A few examples from the class activity are particularly relevant, as a critical race theory scholar or BLM organizer might also be a library employee (although any of the scenarios might apply). I’m looking forward to the internal audit that Alison mentioned.
Using the model to make arguments in support of privacy also made me think of law enforcement as a potential adversary, which came up a lot in our discussions. The Auraria Library has a campus police department, with security officers on site. My interaction with them has been minimal, but looking back on the Police Reports that I have filled out for them, they collect completely unnecessary information, such as my home address (which I refused to give)! Beyond the report, in one instance the officer asked me if I wanted them “to do anything” about a patron acting inappropriately and I said no. If I had answered differently, I realize the patron would have had even more information collected, creating a record that would impact any future interaction with law enforcement. As a library, we are currently reviewing the police presence, and this type of interaction (both the data collected and the arbitrary enforcement) could be used as an argument against the police presence, where instead of offering protection, they often cause harm.
In terms of the third question, about using threat modelling as a sort of rhetorical strategy for privacy advocacy, I wanted to bring up something I was thinking about after our discussion this afternoon – namely, how can this sort of use of threat modelling start to overlap in tricky ways with respectability politics? And is that a problem?
e.g., do we need to come up with the most “sympathetic” person who is threatened by a given situation in order for it to have the desired effect? It obviously depends on the people/organization we’re trying to convince but I think this can get tricky.
Lots of very vulnerable groups who might be most impacted by poor threat modelling are also highly stigmatized (trans people, sex workers, [illegal] drug users, people with certain health conditions/illnesses/disabilities, people with criminal records, I could go on), and in those cases the idea that certain groups either don’t deserve privacy or shouldn’t exist can come up.
Maybe this is actually fine and okay for a rhetorical strategy – even the more “respectable” people in these scenarios can of course be real people, and referring to their experiences to advocate for better policy can have benefits for everyone – but I do think it’s worth taking a hard look at areas where personal beliefs and rhetorical/strategic practise might diverge, to make sure we can in fact arrive at the same end and are aware of the costs incurred.
Yeah, I think there’s a real disconnect in worldview that comes up a lot, I guess it’s part of the security mindset idea again, that the general vibe is so often “bad things [related to privacy violations] mostly happen in places that are not here to people that are not us/me, and I basically have nothing to worry about” (sometimes with an added dose of “and when bad things do happen, people are probably themselves to blame for those problems”). I think the urgency to actually turn an ethical impulse into policy has to come from really truly believing that something bad could actually happen – and of course the power to actually make the institution do something differently! It also requires not trusting the government/the institution/etc to “do the right thing” or be prioritizing your best interests, which is a mindset some of us come by more easily than others…
The university, for example, might think it wants first gen students, Black students, queer students, but when its “threat model” is for a financially secure student with wealthy parents who provide resources, full citizenship, no health problems, and stable housing and food, that is reflected in the bureaucratic structures over and over again, even when it’s not explicit.
We’ve got a number of threat models in my community. There’s our growing immigrant population, and also an active border patrol presence. We have multiple Native communities, like someone mentioned in chat, with unique cultural and legal assets. We also have a growing transgender population, which I’m probably the most sensitive to, since I’m nonbinary.
In a way, the library as an organization has a threat model, too, just not for privacy specifically. This is the first year we’re doing a drag queen storytime for Pride month, and management has a plan in place for likely pushback. So the asset is the inclusive programming, the adversary is our conservative community, and their capabilities are word of mouth, organizing against it, even spreading rumors about what the program is. Consequences include reputation damage extending to possible financial implications (levies). I think management/administration could understand threat modeling applied to privacy when compared to damage control decisions. It IS a damage control decision, because a consequence of a data breach is loss of reputation and community trust.
Like we talked about with hold slips, libraries can be adversaries through negligence. We can also be perceived adversaries – undocumented patrons might assume we’ll track them and readily give their info to law enforcement, for example, especially if we’re not reaching out to those communities. Our actual practice may be different, but it doesn’t matter to the patron who has had harmful experiences with other community organizations. It’s more systemic than just the library. That’s a much bigger picture than privacy, for sure, but I think it should be factored into privacy decisions.
This comment made me also think about how every single one of the public libraries where I’ve worked has had issues with patrons who give unwanted attention to employees (mostly female). Sometimes this escalated to harassment or stalking. There were various forms of mitigation, like not putting employee last names on websites, or being careful about having staff schedules out and visible, things like that. But it never felt like there was enough consideration, especially as it kept happening.
Your home address! OMG. This makes me think about how people will fill out information on a form if the form asks for it, especially if it’s from a “trusted” or “official” source. I can easily imagine staff filling out that form and thinking like, “the police wouldn’t ask for this unless it was necessary”.
This is an important point! And it will definitely come up, depending on the people or groups you’re talking to. I mean, even talking openly about the police being an adversary would not work in a lot of library settings. Knowing your audience, their (likely or potential) personal beliefs, how they regard the humanity of others…this is all helpful in making these arguments. Thinking more about the police as adversary…I have often found that people who bristle at hearing about the police as an adversary are more comfortable with hearing about ICE as an adversary. I definitely see this as revealing feelings about which victims are “worthy”, which I think reflects anti-blackness and white supremacy. My own strategy has been to push the limits of respectability wherever and whenever possible, but also remembering that that isn’t always possible, and trying to make sure that I am doing the best I can for the most impacted people, while also recognizing that a lot of libraries aren’t ready for some of these conversations.
Yes! And this is a great example.
YES! If we aren’t actively building relationships with our undocumented community members, what reason do they have to trust us? In a world where so many institutions are untrustworthy or have actively betrayed them.
Without reckoning with these bigger picture issues, our privacy policies and practices will always suffer.
When I got the job at the library 3 years ago, it was the first time in my adult life that I received health insurance through my employer. Needless to say, I went to every kind of doctor you could think of once it went into effect. There is one major, affordable healthcare organization that most doctors in town work for, and it also runs the biggest hospital in the state, serving a number of counties and urgent cares. At almost every single doctor’s office, I was given the option to enroll in their online portal, pitched as easy access to my records and scheduling.
My partner works at a non-profit which gives its patients the option to view test results online. This seems to be the norm for all healthcare providers in the area. Not long ago, the state’s health department had a data breach when an employee mistakenly uploaded thousands of COVID testing results to the wrong location (yours truly was affected). Threat modeling this is not hard to do!
It is widely accepted in our community to access health care information online. I help someone do this about once a week at the library. Every time I help someone print one of these records, I think about what more we could do on our end to make this more secure. I also think about every possible worst-case scenario: what if they leave their print jobs at the printing station? What if they walk out the door without properly closing down their session? Worse yet, what if they leave their record up on the computer and someone else catches it before we do? Patrons log on to our computers with their library cards, which adds another identifier; what if our security were ever compromised? On a good day, they log on using a one-time-use guest pass and I only have to worry about the other what-ifs.
I cannot bring myself to trust the digitization of medical records, ever. There are too many risks compared to the benefits.
I hate feeling like the adversary.
I don’t collect fines for books (which is apparently radical compared to other school librarians), and losing books/leaving them in other countries leaves many students super stressed. I become the adversary because they don’t understand that the book is only one of many assets I provide and I don’t stress over the loss; I just want to be able to mark the book as lost. Too many teachers have an outdated mindset and pass it on to the students. It stresses out ELLs and undocumented students most, which makes me feel terrible.
I think my own organization can be guilty of not doing enough with threat modeling to help our customers make informed decisions. In a rush to get the new, shiny technology, we don’t often consider the privacy implications of the tools we are using. There was a big stink a little while back about LinkedIn Learning requiring users to have a LinkedIn account even though some customers just wanted access to the content.
Apparently that was resolved enough to make library staff feel comfortable with renewing for another year. It’s hard with products that have name recognition to get library staff to think about the fact that these organizations are businesses and we need to question them on how patron data is used.
In the future, I would like to see conversations about what resources to renew to include some threat modeling. Without necessarily meaning to, we are potential adversaries to our customers who are often so dependent upon us for help with technology.
I mentioned this briefly last week, but yesterday’s discussion made me keep thinking about the regional service center model that is used at my last library and the library I just moved to. These buildings hold the public library, service center (aka DMV), and criminal/traffic court, among some other county services. Has anyone heard of this setup happening anywhere else? 3 of our 41 libraries are designed this way, though the court in the 3rd moved recently and for a while that library was going to move into a former department store space in the nearby mall.
Often people come over to us from court after just being released from jail, or are later being transported to jail from the building. They may need to access or print personal information, find transportation, or just need somewhere to go for a while after being released. The security officers that are assigned to the library in these service centers are covering the courts at the same time, and there is a lot of law enforcement coming in and out of the building every day. Other than a small group of staff advocating to remove security from all our libraries, no one in the library really seems to question the impact this has on our patrons. The sense I get is that the library thinks that we are far enough removed (as if a hallway or a flight of stairs apart from the court makes us exempt from being an adversary here), so no threat modeling has been done.
I think this is further complicated by the fact that HIPAA is widely misunderstood, and by how many digital 3rd parties are involved in the provision of health care.
This is sadly all too common! This sense of urgency, the feeling like we’ll get left behind if we don’t collect data and analytics just in case, the way that we adopt new services without critically evaluating their privacy practices.
I have not seen this before!! This is definitely alarming. I can imagine that this setup makes many members of your community think of the library and the courts, as well as the library and law enforcement, as inextricably linked. If I were threat modeling with fellow staff in a setup like this, I’d want to imagine the ways that this makes the library appear to be an adversary, and work on minimizing the relationship with the courts/LE, or if that’s not entirely possible, using various methods to affirmatively communicate to patrons that this is a different space. And then being active and intentional about making that true. It definitely sounds like a big challenge though. The other thing I’d be wondering about is what kind of information-sharing agreements exist between the library and the courts/LE. The official policy might be “get a warrant”, but with that kind of closeness, patron data might be getting shared with LE more informally. So having convos with staff about why this is bad (without shaming them of course, because people mostly just want to be helpful, even with bad practices), why police always need a warrant to access data, etc, would be a great practice to start.
Libraries act like/become adversaries when they minimize or refuse to consider privacy and/or when they fail to come up with basic privacy policies/practices. When this happens, the library collects patron information (in my library this includes name, address, phone number, and date of birth) and stores that information in ILS software without considering what may happen in the event of a breach, or why we even need this information in the first place. As mentioned in our discussion, using patrons’ names on hold slips further exacerbates privacy issues. Moreover, libraries’ partnerships and agreements with 3rd party vendors also damage patrons’ privacy when libraries don’t consider how patron data will/won’t be used, stored, collected, etc. Additionally, libraries’ partnerships with law enforcement (events/programs with police, or police presence in libraries as “security”) also threaten patron privacy. Last, the issues with technology use in libraries - ex., public wifi, barely protected public computer terminals, and so on - also expose patrons’ data to other patrons, staff, and potential malicious actors. In these ways, the lack of knowledge around, or willingness to engage in, privacy measures turns the library into at least a potential adversary for putting patrons’ data at risk.
In the community, “assets” or people such as LGBT+ teens, people without home access to computers/internet, people still learning about digital literacy, and people affected by domestic violence or other types of surveillance are or can be impacted by poor library privacy policies/practices. Like Emily mentioned, marginalized people who may also be most impacted by poor threat modeling are also victims of a certain type of apathy in libraries that says “at least we give them access to computers” and trivializes the importance of privacy in favor of “access”. That is, rather than making attempts to improve privacy (which is either not considered or considered to be too difficult to implement), the notion that the library offers something is good enough, and it’s up to each individual to be responsible for their own privacy. I think that this apathy also makes it difficult to use threat modeling to advocate for privacy, as a counterargument can be “well we’re offering an important service, isn’t that good?” or “we’ll just get rid of this service to avoid privacy issues” or even “this is the only technology we can use - that’s just the way it is”.
Yes!! I think it even goes beyond apathy in many cases, wherein people with “good intentions” are sometimes (often!! perhaps even moreso than those who have not considered the “goodness” of their activities) super upset or hostile to any suggestion that what they are doing may be harmful or inadequate.
Understandably this runs rampant in librarianship which is full of well-intentioned people trying to do good things! ofc I say all this as a well-intentioned person trying to do good things, to be clear
This text seems relevant to bring up at this point, it’s really foundational to how I think about librarianship and gender and race:
I find it extra pertinent to myself as a white woman in libraries but I think it’s broadly relevant as well as the profession itself has been so intensely influenced by white femininity and how that shakes out in the context of, like, charity and white supremacy and is definitely a helpful way to re-think the whole “helping profession” thing.
(As an aside: I remember my music master’s thesis was originally based on studying some free classical music programs for low-income, indigenous, and otherwise “at-risk” communities and children. One program was originally happy to have my attention, but once they came to understand that I was not, like, 100% uncritical in my take on the situation, they told me they had decided to “evaluate themselves” and were no longer willing to participate in any interviews. I remember having to shift gears quite a bit, and then when I finally presented some findings at a student conference a rather high-up person in the department asked – “so are you saying poor children shouldn’t have free music education???” The hostility is real lol)
Yeah, I sometimes wish my colleagues were, like, more terrified of data! Terrified enough that we would not collect data “just in case”, rather than the opposite!
I helped run a giant conference back in the fall and we had industry sponsors who ran Zoom meeting booths and a few months after the fact one of them contacted us wanting to be able to contact the specific attendees of their session. Fortunately it was really easy to say no because we hadn’t downloaded attendance records! It wouldn’t have been appropriate for us to give out names anyways but not having the record in the first place sure made it easy to politely decline!
Thanks, Ettarh’s piece is great and the Honma is new to me, will check it out!
I like Schlesselman-Tarango’s more recent one about cuteness in libraries and race and gender too:
I think about the university aspect you mentioned a lot, emily, especially now that i’m in grad school (again). i have very little library experience but discussions in my classes always find their ways back to the lack of diversity in libraries and i question why library school cohorts themselves aren’t more diverse. i think you’re right in saying that universities making an effort to bring in more BIPOC or Queer students, among others, is much more ‘threatening’ to their current structures.
i’ve heard of this model! someone in my library school cohort works in a library that is connected to a court system and some other government buildings. when she explained the setup, i was honestly shocked for the same reason that alison mentioned - how can you visit the library without assuming it is connected to the rest of that system? with the higher presence of law enforcement, i imagine that some people would not feel safe in an environment surrounded by that many security guards and police.
Thanks Emily! I’ll check that out!