LFI.2 week one discussion thread

Hi all, here’s our discussion thread for week one!

Here are some questions to consider:

How do you see the surveillant assemblage at work in our lives?

If human contact is becoming a luxury good, what are the implications for privacy?

What terms or concepts were new and noteworthy to you in “The New Terminology for Privacy”? Which of these do you want to learn more about in LFI?

I read "The Surveillant Assemblage" as a sort of review of the literature in the field of surveillance studies. I felt sad for the elderly person whose human connection came through an avatar.
I definitely think the public needs to know the implications of what is given up for convenience.

Towards the middle of the article the authors talk about how "surplus value increasingly refers to the profit … derived from the surplus information that… trail[s] behind them in their daily lives." This really hits home for me as a socialist thinking about labor exploitation and how pervasive technologies have let rent-seeking proliferate by monetizing almost literally anything. I recently read someone somewhere who referred to the surveillant assemblage as a 'soft cage'!
I've also been reading about Google's Sensorvault recently and connected it to the location data in the reading; then I came across the term 'geofencing,' which I'd never heard before, and upon further reading was like yikes.


I really enjoyed reading the surveillant assemblage article. The data double concept blew my mind. I use web analytics information at my library to create personas, and that's exactly what data doubles are. The other striking thing about this article is that it's 20 years old, yet it describes so well how we use social media and research surveys today. We trade away pieces of our privacy for special deals, products, and services.

I agree with the idea of screen-free time as a luxury, but I didn't like the use of exclamatory language about the damage of screen time. The article presented health data in a declarative, confident way when the data actually came from preliminary studies of the relationship between screen time and health. To be clear, I'm not trying to argue that screen time is healthy; I just don't like when journalists draw conclusions that weren't explicitly stated by the research.


I think one of the most important things to keep in mind about the surveillant assemblage paper is that it was written almost 20 years ago, so it misses some key elements of current culture.

I think they might have picked Brave New World instead of 1984 because we're dealing with more apathy than was present in 1984. If anything it's like Zamyatin's We. The reason the proles aren't surveilled in 1984 is that they cannot be commodified; there is no profit in following their movements. The Party supplies what is needed, so there is no threat from the underclass.

They were very honest from the outset about not using too much of Deleuze's terminology and concepts, but I do wish they had gone a little further into how the assemblage striates information and how that differs from the smooth space the information is broken from. The concepts they miss that I think would have been a boon to making this paper more approachable are flow and deterritorialization.

It's not even so much that a 'data double' is created, but that the human assemblage's flow is broken and redirected to a different desiring machine, the surveillance state. We are less data doubles now than data triples (or even more), as we are aggregated not only into a personal profile but with our demographics and even our modes of being throughout the day. It understands our multiplicity.

Edit: If you want a book that covers assemblages really well and is a bit easier to digest, I recommend Assembly, by Hardt and Negri.

I enjoyed "The Surveillant Assemblage," and was really surprised when I realized the 2000 publication date! The authors raise so many issues that are still salient today (the exploitation of digital labor, trading personal data for "free" services, the creation of digital profiles based on online behavior, the flailing against specific technologies while the total onslaught of surveillance threatens to overwhelm us). Obviously things have changed in 20 years, but I was still surprised by how contemporary it felt.

I think the question may be more 'is there any arena of life where surveillant assemblages are not at work?' How many hours a week do we spend without a cell phone as a constant companion? How many cameras do we pass on our way to buy groceries? Who controls those images? Which apps have we given access to our location data? Who do they share that data with?

One point that I thought was important, though, is how, while “reams” of data are constantly being collected by a variety of institutions, usually these data points aren’t assembled in any meaningful way (except maybe to serve you an ad, or tell you what song to add to your playlist). This sense of data as “white noise” or “harmless” marketing tool might be what creates apathy about privacy and evokes the argument that “if you’re not doing anything wrong, then you have nothing to worry about.” However, when there is “some perceived ex post facto or prospective need…” these data driven versions of our lives can be put together in great detail.

On the other hand, another thing that's great about this article is the idea that many of these assemblages are often ad hoc, piecemeal, and deeply flawed. Just one example: Facebook's advertising profile pegs me as Black, Christian, and single. Spoiler alert: I'm not any of those things. I'm not sure what that means overall, but it's good to be reminded these things aren't monolithic or omniscient.

The human contact as luxury good also hit home for me in several ways. Two examples of this are school choice and smart meters. Just like in the article, many of the wealthier families around here send their kids to Waldorf school where screens are pretty taboo, while the local public school has adopted a tablet-based curriculum. Many of those same families have opted not to have wi-fi enabled “smart meters” in their homes, but instead pay to have a human meter-reader come out every month to check their electrical usage. In both cases the choice is motivated more by health than privacy, but in both cases it means the less affluent family is subject to more surveillance and data collection. It ends up being the opposite of Orwell’s dystopia, where the proles were relatively unsurveilled.

I enjoyed reading "The Surveillant Assemblage," and found this premise fascinating: the "disciplinary aspect of panoptic observation involves a productive soul training which encourages inmates to reflect upon the minutia of their own behavior in subtle and ongoing efforts to transform their selves." In contemporary society, at least in urban America, citizens are under constant surveillance, from street corner cameras in high-crime areas, speeding and red light cameras, patrol cars, bicycles, neighborhood watches, and reminders to "see something, say something," to every technological device we voluntarily use that listens to, watches, and tracks us. I wonder how much these assemblages actually affect human behavior and one's transformation today. My guess is that, in American society, they make us better consumers rather than leading to any improved and positive human transformation; we, or rather our personal data, are the product of these assemblages. At the other extreme, it is frightening to think about who gets to decide what is acceptable human behavior and how humans should transform themselves. What occurs when the state colludes with or controls the surveillant assemblage and decides that certain human behaviors, or aspects of a culture, are unacceptable? As seen with the Uyghurs in China, these assemblages are often used to subdue and control ethnic minorities until they break and become "model citizens."

The human contact aspect of my work has always been the best part of what I do. I have witnessed how people independently transform themselves, even if it's just for that moment, when they receive positive human contact that respects their agency and is supportive and open. I am my library patrons' best resource, and this relationship is what leads to positive transformation. The inability to have positive contact between humans is a threat to our communities, democracy, and society as a whole. Increasing one's ability to interact with others through human contact is a skill that we have to teach and encourage.

After reading the article on the surveillant assemblage, I thought about our introductory question, and what we would destroy if we had the option. According to Haggerty and Ericson, "the surveillance assemblage cannot be dismantled by prohibiting a particularly unpalatable technology" (p. 609). I think this means that there will always be another node in the assemblage to replace it, or at least another type of technology or organization collecting similar data. This article also brought up other issues such as voluntary self-surveillance and the mainstreaming of data collection, where we give up data for rewards and entertainment. This concerns me in that it's too easy for people not to care or think too hard about it when they have to choose between taking back control or just giving up their data. I see these challenges in the classroom and in myself. I gave up Facebook more than 2 years ago because I didn't like their privacy policies, and yet I have an iPhone which collects all kinds of juicy data on me. I'm so conflicted, so I certainly can understand (and cringe) when a student tells me that they don't mind being tracked by Google because it personalizes their shopping experience. I typically do find that students care and want to change their behavior up to a point, so I am optimistic. Anyhow, this article definitely gave me much to explore and consider. Even though it was written nearly 20 years ago, it is still very relevant; technology has just gotten more sophisticated.

Totally with you on thinking about the economic and labor implications of surveillance, and it’s why I’ve really connected with Shoshana Zuboff’s work. We aren’t the product, we are the surplus value, the raw material that makes the products that get sold and re-sold and monetized in all these new ways that we never consented to.

Geofencing is wild, and it's one way that companies like Uber skirt local legislation about their use.
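For anyone who hadn't heard the term before either: mechanically, a geofence is just a virtual boundary around a place, and checking whether a reported location falls inside a circular one is a distance calculation. Here's a minimal illustrative sketch (the coordinates, the `inside_geofence` helper, and the radius are all made up for this example, not anything from Uber or Sensorvault):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

def inside_geofence(point, center, radius_km):
    """True if a reported location falls within radius_km of the fence center."""
    return haversine_km(*point, *center) <= radius_km

# Hypothetical 2 km fence around a downtown area
fence_center = (30.2672, -97.7431)
print(inside_geofence((30.2700, -97.7400), fence_center, 2.0))  # a nearby device
print(inside_geofence((30.5000, -97.0000), fence_center, 2.0))  # a distant one
```

Every app with location permission can do this kind of check constantly in the background, which is part of why the term made me go "yikes" too.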

Yeah honestly we have infinite data avatars at this point, from full shadow profiles to disembodied parts, some of those pieces aggregated with other people’s data to make data Frankensteins, new creatures reanimated after they learned how to be human from studying us. The article definitely doesn’t predict all that, but I think it’s uncanny in how much it does get right.

And if we want to learn the answers to these questions, where do we even start?

So true, when it’s abstracted away and brings some personalization value to our lives, many people are hard pressed to find a problem. But something that I find resonates with people in that set is everyone’s increasing suspicion and disdain for online advertising, especially after the Facebook/Cambridge Analytica scandal. The idea that even that abstracted few data points can be used to create psychologically manipulative ad content that was credited with influencing over 200 elections globally…people really are pissed about that! So how can we use our positions as librarians to make space for that collective anger and hopelessness and harness it into support and action?

Yupppp. I hope we have the chance to talk about this during April Glaser's lecture, or maybe a future one where we examine shadow profiling and data doubles in more depth, but yeah, SO MUCH OF THE DATA IS WRONG. But in some ways, that doesn't make it less harmful. First, lots of datasets are pretty precise in their knowledge about what we've done; it's the flawed AI on top that gets who we are wrong (like, all the websites you visited are stored by your web browser, but an assessment of what kind of person visits all those sites might still be wrong). Second, that flawed AI gives us things like racist facial recognition software that falsely matched 28 members of Congress, disproportionately members of the Congressional Black Caucus, against a mugshot database. And even if the technology gets more accurate, it's still gotta go. We have our work cut out for us.

I think this is so right. The studied effects of constant surveillance, especially the panoptic kind where there’s an assumption that someone is listening/watching, are self-censorship, distrust, paranoia – definitely not positive human connection. When you consider that our surveillant assemblage is mostly in the service of digital advertising, it’s like you said, the main quality that all this is improving in us humans is our ability to be consumers.

You totally nailed it. I mean, in my opinion, this is what we’re doing here. It’s not just that we want to make space in our libraries to respect privacy and teach about protecting it. It’s that we want to counter the deeper effects on society that all of this is producing. We are the right ones to do it because this is already what we do.

Very good point Michele. I struggle with thinking about this, because I do think that small victories matter, and if we stand a chance against an even worse future, we have to hack away at the vulnerable parts of the system until we gain some real ground. The article's authors called it rhizomatic, and it reminds me of an invasive species that my friend is dealing with in her backyard that just keeps…growing…back because it's a single interconnected organism that won't die unless you somehow find and destroy every piece of it. It's exhausting to think about. But the article says something that I think is hopeful: the response to the surveillant assemblage must also be rhizomatic, and that's where it comes back to libraries for me. We are a kind of rhizome. We have interconnected roots leading to each other, and we can strengthen those connections through intentional partnerships and solidarity (and group work!!! lol).

Something we will work on a lot in LFI is how to talk about these things in a way that is persuasive but also not overwhelming, polemic but not pedantic. How do we meet people where they are, convince them that change is necessary and possible, and then help them do it? It’s possible but it’s definitely challenging. It starts with us knowing as much as we can about what’s actually going on. Like your iPhone example – if we’re making the tradeoff (and I have) between Facebook and an iPhone tracking us, I’d choose the iPhone easily. iPhones aren’t sending as much of that stuff back to a central server, and Apple has prioritized security on their devices. Apple also doesn’t make its money from data the way that Facebook does. That said, it’s not feasible for lots of people to leave Facebook, nor is it feasible for everyone to just get an iPhone, nor do they solve the same needs anyway. So we have to be conversant in multiple strategies that work for many kinds of people in many different situations. This week, we’re gonna talk about exactly that when we talk about threat modeling.


I got to see the inside of a Homeland Security control center in 2004 while I was working on a TV show in NYC. (I’m not clear how a 26-year-old, low-level locations PA on a second-rate reality show got chosen to do detective work regarding suspected vandalism, and then was granted access to the inside of this office. Even with my hazy recall of how I got in there, I clearly remember the dozens of television screens and available recordings.) We were looking for footage from a particular part of a street during a few hours over the course of one night. Even though there were cameras everywhere, they were focused on different parts of the street, and the ones that were panning missed the spot we needed them to hit in the time frame when the vandalism happened.

What I've taken away from that one random hour inside the surveillance state is that datasets are spotty, at best. But when they find a target, they are creepy good. It's all so very human. This moment sticks in the back of my head when I read about surveillance, and when I think about what my data double might look like. How many private thoughts of mine have I put into Google? And how many just stayed in my head? Or never got picked up because the sweep was looking the other way?

If I knew that years later I’d end up here, I would have recorded better memories of the inside of that office! I wonder how much has changed as new technology gets added, and how much is still lost in the sweeps.


From “The New Terminology for Privacy” I was most interested in how the use of personal data is being regulated. Having read the first two articles, I felt a little overwhelmed by how difficult it seemed to be able to control who is using personal data and how they’re doing it (especially the portion of the surveillant assemblage about rhizomatic surveillance).
I hope to learn more about preliminary steps taken to start regulating the use of personal data, their effectiveness, and if expanding upon some of these acts/regulations/directives/groups would be a productive use of time and effort.

Reading the New Terminology for Privacy was super helpful in introducing me to regulations/laws that I did not know existed (or that I assumed existed, but didn’t know the actual names/language). What strikes me is how western this list seems to be with most regulations reflecting US and UK laws. I’d love to know more about what this conversation looks like internationally. Clearly, assemblages are global and other countries have mobilized extremely sophisticated means of tracking people. How are these conversations/vocabulary different elsewhere?


I’m curious about this as well! I was just listening to an On The Media podcast and learned that in Myanmar, “Facebook” is synonymous with “the internet.” I wonder if that is true for other countries in the Global South. I’m fascinated by the cultural origins of privacy, and how that changes in countries with high population density. If you live in a place where physical personal space is smaller than the American concept of personal space, do you feel differently about your online space?

How do you see the surveillant assemblage at work in our lives?

Around here (Chicago) the police employ a long list of costly and invasive surveillance technologies that might be recording you, your family, and your neighbors right now: cameras, automatic license plate readers, cell site simulators (IMSI catchers), social media monitoring, ShotSpotter (an audio gunshot-detection system), biometrics, facial recognition software, all of this funded in part by CPD civil forfeiture proceeds, BTW. The assemblage's power touches everyone, but its hand is heaviest in communities already disadvantaged by their poverty, race, religion, ethnicity, and immigration status. Surveillance of "mainstream" citizens tends to come at a distance, with hard-to-measure effects. Among the poor and powerless, surveillance is local, ubiquitous, and palpable, with harms that include physical force, harsh financial pressures, and humiliating exposure of intimate lives.

Whoa! I can’t imagine getting access to something like Homeland Security! I wonder how far their technology has come since 2004 & how much more they trust it now. I think that’s what scares me so much about so much surveillance tech. The more accurate it is, the more likely we are to trust it & rely on it… & that seems so dangerous to me. It reminds me so much of how DNA has become the new standard within the criminal justice system for convictions despite the fact that it is very flawed & not well understood by most of the population (you know, your peers, the people who are entrusted with deciding your fate). We give up so much freedom & privacy for relatively small amounts of safety (whatever that means today) & convenience in today’s world. I do find that more & more people are questioning this trade off today vs. when Assemblage was written, but I will always be uneasy with this trade off.


The Human Contact article really struck me as an example of a "low cost" way of solving a problem, human loneliness, and of the ways that technology could be used to harm someone. The article mentioned that the app has a video feed of the gentleman and is able to see what he is eating and drinking. Will it become more difficult for him to get medical care in the future because he is considered high risk due to lifestyle choices? Which feeds into questions I have about what regulations, if any, exist around surveillance, particularly outside of spaces I can control. People can choose whether or not to use certain technologies out of concern for their privacy, but as they ride public transportation, pass by certain buildings, or drive down the street, information is being gathered about them. People can't really say what kind of impact it is having on them if they don't know who is collecting what, with whom they are sharing it, and how, if they are going to gather that information, they are going to keep it secure.


@SymphonyBruce you’re so right about the US/UK focus, besides the GDPR mention. what I know about privacy/surveillance stuff outside of the US and UK is all over the map, but an organization I recommend following is Privacy International. If there’s sufficient interest in hearing a more global perspective I can definitely get someone from there to be a guest lecturer. They could probably speak to @CarolynGlauda’s question too about how much of the world thinks of Facebook as “the internet” (I would add WhatsApp to that too).

@distanttaxa What you describe at work by the CPD is (as you probably know) at work in communities across the country. We will examine police surveillance in one of the later weeks, but I'm so glad you mentioned the funding, because that's what we're going to cover in Week 3 with Kade Crockford. We'll talk about the acquisitions and budgets for surveillance equipment, and how local communities might attack vulnerabilities in that process. Just showing up to budget hearings can be really effective!!!

@qianaj great point about how the app for lonely seniors could be used to make judgments about their lifestyle. This kind of thing already happens with workplace wifi/browsing history and insurance companies. Since there are few regulations about private companies selling your data to other private companies, it can happen easily.


I recently read the book Patriot Number One, which is about a dissident in China who sought political asylum in the US after organizing protests against his corrupt local government. Although not a central point to the story, the book discusses how much of the protests were organized on WeChat and other Chinese social media sites and chat apps, which are definitely not impervious to government surveillance, even though many of the protestors worked under anonymous identities. Of course, the Chinese government has, unfortunately, encouraged a lot of old fashioned surveillance (spying on your neighbor) in the last 50 years, so the monitoring of citizens’ behavior is not something new. I did about five minutes of searching about Chinese social media surveillance and found half a dozen articles I want to come back to on this topic!


I find the intersection between surveillance and SES and/or race really interesting. I find that in many ways, poor folks are not encouraged to protect their privacy as much as wealthy folks. When you receive government benefits, for example, you have to supply a huge amount of information about yourself, which is tracked and monitored. Even in the library, I encounter people offering way more information than they need to for, say, a library card or a fine waiver, just because they are conditioned to spill their whole lives to government institutions. That is one thing I dislike about being a county employee.