Week 13: Digital Redlining, Divisions

This thread is for discussing digital redlining and other digital division issues.

An article that I published in 2003: The Next Digital Divides: https://www.researchgate.net/publication/265182610_The_Next_Digital_Divides


Two questions for us to consider after this week’s talk:

  • How are digital redlining and privacy connected? What are some examples?
  • What can librarians do to fight back against digital redlining?

Also, per our discussion, here is the LFP booklist on the wiki: https://libraryfreedom.wiki/html/public_html/index.php/Main_Page/Reading_List#Books

One example that readily springs to mind is that people who are redlined usually don’t get digital literacy training, which widens the digital divide. It’s a common example, but it’s common for a reason. It’s true.

And what can we do? Do outreach, properly fund libraries in redlined areas, and fight against filtering software. (I’m firmly no-filter, but I hear other ‘reasonable’ arguments.)

I want to think this over, but that was what was fresh in my mind.


One connection to privacy is what Dr. Gilliard mentioned in class with filters. Like we discussed, even in cases where one can ask for the filter to be removed, a library patron’s privacy is compromised because they have to ask someone to remove it. I would imagine that in some libraries or settings, one may even need to explain the type of research and have it vetted by the person responsible for altering the filter. This definitely disrupts the flow of discovery and chills research.

Preparing for this week, I read this article, https://www.commonsense.org/education/articles/digital-redlining-access-and-privacy, co-authored by our speaker, wherein an example is given of a high school student researching the poet e.e. cummings, whose name was flagged (with a buzzer, no less) as “inappropriate.”

A lot of these topics bring me back to a question I am often asking myself: “Who gets to decide?” We discussed in class how there are no real guidelines for filters. So filter designers, with whatever biases they hold, can build in just about whatever they want.

This week’s lecture also reminded me of just how unequal or disparate access to broadband remains in most places. Even in my small city, it wasn’t very many years ago (2012) that the city councilperson who represents the ward I live in ran specifically on the platform of “broadband is a utility” because her somewhat wealthy neighborhood couldn’t even buy broadband access if they wanted to. It just wasn’t available.

Fortunately, the library does have broadband. My particular library isn’t CIPA compliant, so we don’t have to follow those filter guidelines. However, we do have filters in place, and I haven’t gotten a complete answer on everything they filter. So if I, with at least some background knowledge, have trouble navigating conversations with the powers that be about what is being filtered, then certainly students and others who may not be aware of what is happening may encounter trouble with their research and not know why, another point that Dr. Gilliard made that I appreciated.

I think the best thing we can do is keep having these conversations and keep educating others about digital redlining and what it means, especially to marginalized communities.

Also, this seems like as good a time as any to bring up that (in my opinion) pornography is an information resource, and I don’t understand why visual images of sex are the giant line in the sand. So, if anyone can help me understand that, I would appreciate it.


I have thought a lot about what makes representations of violence OK, but not human sexuality. I think it goes all the way back to the beginning of human history. There was a great documentary a few years ago called “This Film is Not Yet Rated” that looked into the film rating system to dissect how the MPAA decides what is appropriate or not.

There’s also something about our squeamishness about viewing real things. We see images of simulated violence and sex all the time, but the actual stuff is too actual.

Just this morning I was renewing my car registration and realizing how privileged I am to be able to get to work early and hop on the internet so I can pay the state hundreds of dollars to keep my car. On the other side of the digital redline, you have to take the time to go to the DMV, pay for parking, wait in line, and then hope you have all the right paperwork and all the money. And if you’re a day late on that registration, you get a huge fine.


ugh, yes, such a great and chilling example.

that’s right. it’s even worse than our unvetted vendor agreements. we let the manufacturers of the filter software do what amounts to digital collection development for us.

if you haven’t done this yet, I’d test them out with a bunch of LGBTQ and other typically blocked sites and see what you find.
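If you want to do that audit a bit more systematically, here’s a minimal sketch in Python. It assumes the filter responds with an HTTP 403 or serves an identifiable block page; the keyword list is a guess and would need tuning for whatever filter vendor your library uses.

```python
# Rough filter-audit sketch. Run it once from a library terminal and once
# from an unfiltered connection, then compare the results.
# ASSUMPTIONS: the filter returns a 403 status or a block page containing
# one of the BLOCK_HINTS phrases; adjust both for your vendor.
import urllib.request
import urllib.error

BLOCK_HINTS = ["access denied", "blocked", "web filter", "category"]

def looks_blocked(status_code, body_text):
    """Heuristic: treat a 403 or a block-page keyword as a filtered response."""
    if status_code == 403:
        return True
    lowered = body_text.lower()
    return any(hint in lowered for hint in BLOCK_HINTS)

def audit(urls):
    """Return {url: True/False/None} — blocked, not blocked, or unreachable."""
    results = {}
    for url in urls:
        try:
            with urllib.request.urlopen(url, timeout=10) as resp:
                body = resp.read(4096).decode("utf-8", errors="replace")
                results[url] = looks_blocked(resp.status, body)
        except urllib.error.HTTPError as err:
            results[url] = looks_blocked(err.code, "")
        except OSError:
            results[url] = None  # couldn't connect; can't tell
    return results
```

You’d feed `audit()` a list of LGBTQ health, sex-ed, and harm-reduction sites and see which come back flagged on the library network but not at home.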

I think about this a lot too. While I think it makes sense to want to protect children from seeing these images in public places, I think we could solve that problem pretty easily. But we don’t, and I think that speaks to a whole bunch of other phobias we (as a society) have about sex and poverty and queerness and what have you.

Here is a brain dump that probably won’t read in any logical manner :slight_smile:

Chris’s lecture reminded me of a bunch of practices in the k12 environment that I would have to explain to students all the time while they were doing research. I taught a research/presentation course for “advanced” sophomores who loved to choose mature, sometimes controversial topics. Much of their research had to be done at home, on home computers, because of the filtering systems. Students who didn’t have access to personal devices or wifi were stuck researching topics vanilla enough to make it past the filters.

What I had to spend a lot of time explaining to students was the surveillance behind those filters. As edtech and 1:1 tech programs become the norm in school settings (higher ed included), school administrations have more data on our students than ever. I constantly had to remind students that in order to receive the filter notification, something had to be “watching” their activity in the first place. These things aren’t really explained to students or to their parents. At most, they are often just told, “don’t look up bad stuff on the internet.”

With this being said, students are becoming savvy and are using VPNs and whatnot to get around filters and trackers. But when districts rely on tech like Chromebooks, which have very limited operating systems, they have more control over what applications are downloaded or which apps/extensions are added. All of this has greater impact on the less privileged students.

It is hard for me to say what k12 environments can do. They are stuck in a hard place. Districts want to even the playing field for less privileged children by providing tech that they may not have at home, but feel compelled to “protect” students and comply with online safety legislation. When these things happen in higher ed, it seems more like data gathering for the sake of data gathering. And in Chris’s case at the community college, a clear message that the administration did not trust the students, faculty, or staff.


I was reading the article Chris co-authored (AJB’s post above has the link), in which digital redlining is described as an outcome of the coalescence of policy, tools, and extra-institutional pressures and motives that inevitably define the boundaries of resources for some students, specifically at community colleges. In this case the result is predetermined learning outcomes based on the class character of the institution and the ‘needs of the class’ the institution is defined as serving. I’m cribbing a little, but in the article they describe community college education as being perceived as ‘job training’ while college/university curricula are an ‘intellectual pursuit.’ Inequities and disparities cascade…
My response is kind of broad, but redlining seems to be a facet of a diffuse, permeating aesthetic in which a shared and localized reality has been subsumed by economy. I see this aesthetic often in some of the non-profit and foundation-funded programming brought in for teens in our library; often the stated goals of the programs look an awful lot like the bullet points of job descriptions. It’s disheartening because the end result looks like we’re preparing teens for work that doesn’t emerge from their understanding of the world or their curiosity, and doesn’t represent their interests as individuals or as members of a community; the imperatives come from without, from above.
As a teen librarian, one of the things we try to do is create space for improvisation and spontaneity, and to not look at teens as walking deficits. We include them in some decision making. We haven’t lost if they leave the library short a bullet point. We lose when they don’t understand that it’s necessary and important for them to demand things of their community.



I have thought a lot about what makes representations of violence OK, but not human sexuality.

This is something that has been bothering me for a very long time. In Chris’s example, why is revenge porn blocked, yet something like gun control likely is not? We’ve decided that the word porn should not be allowed in the academic realm, yet guns are perfectly A-OK!

A local LA news station did an “exposé” on my library system (you can see a few of the clips here: https://www.nbclosangeles.com/news/local/Porn-at-the-Public-Library-457872613.html). Naturally, they sensationalized the story to elicit outrage from the public. Although it’s freaking hilarious to me that the county library says they filter “the most sexually explicit” websites yet you can access PornHub. :thinking::roll_eyes::crazy_face:

The former City Librarian here was adamant about not filtering internet at the library, but I know our current City Librarian has been a bit of a weeble on the issue. I’m really hoping he doesn’t cave and the city council doesn’t reverse their decision. At the time these news stories were running, I heard lots of librarians who were in favor of filtering the internet, and I got a lot of wonky looks when I made arguments against it. People always like to fall back on “But what about the children?” As much as I know I have solid arguments against that, I’m always wary to unleash them as someone with no children and no plan to ever have any.

Also, it’s interesting to note that staff internet is filtered here… but our public internet is (currently) not. I legitimately got a block notice when I tried to access a site about green/net-zero energy home building… I’m still wondering what the hell triggered the filters on that one.

When I worked at a university library I worked in course reserves. We purchased straight up hardcore porn DVDs for one of the courses. Some people in my department were all up in arms about it saying the “university shouldn’t be supporting things like porn” et al. But from what I remember, the course had something to do with the different ways women are portrayed in media, so I can absolutely understand why those materials could add to the discourse.

Internet filters are like those people who want to sweep all uncomfortable topics under the rug & pretend they just don’t exist, even if that is detrimental to people’s privacy & academic pursuits.


Chris’s lecture made me think of a number of issues that impact our broader community, but I also think of how redlining impacts the students at my college, including their access to affordable housing. The cost of real estate and rent in San Francisco has made living here unsustainable for many city residents. Many have been displaced by tech workers who can afford to pay the outrageously high rents. Being at a community college, our students vary in socioeconomic status, ethnicity, race, gender identity, sexual orientation, religion… Imagine not being able to afford access to the internet, food, textbooks, or affordable housing, and still being expected to succeed in your courses. I had a student who said she lived in a studio apartment with her parents and 3 sisters. I have students who tell me they can’t rent the textbook I helped them find because they don’t have a mailing address, because they live in their car. I had a student who failed my online class because he tried to do it all from his phone, since he didn’t have internet access at home [I tried to refer him to other resources]. There are many happy, successful stories, too. I promise.

Chris’s talk brings up lots of thoughts. I don’t know what the answer would be to how to fight redlining. Perhaps raising awareness so that people get good and pissed off enough to fight against big corporations like Facebook. In the news yesterday, a group of Facebook users sued Facebook in San Francisco’s federal court over the discrimination (filtering) of housing ads in Facebook’s Marketplace. The plaintiffs refer to Facebook’s use of “multicultural affinity” to exclude specific groups. As Chris mentioned, Facebook was supposed to stop using ethnic affinity, but I guess they really didn’t. My experience so far, living in San Francisco, is that corporations like Google and Facebook get what they want.
Hopefully, enough public pressure can affect them in ways that bring attention to issues like redlining, particularly as it pertains to access to housing.


“protect kids”


These stories you shared are bleak, Michele, and it’s hard to think about what can be done to fight back, but to me it comes back to this:

How do we help get people angry about these conditions and encourage them to fight back?

Corporations like Google and Facebook brought their huge companies to the Bay Area and didn’t think about, or care, that it would have a huge impact on housing. Protesters used to block the shuttle buses that ferried tech workers from San Francisco to Silicon Valley (and it seems they have been at it again). The buses still exist, and they use city bus stops (allowed by the city), but the protests brought awareness to the fact that tech workers were moving to San Francisco and other nearby cities, driving up the rents. Now add this ad discrimination to the mix, and it becomes harder to find housing if one is being excluded because of ethnicity, gender, disability, etc. That Facebook, a company that helped to displace Bay Area residents, is now also supplying a tool that creates further displacement via discrimination is troubling. I think lawsuits like the one referenced in the story can further draw attention to their shady practices. I don’t think lawsuits or protests will ever fully fix this, as again, Google and Facebook have a lot of power and sway, but if we continue to show the ugly side of these corporations, people will get pissed off and fight back. Protests? Union support? My union is very aware of housing and affordability in the Bay Area for faculty, staff, and students. An article in the college newspaper?


Dr. Gilliard’s talk on digital redlining really gave me a lot to think about in terms of internet infrastructure and the physicality of data. I have been following him on Twitter for a while now, and was so hyped to hear his thoughts and research on this topic. I truly appreciated him answering my question toward the very end of the lecture about the differences between U.S. digital redlining and global digital inequalities in accessing technology and internet networks.

Slight tangent: I’ve been thinking and reading about digital colonialism and the different forms of resource extraction and disruptions of nature that enable the internet to be what it is today. I wonder if the obscurity of these unequal power dynamics can fit into Gilliard’s research on digital redlining. For example, a lot of the companies I use to access the internet and digital tech are based out of the U.S., and these companies use products (ranging from devices to servers to cables) to connect me, yet the materials needed to make these products typically do not come from the U.S. In short: how does the “hidden” cost of the resources and labor needed to make the internet work (unequally) around the world fit into digital redlining? Or is this example easier to think about in terms of digital colonialism?

An example of digital redlining specific to NYC: I’m thinking about the public wifi situation and the high cost of subscribing to a telecom provider. If you can’t afford a monthly wifi bill, you are forced to use LinkNYC kiosks or transit wireless wifi networks, which are less secure than a private local area network, and this creates more risk for individuals; sensitive information like banking details may be more exposed when you use such an app on public wifi. More of your data becomes visible on public wifi networks, and NY has done a bad job of regulating telecom providers, so I think this is probably an example of digital redlining.

A way that librarians can fight back against digital redlining: this isn’t direct action per se, but I think that doing outreach in different forms (pamphlets, talks, displays) about computer networks, internet service providers, the “digital divide” and capitalism, etc. can be a way to raise awareness about the larger power structures that make bills high, why public wifi isn’t secure, and so on.


Interesting thought. I think the definition can fit, because broadly speaking, digital redlining just means creating and perpetuating inequalities through technology. I think access to information about how the internet works is a piece of this. And there’s an additional piece that feels more like digital colonialism, and that’s the second part of what you wrote: the part about extractive industries, and also how internet governance is concentrated in the US.

Public education is a hugely important step we can take toward addressing some of these inequities!