LFI.3 Week eleven discussion

This week’s talk with Varoon Mathur on the social implications of Artificial Intelligence was another one so good that I don’t even know what questions to ask in here. What takeaways do you all have? What stood out to you in the conversation?

Also, here are some of the links we discussed:

anatomyof.ai (one of our readings for this week)

The discussion about how Flickr fueled so many of the facial recognition tools made me feel so much rage. I’d heard it before, but it was a reminder of the distance between how I thought of social media vendors in 2004-5 vs. now and how much trust has been irrevocably broken. I remember using Flickr quite a bit between 2004 and 2011. It was a vibrant community; a place to stay connected with friends and family. It would never have occurred to me that my photos could be used for something so nefarious. Since my Flickr days, I post pictures of my family only in spaces where my content is viewable just by friends, but even then, it’s hard to believe it isn’t being mined and used in some way we’ll find out about later on.

It also makes me think a lot about college students taking online classes and how they have no idea that when they’re using the learning management system, they are being surveilled (and by more than their instructor). Their every interaction with the system is recorded and the LMS companies can easily monetize that data, share it with other vendors, and use it for machine learning. Here are some disturbing quotes from Instructure (Canvas) last year about how they planned to use student data from their many client institutions to make even more money through machine learning and AI:

"What’s even more interesting and compelling is that we can take that information, correlate it across all sorts of universities, curricula, etc, and we can start making recommendations and suggestions to the student or instructor in how they can be more successful. Watch this video, read this passage, do problems 17-34 in this textbook, spend an extra two hours on this or that. When we drive student success, we impact things like retention, we impact the productivity of the teachers, and it’s a huge opportunity. That’s just one small example.

Our DIG initiative, it is first and foremost a platform for ML and AI, and we will deliver and monetize it by offering different functional domains of predictive algorithms and insights. Maybe things like student success, retention, coaching and advising, career pathing, as well as a number of the other metrics that will help improve the value of an institution or connectivity across institutions."

People can say that young people are post-privacy and don’t care, but I don’t believe that is true across the board. When I taught a writing class about learning analytics, the students were totally grossed out. The biggest issue is that they don’t know, they aren’t told, and there’s no way (at most institutions) to opt out of data collection. The idea that their instructor or an advisor could know how much they are using the library (something that is true at the University of Wollongong with their “Library Cube”) and then “intervene” to help the student, because library use has a positive correlation with grades, is so disturbing.
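To make the crudeness concrete, here’s a rough hypothetical sketch in Python (not the actual Library Cube, whose internals I don’t know) of what “library use correlates with grades, so flag the low users for an intervention” boils down to:

```python
# Hypothetical sketch only: correlate library visits with GPA across a cohort,
# then flag individual students below an arbitrary visit threshold.
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation between two equal-length lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up records: (student id, library visits this term, current GPA)
records = [("s1", 22, 3.6), ("s2", 3, 2.4), ("s3", 15, 3.1), ("s4", 1, 2.9)]

visits = [r[1] for r in records]
gpas = [r[2] for r in records]
print("correlation between visits and GPA:", round(pearson(visits, gpas), 2))

# The "intervention" step: flag anyone under an arbitrary visit threshold,
# with nothing in the data about *why* they aren't in the building.
flagged = [sid for sid, v, _ in records if v < 5]
print("students flagged for advisor outreach:", flagged)
```

A cohort-level correlation gets turned into a per-student flag, and that flag becomes something an advisor “knows” about you.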

Everything I read about AI in the LMS talks about truly personalized learning experiences that adapt to the user, and that sounds so nice, but at what price? And why do we even need teaching faculty if the system can grade everything and respond to students who are having issues with standard interventions? I read about one system that can now grade discussion board postings itself using some kind of algorithm. At that point, are teachers even teachers or are they just content creators? But so many colleges and universities are going in for these bundled textbook models where quizzes and other supplemental learning content come with the textbook, so they’re really not even creating anything. It’s shocking to me that more instructors aren’t wary of going down these paths to their eventual irrelevance; so many are just excited to have a lighter workload. Will the true neoliberal university actually have any teaching faculty??? Not to mention that learning AI (like other racist AI) will probably just help privileged white guys succeed and leave everyone else in the dust.
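(For what it’s worth, here’s a purely hypothetical guess in Python at what a discussion-board “grader” could look like under the hood: rubric keyword coverage plus a length bonus. I have no idea what that actual product uses; the point is that even something this naive can be sold as grading.)

```python
# Hypothetical auto-grader sketch: rubric keyword coverage plus a length bonus.
# Illustrative only; not the algorithm from any real product.
def grade_post(post: str, rubric_keywords: set) -> float:
    words = {w.strip(".,!?;:").lower() for w in post.split()}
    coverage = len(words & rubric_keywords) / len(rubric_keywords)
    length_bonus = min(len(post.split()) / 150, 1.0)  # reward length up to ~150 words
    return round(100 * (0.7 * coverage + 0.3 * length_bonus), 1)

rubric = {"bias", "consent", "dataset", "surveillance"}
print(grade_post("Using this dataset without consent raises surveillance and bias concerns.", rubric))
```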

Sorry this is long and cranky – the use of AI and predictive analytics in academia is something that really pisses me off.

Ugh, I hate when people say this. It’s definitely not true of the students I work with, and I remember reading this piece about the crazy Instagram steganography that teens engage in, where they have, like, global clusters of people on accounts so they can mess with geotagging, and there are tons of other examples of similar things out there.

I think this is absolutely true:

Also, what is this??

The personalized learning experience stuff always seems so… idk, uninformed by actual educators to me. Like, what if there was a way for faculty to easily find open source equivalents to some of the super-pricey resources they’re familiar with and default to assigning because it’s what they know? What if we could use some of this machine learning and AI to try to identify similar resources so we can get out from under the oppressive pricing of academic publishing? It’s too bad to hear this about Canvas because it’s the one LMS that seems to be gaining some traction on our campus (we don’t officially use one, which leads to all kinds of fun inconsistencies and difficult knowledge transfer).

There’s some good news about this stuff from our little corner of the world, which is that I’ve spent some time observing a few classes at Olin where artificial intelligence and algorithms were being presented in a very context-heavy way. I visited the Artificial Intelligence class last semester, and there was a student who did a presentation on generative adversarial networks (the things used to make deepfakes), and they struggled to explain what good a GAN could actually do when a couple of their classmates asked. The classmates went on to do a final project that was basically like, a booklet of gut-checking your personal ethics as an engineer. When they did their final presentation, instead of printing out a poster with complex terminology and data sets, they had one-on-one conversations with visitors about what AI means to them and why they thought it was important, and they essentially did activism to show people the dangers of facial recognition and using images without consent. It was amazing.

In another class, I let students take my picture to include me in a dataset they were using to try to recreate an algorithm, and as part of the process, they asked me how I felt. What was it like to realize that I might not know what would ultimately be done with my photo? I told them that it would have been disturbing if I didn’t know them and feel comfortable and trusting of them. And they noted immediately that this was not a guarantee for people whose faces have been included in other datasets. So, I’m saying this because these kinds of moments have made me feel good not only about my job choice but also about the future of engineering and the people who will have this powerful tech in their hands. At least some of 'em might save us from ourselves. :slight_smile:

Last thought here and then I’ll shut up - I think the understanding of AI in libraryland is really lacking, and I’ve seen a lot of presentations about “the future of AI in libraries lololol” that are, well, about the present, and they present a lot of things that I don’t feel particularly moved by, or that I think undermine other issues we confront in the field. For instance, there are a few AI-driven reference bots out there that keep coming up in these talks. Does anyone really think that libraries of the future will be staffed by expensive reference robots? This seems like tilting at windmills in a way that directs us away from the real threat, which is austerity cutting positions and closing the library for good.

And for folks working on those technologies, what is the goal? The automation that makes sense to me in libraries is in physical components, like circulation - but that is an argument that needs to be carefully made. In my mind, the idea would be to stop repetitive stress injuries and minimize the panic of having to do five things at once at the circ desk, freeing people up to pay fuller attention and provide better service to patrons. But that might not be how other directors or administrators see it at all - that could be a dollar-signs-in-the-eyes moment for them for position cuts.

“how they can be more successful” I am so interested in a breakdown of what is meant by this.

Completely agree. If anything, I think young people might feel more despair and resignation about it, since they didn’t necessarily experience the early days that the rest of us had where we actually trusted these companies for a time.

That’s right…what’s the real goal with all this personalization? Identifying the “ideal” student with all of its problematic connotations, and automating away most of the duties of the teacher so that eventually we can rationalize making those teachers into independent contractors, if we have any remaining jobs for them at all.

god that is REALLY sad, like it’s bad enough for them under the current conditions that they can’t fully recognize what’s coming

yep and worse yet it’ll be marketed as “deracialized” or something, objective and unfeeling deterministic data

that’s what we’re here for!!!

totally, I feel like young people have an instinctive sense of threat modeling because they are used to trying to hide things from parents/principals etc

it’s software developers and private equity guys, a winning combo

these are great stories, and I think it underscores the importance of what we’re doing here. even if the biggest influence we can have in our institutions is introducing some pushback, and some critical information into the conversation, that’s how you get culture changed. the AI ethics discussion is the perfect example of this – it wasn’t happening just a couple of years ago. but lots of critical voices have changed that.

yes, and here’s our opportunity. the info is lacking and the convo in favor of it is only just starting. so this is why we start submitting our critical conference talks and showing up to the favorable presentations with tons of questions.

right, with a lot of consideration of potential unintended consequences, which I think we tend to be bad at in libraries (and really as humans).


What stood out was the discussion of where the data for AI is collected, because that is what has always made me wary of any AI system. AI could be such an awesome thing to use if only big companies wouldn’t rush out products in the fastest, cheapest ways possible. Because of this push for products, it’s impossible to trust that any content you post online is safe for anyone involved. I’m sure a lot of us have dealt with this feeling when using social media to promote library events. We had a family reading program for our community, and of course we took a lot of pictures. I posted one to my Twitter account and sent pictures to other college employees to post to their social media accounts. Everyone who participated signed a photo release, but is that enough anymore? Should there be a section on the form that includes a warning about the possibility of companies collecting these photos without consent? I guess it’s just really hard to figure out where your responsibility for other people’s data starts and stops.

Right, what does informed consent even mean in this kind of environment? So given that, what I’m interested in is – how can we talk to our communities about how big of an issue this is? One thing I’ve been thinking about is how around the holidays we help with consumer electronics purchases. What would a critical guide to these devices look like? How would it incorporate their many issues with privacy, consent, and consumption?

I’m a big fan of book/reading groups as the first step towards consciousness raising around these issues. I think they work especially well in public libraries because you don’t run into the contingent of coworkers saying, “Well, Harold just wants to check his email, he doesn’t want to be proselytized at.” I’m not minimizing the point those people have; it’s just the #1 thing I see stopping a broader role for libraries doing this kind of work. If you make it an opt-in kind of thing and decouple it from the more immediate task/desire at hand, I think you can still be effective.

I was just admiring the Black & Pink prison abolition curriculum and how METRO, the library collective in NYC, has set up a discussion group around this. A cool LFP or adjacent project could be setting up curricula similar to what we do for LFI and have patrons join for some or all of a series discussing these issues. Obviously, what you do in terms of action after consciousness raising would still need to be addressed, but I think libraries can at least play a role in educating and informing people in hopes of spurring that action–after all, where else would most people get this kind of opportunity?

SHUT UP KAREN, HAROLD WANTS TO TALK ABOUT THESE THINGS.

lol but for real, I hear you about your colleagues’ concerns, but I think we have a real tendency in libraries to assume that our communities are dumb and disengaged. That kind of assumption is how we end up with lowest-common-denominator programming.

I was SO STOKED to see that METRO had organized that reading group, and I would love to see us come up with something similar. On the wiki is a booklist (other types of media too) that might be helpful in crafting something like this: https://libraryfreedom.wiki/html/public_html/index.php/Main_Page/Reading_List

In terms of getting people to take action after the political education part, I think getting in the habit of having “next steps” for people is great – I like starting with a couple of small things (sign this petition, call this legislator, donate to this group) all the way up to other more involved forms of engagement.


What really gets me about this kind of thinking (this kind of thinking being “OK, let’s automate as much as possible”) is that it makes me wonder what anyone who is involved in these systems (owners, creators, instructors) thinks education is or is for. To my mind, and in my approach as an educator (I think most public-facing librarians are, in some way, educators at least part of the time), education at its best is a transformative process that results in permanent changes in how a person (an “educated” person) sees, processes, and understands the world and any new information that comes in about it. Interactions with humans in relationship with each other are pretty essential to this transformative process of education. Reading, writing, conversation, feedback.

Using AI and ML to make instructors more “productive,” and students more “successful” seems nonsensical if you view the purpose of education as I do (and as I’m sure many of you do!).

This is exactly it Eliza! And this gets us back to much bigger themes, like the discussion we had about safety. If what actually makes us safe is community, then whose idea was it to make “safety” about cameras on all our homes and cops with military equipment? Likewise education, which is meant to be all the things that you put so beautifully, cannot be achieved with some bullshit “personalized” learning experience, so what IS it for? In both cases, it’s for turning a profit for someone who has no interest in community safety or education, and as a means of social control that that very same class can exert over the rest of us. Capitalism knows no bounds.