Here’s a good article about the rise of surveillance in schools. It is rather alarming and I’m hoping the community isn’t too indoctrinated with security theater to fight it:
ayy cool, I came here to post the same thing.
I was going to post this in conjunction with this article – https://www.nature.com/articles/d41586-019-01679-5 – since they both bring up this idea of data collection for social good. It makes me wonder if there is any arena of social services in which spending money on data is the most effective use of that money. Do you have any strong feelings on this? I imagine the idea comes from watching the sciences and the health industry rely heavily on data for their progress. One quote from the Nature article stuck out to me: “Is there no way around understanding how isolated refugees are besides using an invasive technique to track people through mobile technology? Another way to find out whether refugees are isolated would be to ask them questions, which allows them to decide what to share.” I feel the same thought applies to the idea of preventing school shootings with personal data. As in, the idea is a stupid, overcomplicated waste of money that will not contribute anything meaningful to a solution.
For me, it’s a question of trust. I think of something like what Jason Griffey is doing, where he tracks movements in libraries but anonymizes the information, so it’s really only bodies passing through a space (Deleuze article, anyone?).
Mobile tracking would be less of an issue if we believed it to be anonymous. I mean we have to do the same thing with our collections to evaluate them, but as long as that information is anonymized, it’s better.
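As a minimal sketch of what that kind of anonymization could look like (this is hypothetical, not Griffey’s actual pipeline or any library vendor’s method): hash identifiers with a throwaway salt before aggregating, so the evaluation data counts bodies and visits without ever storing who anyone was.

```python
import hashlib
import os
from collections import Counter

# Hypothetical sketch: replace raw patron/device IDs with one-way
# pseudonyms before counting usage. The salt is generated fresh per
# reporting period and discarded afterwards, so pseudonyms can't be
# linked back to people or tracked across periods.

def make_anonymizer(salt: bytes):
    """Return a function mapping a raw ID to a salted one-way pseudonym."""
    def anonymize(raw_id: str) -> str:
        return hashlib.sha256(salt + raw_id.encode("utf-8")).hexdigest()[:12]
    return anonymize

def usage_counts(events, anonymize):
    """Count visits per pseudonym; raw IDs never leave this function."""
    return Counter(anonymize(raw_id) for raw_id in events)

salt = os.urandom(16)  # discarded once the report is generated
anon = make_anonymizer(salt)
events = ["patron-1", "patron-2", "patron-1"]
counts = usage_counts(events, anon)
# The report can say "3 visits, 2 unique visitors" without raw IDs.
```

The point is that the statistics you need for evaluation (totals, unique counts) survive, while the identifying detail does not; of course, whether vendors actually do anything like this is exactly the trust problem.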
Do I think they do that? Hell no. I also agree with the questioning approach, but that would probably be more expensive, since it requires more staff hours and is more prone to error (plus you’re still not anonymizing the data).
But I don’t think either option is the right one. For things like refugee crises and school shootings, the answer isn’t tracking or technology at all; it’s understanding the underlying causes of the issue and addressing them, e.g. getting students access to a licensed therapist, or working with tumultuous countries to make them somewhere people want to stay (and I mean this without ANY military intervention; I mean aid, which still has a lot of issues). I wanted to stress that I don’t think technology in these situations is useful, and it’s certainly a waste of capital.
Thanks, I hate it.
No seriously tho, the article is frightening, especially given how widely and broadly these applications search for and record perceived ‘threats’ across the web, and how the strictly textual criteria for ranking threats get no semantic or contextual scrutiny except after the fact. I guess that’s beside the point, given that there’s no evidence this kind of surveillance is effective; in fact, it is itself THE existing threat, not a potential one. The anecdote about a teen emailing a warning on the school’s network and then just listing cuss words so it would get flagged also speaks to how much these systems are being used as a replacement for human involvement in some institutions.
The teen space I work in was just offered Chromebooks as part of a pilot lending program, and I have two ways of thinking about it; can’t seem to cross that water in my head. I mean, screw Google, but these teens also use Google apps for school (and have for years), and literally every time they need to print something or need help with homework, the material is located in a Google app. They have Google accounts made for them when they enroll. Finding an acceptable alternative that’s congruent and privacy-minded might be time-consuming (for them), have a difficult learning curve, potentially affect their grades, etc. We have an on-site laptop lending program since many students do not have their own for various reasons. What to do?
I’d love to hear more about threats to privacy in ed tech (esp. public schools) if we have time during the cohort!
My husband works in a middle school, and it’s the same with the Chromebooks and Google Classroom. His students struggle with the basics of reading comprehension; helping them understand the complexity of technology is not on anyone’s radar. I know very few teachers who feel comfortable with the technology beyond making the apps work. They have so much work to do just to prepare for lessons that they have no time to understand the systems, and they don’t know what they don’t know.
I have a theory on ed tech and the dominance of Google. Google set the base price of ed tech to free. Anyone who wants to develop software for schools is competing with the notion that the best price is “free.” So they can’t charge a lot. And therefore, they can’t pay their developers enough to care about UX, privacy, or sustainability. Those priorities don’t register at all compared to making programs that prepare students to do well on tests. The developers are barely even paid enough to make a functional product. I can’t believe how janky some of those programs are!
Virginia Eubanks’s incredible book Automating Inequality gets into this question a bunch. Her very thorough research comes to the same conclusion you have here: whatever benefits we might get from big-data approaches to social problems, the harms VASTLY outweigh them, and human interaction is actually much more effective at doing the thing in the first place anyway.
week 8: digital redlining is gonna cover this a bunch (this might get moved to a different week but we’re doing it at some point for sure)
week 23 likely, but at least somewhere down towards the end of the schedule: we will have Abigail Goben from the Data Doubles project talk about her work. It will be mostly college-focused, but I think there are a lot of parallels. Someone else to follow for other EdTech stuff is Jessy Irwin.