I think this week’s lecture expanded my understanding of algorithms as well as the data sets used to train them. It seems like developers will use almost anything. YouTube as a data set seems problematic for a variety of reasons. For example, what if you’re feeding a machine a data set full of deepfakes?
Harvesting a data set of human behavior is also straight-up disturbing. If a person can’t truly “learn” from second-hand human experience, how can a machine?
As for algorithm-driven automated systems, I think of streaming media providers, where I think recommendation algorithms are driving the entertainment industry. It seems like a lot of algorithm-driven design is a shortcut around actual research on human behavior, one that can be used to (purposefully?) ignore outliers or anyone who doesn’t fit the mold of developers’ conceptions when designing AI.
As for bringing this information to the public, I think it’s important to tell people that they can’t completely trust machines, AI, or algorithms. These are inherently flawed systems that can do a lot of both help and harm, but it really depends on who controls them, who has access to their data, and what they are used for.
The biggest questions I’m left with after Janus’s presentation are “who is harmed by it, and who benefits from it?” I feel like framing algorithms, or systems built with algorithms, this way is a great way to think critically about their impact. Maybe this would be another way to inform patrons: use these questions to contextualize algorithms when patrons are using library resources.