RightsCon report back

#1

Hi all,

I will be adding notes to this thread during my time at RightsCon. Check out the schedule here, and if there’s anything you think looks particularly interesting that I should go to and report back on, let me know!

0 Likes

#2

Here are some talks I will likely attend tomorrow, taking a break to work on my own talk for Thursday:

https://rightscon2019.sched.com/event/Pvhj/a-warrant-for-your-dna-what-the-golden-state-killer-police-and-genetic-testing-services-share-in-common or https://rightscon2019.sched.com/event/QHT0/getting-a-handle-on-facial-recognition-tech

https://rightscon2019.sched.com/event/Pvf1/tumblr-porn-sex-work-and-queer-lives-on-a-healthy-internet-what-happens-when-a-platform-decides-to-kill-communities or https://rightscon2019.sched.com/event/PvjZ/if-you-keep-suggesting-blockchain-i-swear-to-god-i-will-fing-scream

2 Likes

#3

Lots of really good presentations at this. Also: Tunisia!
Been interested in memes for library stuff lately so the Defense Against the Dark Arts session sounds interesting!
Also:

This one sounds good too, as I am an emotional worker and feel that the emotional climate in a library is an important component in driving library use. Our cities, towns, and schools place a lot of trust in libraries to do the right thing when it comes to providing relevant collections and services, but also a ‘safe’ environment to work in. With the internet running right through our libraries in almost all areas, it’s important to understand the fears, worries, and threats facing our communities so we can better address them in our purchasing, policy-making, and day-to-day work.

1 Like

#4

The first session I’m in today is this one: https://rightscon2019.sched.com/event/Pvp2/defense-against-the-dark-arts-meme-campaigns-and-propaganda

One of the speakers is my friend Caroline Sinders, who spoke to LFI last year: https://vimeo.com/294490621

The panelists are talking about the limits of content moderation in response to harassment. They’re relating it to older internet tools, like LiveJournal, where much more content moderation was in the hands of the individual user. These controls had the effect of giving better privacy too, by limiting at a granular level who could see certain posts. Content moderation at scale doesn’t work as well, because it doesn’t give users this control and because content moderators often lack the context to understand why something in particular is harassment. Why can’t users have the control?
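To make the LiveJournal comparison concrete, here’s a rough sketch (in Python, with made-up names, not any platform’s actual API) of what user-held, per-post audience controls look like, and why the same mechanism does double duty as privacy and anti-harassment:

```python
# A minimal sketch of LiveJournal-style per-post visibility controls,
# where the author (not the platform) decides who can see a post.
# All names here are hypothetical illustrations.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    body: str
    # The author picks the audience per post: named friend groups,
    # not one global public/private switch.
    allowed_groups: set = field(default_factory=set)
    blocked_users: set = field(default_factory=set)

@dataclass
class Viewer:
    username: str
    groups: set  # groups the post's author has placed this viewer in

def can_view(post: Post, viewer: Viewer) -> bool:
    """Visibility is decided by the author's own rules, not a central moderator."""
    if viewer.username in post.blocked_users:
        return False  # blocking doubles as harassment control
    return bool(post.allowed_groups & viewer.groups)

# The same granular control that shuts out a harasser also limits who
# can see the post at all, i.e. it's a privacy feature too:
post = Post(author="ayesha", body="personal update",
            allowed_groups={"close-friends"}, blocked_users={"known_troll"})
print(can_view(post, Viewer("friend1", {"close-friends"})))      # True
print(can_view(post, Viewer("stranger", set())))                 # False
print(can_view(post, Viewer("known_troll", {"close-friends"})))  # False
```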

They’re talking about this in the context of increased harassment of journalists and marginalized people online. For example, an Egyptian female journalist was trolled by people posting pictures of a woman who looks just like her doing things she wouldn’t do – not wearing hijab, drinking, etc. This played into local conceptions of “honor” and such, and was damaging to her reputation.

The greater context of all this is that the platforms have no incentive to make any of this better, because this is their business model, so the real need is for regulation of the platforms. The panelists are discussing what the right entity for regulation would be (it’s not the US government; it might be the European Parliament).

1 Like

#5

Now I’m at https://rightscon2019.sched.com/event/PvmK/future-proofing-human-rights-documentation-tools-for-protecting-endangered-evidence

Speakers on this panel who’ve spoken at LFI before:

Dia Kayyali from WITNESS: https://vimeo.com/279547809

Harlo Holmes from the Freedom of the Press Foundation: https://vimeo.com/291960653

This panel is thinking about internet ephemerality, preservation, and protection.

I’m only half paying attention to this one while I work (I came to support my friends on this panel).

This was interesting to me – Dia is talking about content moderation and how machine learning algorithms take down citizen journalist reports from conflict areas because the algos flag these stories as “glorifying violent extremism”. So the censorship of citizen journalists in the most vulnerable parts of the world is being done automatically by corporate platforms using criteria determined by the War on Terror. Jillian points out that state actors are rarely affected by this (their violent content is not moderated in this way).
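A toy illustration of why this happens (the model, features, and threshold here are hypothetical, not any platform’s real system): the classifier only sees surface signals, never the purpose of the footage, so documentation and propaganda can be indistinguishable to it:

```python
# Hedged sketch of an automated "violent extremism" takedown pipeline,
# meant only to illustrate the failure mode Dia is describing.

def extremism_score(video: dict) -> float:
    """Stand-in for a proprietary classifier that scores surface signals only."""
    weights = {"depicts_violence": 0.5, "weapons_visible": 0.3, "flagged_insignia": 0.2}
    return sum(w for k, w in weights.items() if video.get(k))

def moderate(video: dict, threshold: float = 0.6) -> str:
    # There is no input anywhere for *why* the footage exists: a war-crimes
    # documentation video and a propaganda video can score identically.
    return "REMOVED" if extremism_score(video) >= threshold else "KEPT"

# A citizen journalist's report from a conflict zone trips the same
# features as the propaganda the filter was built to catch:
report = {
    "depicts_violence": True,
    "weapons_visible": True,
    "purpose": "documenting an airstrike as human rights evidence",  # never consulted
}
print(moderate(report))  # REMOVED
```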

1 Like

#6

The one on the human rights impacts and implications of 5G looks interesting. I’ve heard hype about how 5G will lay the groundwork for more affordable health care for disadvantaged communities (surgical implants transmitting real-time data from patient to doctor, for example). We know networks already throttle competitors (Verizon/Netflix), and now we want to put (literally) life-saving data in the hands of these actors?

1 Like

#7

Well, jet lag got me last night and I couldn’t fall asleep, so I missed the morning sessions. I have some work to do before my panel at 5:15 but I hope to attend the session on indigenous data sovereignty and maybe the one on ranking digital rights.

0 Likes

#8

I was in a discussion about artificial intelligence and informed consent, and the internet is slow here and somehow part of my post got cut off. So I’m editing to add what’s missing.

The panel was sort of…meh. They didn’t have a lot of ideas about what we’re supposed to do to sort this problem out. Below are some notes I took during the session. Sorry about whatever got cut off!

One of the other panelists is now talking about GDPR as a model for reconciling this, but imo that wouldn’t work either, because just getting a nag button asking whether or not you consent IS NOT INFORMED CONSENT. People don’t actually understand what they’re agreeing to.

Now they’re talking about whether consent is even possible when you’re talking about proprietary algorithms making decisions automatically, without transparency. How can you hold that kind of thing accountable? We know more or less what it means to consent to human processing of our data, but that concept doesn’t map cleanly onto machine learning systems.

Another audience participant from Africa is talking about how most people don’t understand how AI works, so he’s advocating for moving from an individualized concept of consent to a collective concept of consent. One of the speakers from the US agrees that the individualized consent idea doesn’t work in the US/Global North either.

I’m thinking about this conversation in the context of LFI and how we can teach people about these high-level technical concepts so that they can even get to a place where it’s possible for them to have informed consent.

Another audience member is talking about how thorny informed consent is when we’re talking about terabytes of data, when everyone is carrying a camera around all the time, and when data about us is being shared on the internet constantly without any consent conversation. They mentioned this in the context of human rights-related data, like sharing videos of police violence or military crackdowns or the like. The intention is good, but the people in the videos are experiencing the worst and most violent moments of their lives. How do we reconcile this? A panelist is responding and saying that one of the solutions is that we have to collect a lot less data. I mean, yeah, but like, where do we start? How do we change the culture? How do we make decisions about what data is worth sharing for the public interest and what isn’t?

Another audience member is rightly pointing out that consent is often sold as individual user control, when really it’s about companies protecting their asses from liability later. They’re asking how we shift to a model where information is freely given because people fully understand what’s at stake and want to share their data anyway.

Oh by the way, on behalf of all of us I yelled at the Facebook representatives earlier and told them to quit their jobs. Still took their free Yubikeys though.

Also this is SUCH the vibe here: https://twitter.com/zeynep/status/1139135978231046144

We have some funding to do some conferences in the future and honestly I think I want to use it to take as many of you to the HOPE conference as possible, cause imo that one is way more interesting (politically and technically) than this one, plus it’s always in NYC so easier to get to. We could submit a really cool talk. Here’s the website from last year: https://www.hope.net/ (next one is in 2020).

2 Likes

#9

At the reference desk, totally laughed aloud at this. Your Twitter play-by-play is great, btw.

1 Like

#10

lol thank you @Steph, I am here to entertain.

my talk went well! it was me and my colleague Gus from Tor, and then some people who…didn’t really agree with us on much. but I got to talk about attacking Facebook, Google, etc. by attacking their capital, and how much organized labor could reshape the tech industry and therefore help us take back our privacy rights, and people seemed into it!!!

1 Like

#11

This morning I’m at this session https://rightscon2019.sched.com/event/Pvg2/the-ghost-in-the-machine-remedy-for-algorithmic-discrimination

So far it’s mostly about algorithmic decision-making in content moderation, and I just don’t care as much about that. Sure, it has important implications, but there are way more important considerations for algorithmic bias that I’d rather be talking about.

Oh god they’re making us do a group exercise. brb.

0 Likes

#12

Okay I left the other session and I’m listening to Zeynep Tufekci’s talk about creating a digital bill of rights.

I missed a lot of it but she’s currently pushing back against the very RightsCon-y anti-regulation line (they loooooooooove making space for private companies at this shindig). She also compared the data industry to the automobile industry, which, yeah. I think it’s more accurate to compare it to the fossil fuel industry.

Question from the audience – someone who is extremely skeptical about another regulatory project, saying that it would be impossible to go bigger than GDPR.

Another question – also skeptical of the idea of a digital bill of rights; says there were attempts to do this ten years ago. Says we already have rights, they’re just not being enforced. How do we get the institutions to use our existing rights to fix the situation on the internet right now – e.g. antitrust laws? How do we enforce the existing laws/rights?

Another question – asking about the energy consumption issues with machine learning. Training one machine learning model can emit as much carbon as five cars over their lifetimes (citation needed but I would believe it).
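(For what it’s worth, the figure people usually cite here comes from Strubell et al.’s 2019 paper “Energy and Policy Considerations for Deep Learning in NLP”, and the arithmetic roughly checks out, treating the numbers as approximate:)

```python
# Back-of-the-envelope check on the "one model = five cars" claim,
# using the commonly cited Strubell et al. 2019 figures (approximate).
car_lifetime_lbs_co2 = 126_000   # average US car over its lifetime, fuel included
nas_training_lbs_co2 = 626_155   # large NLP model trained with neural architecture search
print(nas_training_lbs_co2 / car_lifetime_lbs_co2)  # ~4.97, i.e. roughly five cars
```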

Zeynep – the energy consumption issue is real; burning tons of carbon to target me for ads sounds terrible. In terms of GDPR, I applaud the effort, but anything that puts the onus on the user is going to fail, or at least not address the problem sufficiently. It’s like nutrition labels on food – fine, do it, but the food should not have E. coli. The onus can’t be whether or not I clicked “consent” before having my rights violated.

There’s some background noise (this venue is really open and terrible for sound) so I can’t hear the rest of what she’s saying.

0 Likes