The City of Amsterdam wants a registration requirement for sensors in public space. Starting this autumn, every organisation that places sensors outdoors would have to report where those sensors are located. This is useful for several reasons. First and foremost for the sake of transparency, and to enable an in-depth discussion about the usefulness and necessity of all those sensors around us. And also to see which data, currently collected by private organisations, could potentially be used for a shared public interest as well.

Amsterdam already set an example worth following with the launch of an algorithm register, and this sensor register seems to me an excellent addition.

(Broken) advertising display at Utrecht Central Station, which I photographed in 2018, with an ill-conceived camera in public space to measure attention for the advertisement. Civil resistance taped over the camera.

It’s odd to see how conspiracy fantasies, suspect sources, disinformation and deliberately provocative or even antagonistic wording are on the rise on my LinkedIn timeline.

I first encountered a QAnon account in a comments section last August, but that person was still many steps away in my network. Now I see things popping up from direct connections and their connections. I had assumed that LinkedIn being tied to your professional reputation would go a long way to prevent such things, but apparently not any longer. In some instances, it’s almost as if people don’t realise they’re doing it, a boiling-a-frog effect of sorts.

One person, called out for some under-informed reactionary content with the observation that their own employer has the capabilities and resources to prove them wrong, even responded “leave my employer out of it”. That’s not really possible though, as your employer is in your by-line and accompanies your avatar with every post and comment you make. Seven months after first encountering something like that on LinkedIn, it is now a daily part of my timeline, all coming from my Dutch network and their connections.

LinkedIn is starting to feel as icky as Facebook did three years ago. It makes me wonder how long LinkedIn will remain a viable tool. I don’t think I will be spending much, if any, attention on my timeline moving forward, until the moment LinkedIn is as much a failed social platform as others and it’s time to let go of it completely. That doesn’t mean disengaging from the people in my network, obviously, but it is not at all my responsibility to help LinkedIn reach a certain level of quality of discourse by trying to counteract the muck. I was an early user of LinkedIn (nr. 8730; look at the source of your profile page and search it for ‘member:’ to find your number) in the spring of 2003. I know there’s already a trickle of people leaving the platform, and I wonder when (not if) I’ll fully join them.
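The member-number tip above (searching your profile page’s source for ‘member:’) can be sketched as a small script. This is a hypothetical illustration that assumes you have saved your logged-in profile page’s HTML locally; the exact markup LinkedIn uses may differ or change over time:

```python
import re


def find_member_number(html):
    """Return the first 'member:<digits>' value found in saved profile HTML, or None."""
    match = re.search(r"member:(\d+)", html)
    return match.group(1) if match else None


# Stand-in snippet mimicking what page source might contain:
sample = '<code>{"memberId":"member:8730"}</code>'
print(find_member_number(sample))  # → 8730
```

A lower member number corresponds to an earlier sign-up, which is what makes it a fun marker of early adoption.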

The Open State Foundation and SETUP are launching the SOS Tech Awards, focused on transparency and accountability in the digital society.

The Glass & Black Box Awards are about openness and transparency. The Dode & Levende Mussen (Dead & Living Sparrows) Award is about taking responsibility after technological failures, and the extent to which companies and governments not only offer apologies but actually change their behaviour. The nominees will be announced in the coming weeks. The SOS Tech Awards will be presented on Tuesday 23 March 2021 via a livestream from the central Utrecht Public Library.

(In the interest of transparency: I am a board member of the Open State Foundation.)

My colleagues Emily and Frank have in the past months been contributing our company’s work on ethical data use to the W3C’s Spatial Data on the Web Interest Group.

The W3C now has published a draft document on the responsible use of spatial data, to invite comments and feedback. It is not a normative document but aims to promote discussion. Comments can be filed directly on the Github link mentioned, or through the group’s mailing list (subscribe, archives).

The purpose of this document is to raise awareness of the ethical responsibilities of both providers and users of spatial data on the web. While there is considerable discussion of data ethics in general, this document illustrates the issues specifically associated with the nature of spatial data and both the benefits and risks of sharing this information implicitly and explicitly on the web.

Spatial data may be seen as a fingerprint: for an individual, every combination of their location in space, time, and theme is unique. The collection and sharing of individuals’ spatial data can lead to beneficial insights and services, but it can also compromise citizens’ privacy. This, in turn, may make them vulnerable to governmental overreach, tracking, discrimination, unwanted advertisement, and so forth. Hence, spatial data must be handled with due care. But what is careful, and what is careless? Let’s discuss this.

"Here"
2013 artwork by Jon Thomson and Alison Craighead. Located at the Greenwich Meridian, the sign marks the distance from itself in miles around the globe. Image by Alex Liivet, licensed CC-BY.

In the oil industry it is common to start every meeting with a ‘safety moment’: one of the meeting’s participants shares or discusses something that has to do with a safe work environment. This keeps safety in view of all involved, and helps reduce the number of safety-related incidents in oil companies.

Recently I wondered if every meeting in data-rich environments should start with an ethics moment, where one of the participants raises a point concerning information ethics: a reminder, a practical issue, or something to reflect on before moving on to the next item on the meeting’s agenda. As I wrote in Ethics as a Practice, we have to find a way of positioning ethical considerations and choices as an integral part of professionalism in the self-image of (data-using) professionals. This might be one way of doing that.

I’m participating in IndieWebCamp East 2020. It’s nominally held on the US East Coast, but like everything else, it’s online. The six-hour time difference makes it doable to take in at least part of it.

The first introductory talk today was by David Dylan Thomas, which I thoroughly enjoyed. He’s a content strategist, and took acknowledging the existence of cognitive biases (and the difficulty of overcoming them, even if you try to) as a perspective on content strategy. How do you design to mitigate bias? How do you use bias to design for good? It’s been the basis for his podcast series.

A short, 106-page book based on it was published this fall, and after the talk I bought it and uploaded it to my reader. Looking forward to reading it!