As part of the Techfestival last week, the Copenhagen 150, which this time included me, came together to write a pledge for individual technologists and makers to commit their professional lives to. A bit like a Hippocratic oath, but for creators of all tech. Following up on the Copenhagen Letter, which was a signal, a letter of intent, and the Copenhagen Catalog, which provides ‘white patterns’ for tech makers, this year’s Tech Pledge makes it even more personal.

I pledge

  • to take responsibility for what I create.
  • to only help create things I would want my loved ones to use.
  • to pause to consider all consequences of my work, intended as well as unintended.
  • to invite and act on criticism, even when it is painful.
  • to ask for help when I am uncertain if my work serves my community.
  • to always put humans before business, and to stand up against pressure to do otherwise, even at my own risk.
  • to never tolerate design for addiction, deception or control.
  • to help others understand and discuss the power and challenges of technology.
  • to participate in the democratic process of regulating technology, even though it is difficult.
  • to fight for democracy and human rights, and to improve the institutions that protect them.
  • to work towards a more equal, inclusive and sustainable future for us all, following the United Nations global goals.
  • to always take care of what I start, and to fix what we break.

I signed the pledge. I hope you will do so too. If you have questions about what this means in practical terms, I’m happy to help you translate it to your practice. A likely first step is figuring out which questions to ask of yourself at the start of something new. In the coming days I plan to blog more from my notes on Techfestival, and those postings will say more about various aspects of this. You are also still welcome to sign the Copenhagen Letter, as well as individual elements of the Copenhagen Catalog.

Last week I attended Techfestival in Copenhagen, where I participated in a day-long Public Data Summit. This posting contains thoughts and notes based on some of what was discussed during that summit.

Group work at the Public Data Summit

Martin von Haller Groenbaek (we go back a long way in open data) provided an interesting lightning talk at the start of the Public Data Summit. He posited that in order to realise the full potential of open (government) data, we probably need to be more relaxed about sharing personal data as well.

There is a case to be made, I agree. In the energy transition, for instance, your (real-time) personal electricity use is likely key information. The aggregated yearly usage of you and at least 10 neighbours, which e.g. the Dutch electricity networks make available, is far from useless, but lacks the granularity, the direct connection to real people’s daily lives, to be truly valuable for practical use, as the sketch below illustrates.
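To make that loss of granularity concrete, here is a minimal sketch with entirely made-up numbers (the household names and usage figures are hypothetical, not the actual format the Dutch networks publish): aggregating over a group of connections preserves the total, but erases exactly the per-hour patterns that connect the data to daily life.

```python
from statistics import mean

# Hypothetical hourly usage in kWh for a handful of households.
# In the published open data, only a yearly total aggregated over
# at least 10 connections would be available.
hourly_usage = {
    "household_a": [0.2, 0.1, 0.1, 1.4, 2.1, 0.3],
    "household_b": [0.3, 0.3, 0.2, 0.2, 0.4, 1.8],
    "household_c": [0.1, 0.1, 0.1, 0.1, 0.2, 0.2],
}

# Granular view: per-household, per-hour. Useful for e.g. matching
# consumption to local solar production, but clearly personal data.
peak_a = max(hourly_usage["household_a"])

# Aggregated view: one total over the whole group. Safe to publish,
# but the daily-life patterns are gone.
totals = [sum(hours) for hours in hourly_usage.values()]
group_total = sum(totals)
group_mean = mean(totals)

print(f"household_a peak hour: {peak_a} kWh")  # visible only in granular data
print(f"group total: {group_total:.1f} kWh, mean per household: {group_mean:.2f} kWh")
```

The aggregate tells you how much a neighbourhood consumes; only the granular data tells you when anyone is home, cooking, or charging a car, and that is precisely the information an energy transition use case would need.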

I agree with the notion that more person-related data needs to come into play. Martin talked about it in terms of balancing priorities: if you want to fix climate change, or another key societal issue, personal data protection can’t be absolute.

Now this sounds a bit like “we need your personal data to fight terrorism”, which then gets translated into “give me your fingerprints or your safety and security is compromised”. Yet that is both an edge case and an example of the type of discussion needed to find the balancing point: to avoid false dilemmas, or better yet to prevent reduction to ineffective simplicity (as has happened with terrorism, where all sense of proportionality has been abandoned). The balancing point is where the sweet spot of addressing the right level of complexity lies. In the case of terrorism the personal data sharing discussion is framed as “you’re either with us, or with the terrorists”, to quote Bush jr., a framing in absolutes that invites a cascade of scope creep.

To me this is a valuable discussion to have: determining when and where sharing your personal data is a useful contribution to the common good, or should even be part of a public good. Currently that ‘for the common good’ part is mostly not in play. We’re leaking personal data all over the tech platforms we use, without much awareness of its extent or of how it is being used. We do know these data are not being used for the common good, as that is in no-one’s business model. This public good / common good thinking was central to our group work in the Public Data Summit during the rest of the day too.

Martin mentioned the GDPR as a good thing, certainly for his business as a lawyer, but also as a problematic one. Specifically he criticised the notion of owning personal data, and of being able to trade it as a commodity based on that ownership. I agree, for multiple reasons. One is that a huge amount of my personal data is not directly created or held by me, as it is data about behavioural patterns: where my phone has been, where I used my debit card, the things I click, the time I spend on pages, the thumbprint of my specific browser and plugins setup, etc. The footsteps we leave on a trail in the forest aren’t personal data, but our digital footsteps are, because those tracks persist, and so can be followed back to their source far more easily than in the woods, as well as compared to the other tracks we leave behind.
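To make that browser ‘thumbprint’ concrete, here is a minimal sketch of the general idea behind fingerprinting, with made-up attribute values. Real fingerprinting scripts read many more signals (canvas rendering, installed fonts, and so on); this is an illustration, not any particular library’s method.

```python
import hashlib

# Hypothetical attributes a site can read without asking permission.
browser_traits = {
    "user_agent": "Mozilla/5.0 (X11; Linux x86_64) Firefox/69.0",
    "screen": "2560x1440x24",
    "timezone": "Europe/Amsterdam",
    "plugins": "openh264;widevine",
    "language": "nl-NL",
}

# Concatenate the traits in a fixed order and hash them. None of these
# values identifies anyone on its own, but the combination is often
# near-unique, and it persists across visits and across sites.
fingerprint = hashlib.sha256(
    "|".join(f"{k}={v}" for k, v in sorted(browser_traits.items())).encode()
).hexdigest()

print(fingerprint[:16])  # the same browser setup leaves the same track every time
```

Note that I never ‘created’ or ‘held’ any of these traits as data; they arise from my walking through the digital woods, which is exactly why framing them as tradable personal property is awkward.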

Currently those footsteps in the digital woods are contractualised away into the possession of the private owners of the woods we walk in, i.e. the tech platforms. But there’s a strong communal aspect to your and my digital footsteps as personal data. We need to determine how we can use that collectively, and how to govern that use. Talking about the ownership of data, especially the data we create by being out in the (semi-)public sphere (e.g. on tech platforms), and about the ability to trade it (as Solid suggests), has two effects. First, it bakes in the acceptance that me allowing FB to work with my data is a contract between equal parties (the GDPR rightly tries to address this asymmetry). Aza Raskin mentioned this in his keynote too, saying tech platforms should rather be seen and regulated as fiduciaries, to acknowledge that power asymmetry. Second, it takes the communal part of what we might do with data completely out of the equation. I can easily imagine when and where I’d be ok with my neighbours, my local government, a local NGO, or specific topical/sectoral communities having access to data about me, where that same use by FB et al would not be ok at all under any circumstance.

In the intros to the Public Data Summit, however, civil society was very much absent: there was talk about government and its data, and how the private sector was needed to do something valuable with it. To me, open (e-)government and opening data are very much about allowing the emergence and co-creation of novel public services by government/governance working together with citizens. Things we may not now regard as part of public institutions, structures or the role of government, but that in our digitised world very well could, or even should, be.

Lesson 1 on crisis communication. Come. Clean. Fully. Immediately. Otherwise it is a drip drip drip drip of trust erosion, until everything crumbles. My university friend D did his thesis on crisis comms, in the aftermath of having lived through and closely experienced a crisis in our home town. Just under 20 years on, the lesson is still accurate.

It is additionally hard on those who, based on the limited earlier disclosures, chose to stand with you, if you are then found out not to have disclosed the entire story.

(previously posted on this.)

(ht Ronan Farrow via Hossein Derakhshan)

Read How an Élite University Research Center Concealed Its Relationship with Jeffrey Epstein (The New Yorker)

New documents show that the M.I.T. Media Lab was aware of Epstein’s status as a convicted sex offender, and that Epstein directed contributions to the lab far exceeding the amounts M.I.T. has publicly admitted.

Via Jeannie McGeehan, an interesting read: how do you inoculate against online hate speech?

Read Researchers propose a new approach for dismantling online hate networks (The Verge)

The paper, “Hidden resilience and adaptive dynamics of the global online hate ecology,” explores how hate groups organize on Facebook and Russian social network VKontakte — and how they resurrect themselves after platforms ban them.

Good to see how various strands combine here, apart from the topic itself, which is governance of smart cities. The immediate trigger for Peter Bihr is Toronto’s smart city plan, on his radar as he was recently in Canada; we were both there to visit Peter Rukavina’s unconference. He references how back in 2011 we already touched upon most of the key ingredients, at the Cognitive Cities conference in Berlin, which he organised, and where I spoke. And he mentions doing a fellowship on this very topic for Edgeryders, my favourite community in Europe for these types of issues, and one which I try to support where I can.

Read How to plan & govern a smart city? (The Waving Cat)

What does governance mean in a so-called smart city context? What is it that’s being governed, and how, and maybe most importantly, by whom?