During our work on shaping the Tech Pledge last week, we looked into progress as the Copenhagen Letter mentions it: something different from mere innovation. The Copenhagen Letter was written in 2017 as the output of the same process that has now delivered the Tech Pledge.

Thomas, Techfestival’s initiator, reading the Copenhagen Letter
at the start of this year’s CPH150

Progress is not a fixed point on the horizon, we said. What counts as progress shifts, with public opinion, with newly emerging ideas of what is good, just, true and beautiful, and with the movement of time itself. When the first steam engines appeared, their plumes of smoke heralded a new age that drove industrialisation and nation forming and, through railroads, changed what cities were for and how city and countryside relate to each other. Steam still drives the turbines at the heart of most electricity plants, but progress has moved on from the steam engine.
We also realised that progress has no fixed and static definition, and so we are free to fill it with whatever definition we think fits the context we are looking at.

In terms of technology, then, progress is a motion, a process, and in our group we defined it as (new) technology plus a direction or sense of purpose. Technology here, to me at least, means not just ‘hard tech’ but ‘soft tech’ too: our methods, processes and organisational structures are technology just as much as fountain pens, smartphones and particle accelerators.

So we named a number of elements that fit this understanding of progress as a process and a search for direction.

  • It is part of human nature to strive for change and progress, even if not every individual in every context and time is so inclined. This desire to progress is more about setting a direction than about a fixed end state. Hence our working definition of “(new) technology with intended direction”.
  • We need to be accountable for how anything we make fits the intended direction, and also for whether it externalises human or ecological costs or conflicts with our natural systems, as these are often ignored consequences.
  • We recognise that direction may get lost or end up in need of change; in fact, we realise that anything we make is largely out of our control once released into the world.
  • So we pledge to continuous reflection on the direction our tech is taking us in practice. Not just during its development or launch, but throughout its use.
  • Whether we would want to use the tech we created ourselves, or see our loved ones use it, is a smell test: if it doesn’t pass our emotional response, something is off.
  • What doesn’t pass the smell test needs to be explored and debated.
  • We have a civic duty to organise public debate about the value and direction of our tech, right alongside the tech itself. Not just at the start of making it, but throughout its life cycle. If we make something, we also need to organise the continuous debate around it, to keep a check on its societal value and utility and to actively identify unintended consequences.
  • If our tech is no longer fit for purpose or takes an unintended turn, we have a duty to actively adapt it and/or publicly denounce the aspects of our tech that have turned detrimental.

Working on the pledge

Regardless of what the Tech Pledge says in addition to this, this reflective practice is worthwhile in itself for me: continuously stimulating the debate around what you make, as part and parcel of the artefacts you create. This is not new to me; it’s at the core of the unconferences we organise, where lived practice, reflection and community-based exploration are the basic ingredients.

To me, what is key in the discussions we had is that this isn’t about all tech in general, though anyone is welcome to join any debate. This is about having the moral and civic duty to actively create public debate around the technology you make and have made. You need to feel responsible for what you make from inception to obsolescence, just as you always remain a parent to your children, regardless of their age and choices as adults. The connection to self, the emotional anchoring of this responsibility, is the crucial thing here.

So there I was on a rainy Copenhagen evening, in a room with 149 colleagues for 24 hours, nearing midnight, passionately arguing that technologists need to internalise and own the reflection on the role and purpose of their tech, and not leave it as an exercise to the academics in the philosophy of technology departments. A duty to organise the public debate about your tech alongside the existence of the tech itself. If your own tech no longer passes your own smell test, then actively denounce it. I definitely felt that emotional anchoring I’m after, right there and then.

As part of Techfestival last week, the Copenhagen 150, which this time included me, came together to write a pledge for individual technologists and makers to commit their professional lives to. A bit like a Hippocratic oath, but for creators of all tech. Following up on the Copenhagen Letter, which was a signal, a letter of intent, and the Copenhagen Catalog, which provides ‘white patterns’ for tech makers, this year’s Tech Pledge makes it even more personal.

I pledge

  • to take responsibility for what I create.
  • to only help create things I would want my loved ones to use.
  • to pause to consider all consequences of my work, intended as well as unintended.
  • to invite and act on criticism, even when it is painful.
  • to ask for help when I am uncertain if my work serves my community.
  • to always put humans before business, and to stand up against pressure to do otherwise, even at my own risk.
  • to never tolerate design for addiction, deception or control.
  • to help others understand and discuss the power and challenges of technology.
  • to participate in the democratic process of regulating technology, even though it is difficult.
  • to fight for democracy and human rights, and to improve the institutions that protect them.
  • to work towards a more equal, inclusive and sustainable future for us all, following the United Nations global goals.
  • to always take care of what I start, and to fix what we break.

I signed the pledge. I hope you will do so too. If you have questions about what this means in practical terms, I’m happy to help you translate it to your practice. A likely first step is figuring out which questions to ask of yourself at the start of something new. In the coming days I plan to blog more from my notes on Techfestival, and those postings will say more about various aspects of this. You are also still welcome to sign the Copenhagen Letter, as well as individual elements of the Copenhagen Catalog.

Since the summer I have been holding three related questions. They all concern what role machine learning and AI could fulfil for an individual or in an everyday setting. Everyman’s AI, so to speak.

The first question is a basic one, looking at your house and immediate surroundings:

1: What autonomous things would be useful in the home, or your immediate neighbourhood?

The second question is a more group- and community-oriented one:

2: What use can machine learning have for civic technology (tech that fosters citizens’ ability to do things together, to engage, participate, and foster community)?

The third question is perhaps a more literary one, an invitation to explore, to fantasise:

3: What would an “AI in the wall” of your home be like? What would it do, or want to do? What would you have it do?

(I came across an ‘AI in the wall’ in a book once, but it resided in the walls of a pub. Or rather, it ran the pub. Being in a public place allowed it to interact in many ways in parallel, so as not to get bored.)

This, from Wendy Grossman, hits the nail quite precisely on the head.

“The problem isn’t privacy,” the cryptography pioneer Whitfield Diffie said recently. “It’s corporate malfeasance.”

This is obviously right. Viewed that way, when data profiteers claim that “privacy is no longer a social norm”, as Facebook CEO Mark Zuckerberg did in 2010, the correct response is not to argue about privacy settings or plead with users to think again, but to find out if they’ve broken the law.

I think I need to make this into a slide for my stock slide deck. It’s also, I think, why the GDPR focuses on data protection and the lawful basis for data usage, not on privacy as such.

(Do add Wendy Grossman’s blog net.wars to your feedreader.)

Read net.wars: Hypothetical risks

Kicks Condor dives deeply into my info-strategy postings and impressively reads them all as the whole they form (with my post on feed reading by social distance as the starting point). It’s a rather generous gift of engagement and attention. There are lots of different things to respond to, neurons firing, and tangents to explore. Here are some elements with a first reaction.

Knowing people is tricky. You can know someone really well at work for a decade, then you visit their home and realize how little you really know them.

Indeed, when I think of ‘knowing someone’ in the context of information strategies, I always do so as ‘knowing someone within a specific context’. Sort of what Jimmy Wales said about Wikipedia editors a long time ago: “I don’t need to know who you are” (i.e. full name, identity and background), but I do need to know who you are on Wikipedia (the pattern of edits, consistency in behaviour, style of interaction). Because Wikipedia, which is much less a crowdsourced thing than an editorial community, is the context that counts for him. Time is another factor I feel is important: it is hard to maintain a false or limited persona consistently over a long time. So blogs that go back years are likely to show a pretty good picture of someone, even if the author aims to stick to a narrow band of interests. My own blog is a case in point. (I once landed a project where at first the client was hesitant, doubting whether what I said was really me or just what they wanted to hear. After a few meetings everything was suddenly in order: “I’ve read your blog archives over the weekend and now know you’ll bring the right attitude to our issue.”) When couch surfing was a novel thing, I made having been blogging for at least a year or two a precondition to use our couch.

I wonder if ‘knowing someone’ drives ‘social distance’—or if ‘desire to know someone’ defines ‘social distance’. […] So I think it’s instinctual. If you feel a closeness, it’s there. It’s more about cultivating that closeness.

This sounds right to me. It’s my perceived social distance or closeness, so it’s my singular perspective, a one-way estimate. It’s not an estimate or measure of a relationship, but more one of felt kinship from one side; indeed intuitive, as you say. Instinct and intuition, hopefully fed with a diet of decent info, are our internal black-box algorithm. Cultivating closeness seems a worthwhile aim, especially when the internet allows you to do so with others than just those who happened to be in the same geographic spot you were born into. Escaping the village you grew up in for the big city is the age-old way both to discover and to actively choose who you want to get closer to. Blogs are my online city, or rather my self-selected personal global village.

I’m not sure what to think about this. “Neutral isn’t useful.” What about Wikipedia? What about neighborhood events? These all feel like they can help—act as discovery points even.

Is the problem that ‘news’ doesn’t have an apparent aim? Like an algorithm’s workings can be inscrutable, perhaps the motives of a ‘neutral’ source are in question? There is the thought that nothing is neutral. I don’t know what to think or believe on this topic. I tend to think that there is an axis where neutral is good and another axis where neutral is immoral.

Responding to this is a multi-headed beast, as there’s a range of layers and angles involved. Again, a lot of this is context. Let me try to unpick a few things.

First, it goes back to the point before it: filters in a network (yours, mine) that overlap create feedback loops that lift patterns above the noise. News, as the pretence of neutral reporting of things happening, breaks that. There won’t be any potential overlap between me and the news channel as filters, so no feedback loops. And it purports to lift something from the background noise as signal without an inkling as to why, or because of what, it does so. Filtering needs signifying of stories. Why are you sharing this with me? Your perception of something’s significance is my potential signal.
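The mechanism described above can be sketched in a few lines of code. This is purely illustrative: the function name, the weights and the toy feed are my own assumptions, not anything from the actual info-strategy postings. It shows the two points at once: items shared by several people whose stated reasons overlap with my interests get reinforced (the feedback loop), while items shared without any signification score nothing.

```python
# Hypothetical sketch of social filtering with signification.
# All names, weights and data here are illustrative assumptions.
from collections import Counter

def rank_items(shared_items, my_interests):
    """shared_items: list of (person, item, why_shared) tuples, where
    why_shared is the sharer's stated significance, or None for a
    'neutral' item shared without any signification.
    my_interests: a set of words describing what matters to me."""
    scores = Counter()
    for person, item, why_shared in shared_items:
        if why_shared is None:
            continue  # no signification: 'neutral' carries no signal
        # Each sharer whose stated reason overlaps my interests adds
        # weight, so items surfaced by several overlapping filters
        # rise above the background noise.
        overlap = len(my_interests & set(why_shared.split()))
        scores[item] += 1 + overlap
    return [item for item, _ in scores.most_common()]

feed = [
    ("alice", "post-on-civic-tech", "community tools worth copying"),
    ("bob", "post-on-civic-tech", "great community example"),
    ("bob", "breaking-news-item", None),  # news without signification
    ("carol", "random-link", "funny"),
]
print(rank_items(feed, {"community", "tools"}))
```

The item shared twice, with reasons that overlap my interests, ranks first; the unsignified news item never surfaces at all, however 'important' the channel thinks it is.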

There is a distinction between news (breaking: something happened!) and (investigative) journalism (let’s explore why this is, or how this came to be). Journalism is much closer to storytelling. Your blogging is close to storytelling. Stories are vehicles of human meaning and signification. I do follow journalists. (For journalism to survive, it likely needs to let go of ‘news’. News is a format, and one that no longer serves journalism.)

Second, neutral can be useful, but I wrote that neutral isn’t useful in a filter, because it either carries no signification, or worse, signification that has been purposefully hidden or left out. Wikipedia isn’t neutral, not by a long shot, and it is extensively curated, the traces of which are all on deliberate display around the eventually neutrally worded content. Factual and neutral are often taken to be the same, but they’re different, and I think I prefer factual. Yet we must recognise that a lot of things we call facts are temporary placeholders (the scientific method is more about holding questions than definitive answers), socially constructed agreements, settled-upon meanings, and often laden with assumptions and bias. (E.g. I learned in Dutch primary school that Belgium seceded from the Netherlands in 1839; Flemish friends learned Belgium did so in 1830. It took the Netherlands nine years to reconcile itself with what happened in 1830, yet that 1839 date was still taught in school as a singular fact 150 years later.)
There is a lot to be said for aiming to word things neutrally, and then wording the felt emotions and carried meanings alongside. Loading the wording of things themselves with emotions and dog whistles is the main trait of populist debate methods, allowing every response to such emotion to be parried with ‘I did not say that’ and finger pointing at the emotions triggered in the responder (‘you’re unhinged!’).

Finally, I think a very on-point remark is hidden in footnote one:

It is very focused on just being a human who is attempting to communicate with other humans—that’s it really.

Thank you for this wording. That’s it. I’ve never worded it this way for myself, but it is very much to the point. Our tools are but extensions of ourselves, unless we let them get out of control and outgrow us. My view on technology, as well as on methods, is that we must keep it close to humanity, keep driving humanity into it, and not abstract it so that we become its object instead of its purpose. As the complexity in our world is rooted in our humanity as well, I see keeping our tech human as the way to deal with complexity.