Kicks Condor dives deeply into my info-strategy postings, impressively reading them all as the whole they form (with my post on feed reading by social distance as the starting point). It’s a rather generous gift of engagement and attention. Lots of different things to respond to, neurons firing, and tangents to explore. Here are first reactions to some elements.

Knowing people is tricky. You can know someone really well at work for a decade, then you visit their home and realize how little you really know them.

Indeed, when I think of ‘knowing someone’ in the context of information strategies, I always do so as ‘knowing someone within a specific context’. Sort of like what Jimmy Wales said about Wikipedia editors a long time ago: “I don’t need to know who you are” (i.e. full name and identity, full background), but I do need to know who you are on Wikipedia (the pattern of edits, consistency in behaviour, style of interaction). Wikipedia, which is much less a crowdsourced thing than an editorial community, is the context that counts for him. Time is another factor I feel is important: it is hard to maintain a false or limited persona consistently over a long time. So blogs that go back years are likely to show a pretty good picture of someone, even if the author aims to stick to a narrow band of interests. My own blog is a case in point. (I once landed a project where at first the client was hesitant, doubting whether what I said was really me or just what they wanted to hear. After a few meetings everything was suddenly in order: “I’ve read your blog archives over the weekend and now know you’ll bring the right attitude to our issue.”) When couch surfing was a novel thing, I made having been blogging for at least a year or two a precondition for using our couch.

I wonder if ‘knowing someone’ drives ‘social distance’—or if ‘desire to know someone’ defines ‘social distance’. […] So I think it’s instinctual. If you feel a closeness, it’s there. It’s more about cultivating that closeness.

This sounds right to me. It’s my perceived social distance or closeness, so it’s my singular perspective, a one-way estimate. It’s not an estimation or measure of the relationship, more one of felt kinship from one side, indeed intuitive as you say. Instinct and intuition, hopefully fed with a diet of ok info, are our internal black box algorithm. Cultivating closeness seems a worthwhile aim, especially when the internet allows you to do so with others than those who just happened to be in the same geographic spot you were born into. Escaping the village you grew up in for the big city is the age-old way of both discovering and actively choosing who you want to get closer to. Blogs are my online city, or rather my self-selected personal global village.

I’m not sure what to think about this. “Neutral isn’t useful.” What about Wikipedia? What about neighborhood events? These all feel like they can help—act as discovery points even.

Is the problem that ‘news’ doesn’t have an apparent aim? Like an algorithm’s workings can be inscrutable, perhaps the motives of a ‘neutral’ source are in question? There is the thought that nothing is neutral. I don’t know what to think or believe on this topic. I tend to think that there is an axis where neutral is good and another axis where neutral is immoral.

Responding to this is a multi-headed beast, as there’s a range of layers and angles involved. Again, a lot of this is about context. Let me try to unpick a few things.

First, it goes back to the earlier point that overlapping filters in a network (yours, mine) create feedback loops that lift patterns above the noise. News, pretending to be neutral reporting of things happening, breaks that: there won’t be any potential overlap between me and the news channel as filters, so no feedback loops. And it purports to lift something from the background noise as signal without giving an inkling of why, or because of what, it does so. Filtering needs stories to be signified. Why are you sharing this with me? Your perception of something’s significance is my potential signal.
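To make that filtering-by-overlap idea a bit more concrete, here is a minimal toy sketch in Python. It is purely illustrative and not anything described in the posts themselves: the source names, closeness weights and threshold are all assumptions. Items shared by several sources score higher the closer I perceive those sources to be, and only items lifted above a threshold surface as signal; a ‘neutral’ news channel enters with no closeness weight and so adds nothing to the feedback loop.

```python
# Toy sketch of filtering by overlapping, socially close sources.
# All names, weights and the threshold below are illustrative assumptions.
from collections import defaultdict

# Perceived closeness per source (my one-way, intuitive estimate), 0..1.
closeness = {
    "alice_blog": 0.9,    # someone I feel close to
    "bob_blog": 0.6,
    "news_channel": 0.0,  # 'neutral' source: no overlap with my filter, no weight
}

# Which items each source has shared (their act of signifying a story).
shared = {
    "alice_blog": {"item_a", "item_b"},
    "bob_blog": {"item_a", "item_c"},
    "news_channel": {"item_a", "item_d"},
}

def surface(closeness, shared, threshold=1.0):
    """Score items by the summed closeness of the sources sharing them."""
    scores = defaultdict(float)
    for source, items in shared.items():
        for item in items:
            scores[item] += closeness.get(source, 0.0)
    # Only items lifted above the noise threshold count as signal.
    return sorted((i for i, s in scores.items() if s >= threshold),
                  key=lambda i: -scores[i])

print(surface(closeness, shared))  # ['item_a'] — the overlap between close filters
```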

There is a distinction between news (breaking: something happened!) and (investigative) journalism (let’s explore why this is, or how this came to be). Journalism is much closer to storytelling. Your blogging is close to storytelling. Stories are vehicles of human meaning and signification. I do follow journalists. (Journalism, to survive, likely needs to let go of ‘news’. News is a format, one that no longer serves journalism.)

Second, neutral can be useful, but I wrote that neutral isn’t useful in a filter, because it either carries no signification, or, worse, signification that has been purposefully hidden or left out. Wikipedia isn’t neutral, not by a long shot, and it is extensively curated, the traces of which are all on deliberate display around the eventually neutrally worded content. Factual and neutral are often taken as the same, but they’re different, and I think I prefer factual. Yet we must recognise that a lot of things we call facts are temporary placeholders (the scientific method is more about holding questions than about definitive answers), socially constructed agreements, settled-upon meaning, and often laden with assumptions and bias. (E.g. I learned in Dutch primary school that Belgium seceded from the Netherlands in 1839, while Flemish friends learned Belgium did so in 1830. It took the Netherlands nine years to reconcile itself with what happened in 1830, yet that 1839 date was still taught in school as a singular fact 150 years later.)
There is a lot to be said for aiming to word things neutrally, and then wording the felt emotions and carried meanings alongside. Loading the wording of things themselves with emotions and dog whistles is the main trait of populist debate methods: it allows every response to such emotion to be parried with ‘I did not say that’ and finger-pointing at the emotions triggered in the responder (‘you’re unhinged!’).

Finally, I think a very on-point remark is hidden in footnote one:

It is very focused on just being a human who is attempting to communicate with other humans—that’s it really.

Thank you for this wording. That’s it. I’ve never worded it this way for myself, but it is very much to the point. Our tools are but extensions of ourselves, unless we let them get out of control, let them outgrow us. My view on technology, as well as on methods, is that we must keep them close to humanity, keep driving humanity into them, not abstract them so that we become their object instead of their purpose. As the complexity in our world is rooted in our humanity as well, I see keeping our tech human as the way to deal with complexity.

Via Iskander Smit an interesting editorial on practices in digital ethics landed in my inbox: Translating Principles into Practices of Digital Ethics: Five Risks of Being Unethical, by Luciano Floridi, director of the Digital Ethics Lab at the Oxford Internet Institute.

It lists five groups of practices that subvert (by being distracting or destructive) the actual application of ethical principles in digital technology. They are very recognisable from ethical discussions I’ve been in or have witnessed. The paper also provides some pointers on how to address them.
I will list and quote the five practices, and add a sixth that I come across regularly in my own work. Together they are, so to speak, digital ethics dark patterns:

  1. ethics shopping: “the malpractice of choosing, adapting, or revising (“mixing and matching”) ethical principles, guidelines, codes, frameworks, or other similar standards (especially but not only in the ethics of AI), from a variety of available offers, in order to retrofit some pre-existing behaviours (choices, processes, strategies, etc.), and hence justify them a posteriori, instead of implementing or improving new behaviours by benchmarking them against public, ethical standards.”
  2. ethics bluewashing: the malpractice of making unsubstantiated or misleading claims about, or implementing superficial measures in favour of, the ethical values and benefits of digital processes, products, services, or other solutions in order to appear more digitally ethical than one is.
  3. ethics lobbying: the malpractice of exploiting digital ethics to delay, revise, replace, or avoid good and necessary legislation (or its enforcement) about the design, development, and deployment of digital processes, products, services, or other solutions. (can you say big tech?)
  4. ethics dumping: the malpractice of (a) exporting research activities about digital processes, products, services, or other solutions, in other contexts or places (e.g. by European organisations outside the EU) in ways that would be ethically unacceptable in the context or place of origin and (b) importing the outcomes of such unethical research activities.
  5. ethics shirking: the malpractice of doing increasingly less “ethical work” (such as fulfilling duties, respecting rights, and honouring commitments) in a given context the lower the return of such ethical work in that context is mistakenly perceived to be.
To this I want to add a sixth, based on observations in my work in various organisations and on what pops up ethics-related in my feed reader:

  6. ethics futurising: the malpractice of discussing ethics, and even hiring ethics advisors, only for technology and processes that are ten years into the future for the organisation in question, e.g. AI ethics in a company that as yet has nothing to do with AI, while that same ethical soul-searching is not applied to currently relevant and used practices, technologies or processes. It has an element of ethics bluewashing in it (pattern 2, being seen as ethical rather than being ethical), but there is something else at play as well: a blind spot for ethical questions being relevant to current digital practices, tech choices, processes and e.g. data collection methods, the assumption being that current practice is fine as it is.

I find this sixth one both distracting and destructive: it lets an organisation believe it is on top of digital ethics issues, while it is all unrelated to its core activities. As a result, staff currently involved in real questions are left to their own devices, which means that, for instance, data protection officers are lonely figures, often all but ignored by their organisations until a (legal) issue arises.

(SimCity 2000, adapted from an image by m01229, CC-BY)

Came across an interesting article, and by extension the techzine it was published in: Logic.
The article was about the problematic biases and assumptions in the model of urban development used in the popular game SimCity (one of those time sinks where my 10,000 hours brought me nothing 😉 ), and about how that may unintentionally (the SimCity creator just wanted a fun game) have influenced how people look at the evolution of cityscapes in real life, in ways the original 1960s work the game is based on never did. The article is a fine example of cyber history / archaeology.

The magazine it was published in, Logic (twitter), started in the spring of 2017 and is now reaching issue 7. Each issue has a specific theme around which contributions are centred. Intelligence, Tech against Trump, Sex, Justice, Scale, Failure, Play and, soon, China have been the topics so far.

The zine is run by Moira Weigel, Christa Hartsock, Ben Tarnoff, and Jim Fingal.

I’ve ordered the back issues, and subscribed (though technically it is cheaper to keep ordering back-issues). They pay their contributors, which is good.


Cover for the upcoming edition on tech in China. Design (like all design for Logic) by Xiaowei R. Wang.

At Open Belgium 2019 today, Daniel Leufer gave an interesting session on bringing philosophy and technology closer together. He presented the Open Philosophy Network as an attempt to bring philosophical questions into tech discussions while avoiding a) the overly abstract work going on in academia, and b) discussions where not all stakeholders are at the table in an equal setting. He aims at local gatherings and events, such as a book reading group on Shoshana Zuboff’s The Age of Surveillance Capitalism, or tech-ethics round table discussions where there isn’t a panel of experts being interviewed, but where philosophers, technologists and people who use the technology are all part of the conversation.

This resonated with me on various levels. One level is that I recognise a strong interest in naive explorations of ethical questions around technology. For instance, at our Smart Stuff That Matters unconference last summer, ethical discussions emerged naturally in various conversations from the actual context of the session and the event.
Another is that, unlike in some of the academic efforts I know, many expect and need the step towards practical applicability to come sooner. In the end it all has to inform actions and choices in the here and now, even when nobody expects definitive answers. It is also why I dislike how many ethical discussions pretending to be action-oriented are primarily connected to future or emerging technologies, not to current technology choices. Then it’s just a fig leaf for inaction, and it removes agency. I’m more of a pragmatist, interested in what achieves actual improvements in the here and now, and in what increases agency.
Thirdly, I felt that there are many more connections to make in terms of open session formats, such as Open Space, knowledge cafés, blogwalks and barcamps, and indeed the living room experience of our birthday unconferences. I’ve organised many of those, and I feel the need to revisit those experiences and think about how to deploy them for something like this. This also applies to formulating a slightly more structured approach to assist groups in organisations with naive ethical explorations.

The point of ethics is not to provide definitive answers, but to prevent us from using terrible answers.

I hope to interact a bit more with Daniel Leufer in the near future.

Kars Alfrink pointed me to a report on AI ethics by the Nuffield Foundation, lifted a specific quote from it, and added:

Good to see people pointing this out: “principles alone are not enough. Instead of representing the outcome of meaningful ethical debate, to a significant degree they are just postponing it”

This postponing of things is something I encounter all the time. In general I feel that many organisations which claim to be looking at the ethics of algorithms, algorithmic fairness etc., currently don’t actually have anything to do with AI, ML or complicated algorithms. To me it seems they do it to place the issue of ethics well into the future, at that as yet unforeseen point when they will actually have to deal with AI and ML. That way they avoid having to look at the ethics of, and de-bias, their current work: how they now collect and process data, and the governance processes they have.

This is not unique to AI and ML though. I’ve seen it happen with open data strategies too, where the entire open data strategy of, for instance, a local authority was based on working with universities and research entities to figure out how data might play a role decades from now. No energy was spent on how open data might be an instrument in dealing with actual current policy issues. Looking at future issues is a fig leaf for not dealing with current ones.

This is qualitatively different from e.g. what we see in the climate debates, or with smoking, where there is a strong current to deny the very existence of issues. In this case it is more about being seen to solve future issues, so no-one notices you’re not addressing the current ones.