Today is Global Ethics Day. My colleague Emily wanted to mark it, given our increasing involvement in information ethics, and organised an informal online get-together, a Global Ethics Day party, with a focus on data ethics. We showed the participants our work on an ethical reference for using geodata, and the thesis work our newest colleague Pauline finished this spring on ethical leadership within municipal governments. I was asked to kick the event off with some remarks to spark discussion.

I took the opportunity to define and launch a new moniker, Ethics as a Practice (EaaP).(1)
The impulse to do that comes from two things I have a certain dislike for in how I see organisations deal with the ethics of working with data, and of using data to directly inform decisions.

The first concerns treating the philosophy of technology, and information and data ethics in general, as a purely philosophical and scientific debate. Due to that abstraction, it then has no immediate bearing on the things organisations, I, and others do in practice. Worse, it regularly approaches actual problems purely from that abstraction, ending up posing ethical questions I think are irrelevant to the reality on the ground. An example is MIT’s notion that classical trolley problems have bearing on how to create autonomous vehicles. That seems to me to stem from not appreciating that saying ‘autonomous vehicle’ does not mean the vehicle is an independent actor to which blame etc. can be assigned: ‘autonomous’ merely means that a vehicle is independent of its previous driver, but otherwise fully embedded in a wide variety of other dependencies. Not autonomous at all, no ghost in the machine.


The campus of the University of Twente, where they do some great ethics work w.r.t. technology. But in itself it’s not sufficient. (image by me, CC BY SA)

The second concerns seeing ‘ethics by design’ as a sufficient fix. I dislike that because it carries two assumptions that are usually not acknowledged. Ethics by design in practice seems to be perceived as ethics being a concern only in the design phase of a new technology, process, approach or method. Whereas at least 95% of what organisations and professionals deal with isn’t new but existing, and as a result remains out of scope of ethical considerations. It assumes that everything that already exists has been thoroughly ethically evaluated, which isn’t true, not at all, even when it comes to existing data collection. Ethics has no role at all in existing data governance, for instance, and data governance usually doesn’t cover data collection choices or deletion/archiving.
The other assumption conveyed by the term ‘ethics by design’ is that once the design phase is completed, ethics has been sufficiently dealt with. The result is, with 95% of our environment remaining the same, that ethics by design is forward looking but not backwards compatible. Ethics by design is seen as doing enough, but it isn’t enough at all.


Ethics by design in itself does not provide absolution (image by Jordanhill School D&T Dept, license CC BY)

Our everyday actions and choices in our work are the expression of our individual and organisational values. The ‘ethics by design’ label sidesteps that everyday reality.

Taken together, ethics as an academic endeavour and ethics by design result in ethics basically being outsourced to someone specific outside or inside the organisation, or at best to a specific person in your team, and it starts being perceived as something external delivered to your work reality. Ethics as a Service (EaaS), one might say: a service that takes care of the ethical aspects. That perception means you yourself can stop thinking about ethics, it’s been allocated, and you can just take its results and run with them. The privacy officer does privacy, the QA officer does quality assurance, the CISO does information security, and the ethics officer covers everything ethical… meaning I can carry on as usual. (E.g. Enron had a Code of Ethics, but it had no bearing on the practical work or the decisions taken.)

That perception of EaaS, ethics as an externally provided service to your work, has real detrimental consequences. It easily becomes an outside irritant to the execution of your work. Someone telling you ‘no’ when you really want to do something. A bureaucratic template to fill in to be able to claim compliance (similar to how privacy, quality and regulations are often treated). Ticking the boxes on a checklist without actual checks. That way it becomes something overly reductionist, which denies and ignores the complexity of everyday knowledge work.


Externally applied ethics become an irritant (image by Iain Watson, license CC BY)

Ethical questions and answers are actually an integral part of the complexity of your work. Your work is the place where clear boundaries can be set (by the organisation, by general ethics, law), ánd the place where you can notice as well as introduce behavioural patterns and choices. Complexity can only be addressed from within that complexity, not as an outside intervention. Ethics therefore needs to be dealt with from within the complexity of actual work and as one of the ingredients of it.

Placing ethics considerations in the midst of the complexity of our work means that the spot where ethics are expressed in real work choices overlaps with where such aspects are considered. It makes EaaS as a stand-alone thing impossible, and instead brings those considerations into your everyday work, not as an external thing but as an ingredient.

That is what I mean by Ethics as a Practice. Where you use academic and organisational output, where ethics is considered in the design stage, but never to absolve you from your professional responsibilities.
It still means setting principles and hard boundaries from the organisational perspective, but also an ongoing active reflection on them and on the heuristics that guide your choices, and it actively seeks out good practice. It never assumes a yes or no to an ethical question by default, later to be qualified or rationalised, but also does not approach those questions as neutral (as existing principles and boundaries are applied).(2) That way (data) ethical considerations become an ethics of your agency as a professional, informing your ability to act. It embraces the actual complexity of issues, acknowledges that daily reality is messy, engages all relevant stakeholders, and deliberately seeks out a community of peers to spot good practices.

Ethics is part and parcel of your daily messy work, it’s your practice to hone. (image by Neil Cummings, license CC BY SA)

Ethics as a Practice (EaaP) is a call to see yourself as an ethics practitioner, and a member of a community of practice of such practitioners, not as someone ethics ‘is done to’. Ethics is part and parcel of your daily messy work, it’s your practice to hone. Our meet-up today was a step to have such an exchange between peers.

I ended my remarks with a bit of a joke, saying EaaP is so you can always ‘do the next right thing’, a quote from a Disney movie my 4-year-old watches, and added a photo of a handwritten numbered list headed ‘things to do’ that I had visibly altered so it became a ‘right things to do’ list.

(1) the ‘..as a Practice’ notion I took from Anne-Laure Le Cunff’s Ness Labs posting that mentioned ‘playfulness as a practice’.
(2) not starting from yes or no, nor from a neutral position, taken from the mediation theory by University of Twente’s prof Peter Paul Verbeek

This is definitely a word I’ll remember: data visceralisation.
The term is suggested for data visualisation in virtual reality, so that people can better experience differences in data, and understand them viscerally.

It is something I think is definitely useful, not just in virtual reality but also in making data visualisation physical, which I called ‘tangible infographics’ in 2014. You switch the perspective to one or more other senses, thus changing the phenomenological experience, which can yield new insights.

In both, tangible infographics and data visceralisation, the quest is to let people feel the meaning of certain datasets, so they grasp that meaning in a different way than with the more rational parts of their mind. (Hans Rosling’s toilet paper rolls to convey global population developments come to mind too).

Benjamin Lee et al. wrote a paper and released a video exploring a number of design probes. I’m not sure I find the video, uhm, a visceral experience, but the experiments are interesting.

They look at 6 experimental probes:

  1. speed (Olympic sprint)
  2. distance (Olympic long jump)
  3. height (of buildings)
  4. scale (planets in the solar system)
  5. quantities (Hong Kong protest size)
  6. abstract measures (US debt)

The authors point to something that is also true for the examples of 3D-printed statistics I mentioned in my old blog post: they are much less useful with ‘large numbers’, because the objects would become unwieldy or lose meaning. There is therefore a difference between the first three examples, which are all at human scale, and the other three, which aim to convey something that is (much) bigger than us and our everyday sense of our surroundings. That carries additional hurdles to making them ‘visceral’.

(Found in Nathan Yau’s blog FlowingData)

Recently Stephen Downes linked to an article on the various levels of sophistication of AI personal assistants (by … and …). He added that, while all current efforts are at the third of those five levels, he sees a role in education for such assistants only once level 4 or higher is available (not now the case).

AI assistants maturity levels

Those five levels mentioned in the article are:

  1. Notification bots and canned pre-programmed responses
  2. Simple dialogues and FAQ style responses. All questions and answers pre-written, lots of ‘if then’ statements in the underlying code / decision tree
  3. More flexible dialogue, recognising turns in conversations
  4. Responses are shaped based on retained context and preferences stored about the person in the conversation
  5. An AI assistant can monitor and manage a range of other assistants set to do different tasks or parts of them

I fully appreciate how difficult it is to generate natural sounding/reading conversation on the fly when a machine interacts with a person. But what stands out to me in the list above, and in the surrounding difficulties, is something completely different. First, the issues mentioned centre on processing natural language as a generic thing to solve ‘first’. Second, while the article refers to an AI-based assistant, the approach is that of a generic assistant put to use in 1-on-1 situations (and a number of them in parallel), whereas the human expectation at the other end is that of receiving personal assistance. It’s the search for the AI equivalent of the help desk and call center worker. There is nothing inherently personal in such assistance; it’s merely 1-on-1 provided assistance. It’s a mode of delivery, not a description of the qualitative nature of the assistance as such.

Flip the perspective to personal

If we start reasoning from the perspective of the person receiving assistance, the picture changes dramatically. I mostly don’t want to interact with the AI shop assistants or help desk algorithms of the various services or websites I use. I would want my own software-driven assistant, which then goes and interacts with those websites. I as a customer have no need or wish to automate the employees of the shops / services I use; I want to reduce my own friction in making choices and putting those choices into action. I want a different qualitative nature of assistance, not a 1-on-1 delivery mode.

That’s what a real PA does too: it is someone assisting a single person, a proxy employed by that person, not employed by whomever the PA interacts with on the assisted person’s behalf.
What is mentioned above only at level 4, retained context and preferences of the person being assisted, then becomes the very starting point. Context and preferences are the default inputs. A great PA comes to know the person assisted deeply over time, and anticipates friction to take care of.

This allows the lower levels in the list above, 1 and 2, the bots and pre-programmed canned responses and actions, to be a lot more useful. Because, apart from our personal preferences and the contexts each of us operates in, the things we do based on those preferences and contexts are mostly very much the same. Most people use a handful of the same functions, for the same purpose, at the same time of day on their smart speakers, for instance, which is a tell. We mostly have the same practices and routines, which shift slowly with time. We mostly choose the same thing in comparable circumstances, etc.

Building narrow band personal assistants

A lot of the tasks I’d like assistance with can be well described in terms of ‘standard operating procedures’, and can be split up into atomic tasks. Atomic tasks you can string together.
My preferences and contextual deliberations for a practice or task can be captured in a narrow set of parameters that serve as input for those operating procedures / tasks.
Put those two things together and you have the equivalent of a function that you pass a few parameters. Basically you have code.
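As a minimal sketch of that idea (every name and parameter here is hypothetical, purely to illustrate): capture standing preferences for one kind of task as a small parameter set, and write the atomic task itself as a plain function that you pass those parameters.

```python
from dataclasses import dataclass

@dataclass
class HotelPreferences:
    """Hypothetical parameter set: my standing deliberations for one task."""
    max_price: float       # nightly price ceiling
    min_rating: float      # quality threshold
    max_walk_minutes: int  # acceptable walk to the event location

def filter_hotels(hotels, prefs):
    """Atomic task: apply my standing preferences to a list of options."""
    return [
        h for h in hotels
        if h["price"] <= prefs.max_price
        and h["rating"] >= prefs.min_rating
        and h["walk_minutes"] <= prefs.max_walk_minutes
    ]

# Preferences in, shortlist out: a function passed a few parameters.
prefs = HotelPreferences(max_price=120.0, min_rating=8.0, max_walk_minutes=15)
hotels = [
    {"name": "A", "price": 95, "rating": 8.4, "walk_minutes": 10},
    {"name": "B", "price": 150, "rating": 9.1, "walk_minutes": 5},
    {"name": "C", "price": 80, "rating": 7.2, "walk_minutes": 12},
]
shortlist = filter_hotels(hotels, prefs)  # only "A" passes all three checks
```

The point is not the specific task but the shape: the preferences are the narrow band of inputs, the function is the standard operating procedure, and several such functions can be strung together.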

Then we’re back to automating specific tasks and setting the right types of alerts.

Things like:

  * When I have a train trip scheduled in the morning, I want an automatic check for disturbances on my route when I wake up, and at regular intervals until 20 minutes before the train leaves (which is when I get ready to leave for the railway station).
  * I want my laptop to open a specific workspace set-up if I open it before 7:00, and a different one when I re-open it between 08:30 and 09:00.
  * When planning a plane trip, I want an assistant that asks me what, given my schedule, would be a reasonable time to arrive at the airport for departure and when I need to be back, and that already knows my preferences w.r.t. event times and time zone differences for spending a night at the destination before or after a commitment.
  * Searching for a hotel with filter rules based on my standard preferences (location vis-à-vis the event location and public transport, quality, price range), or, simpler yet, rebooking a hotel from a list of previous good experiences after checking that e.g. the price range hasn’t shifted upward too much.
  * A preference for direct flights, and for specific airlines (and particular airlines in the case of certain clients), etc. Although travel in general isn’t a priority now, obviously.
  * When I start a new project, I want an assistant to ask a handful of questions and then arrange the right folder structure, some populated core notes, planned review moments, and task lists populated with the first standard tasks.
  * I only need the rain radar forecast around the start and finish of my daughter’s school day, and when my preferred transport mode for an appointment is the bicycle.
  * For the half dozen voice commands I use most, I might consider Mycroft on a local system, foregoing the silos.
  * Keeping track of daily habits, asking me daily reflection questions. Etc.
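The morning train check is a good example of how simple such a task can be once it only has to work for me: an atomic helper plus a hard-coded trigger rule (a hypothetical sketch, not an existing tool; the 20-minute cut-off is my own routine from above).

```python
import datetime

def minutes_until(departure, now):
    """Atomic helper: minutes from now until the scheduled departure."""
    return (departure - now).total_seconds() / 60

def should_check_disturbances(departure, now):
    """Trigger rule: keep checking from waking up until 20 minutes before
    the train leaves, which is when I get ready to head to the station."""
    return minutes_until(departure, now) > 20

# A morning with an 08:15 train:
departure = datetime.datetime(2020, 10, 21, 8, 15)
still_checking = should_check_disturbances(departure, datetime.datetime(2020, 10, 21, 6, 30))  # True
stopped = should_check_disturbances(departure, datetime.datetime(2020, 10, 21, 8, 0))          # False
```

Everything variable about it (my route, my wake-up window, my cut-off) is a parameter; the rest is a dumb script that a scheduler can run at intervals.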

While all this sounds difficult if you wanted to create it as generic functionality, it is in fact much simpler when building it for one specific individual. And it won’t need mature AI natural conversation, merely a pleasantly toned interaction surface that triggers otherwise hard-coded automated tasks and scripts. The range of tasks might be diverse, but the range of responses and preferences to take into account is narrow, as it only needs to pertain to me. It’s a narrow band digital assistant; it’s the small tech version.

Aazai

For some years I’ve dubbed the bringing together of the individual automation tasks I use into one interaction flow, as a personal digital assistant, ‘Aazai’ (a combination of my initials A.A.Z. with AI, where the AI isn’t AI of course, but merely has the same intention as what is generally being attempted with AI). While it mostly doesn’t currently exist as a single thing, it is the slow emergence of something that reduces digital friction throughout my day, and shifts functionality along with shifts in my habits and goals. It is a stringed-together set of automated things arranged in the shape of my currently preferred processes, which allows me to reduce the time spent consciously adhering to a process. Something that is personal and local first, and the atomic parts of which can be shared, or are themselves re-used from existing open source material.

Frank Meeuwsen’s exploration of the world of newsletters surfaced this interesting conversation with Steve Lord. Steve writes a twice monthly newsletter The Dork Web about tech subcultures. What stood out to me was this statement:

COVID and climate change impacts will drive the creation of new subcultures. Two areas I know of are radio and self-sufficiency. COVID taught us how brittle our supply chains are. Climate change and de-globalization will exacerbate that. Global demand for online amateur radio exams far outstrips supply. I imagine many readers will have at least tried to bake this year. Some will try to go back to their lives as they were before. Some will keep baking, growing food, staying on the air. These people will build the subcultures of the 2020s.

Wait, what? “Global demand for online amateur radio exams far outstrips supply”? Steve’s remark on the, let’s call them, ‘selective pressures’ of the pandemic and the climate emergency on tech subculture development sounds likely. But a rise in demand for amateur radio jumped out at me. I am very curious where that observation comes from.

The recent Dork Web issue Propaganda, Pirates and Preachers: The Weird Wide Web Of Shortwave Radio is full of interesting links to follow, and I assume Lord came across the interest in ham radio exams in the course of researching that edition.

I became fascinated with short wave radio in my pre-teens, and involved with ham radio by the end of primary school (my dad saw my short wave listening efforts and introduced me to a colleague of his who had a radio license). The original promise of short wave and ham radio to me, looking back now, was that the technology mediated access to information (short wave) and brought novel connections (ham radio). I was too young then to get an operator’s license, but got my license after I entered university (and was on the board of the university’s still existing ham radio club), now just over 30 years ago.

I also encountered the internet at university, at the end of the 1980s, and that eroded my interest in ham radio, as it enabled both access to information and new connections on such a different scale, and in such a more effective way, compared to ham radio.

Ultimately, while the technology is fascinating, there is not much actual agency in ham radio. You can connect to other people, but such connections are scattered, unpredictable if not random, and it doesn’t enable you to do things other than explore the fascination of the tech itself (much like metablogging actually I must say), with others.

There are of course edge use cases where ham radio does provide immediate agency, namely in the case of large scale emergencies. Situations where regular communications are sure to break down under demand (mobile phone networks are the first to falter when everyone really wants to make a call…). Having let my radio license lapse with time, I renewed it 3 years ago, and I do have VHF/UHF ‘walkie talkie’ style transceivers handy for just such a scenario. My callsign now is the same as it was 30 years ago: PE1NOR.

More recently, IoT developments and e.g. LoRaWAN also use radio in an agency-inducing way. I run a LoRaWAN gateway, allowing any radio-enabled IoT device in the area to connect through it to the internet, so the IoT device can reach the database its operator wants the collected data to end up in. And I have a sensor kit in the garden that uses that gateway to send temperature and humidity measurements to a city-wide citizen science network, the results of which are used in our city’s climate adaptation efforts.

So if, as Steve Lord suggests, “Global demand for online amateur radio exams far outstrips supply” and is feeding into new tech subcultures, I’m curious. Curious to see how it might find new ways of providing agency.

I’ve subscribed to The Dork Web, not as a newsletter, but through its RSS feed. I’m more of an RSS guy than a newsletter reader. Sorry, Frank! 😉

Following the political turmoil in Kyrgyzstan with interest, the only proper, but still fragile, democratic republic in Central Asia. I worked in Kyrgyzstan for a few years, 2014-2016, and met a people fiercely proud of their democracy. A democracy that is not easy to maintain in a country where poverty is significant (22% below the poverty line last year), and where Soviet-era aspects still echo in the legal framework and in some attitudes towards power.

We worked on using open data to overcome some of those hurdles, and I encountered highly motivated people everywhere: from the then prime minister and the state secretary for economic affairs, members of parliament, and officials in data-holding government institutions, to local IT companies, a struggling free press clamoring for access and transparency, NGOs, and all the way to local primary school teams wanting to use open data to better show parents which schools still have space for more pupils in their free lunch provision programs (remember, poverty). All of those I met want Kyrgyzstan to be better. To function better and more equally, to reduce corruption, to provide agency to people, to provide better public services, to get out of poverty.

From afar it seems they are at a new inflection point on their still young path of democracy. Reading the headlines, I think of the many people I met, and of their energy and intentions. I just got a message from Kiva that I have room to provide more micro credits again, and will, as I frequently do for countries I’ve worked in, spend it on supporting underbanked (budding) entrepreneurs and students in Kyrgyzstan.

And then I think of the fragility of democracies elsewhere, here in and adjacent to the EU.

Bookmarked Sp’ákw’us: ways of seeing (chriscorrigan.com)
As part of the course, we are invited to articulate takeaways and giveaways, naming the gifts received and how we will offer gifts as a result. This cycle of reciprocity is essential.

Juxtaposing takeaways and giveaways, as Chris Corrigan relates here, strikes me as such a strong and beautiful shorthand to use in the future. I’m currently doing a program with new colleagues about networking, in which I’ve said networking is about learning things, and about sharing and giving things, so that people can see you and think of you when you may be of value to them in addressing some need.