Recently Stephen Downes linked to an article on the various levels of sophistication of AI personal assistants (by … and …). He added that while all current efforts are at the third of those five levels, he sees a role in education for such assistants only once level 4 or higher is available (which is not now the case).

AI assistant maturity levels

Those five levels mentioned in the article are:

  1. Notification bots and canned pre-programmed responses
  2. Simple dialogues and FAQ style responses. All questions and answers pre-written, lots of ‘if then’ statements in the underlying code / decision tree
  3. More flexible dialogue, recognising turns in conversations
  4. Responses are shaped based on retained context and preferences stored about the person in the conversation
  5. An AI assistant can monitor and manage a range of other assistants set to do different tasks or parts of them

I fully appreciate how difficult it is to generate natural sounding/reading conversation on the fly when a machine interacts with a person. But what stands out to me in the list above, and in the difficulties surrounding it, is something completely different. First, the issues mentioned are centered on processing natural language as a generic thing to solve ‘first’. Second, while the article refers to an AI based assistant, the approach is that of a generic assistant put to use in 1-on-1 situations (and a number of them in parallel), whereas the human expectation at the other end is that of receiving personal assistance. It’s the search for the AI equivalent of a help desk or call center employee. There is nothing inherently personal in such assistance; it’s merely assistance provided 1-on-1. It’s a mode of delivery, not a description of the qualitative nature of the assistance as such.

Flip the perspective to personal

If we start reasoning from the perspective of the person receiving assistance, the picture changes dramatically. I mostly don’t want to interact with the AI shop assistants or help desk algorithms of the various services and websites I use. I would want my own software-driven assistant, which then goes and interacts with those websites. As a customer I have no need or wish to automate the employees of the shops and services I use; I want to reduce my own friction in making choices and putting those choices into action. I want a different qualitative nature of assistance, not a 1-on-1 delivery mode.

That’s what a real PA does too: it is someone assisting a single person, a proxy employed by that person, not by whomever the PA interacts with on the assisted person’s behalf.
What is mentioned above only at level 4, retained context and preferences of the person being assisted, then becomes the very starting point. Context and preferences are the default inputs. A great PA comes to know the person assisted deeply over time, and anticipates friction to take care of.

This allows the lower levels in the list above, 1 and 2, the bots and pre-programmed canned responses and actions, to be a lot more useful. Because apart from our personal preferences and the contexts each of us operates in, the things we actually do based on those preferences and contexts are mostly very much the same. Most people, for instance, use a handful of the same functions on their smart speakers, for the same purposes, at the same times of day, which is a tell. We mostly have the same practices and routines, which shift slowly over time. We mostly choose the same thing in comparable circumstances, etc.

Building narrow band personal assistants

A lot of the tasks I’d like assistance with can be well described in terms of ‘standard operating procedures’, and can be split up into atomic tasks. Atomic tasks you can string together.
My preferences and contextual deliberations for a practice or task can be captured in a narrow set of parameters that can serve as input for those operating procedures / tasks.
Put those two things together and you have the equivalent of a function that you pass a few parameters. Basically you have code.
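
As a minimal sketch of that idea, assuming hypothetical task and preference names (none of this is a real API, just the shape of atomic tasks plus personal parameters):

```python
# A minimal sketch of 'atomic tasks plus personal parameters'.
# All function names, preference keys and values are hypothetical
# illustrations of the shape of the idea, not a real API.

MY_PREFS = {
    "home_station": "Amersfoort Centraal",
    "leave_margin_minutes": 20,
}

def check_route_disturbances(origin, destination):
    """Atomic task: return reported disturbances on a rail route.
    Stub: a real version would query a railway operator's API."""
    return []  # pretend the route is clear

def notify(message):
    """Atomic task: push a message to wherever I read alerts."""
    print(message)

def morning_train_check(prefs, destination):
    """A 'standard operating procedure': atomic tasks strung
    together, with personal preferences passed as parameters."""
    issues = check_route_disturbances(prefs["home_station"], destination)
    if issues:
        notify("Disturbances on your route: " + ", ".join(issues))
    else:
        notify("Route clear, leave %d minutes before departure."
               % prefs["leave_margin_minutes"])

morning_train_check(MY_PREFS, "Utrecht Centraal")
```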

Then we’re back to automating specific tasks and setting the right types of alerts.

Things like:

  • When I have a train trip scheduled in the morning, I want an automatic check for disturbances on my route when I wake up, and at regular intervals until 20 minutes before the train leaves (which is when I get ready to leave for the railway station).
  • I want my laptop to open a specific workspace set-up if I open it before 7 am, and a different one when I re-open it between 08:30 and 09:00 (see the sketch after this list).
  • When planning a plane trip, I want an assistant that asks me for the considerations in my schedule: what would be a reasonable time to arrive at the airport for departure, and when I need to be back. I want it to already know my preferences for various event times and time zone differences w.r.t. spending a night before or after a commitment at the destination.
  • Searching for a hotel with filter rules based on my standard preferences (location vis-a-vis the event location and public transport, quality, price range), or simpler yet, rebooking a hotel from a list of previous good experiences, after checking that e.g. the price range hasn’t shifted upward too much. A preference for direct flights, and for specific airlines (and specific airlines in the case of certain clients), etc. Although travel in general obviously isn’t a priority right now.
  • When I start a new project, I want an assistant to ask a handful of questions, then arrange the right folder structure, populate some core notes, plan review moments, and fill task lists with the first standard tasks.
  • I only need to know the rain radar forecast around my daughter’s school start and finish times, and for appointments where my preferred transport mode is the bicycle.
  • For the half dozen most used voice commands I might consider Mycroft on a local system, foregoing the silos.
  • Keeping track of daily habits, asking me daily reflection questions. Etc.
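
A minimal sketch of that workspace rule, as referenced in the list above (the script name and layout labels are hypothetical placeholders):

```python
# Hypothetical sketch: choose a workspace set-up based on when the
# laptop is opened. This could run as a login or wake-from-sleep hook.
# 'workspace.sh' is a placeholder for whatever opens the apps/windows.

from datetime import datetime, time
import subprocess

def open_workspace():
    now = datetime.now().time()
    if now < time(7, 0):
        layout = "early-morning"      # e.g. journal and news feeds
    elif time(8, 30) <= now <= time(9, 0):
        layout = "start-of-workday"   # e.g. mail, task list, calendar
    else:
        return  # no rule for this time of day: do nothing
    subprocess.run(["sh", "workspace.sh", layout])

open_workspace()
```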

While all this sounds difficult if you wanted to create it as generic functionality, it is in fact much simpler when building it for one specific individual. And it won’t need mature AI natural conversation, merely a pleasantly toned interaction surface that triggers otherwise hard coded automated tasks and scripts. The range of tasks may be diverse, but the range of responses and preferences to take into account is narrow, as it only needs to pertain to me. It’s a narrow band digital assistant; it’s the small tech version.
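
And a sketch of what such an interaction surface could look like, assuming simple keyword matching is enough for an audience of one (all command phrases and scripts are made-up examples):

```python
# Sketch of a narrow band assistant: a pleasantly toned front end
# over hard coded scripts. Keyword matching suffices because the
# vocabulary only has to cover one person's habits.
# All command phrases and scripts here are made-up examples.

import subprocess

COMMANDS = {
    "morning workspace": ["sh", "workspace.sh", "early-morning"],
    "train check": ["python3", "morning_train_check.py"],
}

def handle(utterance):
    for phrase, script in COMMANDS.items():
        if phrase in utterance.lower():
            subprocess.run(script)
            return "Done, I've started '{}' for you.".format(phrase)
    return "Sorry, that's not something I've been taught yet."

print(handle("Could you set up my morning workspace?"))
```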

Aazai

For some years I’ve dubbed the bringing together of the set of individual automation tasks I use into one interaction flow, as a personal digital assistant, ‘Aazai’ (a combination of my initials A.A.Z. with AI, where the AI isn’t AI of course, but merely has the same intention as what is being attempted with AI generally). While it mostly doesn’t currently exist as a single thing, it is the slow emergence of something that reduces digital friction throughout my day, and shifts functionality along with the shifts in my habits and goals. It is a strung-together set of automated things arranged in the shape of my currently preferred processes, which allows me to reduce the time spent consciously adhering to a process. Something that is personal and local first, and the atomic parts of which can be shared, or are themselves re-used from existing open source material.

I’ve read three books by Linda Nagata these past days. The Last Good Man was the final one. Set in a near future, it explores what can remain of a warrior code, duty and honor, in an age of AI-run autonomous drones of all sizes and for all purposes. Who’s the last good man standing? The takeaway largely is that such warfare can turn basically any spot on the globe into a hot zone, with humans as mere backdrop and collateral damage. It also raises the question of what, when a monopoly on military power is a dimension of national sovereignty, the dissolving of that monopoly will look like. It’s set mostly in the Middle East and Morocco, with Burma and the Philippines as additional settings.

Having finished it last night, I came across a pointer by Bruce Sterling this morning to an article titled Drones, Deniability, and Disinformation: Warfare in Libya and the New International Disorder, which describes precisely how all of this already plays out in Libya, although currently without much AI or autonomous platforms. It is also a worthwhile reminder of John Robb’s Brave New War, which I read in 2012.

Bruce Sterling takes this quote from the article, about drone weapons, but it applies even more to the disinformation efforts also mentioned.

Armed drones embody a trend toward military action that minimizes the risks and costs to the intervening powers, thereby encouraging them to meddle in conflicts where no vital interests are at stake

Read Escaping Skinner's Box: AI and the New Era of Techno-Superstition
One of the things AI will do is re-enchant the world and kickstart a new era of techno-superstition. If not for everyone, then at least for most people who have to work with AI on a daily basis. The catch, however, is that this is not necessarily a good thing. In fact, it is something we should worry about.

A good presentation I attended this afternoon at World Summit AI 2019. Will blog about it, but bookmarking it here for now.

Since the summer I have been holding three related questions. They all concern what role machine learning and AI could fulfil for an individual, or in an everyday setting. Everyman’s AI, so to speak.

The first question is a basic one, looking at your house, and immediate surroundings:

1: What autonomous things would be useful in the home, or your immediate neighbourhood?

The second question is a more group and community oriented one:

2: What use can machine learning have for civic technology (tech that fosters citizens’ ability to do things together, to engage, participate, and foster community)?

The third question is perhaps more a literary one, an invitation to explore, to fantasise:

3: What would an “AI in the wall” of your home be like? What would it do, want to do? What would you have it do?

(I came across an ‘AI in the wall’ in a book once, but it resided in the walls of a pub. Or rather, it ran the pub. Its being a public place allowed it to interact in many ways in parallel, so as not to get bored.)

Notes on reading Novacene by James Lovelock (2019)

Definition of life: entities that reduce entropy, as they organise their environment

I knew his 1970s Gaia Theory, but remembered it mostly as a type of systems thinking and seeing earth as a complex system. But he adds something key:

In earth’s case the purpose of the system is to keep earth cool, to keep temperatures at 15C average. And do so as our sun slowly heats up.

A startling assumption to me is that earth really is not in the Goldilocks zone, but Mars is. We would be hot like Venus if not for entropy-reducing earth life. That life continuously draws down heat.
Furthermore, to an alien observer earth would not look cool but much hotter, because it is continuously dumping solar heat.

The sun is heating up, and so therefore is earth. Keeping cool is our prime directive. The climate emergency is making it worse, and burning fossil fuels (stored heat from the past) should stop.

The Anthropocene started with the steam engine, when humans could influence their environment on a global scale. The Novacene is the coming age of AI.
The optimal temperature ranges for electronics and for life are similar, and life and AI share the same hard upper temperature limit of 47C.
Above that we get a runaway process towards becoming like Venus.

AI will not deliberately kill us, because it needs the world to stay cool under a heating sun, and carbon-based life is needed for that. AI will supplant us through evolution, slowly rather than suddenly, as evolution moves beyond us, as it always would.

Interesting notion: AI might become a million times faster than us, but it is bound by the same physics we are. That means, for example, that its travel will happen at roughly the same speed as ours.
Which will be a million times as boring and slow to AI as it is to us.

It makes for a good caveat: AI would need to start its evolution from ‘good’ beginnings, e.g. not from autonomous weapons platforms.
Yet it is precisely in civil applications such as aviation that we put hard constraints on AI, while we do not on military AI, making it more likely it will evolve from there.

My takeaway from this is how to use AI for civic tech, and set it free as it were, with a sense of communal values. Including with a sense of the Prime Directive to keep cool.

That, I think, is a core flaw in Lovelock’s reasoning. Yes, the Prime Directive is to keep cool, not only because of our self-created heating, but mostly because of the sun’s heating.
But how many humans are aware of this, and of those, how many care enough to act, given that the timescale of the sun’s heating is millions of years?
How will we make AI aware, and will it care where we do not, given that its relative timescale is up to a million times longer still?

He stresses the notion of the engineer and of artisanal engineering, where knowing how to make things work is a priori more important than knowing why they work.
This also ties into his notion that intuition is key for engineering, while the scientific method of standing on the shoulders of others is better suited to the ‘know why’.

Some of my takeaways:

  • When increasing the abundance of life helps keep the planet cool, greening your urban living environment makes sense on a deeper level than just cooling the city. Also because cities are an efficient way to house us humans at our current numbers.
  • How to use ML for civic tech, for networked agency
  • How to explore ML, what it currently does, what it can do, areas of issues it could be used in.
  • What autonomous things would be valuable in the home, neighbourhood, city.
  • What would an “AI in the wall” be like?

The P2P Foundation reposts an article by Jeremy Lent from late 2017 on how corporations are artificial intelligences.

It doesn’t mention Brewster Kahle’s 2014 exploration of the same notion.
SF writer Charlie Stross referenced Kahle when he called corporations a 19th century form of ‘slow AI’, because “corporations are context blind, single purpose algorithms”.

I like that positioning of organisations as slow AI and single purpose algorithms. For two reasons.
First, because it refocuses us on the fact that organisational structures are tools. When those tools get bigger than us, they stop serving us. And it points the way to how we always need to think about AI as tools, with their smallness as a design principle.
Second, because it nicely highlights what I said earlier about ethics futurising. Futurising is when ethical questions are tied to AI in ways that put them well into the future, without paying attention to how those same ethical issues play out in your current context. All the ethical aspects of AI we discuss apply just as much to our organisations and corporations, but we simply assume that was considered at some point in the past. It wasn’t.

Your Slow AI overlords looking down on you. Photo by Simone Brunozzi, CC-BY-SA