Since the summer I have been holding on to three related questions. They all concern what role machine learning and AI could fulfil for an individual or in an everyday setting. Everyman's AI, so to speak.

The first question is a basic one, looking at your house and immediate surroundings:

1: What autonomous things would be useful in the home or your immediate neighbourhood?

The second question is a more group- and community-oriented one:

2: What use can machine learning have for civic technology (tech that fosters citizens' ability to do things together, to engage, participate, and build community)?

The third question is perhaps a more literary one, an invitation to explore and fantasise:

3: What would an "AI in the wall" of your home be like? What would it do, or want to do? What would you have it do?

(I came across an 'AI in the wall' in a book once, but it resided in the walls of a pub. Or rather, it ran the pub. The pub being a public place allowed it to interact in many ways in parallel, so as not to get bored.)

Notes on reading Novacene by James Lovelock (2019)

Definition of life: entities that reduce entropy as they organise their environment.

I knew his 1970s Gaia theory, but remembered it mostly as a type of systems thinking, seeing earth as a complex system. He adds something key, though:

In earth's case the purpose of the system is to keep earth cool, holding temperatures at a 15C average, and to do so while our sun slowly heats up.

A startling assumption to me is that earth really is not in the Goldilocks zone; Mars is. We would be hot, like Venus, if not for entropy-reducing earth life, which continuously draws down heat. Furthermore, to an alien observer earth would not look cool but much hotter, because it continuously dumps solar heat.

The sun is heating up, and therefore so is earth. Keeping cool is our prime directive. The climate emergency makes it worse, and burning fossil fuels (stored heat from the past) should stop.

The Anthropocene started with the steam engine, when humans became able to influence their environment on a global scale. The Novacene is the coming age of AI.
The optimal temperature ranges for electronics and for life are similar, and life and AI share the same hard upper temperature limit of 47C.
Above it a runaway process sets in towards becoming like Venus.

AI will not deliberately kill us, because it needs the world to stay cool under a heating sun, and carbon-based life is needed for that. It will supplant us by evolution, slowly rather than suddenly, as evolution moves beyond us, as it always would.

Interesting notion: AI might become a million times faster than us, but it is bound by the same physics as we are. Its travel, for instance, will happen at roughly the same speed as ours, making it a million times as boring and slow to AI as it is to us: an eight-hour flight would subjectively last it the equivalent of some nine hundred years.

He makes a good caveat: AI would need to start its evolution from 'good' beginnings, e.g. not from autonomous weapons platforms.
Yet it is precisely in civilian technology such as aviation that we put hard constraints on AI, while we don't on military AI, making it more likely that AI will evolve from there.

My takeaway from this is to think about how to use AI for civic tech, and to set it free, as it were, with a sense of communal values, including a sense of the Prime Directive to keep cool.

Therein, I think, lies a core flaw in Lovelock's reasoning. Yes, the Prime Directive is to keep cool, not only against our self-created heating, but mostly against the sun's heating.
But how many humans are aware of this, and of those, how many care enough to act, given that the sun's heating plays out over millions of years?
How will we make AI aware, and will it care where we do not, given that its relative timescale is up to a million times longer still?

He stresses the notion of the engineer and of artisanal engineering, where knowing how to make things work is a priori more important than knowing why they work.
This ties into his notion that intuition is key for engineering, while the scientific method of standing on the shoulders of others is better suited to the 'know why'.

Some of my takeaways:

  • If increasing the abundance of life helps keep things cool, greening your urban living environment makes sense on a deeper level than just cooling the city. Also because cities are an efficient way to house us humans at our current numbers.
  • How to use ML for civic tech, for networked agency
  • How to explore ML: what it currently does, what it can do, and in which problem areas it could be applied.
  • What autonomous things would be valuable in the home, neighbourhood, city.
  • What would an “AI in the wall” be like?

The P2P Foundation reposts an article by Jeremy Lent from late 2017 on how corporations are artificial intelligences.

It doesn't mention Brewster Kahle's 2014 exploration of the same notion.
SF writer Charlie Stross referenced Kahle when he called corporations a 19th-century form of 'slow AI',
because "corporations are context blind, single purpose algorithms".

I like that positioning of organisations as slow AI and single-purpose algorithms, for two reasons.
First, it refocuses us on the fact that organisational structures are tools. When those tools get bigger than us, they stop serving us. It also points the way to how we should always think about AI as tools, with smallness as a design principle.
Second, it nicely highlights what I said earlier about ethics futurising. Futurising is when ethical questions are tied to AI in ways that place them well into the future, without paying attention to how those same ethical issues play out in your current context. All the ethical aspects of AI we discuss apply just as much to our organisations and corporations, but we simply assume that was considered at some point in the past. It wasn't.

Your Slow AI overlords looking down on you. Photo by Simone Brunozzi, CC-BY-SA

This week NBC published an article exploring the sources of training data sets for facial recognition. It makes the claim that we ourselves are providing, without consent, the data that may well be used to put us under surveillance.

In January IBM made a database available for research into facial recognition algorithms. The database, called "Diversity in Faces" (DiF), contains some 1 million face descriptions that can be used as a training set, with the stated aim of reducing bias in current facial recognition capabilities. Such bias is rampant, often because the data sets used in training are too small and too homogenous compared to the global population. That stated goal seems ethically sound, but the means used to get there raise a few questions for me. Specifically, do the means live up to the same ethical standards IBM says it seeks to attain with the result of its work? This and the next post explore the origins of the DiF data, my presence in it, and the questions that raises for me.

What did IBM collect in “Diversity in Faces”?
Let's look at the data first. Flickr is a photo sharing site, launched in 2004, that has supported publishing photos under a Creative Commons license from early on. In 2014 a team led by Bart Thomee at Yahoo, which then owned Flickr, created a database of 100 million photos and videos published on Flickr in the preceding years under any type of Creative Commons license. This database, known as the 'YFCC-100M' dataset, is available for research purposes. It does not contain the actual photos or videos, but the static metadata for them (URLs to the images, user IDs, geolocations, descriptions, tags etc.) plus the Creative Commons license each was released under. See the video below, published at the time:

YFCC100M: The New Data in Multimedia Research from CACM on Vimeo.

IBM used this YFCC-100M data set as a basis and selected 1 million of the photos in it to build a large collection of human faces. It does not contain the actual photos, but the metadata of each photo plus a large range of some 200 additional attributes describing the faces in those photos, including measurements and skin tones. Where YFCC-100M was meant to train more or less any image recognition algorithm, IBM's derivative subset focuses on faces. IBM describes the dataset in its Terms of Service as:

a list of links (URLs) of Flickr images that are publicly available under certain Creative Commons Licenses (CCLs) and that are listed on the YFCC100M dataset (List of URLs together with coding schemes aimed to provide objective measures of human faces, such as cranio-facial features, as well as subjective annotations, such as human-labeled annotation predictions of age and gender(“Coding Schemes Annotations”). The Coding Schemes Annotations are attached to each URL entry.
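To make that description concrete: each entry is essentially a photo URL with measurement and annotation data attached. Below is a purely hypothetical illustration of what such a record might look like; all field names and values are invented for illustration, as IBM's actual schema is not public.

```python
# Hypothetical illustration of a DiF-style record, based only on the TOS
# description above: a Flickr URL with objective coding-scheme measures and
# subjective annotation predictions attached. All field names are invented.
dif_record = {
    "url": "https://farm1.staticflickr.com/.../example.jpg",  # placeholder URL
    "coding_schemes": {
        "craniofacial": {"inter_eye_distance": 0.42, "face_height_ratio": 1.31},
        "skin_tone": "type_iv",
    },
    "annotations": {
        "predicted_age": "35-44",
        "predicted_gender": "male",
    },
}
```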

My photos are in IBM’s DiF
NBC, in the reporting on IBM's DiF database mentioned above, provides a little tool to determine whether photos you published on Flickr are in the database. I have been an intensive user of Flickr since early 2005, and have published over 25,000 photos there. A large number of those carry a Creative Commons BY-NC-SA license, meaning that as long as you attribute me, don't use an image commercially, and share the result under the same license, you are allowed to use my photos. As YFCC-100M covers the years 2004-2014 and I published images in most of those years, it was likely my photos were in it, and by extension likely that my photos are in IBM's DiF. Using NBC's tool with my user name, it turns out 68 of my photos are in IBM's DiF data set.
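NBC's tool only works through their web page. For those who want to check the underlying YFCC-100M metadata directly, a minimal sketch follows. It assumes the metadata ships as a bzip2-compressed, tab-separated file with the Flickr user ID and photo URL among its columns; the file name and column positions here are placeholders to adjust to the actual layout.

```python
# Minimal sketch: list YFCC-100M photo URLs uploaded by a given Flickr user.
# Assumptions: tab-separated metadata lines in a bzip2 archive, with the
# user id and photo URL at the column positions configured below.
import bz2

USER_ID = "your-flickr-user-id"   # placeholder: your Flickr user ID
USER_COLUMN = 1                   # assumed position of the user id field
URL_COLUMN = 14                   # assumed position of the photo URL field

matches = []
with bz2.open("yfcc100m_dataset.bz2", mode="rt", encoding="utf-8") as metadata:
    for line in metadata:
        fields = line.rstrip("\n").split("\t")
        if len(fields) > max(USER_COLUMN, URL_COLUMN) and fields[USER_COLUMN] == USER_ID:
            matches.append(fields[URL_COLUMN])

print(f"Found {len(matches)} photos for user {USER_ID}")
```

Note that this only tells you about YFCC-100M; whether a given photo also ended up in the 1 million entry DiF subset is exactly what IBM does not let you verify publicly.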

One set of photos that is apparently in IBM's DiF covers the BlogTalk Reloaded conference in Vienna in 2006, where I made various photos of participants and speakers. The NBC tool I mentioned provides one photo from that set as an example:

Thomas Burg

My face is likely in IBM’s DiF
Although IBM doesn't allow a public check of who is in their database, it is very likely that my face is in it. There is a half-way functional way to explore the YFCC-100M database, and since DiF is derived from YFCC-100M, it is reasonable to assume that faces found in YFCC-100M are also to be found in IBM's DiF. The German university of Kaiserslautern created a browser for the YFCC-100M database at the time. Judging by some tests it is far from complete in the results it shows (for instance, a search for my Flickr user name does not return the example image above, and the total number of results is lower than the number of my photos in IBM's DiF). Searching that browser for my name, and for the Flickr user names of people likely to have taken pictures of me at the mentioned BlogTalk conference and other conferences, shows that there is indeed a number of pictures of my face in YFCC-100M. Although the limited search of IBM's DiF that NBC's tool allows doesn't return any telling results for those Flickr user names, it is therefore very likely my face is in IBM's DiF. Searching that way, I do find a number of pictures of friends and peers in IBM's DiF, taken at the same time as pictures of myself.


Photos of me in YFCC-100M

But IBM won’t tell you
IBM is disingenuous when it comes to being transparent about what is in their DiF data. Their TOS allows anyone whose Flickr images have been incorporated to request to be excluded from now on, but only if you can provide the exact URLs of the images you want excluded. That is only possible if you can verify what is in their data, yet there is no public way to do so: only university-affiliated researchers can request access to the data, by stating their research interest, and requests can be denied. Their TOS says:

3.2.4. Upon request from IBM or from any person who has rights to or is the subject of certain images, Licensee shall delete and cease use of images specified in such request.

Time to explore the questions this raises
Now that the context of this data set is clear, in a next post we can take a closer look at the practical, legal and ethical questions it raises.

Some of the things I found worth reading in the past few days:

  • Although this article seems to confuse regulatory separation with technological separation, it does make an attempt at formulating the geopolitical aspects of the internet and data: There May Soon Be Three Internets. America's Won't Necessarily Be the Best
  • Interesting, yet it basically boils down to actively exercising your 'free will'. It assumes a blank slate for the hacking, where I haven't deliberately set out for information/contacts on certain topics, and then it suggests doing precisely that as a remedy. The key quote for me here is "Humans are hacked through pre-existing fears, hatreds, biases and cravings. Hackers cannot create fear or hatred out of nothing. But when they discover what people already fear and hate it is easy to push the relevant emotional buttons and provoke even greater fury. If people cannot get to know themselves by their own efforts, perhaps the same technology the hackers use can be turned around and serve to protect us. Just as your computer has an antivirus program that screens for malware, maybe we need an antivirus for the brain. Your AI sidekick will learn by experience that you have a particular weakness – whether for funny cat videos or for infuriating Trump stories – and would block them on your behalf.": Yuval Noah Harari on the myth of freedom
  • This is an important issue, always; I recognise it from my work for the World Bank and UN agencies. Is what you're doing actually helping, or is it shoring up authorities that don't match your values? And are you able to recognise when you cross the line from the former to the latter, and withdraw? I've known entrepreneurs who kept a blacklist of sectors, governments and companies they wouldn't take on as clients, but often it isn't that clear cut. I've avoided engagements in various countries over the years, but every client engagement can be rationalised: How McKinsey Has Helped Raise the Stature of Authoritarian Governments, and, when the consequences come back to bite, Malaysia files charges against Goldman-Sachs
  • This seems like a useful list to check for the next books to read. I definitely enjoyed reading the work of Chimamanda Ngozi Adichie and Nnedi Okorafor last year: My year of reading African women, by Gary Younge

Some things I thought worth reading in the past days:

  • A good read on how machine learning (ML) currently merely obfuscates human bias, by moving it into the training data and coding, to arrive at peace of mind through pretend objectivity. By claiming that it's 'the algorithm deciding', you make ML a kind of digital alchemy. It introduced some fun terms to me, like fauxtomation and Potemkin AI: Plausible Disavowal – Why pretend that machines can be creative?
  • These new Google patents show how problematic the current smart home efforts are, including their precursors, the Alexa and Echo microphones in your house. They strip you of agency rather than providing it. These particular patents also nudge you to treat your children much the way surveillance capitalism treats you: as a suspect to be watched, with relationships denuded of the subtle human capability to trust. Agency only comes from being in full control of your tools. Adding someone else's tools (here not just Google's, but your health insurer's, your landlord's etc.) to your home doesn't make it smart, but turns it into a self-censorship-promoting escape room, a fractal of the panopticon. We need to start designing more technology based on distributed use, not on a centralised controller: Google's New Patents Aim to Make Your Home a Data Mine
  • An excellent article by the NYT about Facebook's slide to the dark side: when the student dorm room excuse ("we didn't realise, we messed up, but we'll fix it for the future") fails, and you weaponise your own data-driven machine against your critics, thus proving those critics right. Weaponising your own platform isn't surprising, but it is very sobering and telling. Will it be a tipping point in how the public views FB? Delay, Deny and Deflect: How Facebook's Leaders Fought Through Crisis
  • Some takeaways from the article just mentioned that we should keep top of mind when interacting with or talking about Facebook: FB knew very early on that it was being used to influence the 2016 US election and chose not to act; FB feared backlash from specific user groups and opted to enforce its terms of service/community guidelines unevenly; Cambridge Analytica is not an isolated abuse, but a concrete example of the wider issue; and FB weaponised its own platform to oppose criticism: How Facebook Wrestled With Scandal: 6 Key Takeaways From The Times's Investigation
  • There really is no plausible deniability for FB's execs on their "in-house fake news shop": Facebook's Top Brass Say They Knew Nothing About Definers. Don't Believe Them. And when you finally have to admit it, you fall back on the 'we messed up, we'll do better going forward' tactic.
  • As Aral Balkan says, that's the real issue at hand, because "Cambridge Analytica and Facebook have the same business model. If Cambridge Analytica can sway elections and referenda with a relatively small subset of Facebook's data, imagine what Facebook can and does do with the full set.": We were warned about Cambridge Analytica. Why didn't we listen?
  • [update] Apparently all the commotion is causing Zuckerberg to think FB is 'at war', with everyone it seems, which is problematic for a company whose mission is to open up and connect the world, and which depends on a perception of trust. A bunker mentality also probably doesn't bode well for FB's corporate culture, and hence its future: Facebook At War.