With this article in The Guardian, Hossein Derakhshan endorses the mission of PublicSpaces, Frank! He also places it in a European context. That seems right to me, precisely because privacy, the handling of data, and the geopolitical side of data and social media are a European matter, not primarily a Dutch one. @hod3r is, I think, also a good source for the deeper thinking behind the positioning of PublicSpaces. Do you still have contacts there, are you still active in it? Marco is, I see on the site.
A project I’m involved in has won funding from the SIDN Fund. SIDN is the Dutch domain name authority, and they run a fund to promote and stimulate innovative internet use, to build a ‘stronger internet for all’.
With the Open Nederland association, the collective of makers behind the Dutch Creative Commons chapter of which I’m a board member, we received funding for our new project “Filter me niet!” (Don’t filter me!)
With the new EU Copyright Directive, the position of copyright holders will be in flux over the coming two years. Online platforms will become responsible for ensuring that the content you upload doesn’t infringe copyright. In practice this means that YouTube, Facebook, and all those other platforms will filter out content whenever they have doubts about its origin, license or metadata. For makers this is a direct threat, as they run the risk of seeing their uploads blocked even when they clearly hold the needed rights. False positives are already a very common phenomenon, and this will likely get worse.
With Filtermeniet.nl (Don’t filter me) we want to aid makers who want to upload their work, by offering a bit of advice and assistance right when they’re about to hit that upload button. We’ll create a tool, guide and information source for Dutch media makers, through which they can declare the license that fits them best, as well as improve their metadata, in order to lower the risk of being automatically filtered out for the wrong reasons.
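The tool itself still has to be built, but the license-declaration part can be sketched. Below is a minimal, hypothetical example (the function name and structure are mine, not the project’s) that generates an XMP sidecar declaring a Creative Commons license choice, the kind of machine-readable metadata that could help platforms recognise a rightful upload:

```python
# Hypothetical sketch: produce an XMP sidecar packet that declares a
# Creative Commons license for an image. The cc: namespace URI is the
# public Creative Commons RDF namespace; everything else here is
# illustrative, not the actual Filtermeniet.nl tool.

CC_LICENSES = {
    "by": "https://creativecommons.org/licenses/by/4.0/",
    "by-sa": "https://creativecommons.org/licenses/by-sa/4.0/",
    "by-nc-sa": "https://creativecommons.org/licenses/by-nc-sa/4.0/",
}

def xmp_sidecar(attribution_name: str, license_code: str) -> str:
    """Return a minimal XMP packet declaring the chosen CC license."""
    url = CC_LICENSES[license_code]
    return f"""<x:xmpmeta xmlns:x="adobe:ns:meta/">
 <rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#">
  <rdf:Description rdf:about=""
    xmlns:cc="http://creativecommons.org/ns#"
    xmlns:dc="http://purl.org/dc/elements/1.1/">
   <cc:license rdf:resource="{url}"/>
   <dc:creator>{attribution_name}</dc:creator>
  </rdf:Description>
 </rdf:RDF>
</x:xmpmeta>"""

print(xmp_sidecar("Ton Zijlstra", "by-nc-sa"))
```

A real tool would of course also write this into the image file itself and guide the maker through choosing between the license options.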
This week NBC published an article exploring the source of training data sets for facial recognition. It makes the claim that we ourselves are providing, without consent, the data that may well be used to put us under surveillance.
In January IBM made a database available for research into facial recognition algorithms. The database contains some 1 million face descriptions that can be used as a training set. Called “Diversity in Faces”, its stated aim is to reduce bias in current facial recognition capabilities. Such bias is rampant, often because the data sets used in training are too small and too homogeneous compared to the global population. That stated goal seems ethically sound, but the means used to get there raise a few questions for me. Specifically, whether those means live up to the same ethical standards that IBM says it seeks to attain with the result of its work. This and the next post explore the origins of the DiF data, my presence in it, and the questions that raises for me.
What did IBM collect in “Diversity in Faces”?
Let’s look at the data first. Flickr is a photo sharing site, launched in 2004, that supported publishing photos under a Creative Commons license from early on. In 2014 a team led by Bart Thomee at Yahoo, which then owned Flickr, created a database of 100 million photos and videos published on Flickr in the preceding years under any type of Creative Commons license. This database, known as the ‘YFCC-100M’ dataset, is available for research purposes. It does not contain the actual photos or videos, but the static metadata for them (URLs to the images, user IDs, geolocations, descriptions, tags etc.) and the Creative Commons license each was released under. See the video below, published at the time:
IBM used this YFCC-100M data set as a basis, and selected 1 million of the photos in it to build a large collection of human faces. It does not contain the actual photos, but the metadata of those photos, plus a large range of some 200 additional attributes describing the faces in them, including measurements and skin tones. Where YFCC-100M was meant to train more or less any image recognition algorithm, IBM’s derivative subset focuses on faces. IBM describes the dataset in its Terms of Service as:
a list of links (URLs) of Flickr images that are publicly available under certain Creative Commons Licenses (CCLs) and that are listed on the YFCC100M dataset (List of URLs together with coding schemes aimed to provide objective measures of human faces, such as cranio-facial features, as well as subjective annotations, such as human-labeled annotation predictions of age and gender(“Coding Schemes Annotations”). The Coding Schemes Annotations are attached to each URL entry.
My photos are in IBM’s DiF
NBC, in their above-mentioned reporting on IBM’s DiF database, provide a little tool to determine whether photos you published on Flickr are in the database. I have been an intensive user of Flickr since early 2005, and have published over 25,000 photos there. A large number of those carry a Creative Commons BY-NC-SA license, meaning that as long as you attribute me, don’t use an image commercially, and share your result under the same license, you’re allowed to use my photos. As YFCC-100M covers the years 2004-2014 and I published images in most of those years, it was likely that my photos were in it, and by extension likely that they are in IBM’s DiF. Using NBC’s tool, based on my user name, it turns out 68 of my photos are in IBM’s DiF data set.
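The kind of lookup NBC’s tool performs can be sketched against dataset metadata directly. Below is a hypothetical example; the tab-separated column layout and the sample rows are my assumptions for illustration, not the real YFCC-100M file format:

```python
import csv
import io

# Hypothetical sketch of checking a YFCC-100M-style metadata dump for
# photos by a given Flickr user. The field order (photo id, user name,
# photo URL, license) is assumed here; the real dataset differs.
SAMPLE_TSV = (
    "6855169886\ttonz\thttps://example.org/photos/6855169886.jpg\tby-nc-sa\n"
    "1234567890\tsomeoneelse\thttps://example.org/photos/1234567890.jpg\tby\n"
)

def photos_by_user(tsv_text: str, nickname: str) -> list[str]:
    """Return the photo URLs in the metadata dump uploaded by `nickname`."""
    matches = []
    for row in csv.reader(io.StringIO(tsv_text), delimiter="\t"):
        photo_id, user, url, license_code = row
        if user == nickname:
            matches.append(url)
    return matches

print(len(photos_by_user(SAMPLE_TSV, "tonz")))  # prints 1
```

The point is that membership checks like this are trivial once you have the metadata, which is exactly what IBM does not make publicly searchable for DiF.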
One set of photos that apparently is in IBM’s DiF covers the BlogTalk Reloaded conference in Vienna in 2006, where I took various photos of participants and speakers. The NBC tool I mentioned provides one photo from that set as an example:
My face is likely in IBM’s DiF
Although IBM doesn’t allow a public check of who is in their database, it is very likely that my face is in it. There is a half-way functional way to explore the YFCC-100M database, and since DiF is derived from YFCC-100M, it is reasonable to assume that faces found in YFCC-100M can also be found in IBM’s DiF. The German University of Kaiserslautern at the time created a browser for the YFCC-100M database. Judging by some tests it is far from complete in the results it shows (for instance, a search for my Flickr user name doesn’t return the example image above, and the total number of results is lower than the number of my photos in IBM’s DiF). Using that same browser to search for my name, and for the Flickr user names of people likely to have taken pictures of me at the mentioned BlogTalk conference and other conferences, shows that there is indeed a number of pictures of my face in YFCC-100M. Although the limited search of IBM’s DiF possible with NBC’s tool doesn’t return any telling results for those Flickr user names, it is therefore very likely that my face is in IBM’s DiF. I do find a number of pictures of friends and peers in IBM’s DiF that way, taken at the same moments as pictures of myself.
Photos of me in YFCC-100M
But IBM won’t tell you
IBM is disingenuous when it comes to transparency about what is in their DiF data. Their TOS allows anyone whose Flickr images have been incorporated to request exclusion from now on, but only if you can provide the exact URLs of the images you want excluded. That is only possible if you can verify what is in their data, yet there is no public way to do so: only university-affiliated researchers can request access to the data by stating their research interest, and requests can be denied. Their TOS says:
3.2.4. Upon request from IBM or from any person who has rights to or is the subject of certain images, Licensee shall delete and cease use of images specified in such request.
Time to explore the questions this raises
Now that the context of this data set is clear, in a next posting we can take a closer look at the practical, legal and ethical questions this raises.
A while ago Peter Rukavina wrote a post about living with the intermittent availability of electricity.
This is the difference between tightly coupled and loosely coupled systems. Loosely coupled systems can be more robust, because failing parts will not break the whole. That also allows for more resilience: you can locally fix the things that fell apart.
It may clash, however, with our current expectation of having electricity 24/7. Because of that expectation we don’t spend much time on being clever about the timing of our energy usage. A long time ago I provided training to a group of some 20 Iraqi water provision managers, as part of the rebuilding efforts after the US invasion of Iraq. They obviously had all kinds of issues, often arising in parallel. What I remember, connected to Peter’s post, is how they described the way Iraqi citizens had adapted to the intermittent availability of electricity and water: how they made things work, at some level, by incorporating that intermittent availability into their routines. When there was no electricity they used water for cooling, and vice versa, for instance. A few years ago at a Border Sessions conference in The Hague, one speaker also talked about resilience and intermittent energy sources. He mentioned that historically Dutch millers had dispensation from attending church on Sundays if it was windy enough to mill.
The past few days Dutch newspapers have been discussing how some local solar energy plans can’t be implemented because grid operators can’t handle the added input. This isn’t necessarily true; it is rather the framing that comes with the current always-on macro grid. Tellingly, any mention of micro grids or local storage is absent from that framing.
In a different discussion with Peter Rukavina and with Peter Bihr, it was mentioned that resilience is, and needs to be, rising on the list of design principles. It’s also the reason why resilience is one of three elements of agency in my networked agency thinking.
Next week it will be 50 years since Doug Engelbart (1925-2013) and his team demonstrated all that has come to define interactive computing. Five decades on, we still haven’t turned everything in that live demo into routine daily things: the mouse, video conferencing, word processing, outlining, drag and drop, digital mind mapping, real-time collaborative editing from multiple locations. In 1968 it was all already there, yet in 2018 we are still catching up with several aspects of that live-demonstrated vision. Doug Engelbart and his team ushered in the interactive computing era to “augment human intellect”, and on the 50th anniversary of The Demo a symposium will ask what augmenting the human intellect can look like in the 21st century.
A screenshot of Doug Engelbart during the 1968 demo
The 1968 demo was later named ‘the Mother of all Demos’. I first saw it in its entirety at the 2005 Reboot conference in Copenhagen, after which Doug Engelbart had a video conversation with us. To me it was a great example, not merely of prototyping new tech, but most of all of proposing a coherent and expansive vision of how different technological components, human networked interaction and routines can together be used to create new agency and new possibilities. To ‘augment human intellect’ indeed. That to me is the crux: to look at the entire constellation of humans, our connections, routines, methods and processes, and our technological tools, and at achieving our desired impact. Others may easily think I’m a techno-optimist, but I don’t think I am. I am generally an optimist, yes, but to me what is key is our humanity, and creating tools and methods that enhance and support it. Tech as tools, in context, not tech as a solution on its own. It’s what my networked agency framework is about, and what I try to express in its manifesto.
Paul Duplantis has blogged about where the planned symposium, and more importantly us in general, may take the internet and the web as our tools.
This week and next week I am working with the Library Services Fryslan team (BSF), the ones who also run Frysklab, a mobile FabLab. We’re taking about 5 full days and two evenings to dive deeply into detailing and shaping the Impact Through Connection projects BSF runs. Those are based on my networked agency framework. Now that BSF has done a number of these projects they find that they need a better way to talk about it to library decision makers, and a better way to keep the pool of facilitators much closer to the original intentions and notions, as well as find ways to better explain the projects to participants.
It’s quite a luxury to sit down with 5 others and spend this much time talking through our experiences, jotting them down, and reworking them into new narratives and potential experiments. It’s also very intensive, as well as challenging, to capture what we share, discuss and construct. In the end we want to be able to explain the why, what and how of networked agency to different groups much better, next to improving the way we execute the Impact Through Connection projects.
After doing a braindump on day 1, we used the second day to discuss some of what we gathered and to figure out what’s missing and what needs more detail. We’ve now started to bring all that disjointed material into a wiki, so that we can move things around and tease out the connections between different elements. This will be the basis for further reflection, with the aim of ending up with ‘living documentation’ that allows us to remix and select material for different contexts and groups.
Currently I think we are at the stage of having collected a mountain of thoughts and material, without much sight of how we will be able to process it all. But experience tells me we will get through that by just going on. It makes the luxury of having allocated the time to really do that all the more tangible.