Bookmarked Internet of Things and Objects of Sociality (by Ton Zijlstra, 2008)

Fifteen years ago today I blogged this brainstorming exercise about how internet connectivity for objects might make for different and new objects of sociality, a way to interact with our environment differently. Not a whole lot of that has happened, let alone become common. What has happened is IoT being locked up in device and mobile app pairings. Our Hue lights are tied to the Hue app, and if I’d let it collect e.g. behavioural data it would go to Philips first, not to me. A Ring doorbell (now disabled) and our Sonos speakers are the same. Those rigid pairings are a far cry from me seamlessly interacting with my environment. One exception is our Meet Je Stad sensor in the garden, as it runs on LoRaWAN and the local citizen science community has the same access to the data as I do (and I run a LoRa gateway myself, adding another control point for me).

Incoming EU legislation may help to gain more agency on this front. First and foremost, the Data Act, when it is finished, will make it mandatory that I can access the data I generate with my use of devices like those Hue lights and Sonos speakers and any others you and I may have in use (the data from the inverter on your solar panels, for instance), and that third parties can use that data in real time. A second relevant law, I think, is the Cyber Resilience Act, which regulates the cybersecurity of any ‘product with digital elements’ on the EU market and makes it mandatory to provide additional (technical) documentation around that topic.

The internet of things increases the role of physical objects as social objects enormously, because it adds heaps of context that can serve relationships. Physical objects always have been social objects, but only in their immediate physical context. … Making physical objects internet-aware creates a slew of possible new uses for them as social objects. And if you [yourself] add more sensors or actuators to a product (object hacks, so to speak), the list grows accordingly.

Ton Zijlstra, 2008

We had a fun first visit to the local CoderDojo this afternoon, with the three of us. Y animated dinosaurs and created an earth with wobbly eyes that followed the mouse pointer.


Y working in Scratch on some animated dinosaurs

A month ago, Y had a ‘programming day’ at school where people from De Programmeerschool worked a full day with her class. She liked working in Scratch, so I suggested we visit the local CoderDojo. She invited a friend this time, but there were no more tickets available (although there was still plenty of space on-site). Next time I think we should try again to bring a friend.

I keynoted at the Dutch DojoCon in 2019 and then became a member of their Club of 100, donating money every year. And today I was able to bring my daughter and enjoy the activities. I first came across CoderDojo in Limerick, Ireland, during the 2014 3D Camp, co-organised by our friend Gabriela Avram.

Bookmarked China Seeks Stricter Oversight of Generative AI with Proposed Data and Model Regulations (by Chris McKay at Maginative)

Need to read this more closely. A few things stand out at first glance:

  • This is an addition to the geo-political stances that the EU, US, and China put forth w.r.t. everything digital and data. A comparison with the EU AI Regulation that is under negotiation would be of interest.
  • It seems focused solely on generative AI. Are there other (planned) acts covering other AI applications and development? Why is generative AI singled out here? Because it has a more directly population-facing character?
  • It seems to mostly front-load the responsibilities towards the companies producing generative AI applications, i.e. towards the models used and the pre-release phase. In comparison, the EU regulation incorporates responsibilities for distributors, buyers, users, and even users of output only, and spans the full lifetime of any application.
  • It lists specific risks in several categories. How specifically are those worded? Might there be an impact on how future-proof the regulation is? Are there thresholds introduced for such risks?

Let’s see if I can put some AI to work to translate the original Chinese proposed text (PDF).

Via Stephen Downes, who is also my source for the link to the original proposal in PDF.

By emphasizing corpus safety, model security, and rigorous assessment, the regulation intends to ensure that the rise of [generative] AI in China is both innovative and secure — all while upholding its socialist principles.

Chris McKay at Maginative

Favorited I’m banned for life from advertising on Meta. Because I teach Python. by Reuven Lerner

The Python programming language is over 30 years old; the Pandas data analysis library for it is 15 years old. It’s not unlikely that Meta’s ad-checking AI was created using both somewhere in the process. But the programming context of both words was definitely not in its training set.

Provide Python and Pandas training, advertise on FB. Get blocked because Meta’s AI spits out a high probability it is about illegal animal trade. Appeal the decision. Have the same AI, not a person, look at it again and conclude the same thing. Get blocked for all time. Have insiders check and conclude this can’t be reversed.

‘Computer says no…’ and Kafka had a child, and it’s Meta’s AI. And Meta has no human-operated steering wheel that is connected to anything meaningful.

Via Ben Werdmuller

I’m a full-time instructor in Python and Pandas, teaching in-person courses at companies around the world … Meta’s AI system noticed that I was talking about Python and Pandas, assumed that I was talking about the animals […], and banned me. The appeal that I asked for wasn’t reviewed by a human, but was reviewed by another bot, which (not surprisingly) made a similar assessment.

Reuven Lerner

ODRL, the Open Digital Rights Language, popped up twice this week for me, and I don’t think I’d been aware of it before. Some notes for me to start exploring.

Rights Expression Languages

Rights Expression Languages (RELs) provide a machine-readable way to convey or transfer usage conditions, rights, and restraints, granularly, w.r.t. both actions and actors. These can then be added as metadata to something. ODRL is a rights expression language, and seems to be a de facto standard.

ODRL has been a W3C recommendation since 2018, and is thus part of the open web standards. ODRL has its roots in the ’00s and Digital Rights Management (DRM): the abhorred protections media companies added to music and movies, and now e-books, in ways that restrain what people can do with media they bought to well below what was possible before and commonly thought part of having bought something.

ODRL can be expressed in JSON, RDF, or XML. A basic example from Wikipedia looks like this:


{
  "@context": "http://www.w3.org/ns/odrl.jsonld",
  "uid": "http://example.com/policy:001",
  "permission": [{
    "target": "http://example.com/mysong.mp3",
    "assignee": "John Doe",
    "action": "play"
  }]
}

In this JSON example a policy describes that example.com grants John permission to play mysong.

ODRL in the EU Data Space

In the shaping of the EU common market for data, aka the European common data space, it is important to be able to trace provenance and usage conditions not just for data sets, but for singular pieces of data, as they flow through use cases and applications and their output flows back into the data space.
This week I participated in a webinar by the EU Data Space Support Center (DSSC) about their first blueprint of data space building blocks and the federation of such data spaces.

They propose ODRL as the standard to describe usage conditions throughout data spaces.
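
To make that concrete, here is a minimal sketch of what such a usage condition might look like, with all URIs being invented placeholders: an ODRL Offer permitting ‘use’ of a dataset only for a stated purpose, via ODRL’s constraint construct. The Offer policy type, the ‘purpose’ leftOperand, and the ‘eq’ operator are part of the ODRL vocabulary; the parties, dataset, and purpose URI are made up for illustration.

{
  "@context": "http://www.w3.org/ns/odrl.jsonld",
  "@type": "Offer",
  "uid": "http://example.com/policy:042",
  "permission": [{
    "target": "http://example.com/dataset/sensor-readings",
    "assigner": "http://example.com/party/data-provider",
    "action": "use",
    "constraint": [{
      "leftOperand": "purpose",
      "operator": "eq",
      "rightOperand": "http://example.com/purpose/research"
    }]
  }]
}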

The question of enactment

It wasn’t the first time I talked about ODRL this week. I had a conversation with Pieter Colpaert. I reached out to get some input on his current view of the landscape of civic organisations active around the EU data spaces. We also touched upon his current work at Ghent University. His research interest is currently ODRL, specifically enactment. ODRL is a REL, a rights expression language. Describing rights is one thing; enacting them in practice, in technology, processes etc., is a different thing. Next to that, how do you demonstrate that you adhere to the conditions expressed, and that you qualify for using the things described?
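
A minimal hypothetical example (placeholder URIs again) shows that gap: the Agreement below permits a consumer to distribute a dataset with a duty to attribute. ODRL can state the duty (‘attribute’ is an action in the ODRL vocabulary), but nothing in the standard itself verifies whether attribution actually happened; checking and enforcing that is precisely the enactment question.

{
  "@context": "http://www.w3.org/ns/odrl.jsonld",
  "@type": "Agreement",
  "uid": "http://example.com/policy:077",
  "permission": [{
    "target": "http://example.com/dataset/air-quality",
    "assigner": "http://example.com/party/provider",
    "assignee": "http://example.com/party/consumer",
    "action": "distribute",
    "duty": [{
      "action": "attribute"
    }]
  }]
}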

For the EU data space(s) this part sounds key to me, as none of the data involved is merely part of a single clear interaction like in the song example above. It’s part of a variety of flows in which actors likely don’t directly interact, and where many different data elements come together. This includes flows through applications that tap into a data space for inputs and outputs but are otherwise outside of it. Such applications include digital twins, even federated systems of digital twins, meaning a confluence of many different data and conditions across multiple domains (and thus data spaces). All this puts a piece of data light-years away from the neat situation where two actors share it between them in a clearly described transaction within a single-faceted use case.

Expressing the commons

It’s one thing to express restrictions or usage conditions. The DSSC in their webinar talked a lot about business models around use cases, and about ODRL as a means for a data source to stay in control throughout a piece of data’s life cycle. Luckily they stopped using the phrase ‘data ownership’, as they realised it’s not meaningful (and confusing on top of that), and focused on an actor keeping control and maintaining a say.
An open question for me is how you would express openness and the commons in ODRL. A shallow search surfaces some examples of trying to express Creative Commons or other licenses this way, but none recent.

Openness can mean an absence of certain conditions, although there may be some (like extending the same absence of conditions to re-shared material or derivative works), which is not the same as setting explicit permissions. If I e.g. dedicate something to the public domain, an image for instance, then there are no permissions for me to grant, as I’ve removed myself from the role of being able to give permission. Yet you still want to express that, to make clear to all that this is what happened, and especially that it remains that way.
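
A naive sketch of that public domain case, again with placeholder URIs, would be a Set policy granting everyone broad permissions on the asset. But note how that framing already distorts things: it casts me as an assigner handing out permissions, which is exactly the role a public domain dedication removes.

{
  "@context": "http://www.w3.org/ns/odrl.jsonld",
  "@type": "Set",
  "uid": "http://example.com/policy:pd-image",
  "permission": [
    { "target": "http://example.com/image.jpg", "action": "use" },
    { "target": "http://example.com/image.jpg", "action": "reproduce" },
    { "target": "http://example.com/image.jpg", "action": "distribute" },
    { "target": "http://example.com/image.jpg", "action": "modify" }
  ]
}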

Part of that question is about the overlap and distinction between rights expressed in ODRL and authorship rights. You can obviously have many conditions outside of copyright, and there can be copyright elements that fall outside of what can be expressed in RELs. I wonder, for instance, how moral authorship rights (which an author in some (all?) European jurisdictions cannot waive) can be expressed after an author has transferred/sold the copyrights to something? Or maybe expressing authorship rights / copyrights is not what RELs are primarily for, as those are generic, while RELs may be meant for expressing conditions around a specific asset in a specific transaction. There have been various attempts to map all kinds of licenses to RELs though, so I need to explore more.

This is relevant for the EU common data spaces, as my government clients will be actors in them, bringing in both open data and closed and unsharable but re-usable data, and several different shades in between. A range of new obligations and possibilities w.r.t. data use for government is created in the EU data strategy laws, and the data space is where those become actualised. Meaning it should be possible to express the corresponding usage conditions in ODRL.

ODRL gaps?

Are there gaps in the ODRL standard w.r.t. what it can cover? Or things that are hard to express in it?
I came across one paper, ‘A critical reflection on ODRL’ (PDF; Kebede, Sileno, and Van Engers, 2020), that I have yet to read, which describes some of those potential weaknesses based on use cases in healthcare and logistics. Looking forward to digging out their specific critique.

Bookmarked Mechanisms of Techno-Moral Change: A Taxonomy and Overview (by John Danaher and Henrik Skaug Sætra)

Via Stephen Downes. An overview of how, through what mechanisms, technological changes bring about moral changes. At first glance it seems to me a sort of detailing of Smits’ 2002 PhD thesis on Monster theory, looking at how tech changes can challenge cultural categories, and diving into the specific part where cultural categories are adapted to fit new tech in. The citations don’t mention Smits or the anthropological work of Mary Douglas it is connected to. It does cite references by Peter-Paul Verbeek and Marianne Boenink (all three from the PSTS department I studied at), so no wonder I sense a parallel here.

The first example mentioned in the table explaining the six identified mechanisms points in the direction of a parallel too: the 1970s redefinition of death as brain death, a redefinition of cultural concepts to assimilate tech change, was also used as an example in Smits’ work. The third example is a direct parallel to my 2008 post on empathy as a shifting cultural category because of digital infrastructure, and to how I talked about hyperconnected individuals and the impact on empathy in 2010 when discussing the changes bringing forth MakerHouseholds.

Where Monster theory was meant as a tool to understand and diagnose discussions of new tech, wherein the assimilation part (both cultural categories and technology get adapted) is the pragmatic route (the mediation theory of Peter-Paul Verbeek is located there too), it doesn’t as such provide ways to act or intervene. Does this taxonomy provide options to act?
Or is this another descriptive way to locate where moral effects might take place, with the various types of responses to Monsters still determining the potential moral effect?

The paper is directly available; I’ve added it to my Zotero library for further exploration.

Many people study the phenomenon of techno-moral change but, to some extent, the existing literature is fragmented and heterogeneous – lots of case studies and examples but not enough theoretical unity. The goal of this paper is to bring some order to existing discussions by proposing a taxonomy of mechanisms of techno-moral change. We argue that there are six primary mechanisms…

John Danaher