In three weeks, on 18 December, the PKM / Obsidian users meet-up takes place! The Digitale Fitheid community and the PKM group of the Dutch association of information professionals (KNVI) are the hosts. From 19:00 at S2M Utrecht we dive into how we do personal knowledge management, and how each of us does or doesn’t yet implement that in Obsidian. Seeing and discussing each other’s ways of working is always enormously inspiring, and yields new ideas for tweaking your own PKM flow and sharpening your Obsidian tools.

We want to allow plenty of time to show each other things. We will therefore use at least two projection screens side by side, so that we can compare things and more people can show something in less time.

The guiding question throughout is “How do you do X in your PKM system, and how have you implemented that in your Obsidian set-up?”
Where X can be a topic such as:

  • how do you search and find things in your PKM system?
  • where do you draw the boundaries of your PKM system: does productivity/GTD belong in it, or only learning? Is your PKM flow generic, or aimed at specific themes?
  • which outputs / results do you work towards in your PKM (writing, presenting, creative ideas etc.)?

And around that there are interesting themes such as:

  • Which transitions from analog to digital and vice versa are part of your way of working? What role does analog play in your learning?
  • Which visual elements play a role in your PKM system?

Finally, because Obsidian works on local files, another topic is:

  • How do you use those local files outside of Obsidian, from other programs or ways of working?

We ask you to prepare something from your PKM system in Obsidian that you would like to show. At the start we will take stock of who wants to see or show which examples, and from that overview of topics we’ll get to work.

You can register for free at Digitale Fitheid. See you on 18 December!

Bookmarked Internet of Things and Objects of Sociality (by Ton Zijlstra, 2008)

Fifteen years ago today I blogged this brainstorming exercise about how internet-connectivity for objects might make for different and new objects of sociality. A way to interact with our environment differently. Not a whole lot of that has happened, let alone become common. What has happened is IoT being locked up in device and mobile app pairings. Our Hue lights are tied to the Hue app, and if I’d let it collect e.g. behavioural data it would go to Philips first, not to me. A Ring doorbell (now disabled) and our Sonos speakers are the same. Those rigid pairings are a far cry from me seamlessly interacting with my environment. One exception is our Meet Je Stad sensor in the garden: it runs on LoRaWAN, and the local citizen science community has the same access to the data as I do (and I run a LoRa gateway myself, adding another control point for me).

Incoming EU legislation may help to get more agency on this front. First and foremost, the Data Act, once finished, will make it mandatory that I can access the data I generate with my use of devices like those Hue lights and Sonos speakers, and any others you and I may have in use (the data from the inverter on your solar panels, for instance), and that third parties can use that data in real time. A second relevant law, I think, is the Cyber Resilience Act, which regulates the cybersecurity of any ‘product with digital elements’ on the EU market, and makes it mandatory to provide additional (technical) documentation around that topic.

The internet of things increases the role of physical objects as social objects enormously, because it adds heaps of context that can serve relationships. Physical objects always have been social objects, but only in their immediate physical context. … Making physical objects internet-aware creates a slew of possible new uses for them as social objects. And if you [yourself] add more sensors or actuators to a product (object hacks so to speak), the list grows accordingly.

Ton Zijlstra, 2008

Favorited EDPB Urgent Binding Decision on processing of personal data for behavioural advertising by Meta by EDPB

This is very good news. The European Data Protection Board, at the request of the Norwegian DPA, has issued a binding decision instructing the Irish DPA to impose a ban on the processing of personal data for behavioural targeting by Meta. Meta must cease processing data within two weeks. Norway already concluded a few years ago that adtech is mostly illegal, but European cases based on the 2018 GDPR moved through the system at a glacial pace, in part because of a co-opted and dysfunctional Irish data protection authority. Meta’s ‘pay for privacy’ ploy is also torpedoed with this decision. This is grounds for celebration, even if it will likely lead to legal challenges first. And it is grounds for congratulations to NOYB and Max Schrems, whose complaints, filed the very first minute GDPR enforcement started in 2018, kicked off the process of which this is a result.

…take, within two weeks, final measures regarding Meta Ireland Limited (Meta IE) and to impose a ban on the processing of personal data for behavioural advertising on the legal bases of contract and legitimate interest across the entire European Economic Area (EEA).

European Data Protection Board

In 1967 French literary critic Roland Barthes declared the death of the author (in English, no less). An author’s intentions and biography are not the means to explain definitively what the meaning of a text (of fiction) is. It’s the reader that determines meaning.

Barthes reduces the author to merely a scriptor, a scribe, who doesn’t exist other than for their role of penning the text. It positions the work as fully separate from its maker.

I don’t disagree with the notion that readers glean meaning in layers from a text, far beyond what an author might have intended. But thinking about the author’s intent, in light of their biography or not, is one of those layers for readers to interpret. It doesn’t make the author the sole decider on meaning, but the author’s perspective can be used to create meaning by any reader. Separating the author from their work entirely is cutting yourself off from one source of potential meaning. Even when reduced to the role of scribe, such meaning will leak forth: the monks of old tagged the transcripts they made and turned those tags into indexes, still a common way of interpreting which topics a text touches upon or emphasises. So despite Barthes’ pronouncement, I never accepted the brain death of the author, yet also didn’t much care specifically about their existence for me to find meaning in texts either.

With the advent of texts made by generative AI, I think bringing the author and their intentions into the scope of creating meaning is necessary however. It is a necessity as proof of human creation. Being able to perceive the author behind a text, the entanglement of its creation with their life, is the now very much needed Reverse Turing test. With algorithmic text generation there is indeed only a scriptor, one incapable of conveying meaning themselves.
To determine the human origin of a text, the author’s own meaning, intention and existence must shine through in a text, or be made explicit in its context. Because our default assumption must be that it was generated.

The author is being resurrected. Because we now have fully automated scriptors. Long live the author!

In discussions about data usage and sharing, and about who has a measure of control over what data gets used and shared how, we easily say ‘my data’, or get told what you can do with ‘your data’ in a platform.

‘My data’.

While it sounds clear enough, I think it is a very imprecise thing to say. It distracts from a range of issues about control over data, and causes confusion in public discourse and in addressing those issues. Such distraction is often deliberate.

Which one of these is ‘my data’?

  • Data that I purposefully collected (e.g. temperature readings from my garden), but isn’t about me.
  • Data that I purposefully collected (e.g. daily scale readings, quantified self), that is about me.
  • Data that is present on a device I own or external storage service, that isn’t about me but about my work, my learning, my chores, people I know.
  • Data that describes me, but was government created and always rests in government databases (e.g. birth/marriage registry, diplomas, university grades, criminal records, real estate ownership), parts of which I often reproduce/share in other contexts while not being the authoritative source (anniversaries, home address, CV).
  • Data that describes me, but was private sector created and always rests in private sector databases (e.g. credit ratings, mortgage history, insurance and coverage used, pension, phone location and usage, hotel stays, flights boarded).
  • Data that describes me, that I entered into my profiles on online platforms.
  • Data that I created, ‘user generated content’, and shared through platforms.
  • Data that I caused to be through my behaviour, collected by devices or platforms I use (clicks through sites, time spent on a page, how I drive my car, my e-reading habits, any IoT device I used/interacted with, my social graphs), none of which is ever within my span of control, likely not accessible to me, and I may not even be aware it exists.
  • Data that was inferred about me from patterns in data that I caused to be through my behaviour, none of which is ever within my span of control, and which I mostly don’t know about or even suspect exists. It may say things I don’t know about myself (moods, mental health) or that I haven’t made explicit anywhere (political or religious orientation, sexual orientation, medical conditions, pregnancy etc.).

Most of the data that holds details about me wasn’t created by me, and wasn’t within my span of control at any time.
Most of the data I purposefully created, or have or had in my span of control, isn’t about me but about my environment, about other people near me, things external and of interest to me.

They’re all ‘my data’. Yet whenever someone says ‘my data’, and definitely when someone says ‘your data’, that entire scope isn’t what is indicated. ‘My data’ as a label easily hides the complicated variety of data we are talking about. And regularly, specifically when someone says ‘your data’, hiding parts of the list is deliberate.
The last bullets, the data we created through our behaviour and what is inferred about us, are what the big social media platforms always keep out of sight when they say ‘your data’. Because that’s the data their business models run on. It’s never part of the package when you click ‘export my data’ in a platform.

The core issues aren’t about whether it is ‘my data’ in terms of control or provenance. The core issues are about what others can/cannot and will/won’t do with any data that describes me or is circumstantial to me. Regardless of in whose span of control such data resides, or where it came from.

There are also two problematic suggestions packed into the phrase ‘my data’.
One is that by saying ‘my data’ you are also made individually responsible for the data involved. While this is partly true (mostly in the sense of not carelessly leaving stuff all over webforms and accounts), almost all responsibility for the data about you resides with those using it. It’s others’ actions with data concerning you that require responsibility and accountability, and that should require your voice being taken into account. "Nothing about us, without us" holds true for data too.
The other is that ‘my data’ is easily interpreted and positioned as ownership. That is a sleight of hand. Property claims and citizen rights are very different things and different areas of law. If ‘your data’ is your property, all that is left is to haggle about price, and every context is framed as merely transactional. It’s not in my own interest to see my data or myself as a commodity. It’s not a level playing field when I’m left to negotiate my price with a global online platform. That’s so asymmetric that there’s only one possible outcome. Which is the point of suggesting ownership as opposed to framing this as human rights. Contracts are the preferred tool of the biggest party, rights that of the individual.

Saying ‘my data’ and ‘your data’ is too imprecise. Be precise, don’t let others determine the framing.

ODRL, the Open Digital Rights Language, popped up twice this week for me, and I don’t think I’ve been aware of it before. Some notes for me to start exploring.

Rights Expression Languages

Rights Expression Languages (RELs) provide a machine-readable way to convey or transfer usage conditions, rights and restrictions, granularly w.r.t. both actions and actors. This can then be added as metadata to something. ODRL is a rights expression language, and seems to be a de facto standard.

ODRL has been a W3C recommendation since 2018, and is thus part of the open web standards. ODRL has its roots in the ’00s and Digital Rights Management (DRM): the abhorred protections media companies added to music and movies, and now e-books, in ways that restrain what people can do with media they bought to well below what was possible before and commonly thought part of having bought something.

ODRL can be expressed in JSON, RDF or XML. A basic example from Wikipedia looks like this:


{
  "@context": "http://www.w3.org/ns/odrl.jsonld",
  "uid": "http://example.com/policy:001",
  "permission": [{
    "target": "http://example.com/mysong.mp3",
    "assignee": "John Doe",
    "action": "play"
  }]
}

In this JSON example a policy describes that example.com grants John permission to play mysong.

ODRL in the EU Data Space

In the shaping of the EU common market for data, aka the European common data space, it is important to be able to trace provenance and usage conditions not just for data sets, but for singular pieces of data, as they flow through use cases and applications and their output flows back into the data space.
This week I participated in a webinar by the EU Data Spaces Support Centre (DSSC) about their first blueprint of data space building blocks, and of the federation of such data spaces.

They propose ODRL as the standard to describe usage conditions throughout data spaces.
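To make that concrete, here is a minimal sketch of my own of what usage conditions in a data space setting might look like; it is not taken from the DSSC blueprint, and the asset, party and purpose URIs are hypothetical, though the vocabulary terms (Agreement, constraint, purpose, duty, attribute) come from the ODRL 2.2 recommendation:

{
  "@context": "http://www.w3.org/ns/odrl.jsonld",
  "@type": "Agreement",
  "uid": "http://example.com/policy:042",
  "permission": [{
    "target": "http://example.com/dataset/energy-readings",
    "assigner": "http://example.com/party/grid-operator",
    "assignee": "http://example.com/party/research-institute",
    "action": "use",
    "constraint": [{
      "leftOperand": "purpose",
      "operator": "eq",
      "rightOperand": "http://example.com/purpose/research"
    }],
    "duty": [{
      "action": "attribute"
    }]
  }]
}

Read as: the grid operator permits the research institute to use the dataset, but only for a research purpose, and with a duty to attribute the source. Note that such a policy only describes the conditions; making sure they are honoured as the data flows onwards is a separate matter, which is the enactment question below.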

The question of enactment

It wasn’t the first time I talked about ODRL this week. I had a conversation with Pieter Colpaert. I reached out to get some input on his current view of the landscape of civic organisations active around the EU data spaces. We also touched upon his current work at Ghent University. His research interest currently is ODRL, specifically enactment. ODRL is a REL, a rights expression language. Describing rights is one thing; enacting them in practice, in technology, processes etc., is a different thing. Next to that, how do you demonstrate that you adhere to the conditions expressed, and that you qualify for using the things described?

For the EU data space(s) this part sounds key to me, as none of the data involved is merely part of a single clear interaction like in the song example above. It’s part of a variety of flows in which actors likely don’t directly interact, and where many different data elements come together. This includes flows through applications that tap into a data space for inputs and outputs but are otherwise outside of it. Such applications include digital twins, even federated systems of digital twins, meaning a confluence of many different data and conditions across multiple domains (and thus data spaces). All this removes a piece of data light years from the neat situation where two actors share it between them in a clearly described transaction within a single-faceted use case.

Expressing the commons

It’s one thing to express restrictions or usage conditions. The DSSC in their webinar talked a lot about business models around use cases, and about ODRL as a means for a data source to stay in control throughout a piece of data’s life cycle. Luckily they stopped using the phrase ‘data ownership’ as they realised it’s not meaningful (and confusing on top of that), and focused on an actor keeping control and maintaining a say.
An open question for me is how you would express openness and the commons in ODRL. A shallow search surfaces some examples of trying to express Creative Commons or other licenses this way, but none recent.

Openness can mean an absence of certain conditions, although there may be some (like attaching the same absence of conditions to re-shared material or derivative works), which is not the same as setting explicit permissions. If I e.g. dedicate something to the public domain, an image for instance, then there are no permissions for me to grant, as I’ve removed myself from the role of being able to give permission. Yet you still want to express that, to ensure it is clear to all that this is what happened, and especially that it remains that way.
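As a rough attempt of my own, not an established pattern, a public domain dedication might be approximated as an ODRL Set policy (the policy type meant for rules without a specific assignee) granting unconstrained use to anyone; the URIs are again hypothetical:

{
  "@context": "http://www.w3.org/ns/odrl.jsonld",
  "@type": "Set",
  "uid": "http://example.com/policy:pd-001",
  "permission": [{
    "target": "http://example.com/image.jpg",
    "action": "use"
  }]
}

But this still frames me as someone granting a permission, which is precisely the role a public domain dedication removes me from, and it doesn’t express that this absence of conditions should persist for re-shared material or derivative works. That gap is what makes me doubt openness fits naturally in a permissions-oriented language.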

Part of that question is about the overlap and distinction between rights expressed in ODRL and authorship rights. You can obviously have many conditions outside of copyright, and there can be copyright elements that fall outside of what can be expressed in RELs. I wonder, for instance, how moral authorship rights (which an author in some, if not all, European jurisdictions cannot do away with) can be expressed after an author has transferred/sold the copyrights to something? Or maybe expressing authorship rights / copyrights is not what RELs are primarily for, as those are generic, and RELs may be meant for expressing conditions around a specific asset in a specific transaction. There have been various attempts to map all kinds of licenses to RELs though, so I need to explore more.

This is relevant for the EU common data spaces, as my government clients will be actors in them, bringing in open data, closed and unsharable but re-usable data, and several shades in between. The EU data strategy laws create a range of new obligations and possibilities w.r.t. data use for government, and the data space is where those become actualised. Meaning it should be possible to express the corresponding usage conditions in ODRL.

ODRL gaps?

Are there gaps in the ODRL standard w.r.t. what it can cover? Or things that are hard to express in it?
I came across one paper, ‘A critical reflection on ODRL’ (PDF; Kebede, Sileno, Van Engers, 2020), that I have yet to read, which describes some of those potential weaknesses, based on use cases in healthcare and logistics. Looking forward to digging out their specific critique.