Public Spaces is an effort to reshape the internet experience towards a much larger emphasis on, well, public spaces. Currently most online public debate takes place in silos provided by monopolistic corporations, where public values will always be trumped by value extraction, regardless of the externalised costs to communities, ethics, and society. Today the Public Spaces 2022 conference took place. I watched the 2021 edition online, but this time decided to be in the room, to have time to interact with other participants and see who see themselves as part of this effort. Public Spaces is supported by 50 or so organisations, one of which I’m a board member of. Despite that nominal involvement I am still somewhat unclear about what the purpose of Public Spaces is as a movement, rather than as an intention. This first day of the 2-day conference didn’t make that clearer to me, but the actual sessions and conversations were definitely worthwhile.

Some first observations that I jotted down on the way home, below the photo taken just before the start of the conference.

a) In the audience and on stage there were some familiar faces, but mostly people unknown to me. That’s a good thing, as it demonstrates how many new entrants into these discussions there are. At the same time there was also a notable absence of faces, e.g. from the organisations that are part of the Public Spaces effort. Maybe they would rather show up tomorrow, when the deputy minister is also present. As an awareness raising exercise, despite this still being a rather niche and like-minded audience, this conference is certainly valuable.

b) That value was I think mostly expressed by the attention given to explaining some of the newly agreed European laws: the Digital Markets Act, Digital Services Act and Data Governance Act. For most of the audience this seemed to be the first actual encounter with what those laws say, and one panel moderator, upon hearing their contents, was visibly surprised that this was already decided regulation and not stuck somewhere in a long and slow pipeline of debate and lobbying.

c) It was very good to hear people on stage actually speaking enthusiastically about the things these new laws deliver, despite being cautious about the pace of implementation and when we’ll see the actual impact of these rules. Lotje Beek of Bits of Freedom was enthusiastic about the Digital Services Act and I applaud the work BoF has done in the past years on this. (Disclosure: I’m on the board of an NGO that joins forces with BoF and Waag, organiser of this conference, in the so-called Digital Four, which lobbies the Dutch government on digital affairs.)
Similarly Kim van Sparrentak, MEP for the Greens, talked with energy about the Digital Markets Act. This was very important I think, and helps encourage the audience to engage with these new laws and the tools they provide.

d) The opening talk by Miriam Rasch I enjoyed a lot. Her earlier book Friction, I felt, seemingly lacked some deeper understanding of the technologies involved to build its conclusions and arguments on, so I was interested in hearing her talk in person. The focus today was more on her second book, Autonomy. I bought it and will read it, also to clarify whether some of the things I think I heard are my misunderstanding or part of the ideas expressed in the book. Rasch positions autonomy as the key thing to guard and strengthen. She doesn’t mean autonomy in the sense of being fully disconnected from everyone else in your decisions, but in a more interdependent way: making your own choices within the network of relationships around you. Also as an emotionally rooted thing, which I thought is a useful insight. She does, however, position it as something exclusively individual. At the same time it seems she equates autonomy with agency, and I think agency does not merely reside at the individual level but also in groups of relationships in a given context (I call it networked agency). It seemed a very westernised, individualistic viewpoint, which I think sets you up for less autonomy because it pits you individually against the much bigger systems and structures that erode your autonomy, dumping you in a very asymmetric power struggle. A second thing that stood out to me is how she expresses the me-against-the-system issue as one of autonomy versus automation. It’s a nice alliteration, but I don’t accept that juxtaposition. It’s definitely the case that automation is frequently used to dehumanise lots of decisions, thus eroding the autonomy of those being decided about. But to me that’s not inherent in automation. When you have the logic of (corporate) bureaucracies doing the automation, you’ll end up with automation that mimics that logic. If I do the automation, it will mimic my logic. I use automation a lot for my own purposes, and it increases my agency; it’s a direct expression of my autonomy (or that of the groups I’m part of). There’s more to be said, in a separate post, among other things about the 3 or 4 thinking exercises she took us through to explore autonomy as a concept for ourselves. After all, it wouldn’t make us more autonomous if she prescribed us her definition of autonomy, precisely because she underscores that it’s not a purely rational concept but an emotional one as well.

e) Prof. Tamar Sharon of Radboud University spoke about the influence tech companies have in domains other than tech itself, because their technology is expanded into or used in those domains, such as health, education, spatial planning, and media. She calls these sphere transgressions. This may bring value, but may also be problematic. She showed a very cool tool that visualises how various tech companies are influential in domains you don’t immediately associate them with. A good thinking aid, I think, also for the upcoming discussion about sectoral European data spaces, and for staying alert to the pitfall of it turning into a tech-dominated discussion rather than one about societal benefit and impact.

f) Kudos to the conference organisers. Every panel composition was nicely balanced, which shows good care in curating the program and having tapped into a high quality network. I know from experience that it takes deliberate effort to make it so. Also, the catering was fully vegetarian and vegan, no words wasted on it, just by default. That’s the way to go.

Matt Webb has been keeping UnOffice hours for a few years: a few timeslots in his week during which anyone can come by and talk to him. Several people in my network have similarly opened parts of their weekly schedule for others to be able to plan a conversation with them. Using a tool like Calendly saves the back and forth of finding a time. More importantly, it is a clear signal that you don’t have to ask whether it’s ok to have a conversation. You can just go ahead and plan it if you want to talk to them.

I like that idea. A few times in the past I’ve mailed a selection of my own contacts to ask them for a conversation, just to catch up and hear what they are doing. It always leads to some new insights or connections, and sometimes it generates a next step. It’s a serendipity aid.

As an experiment I’ve created a schedule in which anyone can book a conversation on Wednesday afternoons (Central European Time). You can find the link to my Calendly schedule in the right hand side bar.

Bookmarked Data altruism: how the EU is screwing up a good idea (by Winfried Veil)

I find this an unconvincing critique of the data altruism concept in the new EU Data Governance Act (caveat: the final consolidated text of the new law has not been published yet).

“If the EU had truly wanted to facilitate processing of personal data for altruistic purposes, it could have lifted the requirements of the GDPR”
GDPR slackened for common good purposes? Let’s loosen citizen rights requirements? It assumes common good purposes can be defined well enough not to endanger citizen rights, turtles all the way down. The GDPR is a foundational block, one with which the author, some googling shows, is disappointed, having had some first-hand experience in its writing process. The GDPR is a quality assurance instrument, meaning that, like ISO-style QA systems, it doesn’t make anything impossible or disallowed per se, but it does require you to organise things responsibly upfront. That most organisations have implemented it as a compliance checklist to be applied post hoc is, to me, the primary reason it is perceived as a “straitjacket” and for the GDPR-related breaches that do occur.
It is also worth noting that data altruism covers data that is not covered by the GDPR. It’s not just about personally identifiable data, but also about otherwise non-public or confidential organisational data.

The article suggests the DGA makes it harder for data altruistic entities to do something that anyone can already do under the GDPR, by adding even more rules.
The GDPR pertains to the grounds for data collection in the context of usage specified at the time of collection, whereas data altruism is also aimed at non-specified and not yet known future use of data collected here and now. As such it covers an element the GDPR leaves unaddressed, and offers a path out of the purpose binding the GDPR stipulates. It’s no surprise that a data altruism entity needs to comply with both the GDPR and a new set of rules, because those additional rules do not add to the GDPR responsibilities but cover other activities. The type of entities envisioned already exists in the Netherlands: common good oriented entities called public benefit organisations, ANBI‘s. These too do not absolve you from other legal obligations, or loosen the rules for you. On the contrary, these too have additional (public) accountability requirements, similar to those described in the DGA (centrally registered, must publish annual reports). The DGA creates ANBI’s for data, Data-ANBI’s. I’ve been involved in data projects that could have benefited from that possibility but that in the end never happened, because it couldn’t be made to work without this legal instrument.

To me the biggest blind spot in the criticism is that each of the examples cited as probably more hindered than helped by the new rules is a single project that sets up its own data collection process. That is what I think data altruism is least useful for. You won’t be setting up a data altruism entity for your own project, because by then you already know what you want the data for and start collecting it after designing the project. Data altruism is useful for a general purpose data holding entity, without pre-existing project designs, where later, with the data already collected, projects such as the cited examples will be applicants to use the data held. A data altruistic entity will not cater to or be created for a single project, but will serve data as a utility service to many projects. I envision that universities, or better yet networks of universities, will set up their own data altruistic entities, to cater to e.g. medical or social research in general. This is useful because there currently are many examples where leaving the handling of data requirements to the research team is the source of not just GDPR breaches but also other ethical problems with data use. It will save individual projects such as the examples mentioned a lot of time and hassle if there are one or more fitting data altruistic entities for them to go to as a data source. There will then be no need for data collection, no need to obtain your own consent or other grounds for data collection from each single respondent, or to create enough trust in your project. All that will be reduced to guaranteeing your responsible data use and convincing an ethics board that you have set up your project in a responsible way, so that you get access to pre-existing data sources with pre-existing trust structures.

It seems to me the sentences cited below require much more thorough argumentation than the article and accompanying PDF try to provide. Ever since I’ve been involved in open data I’ve seen plenty of data innovations, especially if you switch your ‘only unicorns count’ filter off. Barriers that unintentionally do exist typically stem more from the lack of a unified market for data in Europe, something the DGA (and the GDPR) is actually aimed at.

“So long as the anti-processing straitjacket of the GDPR is not loosened even a little for altruistic purposes, there will be little hope for data innovations from Europe.” “In any case, the EU’s bureaucratic ideas threaten to stifle any altruism.”

Winfried Veil

Kitty Kilian is blogging a few lovely conversations with venerable “Bloggers van Toen” (Bloggers of Back Then), such as Elja Daae and Merel Roze. Back Then is 20 years ago, the moment when I had just stopped building my web pages entirely by hand in Notepad and started using an actual blogging tool. What strikes me is that these conversations of today don’t seem much different from the discussions of back then, when it comes to the search for True Blogging.

About performativity and, or rather versus, authenticity. Business or personal blogging, or a mix of the two; publication or conversation. Under your company’s name, or, as Frank Meeuwsen calls it, on your own ‘mothership’, in a personal capacity on your own domain.
About writing, above all writing more and writing regularly. How you find your way back to writing, and how you tear down the barriers to hitting the ‘publish’ button. Barriers concerning quality, which easily rise as you gradually start imposing unspoken demands on what you publish. Demands you place on yourself in response to who you know reads your weblog, or in response to how many people read it. Demands on yourself that also easily grow heavier as you slowly blog less. If, after a month or several months, you finally blog something again, it then has to be a blinding insight right away. Why else break the expanding silence?
How persistence also has qualitative meaning. Elja touches on that nicely: writing often enough for long enough leads to an ‘enormous’ blog, she says. The whole corpus of blog posts gains an extra layer of meaning precisely because it is a corpus. Quantity generates qualitative effects. You only reach that quantity by doing something long enough, such as continuing to blog since the start of Back Then. Over time your blog becomes your avatar; it is simply inevitable, however coolly businesslike you might keep it. It is also why interesting conversations about blogging with “Bloggers van Toen” are possible at all.

That there are fewer interesting bloggers now than Back Then, as suggested in the conversation with Elja, is something I simply don’t buy. Back Then there weren’t fewer interesting websites either than ten years before that, when all existing websites could still be kept in a list on another website (in a weblog, say). Even though some said so at the time. Curation may have become harder, and no doubt there are crowds of people re-sharing toxic memes on Facebook every day who could have been writing wonderful blog posts of their own. The opportunity costs of FB are phenomenal, orders of magnitude worse than watching TV. When I started blogging more again, a good 4 years ago (having banned FB), many Blogs of Back Then in my feed reader had fallen silent and withered away. By now I follow hundreds of bloggers again, some still from Back Then like Elja and Frank, some returning after Back Then, but most from Long After Back Then. Not everyone can really write well, see this very blog, but in my experience there is much more variety to choose from. There are far more bloggers than Back Then, no surprise with almost 5 billion more internet users. So statistically it is inescapable that there are also far more good bloggers. All those bloggers together just don’t give you that feeling of Back Then, when the blogiverse was easy to take in and therefore felt sheltered. Maybe there are fewer bloggers in the Netherlands, that could well be, although this month I added a 15-year-old beginning blogger from the Netherlands to my feed reader. Here we are generally more about the transaction than the conversation, and I think you see that reflected in a conversational medium like weblogs. But that was no different Back Then.

If the Bloggers van Toen from 200X are also the Bloggers of Later in 2040, would we still be talking and writing about it like this? Or have our personal AI assistant write about it for us?

Meta-blogging, blogging about blogging, has been a category on my blog for 20 years already. Every now and then I weed my little list of categories here, because terms change, or because I touch on certain themes less often. I think I can keep meta-blogging as a category for a while yet.

HT Frank Meeuwsen, likewise a “Blogger van Toen” (and of now, of course, otherwise I wouldn’t have found these conversations through him and couldn’t have mentioned them here).

Looking forward to reading Data Mesh by Zhamak Dehghani. She published this book last March with O’Reilly, but originated the concept in 2018. Received the book yesterday.
She presents data mesh as a decentralized sociotechnical paradigm, drawn from modern distributed architecture, that provides a new approach to sourcing, sharing, accessing, and managing analytical data at scale. In other words, moving beyond traditional data warehouses and lakes to a distributed set-up.

I think this is a key concept for the creation of the EU dataspace(s), where combining and working with data is envisioned as not requiring centralising that data, nor even having a copy of the data for your application. Only that way, through things like multi-party computation and bringing your model to the data rather than the other way around, is it possible to bring data into play at European scale that is sensitive, personal or non-public. Only that way is it possible to work with data across domains and across a diversity of data holding or using stakeholders. In my work on a national reference architecture for federated digital twins, the concept was applied to the data parts of that reference architecture.
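To make the ‘bring your model to the data’ idea a bit more concrete, here’s a minimal sketch of my own (not from Dehghani’s book, and with entirely hypothetical names): each data holder runs a submitted computation locally, and only the aggregate result leaves its domain, never the underlying records.

```python
# Minimal sketch: the computation travels to the data, the data stays put.
# All names are hypothetical; this only illustrates the interaction pattern.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class DataHolder:
    """A domain that keeps its (sensitive) records local."""
    name: str
    _records: List[dict]  # never leaves this object

    def run_local(self, computation: Callable[[List[dict]], dict]) -> dict:
        # The submitted computation runs here; only its result travels back.
        return computation(self._records)


def average_age(records: List[dict]) -> dict:
    """An example computation a research project might submit."""
    ages = [r["age"] for r in records]
    return {"count": len(ages), "sum": sum(ages)}


def federated_average(holders: List[DataHolder]) -> float:
    """Combine per-holder aggregates without ever pooling raw records."""
    partials = [h.run_local(average_age) for h in holders]
    total = sum(p["sum"] for p in partials)
    count = sum(p["count"] for p in partials)
    return total / count


if __name__ == "__main__":
    hospital = DataHolder("hospital", [{"age": 34}, {"age": 61}])
    municipality = DataHolder("municipality", [{"age": 47}, {"age": 52}, {"age": 29}])
    print(federated_average([hospital, municipality]))  # 44.6
```

In a real dataspace the holders would of course sit behind APIs and policy enforcement rather than in one Python process, but the shape of the interaction is the same: the computation travels, the data stays put.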

I’m taking the liberty of putting three questions to Chris Aldrich about his Hypothes.is experiences, after reading Annotation by Remi Kalir and Antero Garcia. Kalir and Garcia make much of the social affordances that annotation can provide: where annotation is not an individual activity, jotting down marginalia in solitude, but a dialogue between multiple annotators in the now, or an incremental addition to annotators from the past. Like my blog posts are an ongoing conversation with the world as well. Hypothes.is is one of the tools mentioned that make such social annotating possible. I am much more used to annotating individually (except for shared work documents), where my notes are my own and for my own learning. Yet I follow Chris Aldrich’s use of Hypothes.is with interest, and his RSS feed of annotations is highly interesting, so there is a clear sign that there can be benefit in social annotation. In order to better understand Chris’s experience I have three questions:

1. How do you beat the silo?

Annotations are anchored to the annotated text. Yet in my own note making flow, I lift them away from the source text into my networked set of notions and notes, in which emergent structures produce my personal learning. I do maintain a link to the right spot in the source text. Tools like Hypothes.is are designed as silos to ensure that their social features work. How do you get your annotations into the rest of your workflow for notes and learning? How do you prevent your social annotation tool from becoming yet another separate place where you keep stuff, cutting off the connections to the rest of your work and learning that would make it valuable?
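For context on how I imagine beating the silo myself: Hypothes.is exposes annotations through a web API, so one route is to periodically pull them into your own notes. Below is a rough, untested sketch of that idea; the account name and token are placeholders, and the exact response fields are based on my reading of the API documentation, so treat them as assumptions.

```python
# Rough sketch: pull my Hypothes.is annotations via its search API and
# turn each into a small Markdown fragment for my own notes collection.

import json
import urllib.parse
import urllib.request

API = "https://api.hypothes.is/api/search"
USER = "acct:yourname@hypothes.is"  # placeholder account
TOKEN = "your-api-token"            # needed for private annotations


def fetch_annotations(limit: int = 50) -> list:
    query = urllib.parse.urlencode({"user": USER, "limit": limit})
    req = urllib.request.Request(
        f"{API}?{query}", headers={"Authorization": f"Bearer {TOKEN}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["rows"]


def to_markdown(annotation: dict) -> str:
    # Pull out the quoted passage, if the annotation anchors to one.
    quote = ""
    for target in annotation.get("target", []):
        for selector in target.get("selector", []):
            if selector.get("type") == "TextQuoteSelector":
                quote = selector.get("exact", "")
    note = annotation.get("text", "")
    uri = annotation.get("uri", "")
    return f"> {quote}\n\n{note}\n\nSource: {uri}\n"


if __name__ == "__main__":
    for a in fetch_annotations():
        print(to_markdown(a))
```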

2. What influence does annotating with an audience have on how you annotate?

My annotations and notes generally are fragile things, tentative formulations, or shortened formulations that have meaning because of what they point to (in my network of notes and thoughts), not so much because of their wording. Likewise my notes and notions read differently than my blog posts, because my blog posts have an audience, while my notes/notions are half of an internal dialogue with myself. Were I to annotate in the knowledge that it would be public, I would write very differently: it would be more of a performance, less a probing forwards in my thoughts. I remember that publicly shared bookmarks with notes in Delicious already had that effect on me. Do you annotate differently in public view, self-censoring or self-editing?

3. Who are you annotating with?

Learning usually needs a certain degree of protection, a safe space. Groups can provide that, but public space often less so. In Hypothes.is, who are you annotating with? Everybody? Specific groups of learners? Just yourself and one or two others? All of the above, depending on the text you’re annotating? How granular is your control over sharing with groups, so that you can choose your level of learning safety?

Obviously, not just Chris is invited to comment on these questions. You’re all invited.


Opticks, with marginalia, image by Open Library, license CC BY