Back in 2022 the Belgian and other data protection boards found that IAB's 'Transparency and Consent Framework' is illegal, because it is neither transparent nor produces anything that can meaningfully be called consent. IAB is the industry club for adtech companies. Yesterday this verdict was upheld on appeal.

You know the kind of consent form from about 80% of websites: it takes one click to give away everything for the next three generations, and a day of clicking to deny consent. They need to coerce your consent to feed the tracking-based real-time bidding mechanisms behind all those ads you see if you don't use an ad blocker like a sane adult.

It was always clear that this type of behaviour does not result in freely given consent for tracking and is illegal under the GDPR. But it takes time for such things to be contested in court and affirmed before adtech corporations will admit it.

The 2022 decision now upheld on appeal (PDF in Dutch) applies immediately across the EU, and will impact IAB members such as Google, Microsoft, Amazon, X, and Automattic (WordPress) (at least, they were members back in 2022). The appeal against the decision was filed in March 2022, and the Belgian court referred several preliminary questions to the European Court of Justice, which were answered in spring 2024 and have now led to this decision.

Excellent work by the Irish Council for Civil Liberties and others.

Ceterum censeo AdTech is fundamentally incompatible with the GDPR, and needs to die.

Digital autonomy sounds attractive and necessary. In practice, product managers and IT managers make choices on grounds other than 'autonomy'. It's a classic case of having to weigh something abstract ('autonomy') against something far more concrete and familiar to the decision maker ('total cost of ownership', for instance). I often see the same in digital ethics, and in considerations regarding the GDPR. The concrete usually wins out over the abstract or general, because you are comparing different categories of things, and we are bad at that.

How do you make digital autonomy tangible enough to get it discussed at board level? By making the aspects to be compared actually comparable. Recently I got a glimpse of how that played out within a large government executive agency.

The starting point was the problem statement: choosing certain digital services purely on financial and technical grounds creates dependencies. This introduces operational and financial vulnerabilities, because decisions can be taken outside your own organisation that can directly bring your own primary operational processes to a halt.

The digital stack model from the Digital Open Strategic Autonomy framework (DOSA, 2023, Dutch Ministry of Economic Affairs) was used as an aid in the analysis.

The stack model distinguishes layers, from raw materials at the bottom, through infrastructure, up to data and applications.

The analysis looked at two axes for each of those layers: is this part of the organisation's core identity or not? And does the organisation do it itself, or not?

In this case the organisation is very active in data and applications, which are also core to who they are. But they also outsource a lot. To a lesser extent the same holds for 'soft infrastructure'. Hard infrastructure and hardware are not part of their core, let alone raw materials, and they do almost nothing themselves in those layers.

And what changed in recent years?

In the image above you can see that more has been outsourced in the areas of data, applications and soft infrastructure in recent years. At the same time they kept doing a lot themselves with respect to data and applications, but in soft infrastructure they started doing less themselves and stopped certain activities altogether.

To reduce dependencies (especially where something is core to you yet largely outsourced), you can state whether you want to reverse the change, and whether you want to do that together with others. In the image below, for example: fully focus on data yourself; for applications keep doing a lot yourself, outsource less and collaborate more in the chain; and for soft infrastructure do less yourself, collaborate more in the chain, and outsource less.

From this, a discussion more easily yields elements of an executable strategy, and more concrete considerations regarding the procurement requirements you set for others, knowledge development within the organisation, and partnerships with chain partners.

Technology, and working in technology, inherently impacts society, and must concern itself with the democratisation of access and use, and with the flow of information. My focus has always been the agency technology can provide, specifically to those who don't have such agency without it. How it can strengthen community and autonomy. Unintended consequences and externalised effects of creating and using technology always exist and impact different groups too, and need to be considered in any technology choice. My work in government data is about information and power asymmetries; my work in digital ethics more generally seeks to incorporate a wide range of other values and considerations; my work in tech regulation and standards similarly aims to enable agency and to confirm and embed values. Ethics is not about saying no to things, it's about shaping our actions towards each other. Seeing each other as part of every question, not othering others to cut them out of deliberations. Democracy is at its core.
My day to day work in my company is carried by it and my voluntary board work reflects it as well.

My work in technology has always been what I've come to call constructive activism. It's often a less visible way to enable change than, say, overtly campaigning for it. You can work in relative quiet. There are times, however, when it becomes necessary to get involved more visibly, to be seen to get involved. I increasingly feel we've been sliding into such a situation in recent years here in the West.

Defend Democracy is a young civil society organisation, working in Brussels, to strengthen democracy and defend it against eroding forces from here, from elsewhere, and from technology. As with my other voluntary board memberships, enabling agency is key here. At the Open State Foundation it's about citizens' agency based on increased government transparency and better information. At Open Nederland, the association of Dutch makers in support of Creative Commons licensing, it's about makers' autonomy over what happens to what they make and how it can contribute to society. At the ActivityClub foundation it's about enabling public discourse through a non-toxic common infrastructure (mastodon.nl, among others). At Defend Democracy it's about what those other three organisations have in common: strengthening and defending democracy.

I am joining the board of Defend Democracy as its treasurer.

I've been using Flickr.com to externally share photos since March 2005 (just before the Yahoo acquisition), and I have some 40,000 photos there from the past two decades.
Pixelfed, a federated photo sharing tool, is meant as an Instagram alternative. I created a test account on Pixelfed.social in December 2018 (profile number 4000) but never used it for anything.

More recently Pixelfed has enjoyed wider attention and has been a top download on mobile phones.

I wonder, would Pixelfed be suitable as a Flickr replacement? Does anyone treat it as such yet?

A quick exploration of the settings seems to indicate I can't, as on Flickr, share images with specific circles of contacts (e.g. designated family for pictures that have our daughter in them), or choose to not share them with anyone other than myself. Uploads are limited to 20 images at a time, I saw, although albums (Pixelfed collections) are possible. There also doesn't (yet?) seem to be a way to explore an image's EXIF or other metadata, like location.

I’m tempted to self-host a personal instance to experiment. Anyone with experience in that?

Bookmarked Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence

Finalised in June, the AI Act (EU 2024/1689) was published yesterday, 12-07-2024, and will enter into force 20 days later, on 01-08-2024. Generally the law will be applicable after 2 years, on 02-08-2026, with a few exceptions:

  • The rules on banned practices (Chapter 2) will become applicable in 6 months, on 02-02-2025, as will the general provisions (Chapter 1)
  • Parts such as the chapter on notified bodies, general purpose AI models (Chapter 5), governance (Chapter 7), and penalties (Chapter 12) will become applicable in a year, on 02-08-2025
  • Article 6 in Chapter 3, on the classification rules for high risk AI applications, will apply in 3 years, from 02-08-2027

The purpose of this Regulation is to improve the functioning of the internal market by laying down a uniform legal framework in particular for the development, the placing on the market, the putting into service and the use of artificial intelligence systems (AI systems) in the Union, in accordance with Union values, to promote the uptake of human centric and trustworthy artificial intelligence (AI) while ensuring a high level of protection of health, safety, fundamental rights as enshrined in the Charter of Fundamental Rights of the European Union (the ‘Charter’), including democracy, the rule of law and environmental protection, to protect against the harmful effects of AI systems in the Union, and to support innovation. This Regulation ensures the free movement, cross-border, of AI-based goods and services, thus preventing Member States from imposing restrictions on the development, marketing and use of AI systems, unless explicitly authorised by this Regulation.