That looks very interesting, Heinz. The connection between degrowth thinking and content strategies; I think we need more examples like this of how to implement or translate abstract ambitions into smaller, more everyday contexts, without falling into the trap of the 'dead great-grandfather principle*'.

Perhaps this event on 11 November in Brussels is something for you as well (I plan to attend, hopefully that works out): the SciFi Economics Lab, organised by Edgeryders. Alberto Cottica is on the organising team; you may have spoken with him last year at Elmine's birthday unconference.

I also have a very practical question about the form factor of your presentation: I always like HTML slides, because they are easy to share and use an open standard. But are you dependent on internet access during your talk, or do you have a way to show them locally on your own machine?
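In case it helps, a minimal sketch of how such a deck could be shown without a network connection, assuming the slides are plain static HTML/CSS/JS files in a local folder (the folder name and port here are made-up examples):

```python
# Minimal sketch: serve a folder of static HTML slides on localhost,
# so the deck works without internet access during the talk.
# Assumes the slides (and any JS/CSS they need) live in ./slides.
from functools import partial
from http.server import HTTPServer, SimpleHTTPRequestHandler

handler = partial(SimpleHTTPRequestHandler, directory="slides")
HTTPServer(("localhost", 8000), handler).serve_forever()
# Then open http://localhost:8000 in a browser; no network needed,
# provided the slides don't pull fonts or scripts from a CDN.
```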

* My dead great-grandfather is world champion at saving energy, water, rare earth metals, CO2 and so on. Since he died he has been saving 100%, for all eternity. Living means consuming, so 'saving' as a goal in itself is no solution; 'smart' thinking about saving in the context of (new) goals is (like LED lighting). Loosely based on Bruce Sterling at Reboot 2009.

Replied to a post by Heinz Wittenbrink (web teacher and blogger, living in Graz and sometimes in Dubrovnik).

Preparing an English version of my presentation on Content Strategy for Degrowth for our #coscamp today

For anyone working on digital transformation in government, Maike Klip's weblog about service design is worth following. Added to the feed reader. Thanks to Alper for the tip.

Read Klipklaar

Hi, I'm Maike Klip and this is my research blog. I research how the digital government can have an understanding connection with people in the Netherlands. If you are new to my blog and land in the middle of my quest, start here and I'll bring you up to speed.

A good presentation I attended this afternoon at World Summit AI 2019. Will blog about it, but bookmarking it here for now.

Read Escaping Skinner’s Box: AI and the New Era of Techno-Superstition (philosophicaldisquisitions.blogspot.com)

One of the things AI will do is re-enchant the world and kickstart a new era of techno-superstition. If not for everyone, then at least for most people who have to work with AI on a daily basis. The catch, however, is that this is not necessarily a good thing. In fact, it is something we should worry about.

Ethics is the expression of values in actual behaviour. So doing data ethics is about practical issues, and about reconsidering entrenched routines. In the past few weeks I successfully challenged some routine steps in a client's organisation, resulting in better and more ethical use of data. The provision of subsidies to individuals is arranged by specific regulations. The regulations describe the conditions and limitations for getting a subsidy, and specify a set of requirements for applying for a subsidy grant. Such subsidy regulations, once agreed, have legal status.

With the client we’re experimenting in making it vastly less of an effort for both requester and the client to process a request. As only then does it make sense to provide smaller sized subsidies to individual citizens. Currently there is a rather high lower limit for subsidies. Otherwise the costs of processing a request would be higher than the sum involved, and the administrative demands for the requester would be too big in comparison to the benefits received. Such a situation typically leads to low uptake of the available funding, and ineffective spending, which both make the intended impact lower (in this case reducing energy usage and CO2 emissions).

In a regular situation the drafting of the regulation and the later creation of an application form would be fully separate steps, and the form would probably blindly do what the regulation implies or demands, and also introduce some overshoot out of caution.

Our approach was different. I took the regulation and lifted out every criterion that requires some sort of test, and every demand that needs a piece of information or data. Next, for each of those criteria and demands I marked what data would satisfy them, the different ways that data could be collected, and what role it plays in the process. The final step is listing the fields needed in the form and/or those suggested by the form designers, and determining how filling those fields can be made easier for an applicant (e.g. by providing pick lists).
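To make this mapping concrete, here is a minimal sketch of how such an overview could be expressed as a simple data structure; the criteria, data items and field names are hypothetical examples, and in practice this was a drawn overview rather than code:

```python
# Hypothetical sketch: link each criterion in the regulation to the data
# that satisfies it, how that data can be collected, and the form field(s)
# that would carry it. All names below are made-up examples.
criteria = {
    "applicant owns the dwelling": {
        "data": "ownership status",
        "collection": "land registry lookup based on address",
        "fields": ["address"],
    },
    "measure is on the approved list": {
        "data": "type of energy-saving measure",
        "collection": "applicant selects from a pick list",
        "fields": ["measure_type"],
    },
}

form_fields = ["address", "measure_type", "phone_number"]

# Fields that no criterion needs are candidates for removal from the form.
needed = {field for c in criteria.values() for field in c["fields"]}
unneeded = [f for f in form_fields if f not in needed]
print("Fields without a basis in the regulation:", unneeded)
```

Read the other way around, the same structure shows which fields and data flows are affected when a criterion in the regulation changes.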

A representation of the steps taken / overview drawn

What this drawing of connections allows is asking questions about the need and desirability of collecting a specific piece of data. It also makes visible what it means to change a field in the form, in terms of how well the form complies with the regulation, and which fields and data flows need to change when you change the regulation.

Allowing these questions to be asked led to the realisation that several hard demands for information in the draft regulation actually play no role in determining eligibility for the subsidy involved (they were simply a holdover from another regulation that was used as a template, and something the drafters thought was 'nice to have'). As we were involved early, we could still influence the draft regulation, and those unneeded hard demands were removed just before the regulation came up for an approval vote. Now that we are designing the form, it also allows us to ask whether a field is really needed, where the organisation is being overcautious about an unlikely scenario of abuse, or where a field does not match an actual requirement in the regulation.

Questioning the need for specific data, showing how collecting it would complicate the client's work because it comes with added responsibilities, and being able to ask those questions before the regulation was set in stone allowed us to end up with a more responsible approach that simultaneously reduced the administrative hoops for both applicant and client to jump through. The more ethical approach is now also the more efficient and effective one. But only because we were there at the start: had we asked those questions after the regulation was set, they would have increased the costs of doing the ethically better thing.

The tangible steps taken are small, but with real impact, even if that impact would likely only become manifest if we hadn’t taken those steps. Things that have less friction get noticed less. Baby steps for data ethics, therefore, but I call it a win.

Paper sales. Doing this online is a neighbouring right in the new EU Copyright Directive. Photo by Alper, license CC BY

A move that surprises absolutely no one: Google won't pay French publishers for snippets. France is the first EU country to transpose the new EU Copyright Directive into national law. The directive contains a new neighbouring right that says that if you link to something with a snippet of that link's content (e.g. a news link with the first paragraph of the news item), you need to seek permission to do so, and that permission may come with a charge. In the run-up to the directive this was dubbed the 'link tax', although that falsely suggests it concerns any type of hyperlinking.
Google, not wanting to pay publishers for the right to use snippets with their links, will stop using snippets with those links.

Reading the newspaper. Photo by Nicolas Alejandro, license CC BY

Ironically the link at the top is to a publisher, Axel Springer, that lobbied intensively for the EU Copyright Directive to contain this neighbouring right. Axel Springer is also why we knew with certainty up front that this part of the Copyright Directive would fail. Years ago (2013), after lobbying by the same Axel Springer publishing house, Germany created this same neighbouring right in its copyright law. Google refused to buy a license and stopped using snippets. Axel Springer saw its traffic from search results drop by 40%, others by 80%. They soon caved and provided Google with a free-of-charge license, to recoup some of the traffic to their sites.

Read news. Photo by CiaoHo, license CC BY

This element of the law failed in Germany, and it failed in Spain in 2015 as well. Axel Springer, far from being discouraged, touted this as proof that Google needed to be regulated, and continued lobbying for the same provision to be included in the EU Copyright Directive. With success, despite everyone else explaining how it wouldn't work there either. It comes as no surprise, therefore, that now that the Copyright Directive is entering French law, it has the exact same effect. Wait for French publishers to not exercise their new neighbouring rights in 3, 2, 1…

Week 32/52.2012. Photo by The JH Photography, license CC BY

News publishers have problems, I agree. Extorting anyone linking to them is no way to save their business model though (dropping toxic adtech, however, might actually help). It will simply mean less effective links to them, resulting in less traffic, in turn resulting in even less advertising revenue for them (a loss exceeding any revenue they might hope to get from link snippet licenses). This does not demonstrate the monopoly of Google (though I don't deny its real dominance); it demonstrates that you can't have your cake and eat it (determining how others link to you and getting paid for it, while keeping all your traffic as is), and it doesn't change the fact that news as a format is toast.

BELGIUM. Photo by Willy Verhulst, license CC BY ND

The discussion about nitrogen and the role of farmers is only partly about whether or not the figures are transparent.

Because a lot is already transparent, such as the national calculation model used to determine nitrogen deposition: the code of Aerius is available on GitLab. Data is being shared.

Two things are central for me.

On a practical level, a calculation model is not the same as measurements. Just as with noise pollution and noise zones, very little is actually measured; mostly things are calculated. Calculations are not facts, they are assumptions expressed in numbers. The numbers merely suggest factuality, because we have learned to take numbers more seriously than qualitative indications like "a little", "a lot", "more" or "less". For years this has led to frustration and practical problems, for instance for provinces issuing permits. I come across this often in my work. With a national calculation model you cannot determine the actual impact of a local point source of nitrogen (a factory chimney or a pig barn, for example), or what happens if you move it closer to or further away from a nature reserve. The farmers have a very valid point here. There is no reason why we couldn't actually measure much more, instead of merely calculating.

On a strategic level something more fundamental is going on. If you make choices, you also have to explain and accept the consequences of those choices. The national government does not do that. Limiting emissions means limiting behaviour, and that limitation is not accepted by the national government. Whether it concerns farmers, the number of flights at Schiphol, whether or not to open Lelystad as an additional airport, driving cars, whether or not to build an extra Maasvlakte, whether or not to build that residential area, and so on. The only thing that happened was the promise of future changes in behaviour. The PAS promised future compensation measures so that everything could continue as it already was. Choosing and regulating on paper but not changing anything in practice, not choosing what you will henceforth no longer do. Everything swept away with the hope of a future deus ex machina that fixes it all and makes hard choices now unnecessary.

We can bicker about the figures all we want, as the farmers are now doing with the RIVM. But in the end that solves nothing, as long as you don't accept that choosing has consequences.

Maasvlakte, by Albert Koch, license CC BY ND