The number and frequency of 51% attacks on blockchains is increasing, with Ethereum Classic last month becoming the first of the top 20 cryptocurrencies to be hit. Most other types of attack exploit general weaknesses in how exchanges operate, but this one strikes at something fundamental to how blockchain is supposed to work. Combined with how blockchain projects don’t seem to deliver and are basically vaporware, we’ve definitely gone from the peak of inflated expectations to the trough of disillusionment. Whether there will be a plateau of productivity remains an open question.

A team of people, including Jeremy Keith, whose writings are part of my daily RSS info diet, have been doing some awesome web archeology. Over the course of 5 days at CERN, they recreated the browser experience as it was 30 years ago with the (fully text based) WorldWideWeb application for the NeXT computer.

Hypertext’s root, the CERN page in 1989

This is the type of page I visited before inline images were possible.
The cool bit is that it allows you to see your own site as it would have looked 30 years ago. (Go to Document, then Open from full document reference, and fill in your URL.) My site looks pretty good, which is not surprising as it is very text centered anyway.

Hypertexting this blog like it’s 1989

Maybe somewhat less obvious, but of key importance to me, in the context of my own information strategies and workflows as well as in the dynamics of the current IndieWeb efforts, is that this is not just a way to view a site: you can also edit the page directly in the same window. (See the sentence in all capitals in the image below.)

Read and write, the original premise of the WWW

Hypertext wasn’t meant as viewing-only, but as an interactive way of linking together documents you were actively working on. Current wikis come closest to that. But I also use Tinderbox, for instance, a hypertext mind mapping, outlining and writing tool for Mac that incorporates this principle of linked documents and other elements that can be changed as you go along. This seamless flow between reading and writing is something I feel we need very much for effective information strategies. It is present in the Mother of all Demos, it is present in the current thinking of Aaron Parecki about his Social Reader, and it is a key element in this 30 year old browser.

Kars Alfrink pointed me to a report on AI Ethics by the Nuffield Foundation, and from it lifts a specific quote, adding:

Good to see people pointing this out: “principles alone are not enough. Instead of representing the outcome of meaningful ethical debate, to a significant degree they are just postponing it”

This postponing of things is something I encounter all the time. In general I feel that many organisations that claim to be looking at the ethics of algorithms, algorithmic fairness etc. currently don’t actually work with AI, ML or complicated algorithms at all. To me it seems they do it to place the issue of ethics well into the future, at that as yet unforeseen point when they will actually have to deal with AI and ML. That way they avoid having to look at the ethics of, and de-biasing, their current work: how they now collect and process data, and the governance processes they have.

This is not unique to AI and ML though. I’ve seen it happen with open data strategies too, where the entire open data strategy of, for instance, a local authority was based on working with universities and research entities to figure out how, decades from now, data might play a role. No energy was spent on how open data might be an instrument in dealing with actual current policy issues. Looking at future issues serves as a fig leaf to avoid dealing with current ones.

This is qualitatively different from what we see in, e.g., the climate debates, or with smoking, where there is a strong current to deny the very existence of issues. In this case it is more about being seen to solve future issues, so no one notices you’re not addressing the current ones.

Chris Corrigan last November wrote a posting “Towards the idea that complexity is a theory of change“. Questions about the ‘theory of change’ you intend to use are a regular part of project funding requests for NGOs, the international development sector and the humanitarian aid sector.

Chris’ posting kept popping up in my mind: “I really should blog about this”. But I didn’t, so for now I just link to it here. Because I think Chris is right: complexity is a theory of change. And in the projects I do that concern community stewarding, networked agency and what I call distributed digital transformation, basically anything where people are the main players, it is exactly that for me in practice. Articulating it that way is helpful.

Cutting Through Complexity
How not to deal with complexity… Overly reductionist KPMG adverts on Thames river boats

To me there seems to be something fundamentally wrong with plans I come across where companies would pay people for access to their personal data. This is not a well articulated thing, it just feels like the entire framing of the issue is off, so the next paragraphs are a first attempt to jot down a few notions.

To me it looks very much like a projection by companies on people of what companies themselves would do: treating data as an asset you own outright and then charging for access. So that those companies can keep doing what they were doing with data about you. It doesn’t strike me as taking the person behind that data as the starting point, nor their interests. The starting point of any line of reasoning needs to be the person the data is about, not the entity intending to use the data.

Those plans make data release, or consent for using it, fully transactional. There are several things intuitively wrong with this.

One thing it does is put everything in the context of single transactions between individuals like you and me, and the company wanting to use data about you. That seems to be an active attempt to distract from the notion that there’s power in numbers. Reducing it to me dealing with a company, and you dealing with them separately makes it less likely groups of people will act in concert. It also distracts from the huge power difference between me selling some data attributes to some corp on one side, and that corp amassing those attributes over wide swaths of the population on the other.

Another thing is that it implies the value is in the data you likely think of as yours: your date of birth, residence, some conscious preferences, the type of car you drive, health care issues, finances etc. But a lot of value is in data you don’t actually have but create all the time: your behaviour over time, clicks on a site, reading speed and pauses in an e-book, minutes watched in a movie, engagement with online videos, the cell towers your phone pinged, your car computer’s logs about your driving style, likes etc. It’s not that the data you think of as your own is without value, but it feels like the magician wants you to focus on the flower in his left hand, so you don’t notice what he does with his right hand.
On top of that it also means that whatever they offer to pay you will be too cheap: your data is never worth much in itself, only in aggregate. Offering to pay on an individual transaction basis is an escape for companies, not an emancipation of citizens.

One more element is the suggestion that once such a transaction has taken place everything is ok: all rights have been transferred (even if limited to a specific context and use case) and all obligations have been met. It strikes me as extremely reductionist. When it comes to copyright, authors can transfer some rights, but usually not the moral rights to their work. I feel something similar is at play here: there are moral rights attached to data that describes a person, rights which can’t be transferred when the data is transacted. Is it ok to manipulate you into a specific bubble and influence how you vote, if they paid you first for the type of stuff they needed to be able to do that to you? The EU GDPR I think takes that approach too, taking moral rights into account. It’s not about ownership of data per se, but about the rights I have if your data describes me, regardless of whether it was collected with consent.

The whole ownership notion is difficult to me in itself. As stated above, a lot of data about me is not necessarily data I am aware of creating or ‘having’, and likely don’t see a need to collect about myself. Unless paying me is meant as an incentive to start collecting stuff about me for the sole purpose of selling it to a company, which then doesn’t need my consent nor has to make the effort to collect it about me itself. There are other instances where me being the only one able to determine whether to share some data or withhold it means risks or negative impact for others. It’s why cadastral records and company beneficial ownership records are public: so you can verify that the house or company I’m trying to sell you is mine to sell, and who else has a stake or claim on the same asset, and to what amount. Similar cases might be made for new and closely guarded data, such as DNA profiles. Is it your sole individual right to keep those data closed, or does society have a reasonable claim to them, for instance in the search for the cure for cancer? All that to say that seeing data as a mere commodity is a very limited take, and that ownership of data isn’t a clear cut thing. Because of its content, as well as its provenance. And because it is digital data, it has non-rivalrous and non-excludable characteristics, making it akin to a public good. There is definitely a communal and network side to holding, sharing and processing data, currently conveniently ignored in discussions about data ownership.

In short, talking about paying for personal data and data lockers under my control seems to be a framing that presents data issues as straightforward but doesn’t solve any of data’s ethical aspects; it just pretends they’re taken care of, so that things may continue as usual. And that’s even before looking into the potential unintended consequences of payments.

Will you help us organise? We are going to organise an IndieWebCamp in Utrecht, an event to promote the use of the Open Web, and to work together on practical improvements to our own sites. We are still looking for a suitable date and venue in Utrecht, so your help is very welcome.

On the Open Web you decide for yourself what you publish, what it looks like, and who you talk with. On the Open Web you decide for yourself who and what you follow and read. The Open Web was always there, but over time we have all become more or less locked into the silos of Facebook, Twitter, and all the others. Their algorithms and timelines now determine what you read. It can be done differently. Build your own site, where no one gets in between you and your readers to generate advertising revenue. Curate your own news sources, without someone else’s algorithm locking you into a bubble. That is the IndieWeb: your content, your relationships, you are in the driver’s seat.

Frank Meeuwsen and I have long been part of the internet and that Open Web, but we also spend, or spent, a lot of time in web silos like Facebook. By now we are both active ‘returnees’ to the Open Web. Last November we attended IndieWebCamp Nürnberg together, where some twenty people discussed and actively worked on their own websites. Some programmed advanced things, but most, like myself, did small things (such as removing a link to the author of postings on this site). Small things are often hard enough. On the train ride back to the Netherlands we quickly agreed: there should be an IndieWebCamp in the Netherlands too. In Utrecht, that is, this spring.

To quote Frank:

Do the ideas of the open web and IndieWeb appeal to you? Do you want to work on a site of your own, more independent from the influence of social silos and data tracking? Do you want a news feed that is no longer primarily driven by algorithms and polarising loudmouths? Then we welcome you to two days of IndieWebCamp Utrecht.

Let us know if you want to be there.
Let us know if you can help find a venue.
Let us know how we can help you with your steps onto the Open Web.

You are invited!

Dries Buytaert, the originator of the Drupal CMS, is pulling the plug on Facebook, having made the same observation I did: reducing FB engagement leads to more blogging. A year ago he set out to reclaim his blog as a thinking-out-loud space, and now, a year on, he quits FB.

I’ve seen this in a widening group of people in my network, and I welcome it. Very much so. At the same time though, I realise that mostly we’re returning to the open web, having already been there for a long time before the silos’ Sirens lured us in, silos started by people who, like us, knew the open web. For us the open web has always been the default.

Returning to the open web is in that sense not a difficult step to make. Yes, you need to overcome the FOMO induced by the silo’s endless scrolling timeline. But after that withdrawal it is a return to the things still retained in your muscle memory. Dusting off the domain name you never let lapse anyway. Repopulating the feed reader. Finding some old blogging contacts back, and, like in the golden era of blogging, triangulating from their blogrolls and published feeds to new voices, and subscribing to them. It’s a familiar rhythm that never was truly forgotten. It’s comforting to return, and in some ways a privilege rather than a risky break from the mainstream.

It makes me wonder how we can bring others along with us. The people for whom it’s not a return, but striking out into the wilderness outside the walled garden they are familiar with. We say it’s easy to claim your own space, but is it really if you haven’t done it before? And beyond the tech basics of creating that space, what can we do to make the social aspects of that space, the network and communal aspects easier? When was the last time you helped someone get started on the open web? When was the last time I did? Where can we encounter those that want and need help getting started? Outside of education I mean, because people like Greg McVerry have been doing great work there.

The ‘on this day in earlier years‘ plugin I recently installed on this blog is already proving to be useful in the way I hoped: creating somewhat coincidental feedback loops to my earlier blogposts, self serendipity.

Last week I had lunch with Lilia and Robert, and 15 years ago today another lunch with Lilia prompted a posting on lurking in social networks / blog networks. With seventeen comments, many of them pointing to other blogposts, it’s a good example of the type of distributed conversations blogging can create. Or could, 15 years ago. Re-reading that posting now, it is still relevant to me, and a timely reminder. I think it would be worth some time to go through more of my postings about information strategies from back then, to see how they compare to now, and how they would translate to now.

Today I gave a short presentation at the Citizen Science Koppelting conference in Amersfoort. Below are the transcript and the slide deck.

I’ve worked on opening up data, mainly with governments worldwide, for the past decade. For the past two years I’ve been living in Amersfoort, and since then I’ve been a participant in the Measure Your City network, with a sensor kit. I also run a LoRaWAN gateway to provide additional infrastructure to people wanting to collect sensor data. Today I’d like to talk to you about using open data: what it is, what exists, where to find it, and how to get it. Because I think it can be a useful resource in citizen science.

What is open data? It is data that is published by whoever collected it in such a way that anyone is permitted to use it, without any legal, technical or financial barriers.

This means an open license, such as Creative Commons 0, open standards, and machine readable formats.
Anyone can publish open data, simply by making it available on the internet. And plenty of people, academics, and companies do. But mostly open data means we’re looking at government for data.

That’s because we all have a claim on our government; we are all stakeholders. We already paid for the data as well, so it’s all sunk costs, while making it available to all as infrastructure does not increase those costs much. And above all: governments have many different tasks, and therefore lots of different data, usually covering many years and of relatively good quality.

The legal framework for open data consists of two parts. The national access to information rules, in NL the WOB, which says everything government has is public, unless it is not.
And the EU initiated regulation on re-using, not just accessing, government material. That says everything that is public can be re-used, unless it can’t. Both these elements are passive, you need to request material.

A new law, the WOO, makes publication mandatory for more things. (For some parts publication is already mandated in laws, like in the WOB, the Cadastre law, and the Company Register)

Next to that there are other elements that play a role. Environmental data must be public (Aarhus Convention), and INSPIRE makes it mandatory for all EU members to publish certain geographic data. A new EU directive is in the works, making it mandatory for more organisations to publish data, and for some key data sets to be free of charge (like the company register and meteo data).

Next to the legal framework there are active Dutch policies towards more open data: the Data Agenda and the Open Government action plan.

The reason open data is important is that it allows people to do new things, and more importantly it allows new people, who did not have such access before, to do new things. It democratises data sources that were previously only available to a select few, often those big enough to be able to pay for access. This has now been a growing movement for 10-15 years.

That new agency has visible effects, economically and socially. In fact you probably already use open data on a daily basis without noticing. When you came here today by bike, you probably checked Buienradar, which is based on the open data of the KNMI. Whenever in Wikipedia you find additional facts in the right hand column, that information doesn’t come from Wikipedia but is often taken directly from government databases. The same is true for a lot of the images in Wikipedia, of monuments, historic events etc. They usually come from the open collections of national archives, etc.

When Google presents you with traffic density, like here the queues in front of the traffic lights on my way here, it’s not Google’s data. It’s government data, provided in near real-time from all the sensors in the roads. Google just taps into it, and anyone could do the same. You could do the same.

There are many big and small data sets that can be used for a new specific purpose. Like when you go to get gas for the car: you may have noticed that at manned stations it takes a few seconds for the gas pump to start? That’s because they check your license plate against the make of the car, in the RDW’s open database. Or for small practical issues: like, when looking for a new house, how much sunshine does the garden get? Or can I wear shorts today? (No!)

But more importantly for today’s discussion, it can be a powerful tool for citizen scientists as well. Such as in the public discussion about the Groningen earthquakes, where open seismological data from the KNMI allowed citizens to show that their intuition that the strength and frequency of quakes was increasing was real. Or you can use it to explore the impact of certain things or policies, like analysing the usage statistics of the Utrecht bicycle parking locations. A key role open data can play is to provide context for your own questions. Core registers serve as infrastructure; key datasets on policy domains can be the source for your analysis, or just a context or reference.

Here is a range of examples. The AHN gives you the height of everything: buildings, landscape etc.
It also allows you to track the growth of trees, or to estimate whether your roof is suitable for solar panels. This, in combination with the BAG and the TOP10NL, makes the 3D image I started with possible. It is constructed from multiple data sources: it is not a photograph but a constructed image.

The Sentinel satellites provide you with free high resolution data. Useful for icebreakers at sea, precision agriculture, forest management globally, flood prevention, plant health, and even to see whether grasslands have been damaged by feeding geese or mice. Gas mains maintainer Stedin uses this to plan preventive maintenance on the grid, by looking for soil subsidence. The same is true for dams, dikes and railroads.

It can be used to build tools that create more insight. Here decision making documents are tied to locations: 38 Amersfoort council issues are tied to De Koppel, the area we are in now. The same is true for many other subjects. The data is all there. Use it to your advantage: to map your measurements, to provide additional proof or context, to formulate better questions or hypotheses.

Maybe the data you need isn’t public yet. But it might be. So request it; it’s your right. Think about what data you need or what might be useful to you.
Be public about your data requests. Maybe we can form a Koppelting Data Team. Working with data can be hard and disappointing; doing it together goes some way to mitigate that.

[This post was created using a small hack to export the speaking notes from my slidedeck. Strangely enough, Keynote itself does not have such an option. Copying by hand takes time, by script it is just a single click. It took less than 10 minutes to clean up my notes a little bit, and then post the entire thing.]

For government data it has long been the national norm to use a Creative Commons 0 licence, or at most a Creative Commons Attribution licence. That means anyone may re-use the published materials for any purpose. For other things governments use, such as design elements from their house style, that is not the case.

The Province of Overijssel now sets a good example of how, besides data, you can also enable re-use of other material created with public money. On its site the more than 200 icons that are part of the province’s house style have been made available for re-use. When you download an icon you get it in 5 file formats (svg, ai, emf, jpg, png).

The icon set carries a Creative Commons licence: the rights remain with the Province of Overijssel, but everyone who wants to use the icons has the province’s permission to distribute, share and adapt this work.

The Province of Overijssel deserves compliments for this step.
The only caveat is that it has not yet been made explicit which Creative Commons licence applies. From the text above one can infer that derivative works are allowed, but not, for instance, whether that also covers commercial re-use, or whether you are expected to share a derivative work under the same conditions. It is also not yet clear whether attribution of the Province as the original creator is required.

I have sent the Province an e-mail asking whether they can state explicitly which Creative Commons licence applies to the icons. Hopefully, as with their data, they will use a CC0 or CC-BY licence.

One of the icons, for recreation, made available by the Province of Overijssel. (licence CC, unspecified)

Today I’m working at Library Service Fryslan to further document and detail our Networked Agency based library program Impact through Connection. This is a continuation of our work last December.

The team in a Skype conversation, which is why everyone is staring at the laptop.

We sat down to augment material and write this morning. In the afternoon we spent an hour talking to David Lankes, the director of USC’s library and information science school and the originator of the term ‘community librarian’. Jeroen de Boer, our team lead, had asked him last month for some reflection on our work. That took the shape of an extended Skype conference call this afternoon, which was very helpful.

Making our effort much more tangible, both in examples and in how we support librarians in their role in Impact through Connection, is one thing that was emphasised. The need to train librarians in the methodological aspects of this, to help them feel more comfortable in the open-ended setting we create for this project, is another. It also made us realise that some of the things we mentioned or did earlier, but which have since dropped off our radar somewhat, need to be pulled back into the center. The suggestion to create multiple parallel propositions for libraries, as a way to better engage in conversation about the level of service provided, the involvement of librarians, and the consequences different choices carry, was I think a good practical tip.

In conversation with David Lankes

Some time ago a group of organisations, including Waag Society, started the PublicSpaces initiative. Within this non-profit a group of organisations is looking for the way (back) to the internet as a common good, away from the social networking platforms that ultimately only serve a commercial purpose. How that should happen is still open as far as I can tell, although there is a manifesto, and I also don’t know to what extent distributed thinking (the invisible hand of networks) really plays a role. In any case it makes one curious. The theme is without doubt of great importance, and activity on this front is ‘brewing’ in all kinds of places. Sometimes ‘classically’ purely technical, for a single application (like Mastodon), and sometimes broader (like the Next Generation Internet). PublicSpaces apparently also takes a societal angle, and rightly so. In the past I have discussed with Marleen Stikker of Waag Society that you need a ‘new civil society’, one that has organised itself and works along the lines of our networked digital world, and is visible enough to the ‘classic’ civil society and, for instance, government.

Next week there is a meeting in Arnhem, under the title “Het internet is stuk” (“The internet is broken”), centred on PublicSpaces. Co-organised by Marco Derksen, it will have Geert-Jan Bogaerts, head of digital at the VPRO and chairman of PublicSpaces, explain the ideas and plans.

The maximum number of places has already been reached and there is a waiting list, so I won’t be there myself. But Frank Meeuwsen will be, so I expect some impressions will appear on his blog or that of Marco Derksen (who already wrote about it recently).

heat wave in bryant park
Public space is the original social media platform. Our tools still resemble it too little. (photo Laura LaRose, license CC-BY)

Donald Clark writes about the use of voice tech for learning. I find I struggle enormously with voice. While I recognise several aspects put forward in that posting as likely useful in learning settings (auto transcription, text to speech, oral traditions), there are others that remain barriers to adoption to me.

For taking in information as voice: podcasts are mentioned as a useful tool, but they don’t work for me at all. I get distracted after about 30 seconds. The voices drone on, and there’s often tons of fluff as the speaker is trying to get to the point (often a lack of preparation, I suppose). I don’t have moments in my day that I know others use to listen to podcasts: walking the dog, sitting in traffic, going for a run. Reading a transcript is very much faster, also because you get to skip the bits that don’t interest you, or reread sections that do. Which you can’t do when listening, because you don’t know when an uninteresting segment will end, or when it might segue into something of interest. And then you’ve listened to the end and can’t get those lost minutes back. (Videos have the same issue, or rather I have the same issue with videos.)

For using voice to ask or control things: there are obvious privacy issues with voice assistants. Having active microphones around, for one. Even if they are supposed to only fully activate upon the use of the wake-up word, they get triggered by false positives. And they don’t distinguish between me and other people they maybe shouldn’t respond to. A while ago I asked around in my network how people use their Google and Amazon microphones, and the consensus was that most settle on a small range of specific uses. For those it shouldn’t be necessary to have cloud processing of what those microphones tape in your living room; those could be dealt with locally, with only novel questions or instructions being processed in the cloud. (Of course that’s not the business model of these listening devices.)

A very different factor in using voice to control things, or for instance to dictate, is self-consciousness. Switching on a microphone in a meeting usually has a silencing effect. For dictation, I won’t dictate text to software at, e.g., a client’s office, or in public (like on a train). Nor will I talk to my headset while walking down the street. I might do it at home, but only if I know I’m not distracting others around me. In the cases where I did use dictation software (which nowadays works remarkably well), I found it clashes with my thinking and formulating. Ultimately it’s easier for me to shape sentences on paper or screen, where I see them take shape in front of me. When dictating it easily descends into meaninglessness, and it’s impossible to structure. Stream of thought dictation is the only bit that works somewhat, but that needs a lot of cleaning up afterwards. Judging by all the podcasts I sampled over the years, it is something that happens to more people when confronted with a microphone (see the paragraph above). Maybe if it’s something more prepared, like a lecture or presentation, it might be different, but those types of speech have usually been prepared in writing, so there is likely a written source for them already. In any case, dictation never saved me any time. It is of course very different if you don’t have the use of your hands; then dictation is your door to the world.

It makes me wonder how voice services are helping you. How are they saving you time or effort? In which cases are they more novelty than effectiveness?

Alan Levine recently posted his description of how to add an overview to your blog of postings from previous years on the same date as today. He turned it into a small WordPress plugin, allowing you to add such an overview using a shortcode wherever in your site you want it. It was something I had on my list of potential small hacks, so it was a nice coincidence my feed reader presented me with Alan’s posting on this. It has become ‘small hack’ 4.

I added his WP plugin, but it didn’t work as in the examples he provided: the overview was missing the years. It turns out a conditional that should use each posting’s year was instead given the current year, so the condition was never fulfilled. A simple change in how the year of older postings is fetched fixed it, and that fix has now been added to the plugin.
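As an illustration of the bug pattern, here is a hypothetical simplification in Python (not the plugin’s actual PHP code): grouping today-in-history posts under per-year headings only works if each post contributes its own year.

```python
from datetime import date

def on_this_day(posts, today=None):
    """Group the titles of posts published on today's month/day by year.

    posts: iterable of (published_date, title) tuples.
    The bug was comparing against the *current* year instead of each
    post's own year, so the per-year grouping never matched. The fix
    is to read the year from the post's own date, as done here.
    """
    today = today or date.today()
    by_year = {}
    for published, title in posts:
        if (published.month, published.day) == (today.month, today.day):
            # Use the post's own year, not today's.
            by_year.setdefault(published.year, []).append(title)
    return by_year
```

In the WordPress plugin the equivalent fix amounts to taking the year from the queried post inside the loop (e.g. via `get_the_date('Y')`) rather than from the current date.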

In the right hand sidebar you now find a widget listing postings from earlier years, and you can see the same on the page ‘On This Blog Today In‘. I am probably my own most frequent reader of the archives, and having older postings presented to me like this adds some serendipity.

From today’s historic postings, the one about the real time web is still relevant to me, in how I would like a social feed reader to function. And the one about a storm that kept me away from home I still remember (ah, when Jaiku was still a thing!).

Adding these old postings is as simple as adding the shortcode ‘postedtoday’:

There is 1 post found on this site published on February 22

  • February 22, 2015
    • Student’s Six Big Data Lessons Students from a minor ‘big data’ at the local university of applied sciences presented their projects a few weeks ago. As I had done a session with them on open data as a guest lecturer, I was invited to the final presentations. From those presentations in combination several things stood out for me. Things that […]

A while ago Peter wrote about energy security and how having a less reliable grid may actually improve energy security.

This is the difference between tightly coupled and loosely coupled systems. Loosely coupled systems can show more robustness, because failing parts will not break the whole. It also allows for more resilience that way: you can locally fix things that fell apart.

It may clash however with our current expectation of having electricity 24/7. Because of that expectation we don’t spend much time on being clever in our timing and usage of energy. A long time ago I provided training to a group of some 20 Iraqi water provision managers, as part of the rebuilding efforts after the US invasion of Iraq. They had all kinds of issues, obviously, and often issues arising in parallel. What I remember, connected to Peter’s post, is how they described that Iraqi citizens had adapted to the intermittent availability of electricity and water. How they made things work, at some level, by incorporating the intermittent availability of things into their routines. When there was no electricity they used water for cooling, and vice versa, for instance. A few years ago at a Border Sessions conference in The Hague, one speaker talked about resilience and intermittent energy sources too. He mentioned that historically Dutch millers had dispensation from attending church on Sundays if it was windy enough to mill.

The past few days a discussion has been taking place in Dutch newspapers about how some local solar energy plans can’t be implemented because the grid maintainers can’t deal with the inputs. This isn’t necessarily true, but rather the framing that comes with the current always-on macro grid. Tellingly, any mention of micro grids or local storage is absent from that framing.

In a different discussion with Peter Rukavina and with Peter Bihr, it was mentioned that resilience is, and needs to be, rising on the list of design principles. It’s also the reason why resilience is one of three elements of agency in my networked agency thinking.

Line 'Em Up
Power lines in Canada, photo Ian Muttoo, license CC BY SA