I’m intrigued by Zettelkasten, which Roel Groeneveld describes in his blog. Zettelkasten means card file cabinet, so in and of itself it isn’t anything novel. It’s all in the described process of course, which originates with systems thinker Niklas Luhmann. I recognise the utility of having lots of small notes, and the ability to link them like beads on a necklace, much like the ‘threading cards’ I mentioned here recently.

A personal knowledge management process is extremely important, and needs to be supported by the right tools. Specifically for more easily getting from loose notions, to emergent patterns, to new constructs. Balancing stock and flow. Zettelkasten, coming from a paper age, seems rather focused on stock though, and pays less attention to flow. Crucially it encourages links between notes, a flow-like aspect, but to me the links often carry more meaning and knowledge than the notes/nodes they connect. The reason for linking, the association that makes a link apparent, is an extremely valuable piece of information. I’m not sure how that would find its place in the Zettelkasten process, as while links exist, they’re not treated as things of meaning in their own right.

Also, some of the principles of the described process, especially atomicity, seem prone to creating lots of overhead by having to rework notes taken during a day. That type of reworking is, I think, best done in the style of gardening: when you are searching for something, or passing through some notes anyway, you can add, change, link, split off, etc.

Tossed out filing card cabinets of the Manchester City Library (NH, USA). Image license CC BY-SA

In terms of tools, I am on the lookout for something other than Evernote, which I currently use. What I like about it is that it ‘eats anything’: a note can be an image, text, web page, book, pdf, or a drawing, which I can add tags to, and which I can access through scripts from e.g. my todo tool. Zettelkasten, in contrast, is fully text based. The strong point of that is that it can be built entirely from plain text files, if you have a tool that lets you create, edit, search and get an overview of them extremely fast. But very often ideas are contained in images as well, so dealing with media is key I think. The Zettelkasten tool The Archive is worth a try, but lacks precisely this type of media support. Devonthink on the other hand is way over the top, and lets one lose oneself in its complexity. The Archive keeps things simple, which is much better, but maybe too simple.

Probably the top left gives the most realistic information. Image by Brooke Novak, license CC BY

An organisation that says it wants to work data driven, and also sees ethics as a key design ingredient, needs imho to take a very close look at how it sets KPIs and other indicators. I recently came across an organisation that says those first two things, but whose process of setting indicators looks to have been left to internal teams as a naive exercise.

To begin with, indicators easily become goals in themselves, and people will start gaming the measurement system to attain the set targets. (Think of call centers picking up the phone and then disconnecting, because they are scored on the number of calls answered within 3 rings, while the length of the calls they pick up isn’t checked.)

Measurement also isn’t neutral. It’s an expression of values, regardless of whether you have articulated those values. When you measure the number of traffic deaths as an indicator of road safety, for instance, but not injuries or accidents as such, nor their locations, you’ll end up minimising traffic deaths but not maximising road safety. Because the absence of deaths isn’t the presence of road safety. Death is just one expression, albeit the most irreparable one, of the consequences of unsafety. Different measurements lead to different real-life actions and outcomes.

‘Gauges’ by Adam Kent, license CC BY

When you set indicators you need to evaluate what they cover, and more importantly what they don’t cover. To check whether the overall set of indicators is balanced, as some indicators by definition deteriorate when others improve (so balance needs to be sought). To check whether the assumptions behind indicators have been made explicit and, where needed, dealt with.

Otherwise you are bound to end up with blind spots, lack of balance, and potential injustices. The indicators you define also determine what data gets collected, and thus what your playing field is when you work in a ‘data driven’ way. That way any blind spot, lack of balance or injustice will end up even more profoundly in your decisions. Because where indicators mostly look back in time at output, data driven use of the data underlying those indicators actively determines actions, and thus (part of) future output, turning your indicators into a much more direct and sometimes even automated feedback loop.

Only if you’ve deliberately defined your true north can you use your measurements to determine the direction of your next steps. ‘Compass’ by Anthony, license CC BY-ND

With my company we have now fully moved out of Slack and into Rocket.Chat. We’re hosting our own Rocket.Chat instance on a server in an Amsterdam data center.

We had been using Slack since 2016, both for ourselves and with some network partners we work with. We never invited in (government) clients, because we couldn’t guarantee the location of the data shared. At some point we passed the free tier’s limits, meaning we’d have to upgrade to a paid plan to have access to our full history of messages.

Rocket.chat is an open source alternative that is offered as a service, but can also be self-hosted. We opted for a Rocket.chat-specific package with OwnCube. It’s an Austrian company, but our Rocket.chat instance is hosted in the Netherlands.

Slack offers a very well working export function for all your data. Rocket.chat can easily import Slack archives, including user accounts, channels and everything else.

With the move complete, we now have full control over our own data and access to our entire history. The cost of hosting (11.50 / month) is less than what Slack would charge for just 2 users on an annual plan (12.50 / month), while we have 14 users. That works out to over 85% in cost savings. Adding users, such as clients during a project, no longer raises costs either, and it will always be a better deal than Slack as long as there’s more than 1 person in the company.
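As a back-of-the-envelope check, the figures above can be worked out directly (the per-month prices are the ones mentioned in the text; the currency is assumed to be the same for both):

```python
# Rough cost comparison, using the monthly figures from the text.
# Slack's annual-plan price of 12.50/month covers 2 users, i.e. 6.25 per user.
slack_per_user = 12.50 / 2
users = 14
slack_monthly = slack_per_user * users  # what Slack would charge for 14 users
rocketchat_monthly = 11.50              # flat self-hosting cost, regardless of users

savings = 1 - rocketchat_monthly / slack_monthly
print(f"Slack: {slack_monthly:.2f}/month, Rocket.Chat: {rocketchat_monthly:.2f}/month")
print(f"Savings: {savings:.0%}")  # → Savings: 87%
```

So the “over 85%” figure is, if anything, slightly conservative, and the gap only widens as users are added.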

We did keep the name ‘slack’ as the subdomain on which our self-hosted instance resides, to ease the transition somewhat. All of us switched to the Rocket.chat desktop and mobile apps (Elmine from Storymines helping with navigating the installs and activating the accounts for those who wanted some assistance).

Visually, and in terms of user experience, it’s much the same as Slack. The only exception is the creation of bots, which requires some server-side wrangling I haven’t looked into yet.

The move to Rocket.chat is part of a path to more company-wide information hygiene (e.g. we now make sure all of us use decent password managers with the data hosted on EU servers, and the next step is running our own cloud e.g. for collaborative editing with clients and partners), and more information security.

As back in the ’00s, I’m still very much with you on the ‘KM as conversations’ front, Euan. Back then, in a conversation with Sally Bean while visiting yet another large KM event, I used the metaphor of attending an art convention and seeing nothing but vendors of paint and brushes. Luckily, reading back that event report, I see I still had valuable conversations that day.

As a timely reminder of the value of conversations to learn stuff you didn’t before, I received a touching thank you note just this afternoon before seeing your posting in my feed reader.

It was from an Irish civil servant, telling me about something major he had been involved in, in an African nation. He thanked me for a conversation we had in 2012 about crowdsourcing as a means for government agencies to collaboratively build public services with citizens. That started him on a path that helped his regional government entity, and resulted in the impactful work in an African nation mentioned. It’s humbling to know that something like that came about with some sort of contribution from my conversations seven years ago, and awesome and attentive of him to see our conversation as part of that journey.

Pretty powerful stuff, conversations and stories.

Replied to Knowledge Management, arses and elbows. (The Obvious?)

For me the purpose of KM was to make it easier to have useful conversations with people who knew stuff that you didn’t…..

In the (good and useful!) session by the VNG about the WOO at Overheid360 earlier this month, the attendees were asked several questions. The last one, when you think the implementation of the WOO will be completed, produced the photo above.

I still can’t quite get over it. The problem with the WOO is obviously that the ‘getting your information housekeeping in order’ it calls for leads to more work than a government organisation says it can handle and has budget for (or is willing to prioritise).

Everyone in the room said, according to this photo, that they will not or cannot comply with the law. Nobody said they would have things in order within 5 years, the term mentioned in the law. Two of the 34 (6%) thought they would manage it within 8 years, and they were labelled enormous optimists. The others thought it would take until 2030 (56%), or never be finished (38%).

The WOO is blamed for causing extra work. Having your information housekeeping in order, who demands such a thing, the thinking seems to be. The WOO in its current form is already a compromise, however. The first version was dismissed as unfeasible, and in the new version the legislator gives government institutions five years, plus the obligation to show that you are indeed making an effort to catch up within those five years. The second WOO is already a do-over. And not even a second chance, but a third.

Forty years ago, in 1980, the WOB came into force, which regulates openness. Since that time virtually nothing has been done to incorporate openness as a basic principle in the information housekeeping. A WOB request is still experienced as a nuisance, because then you have to go searching for where you keep your stuff. Because you never adapted your information housekeeping to handle openness requests quickly. In Norway you receive the requested information by return of post, but here a WOB request (and every request for documents, in whatever form, is a WOB request; even that realisation still hasn’t sunk in after 40 years) is always extra work, on top of your regular tasks. As if disclosure isn’t a legal task. That has always led to squabbling, and the legislator has only ever rewarded government institutions for those convulsions (for instance by removing means of coercion, other than going to court).

Now that mandatory active disclosure is getting closer, it becomes even more visible that the information housekeeping isn’t set up for it. It already wasn’t for the passive disclosure of the WOB. Some time ago I met a head of business information at a government institution who asked me, “so you’re saying that openness is defined in law?”. Yes, that’s what I said. And it has been for much longer than anyone in the session where I took the photo above has worked in government.

There are various things that have long been mandatory to actively disclose (think of decisions, permits etc.), and that works. So there’s no real reason to assume that it couldn’t also be done within five years for the list of other items the WOO names.

The slide at the top shows that people have given up before the WOO even exists.
It is apparently a very radical idea to draw up a general openness and data/information strategy that also promises to complete the implementation of the WOO neatly on time. An approach in which you see active disclosure as an opportunity. As an instrument with which you can influence the behaviour of all kinds of external stakeholders. Just as you now use funding (subsidies) and regulation to influence behaviour, disclosure is a third policy instrument. And the cheapest of the three.

It all reminds me of the image below, which applied in every one of my earlier knowledge management and change projects. “We don’t have time for fundamental improvements, because we’re already so busy with our normal work and firefighting.”

Too Busy To Improve - Performance Management - Square Wheels
Alan O’Rourke, license CC-BY

Do y’all understand how easy it is to make a fake tweet from a screenshot? Like by inspecting the browser and changing the text? …. I don’t trust posts I can’t search up on archives. And if you do have a link, archive it (not in an image but using an reputable archiving service).

Jacky Alciné’s words are true, so I thought I’d illustrate.

The general principle here is: if you make a statement about someone or something other than yourself or your personal opinions, you need to back it up with a link to supporting material. “X said on Twitter” needs to be linked to that tweet. Leaving googling for your source as an exercise for your readers isn’t merely convenient to you, it is actively destructive of the web. The web is links, and links are a key piece of information for your readers to judge whether what you tweeted/said/blogged might be signal or noise. No links means it’s likely noise, and it will degrade your standing as a source of signal. No links is aiding and abetting the bots, trolls and fakesters, as it allows them to hide in more noise.

Adding a screenshot, as Jacky Alciné says, is not enough ‘proof’, as it can easily be altered directly in your browser. An example:

Yesterday I posted my first Tweet from my recent brain implant. It was awesome! So awesome in fact, I made a screenshot of it to preserve the moment for posterity.

In reality I posted from Indigenous (see there’s a link there!), a mobile app that provides my phone with IndieWeb reading and publishing capabilities, which I syndicated to my Twitter account (see there’s another link!). Also awesome, but much less awesome than blogging from a brain implant.

The difference between those two screenshots, getting from true to fake, is that I altered the text of the Twitter website in my browser. Every browser allows you to view a website in ‘developer’ mode. This is helpful to e.g. play around with colors, to see what might work better for your site. But you can also use it to alter content; it’s all the same to your browser. See this screenshot, where I am in the process of changing ‘Indigenous’ into ‘brain implant’.
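The alteration the developer tools perform is, at bottom, nothing more than a text substitution on the page’s markup. A minimal sketch of the same idea (the HTML fragment here is a made-up stand-in, not Twitter’s actual markup):

```python
# A made-up fragment standing in for a tweet's real markup.
html = '<div class="tweet-text">Posted from Indigenous</div>'

# "Faking" the tweet is a one-line substitution. In the browser's
# developer tools the edited DOM text renders exactly like the original.
faked = html.replace("Indigenous", "my brain implant")

print(faked)  # <div class="tweet-text">Posted from my brain implant</div>
```

A screenshot of the rendered result carries no trace of the edit, which is precisely why screenshots alone prove nothing.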

But, you say, tweets might get deleted, and grabbing a screenshot is a good way of making sure I still have some proof if a tweet does get deleted. That’s true, tweets and other content do get deleted. Like the self-congratulatory tweets/VK/FB messages about the downing of MH17 by separatist-supporting accounts, before it became clear a regular line flight was shot out of the air, and those accounts were quickly scrubbed (see Bellingcat’s overview). Having a screenshot is useful, but isn’t enough. If only because the originator may simply say you faked it, as that can so easily be done in a browser (see above). You still need to provide a link.

Using the Web Archive, or another archiving site, is the solution. The Web Archive’s mission is to preserve as much of the web and other online content as possible. It is a trusted source. They save web pages on their own initiative, but you can submit any URL for preservation yourself and it will immediately be saved to the archive. Each archived page has its own URL as well, so you can always reference it. (For this reason, many links in Wikipedia point to the archived version of a page as it was at the moment it was referenced.)
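Submitting a URL yourself can even be scripted, by requesting the Web Archive’s ‘Save Page Now’ endpoint (`https://web.archive.org/save/<url>`). A minimal sketch; the example URL is hypothetical, and it’s worth checking the Archive’s current API documentation before relying on this pattern:

```python
from urllib.parse import quote

def save_page_now_url(url: str) -> str:
    """Build the Web Archive 'Save Page Now' request URL for a page."""
    # Percent-encode the target URL, keeping ':' and '/' readable.
    return "https://web.archive.org/save/" + quote(url, safe=":/")

# Fetching the resulting URL (e.g. with urllib.request.urlopen) asks
# the archive to capture the page and returns the archived copy.
print(save_page_now_url("https://example.com/my-tweet"))
# → https://web.archive.org/save/https://example.com/my-tweet
```

The archived copy then has a stable address of its own that neither you nor the original author can alter.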

I submitted my tweet from yesterday to the Web Archive, where it now has a web address that neither I, nor Twitter can change. This makes it acceptable proof of what I did in fact send out as a tweet yesterday.