It obviously makes no sense to block the mail system because you disagree with some of the letters sent. The deceptive method of blocking used here, targeting the back-end servers so that mail traffic simply gets ignored while Russian ProtonMail users seemingly can still access the service, is another sign that they’d rather you not know blocking is going on at all. This is an action against end-to-end encryption.

The obvious answer is to use more end-to-end encryption, and so increase the cost of surveillance and repression. Use my ProtonMail address as listed on the right, or contact me via PGP using my public key (also on the right). Other means of reaching me with end-to-end encryption are the messaging apps Signal and Threema, as well as Keybase (listed on the right as well).

Bookmarked Russia blocks encrypted email provider ProtonMail (TechCrunch)
Russia has told internet providers to enforce a block against encrypted email provider ProtonMail, the company’s chief has confirmed. The block was ordered by the state Federal Security Service, formerly the KGB, according to a Russian-language blog, which obtained and published the order aft…

Aral Balkan talks about how to design tools and find ways around the big social media platforms. He calls for the design and implementation of Small Tech. I fully agree. Technology to provide us with agency needs to be not just small, but smaller than us, i.e. within the scope of control of the group of people deploying a technology or method.

My original fascination with social media, back in the ’00s when it was mostly blogs and wikis, was precisely because it was smaller than us: it pushed publication and sharing into the hands of all of us, allowing distributed conversations. The concentration of our interaction in the big tech platforms made social media ‘bigger than us’ again. We don’t decide what FB shows us, and breaking out of our own bubble (vital in healthy networks) becomes harder because sharing is based on pre-existing ‘friendships’ and discoverability has been removed. The erosion has been slow, but very visible. Networked Agency, to me, is only possible with small tech, and small methods. It’s why I find most ‘digital transformation’ efforts disappointing, and feel we need to focus much more on human digital networks, on distributed digital transformation. Based on federated small tech, networks of small tech instances. Where our tools are useful on their own, and more useful in concert with others.

Aral’s posting (and blog in general) is worth a read, and as he is a coder and designer, he acts on those notions too.

This weekend an online virtual IndieWebCamp took place. One of the topics discussed and worked on has my strong interest: making it possible to authorise selective access to posts.

Imagine me writing here about my intended travel. This I would want to share with my network, but not necessarily announce publicly until after the fact. Similarly, if I want some of those reading here to get an update about our little one, I’d want to be able to indicate who can have access to that posting.

In a platform like FB, and previously on Google Plus with its circles, you can select audiences for specific postings. Especially the circles in Google Plus allowed fine-grained control. On my blog it is much less obvious how to do that. Yet there are IndieWeb components that would allow this. For instance, IndieAuth already allows you to log in to this website and other platforms using your own URL (much like Facebook’s login can be used on multiple sites, although you really don’t want to do that as it lets FB track you across the other sites you use). However, for reading individual postings with restricted access, it would require an action by a human (accepting the authorisation request), which makes it impractical. Enter AutoAuth, based on IndieAuth, which allows your site to log in to mine without human intervention.
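To make that concrete, here is a minimal sketch of the server-side decision such a set-up implies, assuming bearer tokens have already been issued to trusted reader sites via an IndieAuth/AutoAuth-style flow. All the names and data structures here are hypothetical illustrations, not part of either spec:

```python
# Sketch: deciding whether a request may read a restricted post, assuming
# an IndieAuth/AutoAuth-style flow has already issued bearer tokens to
# trusted reader sites. All names here are hypothetical.

# Tokens issued earlier, mapping token -> the reader site it was issued to
ISSUED_TOKENS = {
    "token-abc123": "https://reader.example/",
}

# Which audience groups each reader site belongs to
AUDIENCES = {
    "https://reader.example/": {"friends"},
}

def may_read(auth_header: str, required_audience: str) -> bool:
    """Decide whether a request may read a post restricted to an audience."""
    if not auth_header.startswith("Bearer "):
        return False
    token = auth_header[len("Bearer "):]
    reader = ISSUED_TOKENS.get(token)
    if reader is None:
        return False
    return required_audience in AUDIENCES.get(reader, set())
```

The point of AutoAuth is that obtaining such a token happens site-to-site, without me clicking an approval button for every read.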

Martijn van de Ven and Sven Knebel worked on this, as sketched out in the graph below.

Selective access to content inside a posting
Now, once this is working, I’d like to take it one step further still. The above still assumes I have postings for all and postings for some, and that implies writing entire postings with a specific audience in mind. More often I find I am deliberately vague on some details in my public postings, even though I know some of my network reading here can be trusted with the specifics. Like names, places, photos etc. In those instances writing another posting with that detailed info for restricted access does not make much sense. I’d want to be able to restrict access to specific sentences, paragraphs or details in an otherwise public posting.

This is akin to the way government document management systems are slowly being adapted: specific parts of a document are protected by data protection laws, while the document itself is public by law. Currently, balancing those two obligations means human intervention before sharing, but slowly systems are being adapted to know where in documents restricted-access material is located. Ideally I want a way of marking up text in a posting so that it is only sent out by the webserver when an authorisation as sketched above is available.

So that a posting like this is entirely possible:

“Today we went to the zoo with the < general access >little one< / general access >< friends only >our little one’s name< / friends only >

< general access >general IMAGE of zoo visit< / general access >
< friends only >IMAGE with little one’s face< / friends only >”
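A sketch of how a webserver could filter such marked-up spans before sending a posting out, keeping only what the requesting reader is authorised for. The tag syntax mirrors the hypothetical markup above (without the spaces) and is not any existing standard:

```python
import re

# Matches spans like <friends only>...</friends only>; the tag names are
# the hypothetical audience markers from the example posting above.
SPAN_RE = re.compile(r"<(?P<aud>[\w ]+?)>(?P<body>.*?)</(?P=aud)>", re.DOTALL)

def filter_post(source: str, reader_audiences: set) -> str:
    """Keep the marked spans the reader is authorised for, drop the rest."""
    def keep_or_drop(match):
        if match.group("aud") in reader_audiences:
            return match.group("body")
        return ""
    return SPAN_RE.sub(keep_or_drop, source)
```

So a friend with the right authorisation gets the full text, while everyone else receives the posting with the restricted spans silently removed.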

There were several points made in the conversation after my presentation yesterday at Open Belgium 2019. This is a brief overview to capture them here.

1) One remark was about the balance between privacy and openness, and asking about (negative) privacy impacts.

The framework assumes government as the party interested in measurement (given that that was the assignment for which it was created). Government-held open data is by default not personal data, as re-use rules are based on access regimes which in turn all exclude personal data (with a few separately regulated exceptions). What I took away from the remark is that, as we know new privacy and other ethical issues may arise from working with data combinations, it might be of interest to formulate indicators that try to track negative outcomes or spot unintended consequences, in the same way as we are trying to track positive signals.

2) One question was whether I had included all the economic modelling work in academia etc.

I didn’t. This isn’t academic research either; it seeks to apply lessons already learned. What was included were existing documented cases, studies and research papers looking at various aspects of open data impact. Some of those are academic publications, some aren’t. What I took from those studies is two things: what exactly did they look at (and what did they find), and how did they assess a specific impact? The ‘what’ was used as a potential indicator, the ‘how’ as the method. It is of interest to keep tracking new research as it gets published, to augment the framework.

3) Is this academic research?

No, its primary aim is to be a practical instrument for data holders as well as national open data policy makers. It is not meant to establish scientific truth, or to completely quantify impact once and for all. It’s meant to establish whether there are signs the right steps are being taken, and whether that results in visible impact. The aim, and this connects to the previous question as well, is to avoid extensive modelling techniques, and favour indicators we know work, where the methods are straightforward. This is to ensure that government data holders are capable of doing these measurements themselves, and of using them actively as an instrument.

4) Does it include citizen science (open data) efforts?

This is an interesting one (asked by Lukas of Luftdaten.info). The framework currently does, in a way, include the existence and emergence of citizen science projects, as they would come up in any stakeholder mapping attempt and in any emerging ecosystem tracking, and as examples of using government open data (as context and background for citizen science measurements). But the framework doesn’t look at the impact of such efforts, neither in terms of socio-economic impact nor in terms of government being a potential user of citizen science data. Again, the framework is meant to make visible the impact of government opening up data. But I think it’s not very difficult to adapt the framework to track citizen science projects’ impact. Adding citizen science projects in a more direct way, as indicators for the framework itself, is harder I think, as it needs more clarification of how it ties into the impact of open government data.

5) Is this based only on papers, or also on approaching groups, and people ‘feeling’ the impact?

This was connected to the citizen science bit. Yes, the framework is based on existing documented material only. And although a range of those sources base themselves on interviewing or surveying various stakeholders, that is not a default or deliberate part of how the framework was created. I do however recognise the value of, for instance, participatory narrative inquiry, which makes the real experiences of people visible, and the patterns across those experiences. Including that sort of measurement would be useful especially for the social and societal impacts of open data. But currently none of the studies that were re-used in the framework took that approach. It does make me think about how one could set up something like that to monitor impact, e.g. of local government open data initiatives.

At Open Belgium 2019 today Daniel Leufer gave an interesting session on bringing philosophy and technology closer together. He presented the Open Philosophy Network, an attempt to bring philosophical questions into tech discussions while avoiding both a) the overly abstract work going on in academia, and b) discussions that don’t have all stakeholders at the table in an equal setting. He aims at local gatherings and events. Such as a book reading group on Shoshana Zuboff’s The Age of Surveillance Capitalism. Or tech-ethics round table discussions where there isn’t a panel of experts that gets interviewed, but where philosophers, technologists and people who use the technology are all part of the discussion.

This resonated with me at various levels. One level is that I recognise a strong interest in naive explorations of ethical questions around technology. For instance at our Smart Stuff That Matters unconference last summer, in various conversations ethical discussions emerged naturally from the actual context of the session and the event.
Another is that, unlike some of the academic efforts I know, the step towards practical applicability is expected and needed sooner by many. In the end it all has to inform actions and choices in the here and now, even when nobody expects definitive answers. It is also why I myself dislike how many ethical discussions pretending to be action-oriented are primarily connected to future or emergent technologies, not to current technology choices. Then it’s just a fig leaf for inaction, and removing agency. I’m more a pragmatist, and am interested in what achieves actual improvements in the here and now, and what increases agency.
Thirdly, I also felt that there are many more connections to make in terms of open session formats, such as Open Space, knowledge cafés, blogwalks, and barcamps, and indeed the living room experience of our birthday unconferences. I’ve organised many of those, and I feel the need to revisit those experiences and think about how to deploy them for something like this. This also applies to formulating a slightly more structured approach to assist groups in organisations with naive ethical explorations.

The point of ethics is not to provide definitive answers, but to prevent us using terrible answers

I hope to interact a bit more with Daniel Leufer in the near future.

Today I gave a brief presentation of the framework for measuring open data impact I created for UNDP Serbia last year, at the Open Belgium 2019 Conference.

The framework is meant to be relatable and usable for individual organisations by themselves, and based on how existing cases, papers and research in the past have tried to establish such impact.

Here are the slides.

This is the full transcript of my presentation:

Last Friday, when Pieter Colpaert tweeted the talks he intended to visit (Hi Pieter!), he said two things. First he said after the coffee it starts to get difficult, and that’s true. Measuring impact is a difficult topic. And he asked about measuring impact: How can you possibly do that? He’s right to be cautious.

Because our everyday perception of impact and how to detect it is often too simplistic. ‘Where’s the next Google?’ the EC asked years ago, but it’s the wrong question. We will only know in 20 years, when it is the new tech giant. But today it is likely a small start-up of four people with laptops and one idea, somewhere in Lithuania or Bulgaria, and framed this way we are by definition not able to recognise it. Asking for the killer app for open data is a similarly wrong question.

When it comes to impact, we seem to want one straightforward big thing. Hundreds of billions of euro impact in the EU as a whole, made up of a handful of wildly successful things. But what does that actually mean for you, a local government? And while you’re looking for that big impact you are missing all the smaller craters in this same picture, and also the bigger ones if they don’t translate easily into money.

Over the years however, there have been a range of studies, cases and research papers documenting specific impacts and effects. Me and my colleagues started collecting those a long time ago. And I used them to help contextualise potential impacts. First for the Flemish government, and last year for the Serbian government. To show what observed impact in for instance a Spanish sector would mean in the corresponding Belgian context. How a global prediction correlates to the Serbian economy and government strategies.

The UNDP in Serbia, asked me to extend that with a proposal for indicators to measure impact as they move forward with new open data action plans in follow up of the national readiness assessment I did for them earlier. I took the existing studies and looked at what they had tried to measure, what the common patterns are, and what they had looked at precisely. I turned that into a framework for impact measurement.

In the following minutes I will address three things. First, what makes measuring impact so hard. Second, what the common patterns across existing research are. Third, how, avoiding the pitfalls and using the commonalities, we can build a framework that then in itself is an indicator. Let’s first talk about the things that make measuring impact hard.

Judging by the available studies and cases, there are several issues that make any easy answers to the question of open data impact impossible. There are a range of reasons measurement is hard; I’ll highlight a few.
Number 3: context is key. If you don’t know what you’re looking at, or why, no measurement makes much sense. And you can only know that in specific contexts. But specifying contexts takes effort. It asks the question: where do you WANT impact?

Another issue is showing the impact of many small increments. Like how every Dutch person looks at this most used open data app every morning, the rain radar. How often has it changed a decision from taking the car to taking a bike? What does it mean in terms of congestion reduction, or emission reduction? Can you meaningfully quantify that at all?

Also important is who is asking for measurement. In one of my first jobs, my employer didn’t have email for all yet, so I asked for it. In response the MD asked me to put together the business case for email. This is a classic response when you don’t want to change anything. Often asking for measurement is meant to block change. Because they know you cannot predict the future. Motives shape measurements. The contextualisation of impact elsewhere to Flanders and Serbia in part took place because of this. Use existing answers against such a tactic.

Maturity and completeness of both the provision side, government, as well as the demand side, re-users, determine in equal measures what is possible at all, in terms of open data impact. If there is no mature provision side, in the end nothing will happen. If provision is perfect but demand side isn’t mature, it still doesn’t matter. Impact demands similar levels of maturity on both sides. It demands acknowledging interdependencies. And where that maturity is lacking, tracking impact means looking at different sets of indicators.

Measurements often motivate people to game the system, especially single measurements. When the number of datasets was still a metric for national portals, the French portal opened with over 350k datasets. But really it was just a few dozen, which they had split according to departments and municipalities. So a balance is needed, with multiple indicators that point in different directions.

Open data, especially open core government registers, can be seen as infrastructure. But we actually don’t know how infrastructure creates impact. We know that building roads usually has a certain impact (investment correlates to a certain % rise in GDP), but we don’t know how it does so. Seeing open data as infrastructure is a logical approach (the consensus seems that the potential impact is about 2% of GDP), but it doesn’t help us much to measure impact or see how it creates that.

Network effects exist, but they are very costly to track. First order, second order, third order, higher order effects. We’re doing case studies for ESA on how satellite data gets used. We can establish network effects, for instance how ice breakers in the Gulf of Bothnia use satellite data in ways that ultimately reduce supermarket prices, but doing 24 such cases is a multi-year effort.

E pur si muove! ‘Yet still it moves,’ as Galileo said. The same is true for open data. Most measurements are proxies. They show something moving, without necessarily showing the thing that is doing the moving. Open data often is a silent actor, or a long-range one. Yet still it moves.

Yet still it moves. And if we look at the patterns of established studies, that is what we indeed see. There are commonalities in what movement we see. In the list on the slide the last point, that open data is a policy instrument, is key. We know publishing data enables other stakeholders to act. When you do that on purpose you turn open data into a policy instrument. The cheapest one you have, next to regulation and financing.

We all know the story of the drunk who lost his keys. He was searching under the light of a street lamp. Someone helping him search asked if he had lost the keys there. No, the drunk said, but at least there is light here. The same is true for open data. If you know what you published it for, you will at least be able to recognise relevant impact, if not all the impact it creates. Using it as a policy instrument is like switching on the lights.

Dealing with lack of maturity means having different indicators for every step of the way. Not just seeing if impact occurs, but also if the right things are being done to make impact possible: lead and lag indicators.

The framework then is built from what has been used to establish impact in the past, and what we see in our projects as useful approaches. The point here is that we are not overly simplifying measurement, but adapting it to whatever the context of a data provider or user is. Also, there’s never just one measurement, so a balanced approach is possible; you can’t game the system. It covers various levels of maturity, from your first open dataset all the way to network effects. And you see that indicators that by themselves are too simple can still be used.

Additionally, the framework itself is a large-scale sensor. If one indicator moves, you should see movement in other indicators over time as well. If you throw a stone in the pond, you should see ripples propagate. This means that if you start with data provision indicators only, you should see measurements in other phases pick up. This allows you both to use a set of indicators across all phases, and to move to more progressive ones when you outgrow the initial ones. Finally, some recommendations.

Some final thoughts. If you publish by default as an integral part of your processes, measuring impact, or building a business case, is not needed as such. But measurement is very helpful in the transition to that end game. Core data and core policy elements, and their stakeholders, are key. Measurement needs to be designed up front. Using open data as a policy instrument at least lets you define the impact you are looking for. The framework is the measurement: only micro-economic studies really establish specific economic impact, but they only work in mature situations and cost a lot of effort, so you need to know when you are ready for them. But measurement can start wherever you are, with indicators that reflect the overall open data maturity level you are at, while looking both back and forwards. And because measurement can be done, as a data holder you should be doing it.


Social geolocation services over the years have been very useful for me. The value is in triggering serendipitous meetings: being in a city outside my normal patterns at the same time someone in (or peripheral to) my network is in the city too, outside their normal patterns. It happened infrequently, about once a year, but frequently enough to be useful and keep checking in. I was a heavy user of Plazes and Dopplr, both long since disappeared. As with other social platforms I and my data quickly became the marketable product, instead of the customer. So ultimately I stopped using Foursquare/Swarm much, only occasionally for international travel, and completely in 2016. Yet I still long for that serendipitous effect, so I am looking to make my location and/or travel plans available, for selected readers, through this site.

There are basically three ways in which I could do that.
1) The POSSE way. I post my location or travel plan on this blog, and it gets shared to platforms like Foursquare, and through RSS. I would need to be able to show these postings only to my followers/readers, and have a password-protected RSS feed and subscription workflow.
2) The PESOS way. I use an existing platform to create my check-ins, like Foursquare, and share that back to my blog. There it is only accessible to followers/readers, and has a password-protected RSS feed.
3) The ‘just my’ way. I use only my blog to create check-ins and share them selectively with followers and readers, and have a password-protected RSS feed for it.

Option 3 provides the most control over my data, but likely limits the ways in which I can allow others to follow me, and needs a flexible on-the-go way to add check-ins from mobile.
Option 2 comes with easy mobile apps, and allows followers to follow me through their own platform’s apps as well as through my site.
Option 1 sits in between those two. It has the problems of option 3, but still allows others to use their own platforms, like in option 2.
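All three options depend on a password-protected RSS feed. One simple way to sketch that is HTTP Basic Auth on the feed URL; a minimal check of the incoming header could look like this (the usernames and passwords are hypothetical, and a real deployment would serve this over HTTPS):

```python
import base64

# Hypothetical per-reader credentials for the private feed
FEED_USERS = {"alice": "s3cret"}

def feed_allowed(authorization_header: str) -> bool:
    """Check an HTTP Basic Auth header against the known feed readers."""
    if not authorization_header.startswith("Basic "):
        return False
    try:
        decoded = base64.b64decode(authorization_header[len("Basic "):]).decode()
        user, _, password = decoded.partition(":")
    except Exception:
        return False
    return FEED_USERS.get(user) == password
```

Most feed readers support entering such credentials for a subscription, which is what makes this approach workable for followers on their own tools.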

I decided to try and do both Option 2, and Option 3. If I can find a way to make Option 3 work well, getting to Option 1 is an extension of it.
Option 2 at first glance was the easiest to create, because Aaron Parecki already created ‘Own Your Swarm’ (OYS), a bridge between my existing Foursquare/Swarm account and Micropub, an open protocol for which my site has an endpoint. It means I can let OYS talk both to my Swarm account and to my site, so that it posts something to this blog every time I check in with Swarm on my mobile. OYS not only posts the check-ins but also keeps an eye on my Swarm check-ins, so that when there are comments or likes, they too get reflected to my blog.

My blog uses the Post Kinds plugin, which has a posting type for check-ins, so they get their own presentation in the blog. OYS allows me to automatically tag what it posts, which gets matched to the existing categories and tags in my blog.
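For flavour, the core of what a bridge like OYS does is a Micropub request: an HTTP POST of a form-encoded h-entry to the site’s Micropub endpoint, authorised with a bearer token. A sketch of assembling such a request; the endpoint URL and token are placeholders, and the exact properties OYS sends for a check-in may differ:

```python
from urllib.parse import urlencode

def build_micropub_post(content: str, token: str) -> dict:
    """Assemble the pieces of a form-encoded Micropub h-entry request."""
    return {
        "url": "https://example.com/micropub",  # placeholder endpoint
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/x-www-form-urlencoded",
        },
        "body": urlencode({"h": "entry", "content": content}),
    }
```

Sending that dict’s body to the endpoint with any HTTP client is all it takes for the blog to receive the check-in as a new post.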

From now on I use a separate category for location-related postings, called plazes. Plazes was the original geolocation app I started using in 2004, when co-founder Felix Petersen showed it to me at the very first BlogWalk I co-organised in the Netherlands. Plazes also was the first app to quickly show me the value of creating serendipitous meetings. So as an expression of geo-serendipic (serendipity-epic?) nostalgia, I named the postings category after it.

Dave Winer writes “we all feel disempowered“:

… people who feel disempowered figure there’s nothing they can do, no one would listen to me anyway, so I’ll just go on doing what I do. I know I feel that way.

He’s talking in the context of the US political landscape, but it applies in general too. Part of the solution he suggests is to

Invest in local news. And btw, I have a lot more to invest than money.

Two things stand out for me.

One is that we’ve become accustomed to viewing everything through the lens of individualism. Yes, we’ve gained much from individualism, but by now we’ve also landed in a false dichotomy. The false dichotomy is the presumption that you need to solve something as an individual, or, if you individually can’t, that all is lost. It puts all responsibility for any change on the individual, while it is clear no one can change the world on their own. It pits individuals against society as a whole, but ignores the intermediate level: groups with agency.

The second false dichotomy is the choice between either the (hyper)local or the global. You remove litter from your street, or you set out to save the ozone layer. Here again a bridge is possible between those two extremes, the (hyper)local and the global. Where you do something useful locally that also has some impact on a global issue. Or where you translate a global issue to how it manifests locally and solve a local need. You can worry about global fossil fuel use and, with a cooperative in your area, generate green energy. You can run your own part of a global infrastructure while basically only looking to create a local service. It is not either local or global. It can be local action, leveraging the opportunities global connection brings, or mitigating the fall-out of global issues. It can be global, as a scaling of local efforts.

Local/global and individual/society aren’t opposites, they’re layers. Complexity resides in that layeredness. To help deal with complexity, the intermediate levels between the individual and the masses, bridging the local and the global (note: the national level is not that bridge), are what count. The false dichotomies, and the narratives they are used in, obscure that, and create disempowerment that way.

Disempowerment is a kind of despair. The answer to despair isn’t hope but action. Networked agency looks at groups in context solving their own issues, in full awareness of the global networks that surround us. Group action in its own context, overlapping into other contexts, layered into the global context, like Russian dolls.

The number and frequency of 51% attacks on blockchains is increasing, Ethereum last month being the first of the top 20 cryptocoins to be hit. Other types of attacks mostly try to exploit general weaknesses in how exchanges operate, but this one is fundamental to how blockchain is supposed to work. Combined with how blockchain projects don’t seem to deliver and are basically vaporware, we’ve definitely gone from the peak of inflated expectations to the trough of disillusionment. Whether there will be a plateau of productivity remains an open question.

A team of people, including Jeremy Keith, whose writings are part of my daily RSS infodiet, have been doing some awesome web archeology. Over the course of 5 days at CERN, they recreated the browser experience as it was 30 years ago with the (fully text-based) WorldWideWeb application for the NeXT computer.

Hypertext’s root, the CERN page in 1989

This is the type of page I visited before inline images were possible.
The cool bit is that it allows you to see your own site as it would have looked 30 years ago. (Go to Document, then ‘Open from full document reference’, and fill in your URL.) My site looks pretty good, which is not surprising as it is very text-centered anyway.

Hypertexting this blog like it’s 1989

Maybe somewhat less obvious, but of key importance to me in the context of my own information strategies and workflows, as well as in the dynamics of the current IndieWeb efforts, is that this is not just a way to view a site: you can also edit the page directly in the same window. (See the sentence in all capitals in the image below.)

Read and write, the original premise of the WWW

Hypertext wasn’t meant as viewing-only, but as an interactive way of linking together documents you were actively working on. Current wikis come closest. But I also use, for instance, Tinderbox, a hypertext mindmapping, outlining and writing tool for Mac that incorporates this principle of linked documents and other elements that can be changed as you go along. This seamless flow between reading and writing is something I feel we need very much for effective information strategies. It is present in the Mother of All Demos, it is present in the current thinking of Aaron Parecki about his Social Reader, and it is a key element in this 30-year-old browser.

Kars Alfrink pointed me to a report on AI Ethics by the Nuffield Foundation, and from it lifts a specific quote, adding:

Good to see people pointing this out: “principles alone are not enough. Instead of representing the outcome of meaningful ethical debate, to a significant degree they are just postponing it”

This postponing of things is something I encounter all the time. In general I feel that many organisations who claim to be looking at the ethics of algorithms, algorithmic fairness etc. currently don’t actually have anything to do with AI, ML or complicated algorithms. To me it seems they just do it to place the issue of ethics well into the future, at that as yet unforeseen point when they will actually have to deal with AI and ML. That way they avoid having to look at the ethics of, and de-biasing, their current work: how they collect and process data now, and the governance processes they currently have.

This is not unique to AI and ML though. I’ve seen it happen with open data strategies too, where the entire open data strategy of, for instance, a local authority was based on working with universities and research entities to figure out how data might play a role decades from now. No energy was spent on how open data might be an instrument in dealing with actual current policy issues. Looking at future issues as a fig leaf to avoid dealing with current ones.

This is qualitatively different from e.g. what we see in the climate debates, or with smoking, where there is a strong current to deny the very existence of issues. In this case it is more about being seen to solve future issues, so no-one notices you’re not addressing the current ones.

Chris Corrigan last November wrote a posting, “Towards the idea that complexity is a theory of change“. Questions about the ‘theory of change’ you intend to use are a regular part of project funding requests for NGOs, the international development sector and the humanitarian aid sector.

Chris’ posting kept popping up in my mind: “I really should blog about this.” But I didn’t. So for now I just link to it here. Because I think Chris is right: complexity is a theory of change. And in the projects I do that concern community stewarding, networked agency and what I call distributed digital transformation, basically anything where people are the main players, it is in practice for me too. Articulating it that way is helpful.

Cutting Through Complexity
How not to deal with complexity… Overly reductionist KPMG adverts on Thames river boats

To me there seems to be something fundamentally wrong with plans I come across where companies would pay people for access to their personal data. This is not a well-articulated thing yet; it just feels like the entire framing of the issue is off, so the next paragraphs are a first attempt to jot down a few notions.

To me it looks very much like a projection by companies on people of what companies themselves would do: treating data as an asset you own outright and then charging for access. So that those companies can keep doing what they were doing with data about you. It doesn’t strike me as taking the person behind that data as the starting point, nor their interests. The starting point of any line of reasoning needs to be the person the data is about, not the entity intending to use the data.

Those plans make data release, or consent for using it, fully transactional. There are several things intuitively wrong with this.

One thing it does is put everything in the context of single transactions between individuals like you and me, and the company wanting to use data about you. That seems to be an active attempt to distract from the notion that there’s power in numbers. Reducing it to me dealing with a company, and you dealing with them separately makes it less likely groups of people will act in concert. It also distracts from the huge power difference between me selling some data attributes to some corp on one side, and that corp amassing those attributes over wide swaths of the population on the other.

Another thing is that it implies the value is in the data you likely think of as yours: your date of birth, residence, some conscious preferences, the type of car you drive, health care issues, finances, etc. But a lot of the value is in data you don't hold yourself but create all the time: your behaviour over time, clicks on a site, reading speed and pauses in an e-book, minutes watched in a movie, engagement with online videos, the cell towers your phone pinged, your car computer's logs about your driving style, likes, etc. It's not that the data you think of as your own is without value, but it feels like the magician wants you to focus on the flower in his left hand, so you don't notice what he does with his right.
On top of that, it also means that whatever they offer to pay you will be too cheap: your data is never worth much in itself, only in aggregate. Offering to pay on an individual-transaction basis is an escape for companies, not an emancipation of citizens.

One more element is the suggestion that once such a transaction has taken place everything is fine: all rights have been transferred (even if limited to a specific context and use case) and all obligations have been met. That strikes me as extremely reductionist. When it comes to copyright, authors can transfer some rights, but usually not the moral rights to their work. I feel something similar is at play here: moral rights attached to data that describes a person, rights which can't be transferred when the data is transacted. Is it OK to manipulate you into a specific bubble and influence how you vote, if they paid you first for the kind of data they needed to do that to you? I think the EU GDPR takes that approach too, taking moral rights into account. It is not about ownership of data per se, but about the rights I have if your data describes me, regardless of whether it was collected with consent.

The whole ownership notion is difficult in itself. As stated above, a lot of data about me is not necessarily data I am aware of creating or ‘having’, and I likely see no need to collect it about myself. Unless paying me is meant as an incentive to start collecting such data for the sole purpose of selling it to a company, which then needs neither my consent nor the effort of collecting it about me itself.

There are also instances where me being the only one able to decide to share or withhold some data means risks or negative impact for others. It's why cadastral records and company beneficial ownership records are public: so you can verify that the house or company I'm trying to sell you is mine to sell, and who else has a stake or claim on the same asset, and to what amount. Similar cases might be made for new and closely guarded kinds of data, such as DNA profiles. Is it your sole individual right to keep such data closed, or does society have a reasonable claim to it, for instance in the search for a cure for cancer?

All that to say that seeing data as a mere commodity is a very limited take, and that ownership of data isn't a clear-cut thing. Because of its content, as well as its provenance. And because it is digital data, it has non-rivalrous and non-excludable characteristics, making it akin to a public good. There is definitely a communal and networked side to holding, sharing and processing data, currently conveniently ignored in discussions about data ownership.

In short, talking about paying for personal data and about data lockers under my control seems to be a framing that presents data issues as straightforward, but it doesn't solve any of data's ethical aspects; it just pretends they are taken care of, so that things may continue as usual. And that's even before looking into the potential unintended consequences of such payments.

Will you help us organise? We are going to organise an IndieWebCamp in Utrecht, an event to promote the use of the Open Web and to work together on practical improvements to your own site. We are still looking for a suitable date and venue in Utrecht, so your help is very welcome.

On the Open Web you decide for yourself what you publish, what it looks like, and with whom you have conversations. On the Open Web you decide for yourself who and what you follow and read. The Open Web was always there, but over time we have all become more or less locked into the silos of Facebook, Twitter and all the others. Their algorithms and timelines now determine what you read. It can be done differently. Build your own site, where others can't get in the way because they want to generate advertising revenue. Keep your own news sources, without someone else's algorithm locking you into a bubble. That is the IndieWeb: your content, your relations, you are in the driver's seat.

Frank Meeuwsen and I have long been part of the internet and that Open Web, but we also spend, or spent, a lot of time in web silos like Facebook. By now we are both active ‘returnees’ to the Open Web. Last November we were together at IndieWebCamp Nürnberg, where some twenty people discussed these topics and actively worked on their own websites. Some programmed advanced things, but most, like myself, did small things (such as removing a link to the author of postings on this site). Small things are often hard enough. On the train ride back to the Netherlands we quickly agreed: there should be an IndieWebCamp in the Netherlands too. In Utrecht, then, this spring.

To quote Frank:

Do the ideas of the open web and the IndieWeb appeal to you? Do you want to work on a site of your own that stands freer from the influence of social silos and data tracking? Do you want a news supply that is no longer primarily fed by algorithms and polarising loudmouths? Then we welcome you to two days of IndieWebCamp Utrecht.

Let us know if you want to be there.
Let us know if you can help find a venue.
Let us know how we can help you with your steps onto the Open Web.

You are invited!

Dries Buytaert, the originator of the Drupal CMS, is pulling the plug on Facebook, having made the same observation I did: reducing FB engagement leads to more blogging. A year ago he set out to reclaim his blog as a thinking-out-loud space, and now, a year on, he quits FB.

I’ve seen this in a widening group of people in my network, and I welcome it. Very much so. At the same time, though, I realise that mostly we are returning to the open web: we were already there long before the silos’ Sirens lured us in, silos started by people who, like us, knew the open web. For us the open web has always been the default.

Returning to the open web is in that sense not a difficult step to make. Yes, you need to overcome the FOMO induced by the silo’s endless scrolling timeline. But after that withdrawal it is a return to things still retained in your muscle memory. Dusting off the domain name you never let lapse anyway. Repopulating the feed reader. Finding old blogging contacts again and, like in the golden era of blogging, triangulating from their blog rolls and published feeds to new voices, and subscribing to them. It’s a familiar rhythm that was never truly forgotten. It’s comforting to return, and in some ways a privilege rather than a risky break from the mainstream.
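Those blog rolls are often published as OPML files, the import/export format most feed readers understand. As a minimal sketch of that triangulation step, here is how you might pull every feed URL out of someone's exported blogroll, so you can hand them to your own reader (the OPML snippet, blog names and URLs below are made up for illustration):

```python
import xml.etree.ElementTree as ET

# A hypothetical OPML blogroll, as exported by a typical feed reader.
OPML = """<opml version="2.0">
  <body>
    <outline text="Blogs">
      <outline text="Example Blog" type="rss" xmlUrl="https://example.com/feed.xml"/>
      <outline text="Another Voice" type="rss" xmlUrl="https://blog.example.org/rss"/>
    </outline>
  </body>
</opml>"""

def feeds_from_opml(opml_text):
    """Collect every feed URL (xmlUrl attribute) from an OPML blogroll."""
    root = ET.fromstring(opml_text)
    # Outline elements can nest into folders; only those with an
    # xmlUrl attribute actually point at a subscribable feed.
    return [o.get("xmlUrl") for o in root.iter("outline") if o.get("xmlUrl")]

print(feeds_from_opml(OPML))
```

The resulting list of feed URLs can then be imported into whatever reader you use, which is all the "repopulating" step really amounts to.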

It makes me wonder how we can bring others along with us. The people for whom it’s not a return, but striking out into the wilderness outside the walled garden they are familiar with. We say it’s easy to claim your own space, but is it really if you haven’t done it before? And beyond the tech basics of creating that space, what can we do to make the social aspects of that space, the network and communal aspects easier? When was the last time you helped someone get started on the open web? When was the last time I did? Where can we encounter those that want and need help getting started? Outside of education I mean, because people like Greg McVerry have been doing great work there.