Available Energy Data in The Netherlands

Which energy data is available as open data in the Netherlands, Peter Rukavina asked. He wrote about postal codes on Prince Edward Island, where he lives, and in the comments I mentioned that postal codes can be used to provide granular data on e.g. energy consumption, while still being aggregated enough not to disclose personally identifiable data. This because I know he is interested in energy usage and production data.

He then asked:

What kind of energy consumption data do you have at a postal code level in NL? Are your energy utilities public bodies?
Our electricity provider, and our oil and propane companies are all private, and do not release consumption data; our water utility is public, but doesn’t release consumption data and is not subject (yet) to freedom of information laws.

Let’s provide some answers.

Postal codes

Dutch postal codes have the structure ‘1234 AB’, where 12 denotes a region, 1234 denotes a village or neighbourhood, and AB a street or a section of a street. This makes them very useful as geographic references in working with data. Our postal code begins with 3825, which places it in the Vathorst neighbourhood, as shown on this list. In the image below you see the postal code 3825 demarcated on Google maps.
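Because the format is this regular, Dutch postal codes are easy to validate and split in code. A minimal sketch in Python, reflecting just the structure described above (real postal codes never start with a 0):

```python
import re

# '1234 AB': four digits (never starting with 0), optional space, two capital letters
POSTCODE_RE = re.compile(r"^([1-9][0-9]{3})\s?([A-Z]{2})$")

def parse_postcode(code: str) -> dict:
    """Split a Dutch postal code into its geographic components."""
    match = POSTCODE_RE.match(code.strip().upper())
    if not match:
        raise ValueError(f"Not a valid Dutch postal code: {code!r}")
    digits, letters = match.groups()
    return {
        "region": digits[:2],     # first two digits: region
        "area": digits,           # four digits: village or neighbourhood
        "street": letters,        # two letters: street or street section
    }

print(parse_postcode("3825 ab"))
# {'region': '38', 'area': '3825', 'street': 'AB'}
```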

Postal codes are available both commercially and as open data. The commercial product is the full set. Available as open data are only those postal codes connected to addresses tied to physical buildings. This is because the base register of all buildings and addresses is open data in the Netherlands, and that register includes postal codes. It means that postal codes tied to e.g. P.O. boxes are not available as open data. In practice getting at postal codes as open data is still hard, as you need to extract them from the base register, and finding that base register for download is actually hard (or at least used to be, I haven’t checked back recently).
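Once you do have an extract of that base register, pulling out the postal codes is the easy part. A sketch, assuming you have flattened the register to a CSV with a postcode column (the register itself is distributed in other formats; the file and column names here are assumptions):

```python
import csv

def unique_postcodes(path: str) -> set:
    """Collect the unique postal codes from a CSV extract of the address register."""
    postcodes = set()
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            code = (row.get("postcode") or "").strip()
            if code:  # addresses without a postal code do occur
                postcodes.add(code)
    return postcodes

codes = unique_postcodes("bag_addresses.csv")  # hypothetical file name
print(len(codes), "unique postal codes")
```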

On Energy Utilities

All energy utilities used to be publicly owned, but have since been privatised. Upon privatisation all utilities were separated into energy providers and energy transporters, called network maintainers. The network maintainers are private entities, but publicly owned. They maintain both the electricity mains and the gas mains. There are 7 such network maintainers of varying sizes in the Netherlands.

(Source: Energieleveranciers.nl)

The three biggest are Liander, Enexis and Stedin.
These network maintainers, although publicly owned, are not subject to Freedom of Information requests, nor to the law on the Re-use of Government Information. Yet they do publish open data, and are open to data requests. Liander was the first; Enexis and Stedin followed. The motivation is that they have a key role in the government goal of achieving a full energy transition by 2050 (meaning no usage of gas for heating or cooking, and fully CO2 neutral), which makes them key stakeholders in an area of high public interest.

Household Energy Usage Data

Open data is published by Liander, Enexis and Stedin, though not all publish the same type of data. All publish household level energy usage data aggregated to the level of 6 position postal codes (1234 AB); Enexis and Stedin additionally publish asset data (including subsoil cables etc). The service areas of all 7 network maintainers are open data as well. The network maintainers are also all open to additional data requests, e.g. for research purposes, or from municipalities and housing associations looking for data to plan energy saving projects. Liander indicated to me in a review for the European Commission (about potential changes to the EU public data re-use regulations) that they currently deny about 2/3 of the data requests they receive, mostly because they are uncertain about which rules and contracts apply (they hold a large pool of data contributed by various stakeholders in the field, as well as all remotely read digital metering data). They are investigating how to improve that response rate.

Some postal code areas are small and contain only a few addresses. Publishing usage data for such areas could yield personally identifiable data, which is not allowed. Liander and Stedin (and I assume Enexis as well) solve this by aggregating the average energy usage of the small area with that of an adjacent area, until the number of addresses covered is at least 10.
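The logic of that rule is simple enough to sketch. A minimal Python illustration of the principle, not the network maintainers’ actual implementation; the adjacency ordering and the field names are assumptions:

```python
def aggregate_small_areas(areas, min_connections=10):
    """Merge each undersized postal code area with the next one until every
    published figure covers at least `min_connections` connections.

    `areas` is assumed to be a list of dicts, ordered so that consecutive
    entries are geographically adjacent, each with 'postcode',
    'connections' and 'total_usage' keys.
    """
    merged = []
    bucket = {"postcodes": [], "connections": 0, "total_usage": 0.0}
    for area in areas:
        bucket["postcodes"].append(area["postcode"])
        bucket["connections"] += area["connections"]
        bucket["total_usage"] += area["total_usage"]
        if bucket["connections"] >= min_connections:
            bucket["average_usage"] = bucket["total_usage"] / bucket["connections"]
            merged.append(bucket)
            bucket = {"postcodes": [], "connections": 0, "total_usage": 0.0}
    if bucket["postcodes"] and merged:
        # fold an undersized remainder into the last published bucket
        last = merged[-1]
        last["postcodes"] += bucket["postcodes"]
        last["connections"] += bucket["connections"]
        last["total_usage"] += bucket["total_usage"]
        last["average_usage"] = last["total_usage"] / last["connections"]
    return merged
```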

Our address falls in the service area of Stedin. The most recent data is that of January 1st 2018, containing the energy use for all of 2017. Searching for our postal code (which covers the entire street) in their most recent CSV file yields lines 151,624 and 151,625:

(Screenshot of the two CSV lines for our postal code.)

The first line shows electricity usage (ELK), and says there are 33 households in the street, with an average yearly usage of 4599 kWh. (We are below that at around 3700 kWh/year, which is still higher than we were used to in our previous home.) The next line provides the data for gas usage (heating and cooking), “GAS”: 1280 m3 on average for the 33 connections. (We are slightly below that at 1200 m3.)
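Looking up such lines can also be scripted rather than done by hand. A sketch in Python; the file name, the column headers and the example postal code are placeholders, as the actual Stedin CSV uses its own (Dutch) headers:

```python
import csv

def usage_for_postcode(path: str, postcode: str):
    """Yield the rows (e.g. the ELK and GAS lines) for one postal code."""
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            if row.get("postcode") == postcode:
                yield row

# '3825 XX' stands in for our actual postal code
for row in usage_for_postcode("stedin_usage_2018.csv", "3825 XX"):
    print(row["product"], row["connections"], row["annual_usage"])
```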

SmugMug Buys Flickr, End of the Yahoo Era

I’ve been using Flickr to store photos since March 2005. It is at the same time an easy way to embed photos in my blog without using up storage space in my hosting account, and an online remote back-up. Over the years I’ve uploaded some 24,000 photos, though I’ve been using Flickr less in the last two years.

My account dates from just before Yahoo bought Flickr from its founders, which was also in March 2005, and in 2007 Yahoo forced me to create a Yahoo account for it. Yahoo never seemed to have much vision for Flickr, but as an early user (Flickr was founded in 2004) the original functionality I signed up and paid for was all I really needed.

Yahoo was bought by Verizon last year, and since then it seemed likely they’d sell off some parts of it. SmugMug acquired Flickr last week, and that at least means photography is the main focus again. Hopefully that means further evolution of Flickr, though it might mean a switch to SmugMug in the future.

Tellingly, one needs to accept the new terms of service by 25 May 2018, the day the EU data protection regulation GDPR becomes enforceable.

It also means that I will be able to delete my Yahoo account, which I only had because Flickr users were forced to have one.
Yahoo is an internet dinosaur, launched in 1994, and its best days are long behind it. Deleting my Yahoo account is thus also the end of an era, an end that has felt overdue for years.

Backdoors and Futile Stamping

Russia is trying to block Telegram, an end-to-end encrypted messaging app. The reason: Telegram refused to provide the authorities with keys with which messages can be decrypted. Not for a specific case, but for listening in on traffic in general.

Asking for keys to create a general backdoor (even if technologically possible) is a very bad idea. It will always be misused by others. And yes, you do have something to hide. Your internet banking is encrypted, as is your VPN connection from home to your work computer. You use passwords on websites, mail accounts and your wifi. If you don’t have anything to hide, please leave your Facebook login details along with your banking details in the comments. I promise I won’t use them. The point isn’t whether I or a government keep our promises (we might not), it’s that others definitely won’t.

As a result of Telegram not providing the keys, Russia is now trying to block people from using it. Millions of IP addresses are being blocked, more than one for each of the roughly 14 million Telegram users in Russia (Telegram reports about 200 million monthly users globally). This is because the service partly runs on servers in Amazon and Google data centers, and those are getting blocked wholesale, which also impacts other services that use the same data centers to flexibly scale their computing needs. The blocking attempts aren’t working though.

It shows how hard fully distributed systems are to stamp out: they merely pop up somewhere else. The internet routes around damage; that is what it was designed to do.

Let’s see if the Russian authorities will now take action against the people and assets of Telegram, as that really is the only (potential, not guaranteed) way to stamp something out: dismantling it. In the case of Telegram, a private company, there are indeed people and assets one could target. And Telegram is pledging to deploy those assets in resisting. Yet dismantling Telegram, even if successful and disregarding the other costs and consequences for a government, defeats the original purpose of wanting to listen in on message traffic. Traffic would easily move to other encrypted tools, like Signal, while new, even more distributed applications would emerge in response.

Summary:

  • General backdoors are a bad idea, regardless of whether you can trust the party you give backdoor access to.
  • Blocking distributed systems is hard.
  • If you don’t accept such attempts from data-driven authoritarian governments, you need to accept that the same objections to general backdoor access apply in situations where you think the stated aim has more merit.
  • Do use an encrypted messaging app, like Signal, as much as possible.

Data Worlds, to Understand the Politics of Data

Jonathan Gray has published an article on Data Worlds, as a way to better understand and experiment with the consequences of the datafication of our lives. The article appeared in Krisis, an open access journal for contemporary philosophy, in its latest edition dealing with Data Activism.

Jonathan Gray writes

The notion of data worlds is intended to make space for thinking about data as more than simply a representational resource, and the politics of data as more than a matter of liberation and protection. It is intended to encourage exploration of the performative capacities of data infrastructures: what they do and could do differently, and how they are done and could be done differently. This includes consideration of, as Geoffrey Bowker puts it, “the ways in which our social, cultural and political values are braided into the wires, coded into the applications and built into the databases which are so much a part of our daily lives”.

He describes three aspects of ‘data worlds’, and positions them as an instrument intended for practical use.

The three aspects of data worlds which I examine below are not intended to be comprehensive, but illustrative of what is involved in data infrastructures, what they do, and how they are put to work. As I shall return to in the conclusion, this outline is intended to open up space for not only thinking about data differently, but also doing things with data differently. The test of these three aspects is therefore not only their analytical purchase, but also their practical utility.

Those three aspects are:

  1. Data Worlds as Horizons of Intelligibility, where data plays a role in changing what is sayable, knowable, intelligible and experienceable, and allows us to explore new perspectives, arrive at new insights or even a new overall understanding. Hans Rosling’s work with Gapminder falls in this space, as do data visualisations that combine time and geography. To me this feels close to what John Thackara calls Macroscopes, where one finds a way to understand complete systems and one’s own place and role in them, not just one’s own position. (A posting on Macroscopes is coming.)
  2. Data Worlds as Collective Accomplishments, where consequences (political, social, economic) result not from just one or a few actors, but from a wide variety of them. Open data ecosystems and the shifts in how civil society, citizens and governments interact, but also the big data efforts of the tech industry, are examples Gray cites. “Looking at data worlds as collective accomplishments includes recognising the role of actors whose contributions may otherwise be under-recognised.”
  3. Data Worlds as Transnational Coordination, in terms of networks, international institutions and norm setting, which aim to “shape the world through coordination of data”. In this context one can think of things like IATI, a civic initiative bringing standardisation and transparency to international aid globally, but also the GDPR, through which the EU sets a new de facto global standard on data protection.

This seems at first reading like a useful thinking tool in exploring the consequences and potential of various values and ethics related design choices.

(Disclosure: Jonathan Gray and I were both active in the early European open data community, and are co-authors of the first edition/iteration of the Open Data Handbook in 2010.)

Macron’s 1.5 Billion for Values, Data and AI

Data, especially lots of it, is the feedstock of machine learning and algorithms. And there’s a race on for who will lead in these fields. This gives data a geopolitical dimension, and makes it a key strategic resource for nations. Between the vast data lakes in corporate silos in the US and the national data spaces geared towards data-driven authoritarianism as in China, what is the European answer, what is the proposition Europe can make the world? Ethics-based AI. “Enlightenment Inside”.

Last month French President Macron announced spending €1.5 billion on AI in the coming years. Wired published an interview with Macron. Below is an extended quote of what I think are the key statements.

AI will raise a lot of issues in ethics, in politics, it will question our democracy and our collective preferences… It could totally dismantle our national cohesion and the way we live together. This leads me to the conclusion that this huge technological revolution is in fact a political revolution… Europe has not exactly the same collective preferences as US or China. If we want to defend our way to deal with privacy, our collective preference for individual freedom versus technological progress, integrity of human beings and human DNA, if you want to manage your own choice of society, your choice of civilization, you have to be able to be an acting part of this AI revolution. That’s the condition of having a say in designing and defining the rules of AI. That is one of the main reasons why I want to be part of this revolution and even to be one of its leaders. I want to frame the discussion at a global scale… The key driver should not only be technological progress, but human progress. This is a huge issue. I do believe that Europe is a place where we are able to assert collective preferences and articulate them with universal values.

Macron’s actions are largely based on the report by French MP and Fields Medal-winning mathematician Cédric Villani, For a Meaningful Artificial Intelligence (PDF).

Ethics by Design

My current thinking about what to bring to my open data and data governance work, as well as to technology development, especially in the context of networked agency, can be summarised under the moniker ‘ethics by design’. In a practical sense this means setting non-functional requirements at the start of a design or development process, or when tweaking or altering existing systems and processes. Non-functional requirements that reflect the values you want to safeguard or ensure, or potential negative consequences you want to mitigate. Privacy, power asymmetries, individual autonomy, equality, and democratic control are examples of this.

Today I attended the ‘Big Data Festival’ in The Hague, organised by the Dutch Ministry of Infrastructure and Water Management. Here several government organisations presented themselves and the work they do using data as an intensive resource. Stuff that speaks to the technologist in me. In parallel there were various presentations and workshops, and there I was most interested in what was said about ethical issues around data.

Author and interviewer Bas Heijne set the scene at the start by pointing to the contrast between the technology optimism concerning digitisation of years back and the more dystopian discussion of today (triggered by things like the Cambridge Analytica scandal and cyberwars), and sought the balance in the middle. I think that contrast is largely due to the different assumptions underneath the utopian and dystopian views. The techno-optimist perspective, at least in the web scene I frequented in the late ’90s and early ’00s, assumed the tools would be in the hands of individuals, who would independently weave the world wide web, smart at the edges and dumb at the center. The dystopian views, including those of early critics like Jaron Lanier, assumed, and were proven at least partly right, a centralisation into walled gardens where individuals are mere passive users or an object, and no longer a subject with autonomy. These assumptions imply wildly different development paths concerning power distribution, equality and agency.

In the afternoon a session with professor Jeroen van den Hoven of Delft University focused on making the ethical challenges more tangible, and pointed to the beginnings of practical ways to address them. It was the second time I heard him present in a month. A few weeks ago I attended an Ethics and Internet of Things workshop at the University of Twente, organised by the UNESCO World Commission on the Ethics of Scientific Knowledge and Technology (COMEST). There he gave a very worthwhile presentation as well.


Van den Hoven “if we don’t design for our values…”

What I call ethics by design, a term I first heard from prof. Valerie Frissen, Van den Hoven calls value sensitive design. That term sounds more pragmatic, but I feel it conveys the point less strongly. This time he also incorporated the geopolitical aspects of data governance, which echoed what Rob van Kranenburg (IoT Council, Next Generation Internet) presented at that workshop last month (and which I really should write down separately). It was good to hear it reinforced for today’s audience of mainly civil servants, as there is currently a certain naivety in how governments (mainly local ones) collaborate with commercial partners around data collection, e.g. sensors in public space.

(Malfunctioning) billboard at Utrecht Central Station a few days ago, with an ill-considered camera in public space (to measure engagement with adverts). Civic resistance: someone taped over the camera.

Value sensitive design, said Van den Hoven, should seek to combine the power of technology with ethical values in services and products, instead of treating them as a dilemma with an either/or choice, which is how it is usually framed: social networking OR privacy, security OR privacy, surveillance capitalism OR personal autonomy, smart cities OR human messiness and serendipity. In value sensitive design it is about ensuring the individual remains a subject in the philosophical sense, and not merely the object on which data based services feed. By addressing values and technological benefits as the same design challenge (security AND privacy, etc.), one creates a path for responsible innovation.

The audience saw responsibilities for both individual citizens and governments in building that path, and no one thought turning one’s back on technology, towards fictitious simpler times, would work, although some doubted whether there was still room to stem the tide.

Reinventing Distributed Conversations

Peter Rukavina picks up on my recent blogging about blogging, and my looking back on some of the things I wrote 10 to 15 years ago about it (before the whole commercial web started treating social interaction as an ad-targeting vehicle).

In his blogpost in response he talks about putting back the inter in internet, inter as the between, and as exchanges.

… all the ideas and tools and debates and challenges we hashed out 20 years ago on this front are as relevant today as they were then; indeed they are more vital now that we’ve seen what the alternatives are.

And he asks: “how can we continue to evolve it?”

That indeed is an important question, and one that is being asked in multiple corners. By those who were isolated from the web for years and then shocked by what they found upon their return. But also by others, repeatedly, such as Anil Dash and Mike Loukides of O’Reilly Media, when they talk about rebuilding or retaking the web.

Part of it is getting back to seeing blogging as conversations, conversations that are distributed across your and my blogs. This is what made my early blog bloom into a full-blown professional community and network for me. That relationships emerge out of content sharing, which then become more important and more persistent than the content, was an important driver for me to keep blogging after I started. These distributed conversations we had back then and the resulting community forming were even a key building block of my friend Lilia’s PhD a decade ago.

So I’m pleased that Peter responds to my blogging with a blogpost, creating a distributed conversation again, and like him I wonder what we can do to augment it. Things we had ideas about in the 00’s but which then weren’t possible, and maybe now are. Can we make our blogs smarter, in ways that makes the connections that get woven more tangible, discoverable and followable, so that it can become an enriching and integral part of our interaction?

From Semi Freddo to Full Cold Turkey with FB

I disengaged from Facebook (FB) last October, mostly because I wanted to create more space for paying attention, and for active, not merely responsive, reflection and writing, and because I realised that the balance between the beneficial and destructive aspects of FB had tilted too much to the destructive side.

My intention was to keep my FB account, as it serves as a primary channel to some professional contacts and groups; FB Messenger, too, is the primary channel for some. However, I wanted to get rid of my FB history: all the likes, birthday wishes etc. Deleting material is possible, but the implementation is completely impractical: every element needs to be deleted separately. Every like needs to be unliked, every comment deleted, and every posting on your own or someone else’s wall not just deleted but the deletion confirmed as well. There is no bulk deletion option. I tried a Chrome plugin that promised to go through the activity log and ‘click’ all those separate delete buttons, but it didn’t work. The result is that deleting your data from Facebook means deleting every single thing you ever wrote or clicked, which can easily take 30 to 45 minutes for a single month’s worth of likes and comments. Now aggregate that over the number of years you actively used FB (about 5 years in my case, after 7 years of passive usage).
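For what it’s worth, what such a plugin attempts is conceptually a simple loop. A sketch of the idea with Selenium; the activity log URL and the CSS selector are hypothetical placeholders, since Facebook’s actual markup is generated and changes constantly, which is exactly why these plugins break:

```python
import time
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
driver.get("https://www.facebook.com/me/allactivity")  # log in manually first

# Hypothetical selector: Facebook's real class names are machine-generated
# and shift between page loads, which is why this approach tends to fail.
DELETE_SELECTOR = "a.delete-activity-item"

while True:
    buttons = driver.find_elements(By.CSS_SELECTOR, DELETE_SELECTOR)
    if not buttons:
        break  # nothing deletable left on this page of the activity log
    for button in buttons:
        try:
            button.click()
            time.sleep(1)  # give each deletion (and its confirmation) time to register
        except Exception:
            pass  # stale elements are common as the page redraws itself
    driver.refresh()
```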

The only viable path to delete your FB data therefore is currently to delete the account entirely. I wonder if it will be different after May, when the GDPR is fully enforced.

Deleting your account isn’t easy either; you don’t have full control over it. The link to do so is not available in your settings interface, but only through the help pages, and it is presented as submitting a request. After you confirm deletion, you receive an e-mail saying that deletion of your data will commence after 14 days. Logging back in during that period stops the clock. I suspect this will no longer be enough once the GDPR is enforced, but it is what it currently is.

Being away from FB for a longer time, with the account deactivated, had the effect that when I did log back in (to attempt to delete more of my FB history), the FB timeline felt very bland. Much like how watching TV once seemed unmissable, and then turned out not to be missed at all. This made me realise that calling FB the primary channel for some contacts, one I wouldn’t want to throw away, might actually be a cop-out, the last stand of FOMO. So FB, by making it hard to delete data while keeping the account, made it easy to decide to delete my account altogether.

Once the data has been deleted (which according to FB can take up to 90 days after the 14 day grace period), I might create a new account with which to pursue the benefits of FB while avoiding the destructive side, with 12 years of Facebook history wiped. Be seeing you!


FB’s mail confirming they’ll delete my account by the end of April.

Algorithms That Work For Me, Not Commoditise Me

Stephanie Booth, a long-time blogging connection, has been writing about reducing her Facebook usage and increasing her blogging. She says at one point

As the current “delete Facebook” wave hits, I wonder if there will be any kind of rolling back, at any time, to a less algorithmic way to access information, and people. Algorithms came to help us deal with scale. I’ve long said that the advantage of communication and connection in the digital world is scale. But how much is too much?

I very much still believe there’s no such thing as information overload, and I fully agree with Stephanie that the possible scale of networks and connections is one of the key affordances of our digital world. My RSS-based filtering, as described in 2005, worked better with more information than with less. Our information strategies need to reflect, and be part of, the underlying complexity of our lives.

Algorithms can help us with that scale, just not the algorithms FB deploys around us. For algorithms to help, like any tool, they need to be ‘smaller’ than us, as I wrote in my networked agency manifesto. We need to be able to control their settings, tinker with them, deploy them and stop them as we see fit. The current application of algorithms, which usually need lots of data to perform, more or less demands a centralised platform like FB to work. The algorithms that will really help us scale are the ones we can use for our own particular scaling needs. For that, the creation, maintenance and usage of algorithms needs a much lower threshold than it has now. That is why I placed algorithms in my ‘agency map’.
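To make ‘smaller than us’ concrete: a personal filter over RSS doesn’t need a platform at all. A minimal sketch using the feedparser library; the feed URL and keyword weights are placeholders you would tune yourself, which is exactly the point:

```python
import feedparser

# My 'algorithm': a hand-maintained, fully inspectable keyword weighting.
WEIGHTS = {"open data": 3, "energy": 2, "blogging": 2, "gdpr": 1}
FEEDS = ["https://example.com/feed.xml"]  # placeholder feed URL

def score(entry) -> int:
    """Score a feed entry by weighted keyword occurrences in title + summary."""
    text = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
    return sum(w * text.count(term) for term, w in WEIGHTS.items())

# Rank all entries across my feeds and show the ten most relevant to me.
items = [e for url in FEEDS for e in feedparser.parse(url).entries]
for entry in sorted(items, key=score, reverse=True)[:10]:
    print(score(entry), entry.get("title"))
```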

Going back to a less algorithmic way of dealing with information isn’t an option, nor something to desire, I think. But we do need algorithms that really serve us, that perform to our information needs. We need fewer algorithms that purport to aid us in dealing with the daily river of newsy stuff, but really commoditise us at the back-end.

Cory Doctorow’s Walkaway: Hey, I Could Help Do That!

In a case of synchronicity, I read Cory Doctorow’s novel Walkaway while I was ill recently, just as Bryan Alexander scheduled it for his near-future science fiction reading group. I loved reading the book, and in contrast to some of Doctorow’s other works the storyline kept working for me until the end.

Bryan has amazingly managed to get Doctorow to participate in a webcast, as part of the Future Trends in learning series Bryan hosts. The session is planned for May 16th, and I have marked my calendar for it.

In the comments Vanessa Vaile shares two worthwhile links. One is an interesting recording from May last year at the New York Public Library, in which Doctorow and Edward Snowden discuss some of the elements and underlying topics and dynamics of the Walkaway novel.

The other is a review on TOR.com that resonates a lot with me. The reviewer writes how, in contrast with much science fiction that takes one large idea or change and extrapolates from it, Doctorow takes a number of smaller ideas and changes, and works out how those might interplay and weave new complexities, where the impact on “manufacturing, politics, the economy, wealth disparity, diversity, privilege, partying, music, sex, beer, drugs, information security, tech bubbles, law, and law enforcement” is all presented in one go.

It seems futuristic, until you realize that all of these things exist today.
… most of it could start right now, if it’s the world we choose to create.

By not having any one idea jump too far from reality, Walkaway demonstrates how close we are, right now, to enormous promise and imminent peril.

That is precisely the effect reading Walkaway had on me: it led me to think about how I could contribute to bringing some of the described effects about, and to realise that some of those things I was, and am, already trying to create as part of my own workflow and information processes.