Does the New York Times see the irony? This article argues that the US Congress should look much less at the privacy terms of big tech, and much more at the actual business practices behind them.
Yet it calls upon me to disable my ad blocker. The ad blocker that blocks 28 ads in a single article, all served by a Google advertisement tracker, one that one of my browsers flags as working the same way cross-site scripting attacks do.
If, as you say, adverts are at the core of your business model, making journalism possible, why do you outsource them?
I’m ok with advertising, New York Times, but not with adtech. There’s a marked difference between the two. It’s adtech, not advertising, that does the things you write about, like “how companies can use our data to invisibly shunt us in directions” that don’t benefit us. And adtech is the reason that, as you say, the “problem is unfettered data exploitation and its potential deleterious consequences.” I’m ok with a newspaper running its own ads. I’m not ok with the New York Times behaving like a Trojan horse, pretending to be a newspaper while actually being a vehicle for, in your own words, the “surveillance economy”.
Until then my ad blocker stays.
My browser blocking 28 ads (see the address bar) on a single article, all from 1 Google ad tracker.
Got a new phone (after selecting a new plan with some effort). As I don’t allow my phone to back up everything to the (Google) cloud, it took a few hours to get the new one ready: installing apps, and logging into all the associated accounts (using 1Password). On the upside, it means a lot of unnecessary stuff accumulated over the last 2 years has been left behind on the old device.
Funny how #datagovernance companies publishing #gdpr compliance guides aren’t compliant themselves when asking for personal data in exchange for downloads: no explicit opt-ins, hidden opt-ins (such as hitting download also subscribing you to their newsletter), no specific explanation of what data will be used how, and asking for more personal information than necessary.
Today I changed the way we use e-mail addresses for identification online.
Over time my e-mail address(es) has (have) become the carrier of a lot of important stuff. It’s not just a way to communicate with others, but also serves as generic user name on countless website accounts. And likely quite a few of those have had their security breached over time, or are unscrupulous (or even malicious) in their own right.
As part of a talk on privacy by Brenno de Winter (Dutch investigative journalist) that we went to this weekend (see previous posting), he mentioned using unique e-mail addresses (and passwords of course) for every site you use, or disposable e-mail addresses for sites you visit only once. That way, when one site gets compromised there is no risk of your user credentials being used elsewhere, and if one site sells your e-mail address on, it is immediately apparent to you who did that.
I have been aware of this advice for a long time, but never saw an easy way to act on it:
Most disposable e-mail address (DEA) services offer a temporary e-mail address, usually just long enough to confirm a registration, after which it gets deleted automatically. This is useful for one-time visits or registrations at a website, but not for using unique addresses for services you use more often.
Some sites do not accept e-mail addresses that are clearly created by DEA-type services.
I own multiple domains, which I could theoretically use for unique mail addresses, but in practice that is impractical. I would need to either create mail addresses before using them to register somewhere, through the domain’s administration panel, or use a catch-all that would simply accept any incoming mail on that domain, including tons of automatic spam sent to randomly generated e-mail addresses.
What I actually need is:
The ability to create new e-mail addresses on the fly, simply by using them
The ability to both have more permanent unique addresses, as well as single use addresses
Using a domain that is not perceived as a DEA service and not easily associated with me (e.g. by visiting its website)
Using a domain that I control so I cannot get cut off from unique addresses connected to important user accounts
The ability to recognize any of these unique addresses in my regular inbox
Something that still filters out spam, while accepting any incoming address
So today I decided to investigate further and act on it.
I found 33mail.com, built by Andrew Clark (in Dublin/Ireland so under EU regulations), that allows you to create addresses on the fly, and then through a dashboard simply block the ones that get misused at some point. It also forwards to one of your actual e-mail addresses, including letting you (anonymously) reply from the unique address.
33mail.com allows you to connect any other domain to their service, so that instead of using firstname.lastname@example.org I can use something@myrandomdomain while still using 33mail. This is very useful as it helps to prevent being filtered out because of using a DEA service domain, and keeps the addresses under my control.
I registered two new domains, one for me, one for Elmine, and set up their MX DNS records to point to 33mail, so that email@example.com goes to 33mail. These domains are, apart from the records at the registrar, not otherwise easily associated with us.
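That MX change can be sketched as a zone-file fragment. Note that both the domain and the MX target below are placeholders I made up for illustration: 33mail’s own setup instructions tell you the exact hostname to point your MX record at, so don’t treat the value here as the real one.

```
; hypothetical zone-file fragment for one of the new domains
; (both names are placeholders, not the real domain or 33mail's real MX host)
myrandomdomain.example.   3600  IN  MX  10  mx.33mail.example.
```

With this in place, any address at the domain is routed to 33mail, which is what makes on-the-fly address creation possible.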
I provided two unique email addresses for 33mail to forward to at two other domains I own and use.
I set up auto-forwards on those two addresses, so that anything 33mail forwards ends up in my or Elmine’s regular inbox. In our inboxes, filters pick up on anything that arrives via those 33mail forwarding addresses.
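As a sketch, such an inbox filter could look like the following Sieve rule. The forwarding address and folder name are made-up placeholders; the assumption here is that mail relayed by 33mail arrives with the dedicated forwarding address as its envelope recipient, so that is what the rule matches on.

```
# hypothetical Sieve rule; the address and folder name are placeholders
require ["fileinto", "envelope"];

# anything 33mail relays is addressed (envelope-wise)
# to the dedicated forwarding address
if envelope :is "to" "33mail-relay@otherdomain.example" {
    fileinto "33mail";
}
```

Any mail client or server-side filter with equivalent matching rules would work just as well; the point is to key the filter on the forwarding address, not on the ever-changing unique addresses.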
That, in short, is the solution I came up with.
This is not a free solution, but it is cheap. Registering two domains, plus a service package that lets me set my own DNS records, comes to some 45 Euro with our regular hoster. 33mail charges 8 or 9 Euro for a premium account, which is needed to add your own domain name to their service; I created a premium account for each of us, as we will be using two separate domain names. Total cost: about 65 Euro/yr.
Here’s a drawing of the full set-up:
We went to hear an interesting talk by Dutch investigative journalist Brenno de Winter on privacy and related issues this weekend. It is part of a series of privacy related talks and workshops held in our town in this and coming weeks.
To me, as I blogged in 2006 after that year’s Reboot Conference, privacy is a gift by the commons to the individual, and not so much an intrinsic individual thing. It allows the individual to be part of the commons, to act in the public sphere. It also means to me that privacy is part of what makes the commons work: without a certain expectation of privacy no-one can participate in the commons, resulting in the absence of commons.
Privacy in Public, photo by Susan Sermoneta, CC-BY
That doesn’t mean privacy can do without protection. The commons collapses easily, especially when your information is disconnected from your physical presence, as is usually the case in our digital age. Where the commons collapses, because, for instance, social distance increases, or contexts change or drop away entirely, rules and instruments are needed.
In that light Brenno shared a few notions I wanted to capture and put in this context of the commons:
The “If you have nothing to hide, why bother?” argument introduces a false dilemma. It puts the onus on the individual who seeks privacy, and not on whether the other entity complies with existing privacy rules and laws (= is a responsible member of the commons). It may also well be that what is ok now will carry dire consequences in the future (e.g. homophobia in Uganda) when the character of the commons changes radically.
In the Netherlands there are no consequences for disregarding privacy rules around data inside a data-using entity (e.g. staff nosing around in data they have no business with, like doctors looking up medical files of famous patients they are not treating themselves). Others can act as if outside the commons without social scrutiny.
Whenever there is a data security breach, the data holder is generally portrayed as the victim, and not the people whose personal data it is, or who are described by the data and whose expectation of privacy in the commons got damaged. (This also disregards the fact that in the EU my personal data at company X is my data.)
The Dutch privacy watchdog CBP has 86 staff, compared to the 1 million companies and government branches they need to watch. The watchdog has no teeth. The commons is mostly undefended.
Privacy has weak anchors in Dutch law. The commons is mostly undefended.
Why are there no (routine) impact assessments of measures that erode privacy in the name of security? If erosion of privacy is to be tolerated, the damage it does to the commons needs to be not just balanced but surpassed by the benefits to the commons in other respects.
All of these points are relevant to the question of how to maintain or extend the commons with rules and instruments, so that the gift of privacy can be given. By making sure the ‘infringing’ party is under similar social pressures to behave. By making sure we maintain a realistic balance when privacy needs to be temporarily eroded for the sake of the commons (that is the source of privacy).
When privacy breaks down, the commons itself breaks down with it, as privacy is the pathway and the trust base for taking part in the public sphere.