Managing personal information – can we even trust ourselves?

I recently read Don Tapscott's excellent post on Why transparency and privacy should go hand in hand, which was right on the money, in my view. However, his post did not address a few other aspects of managing access to sensitive personal information, namely:

  • should we be expected to effectively interpret safe use for our most personal information?
  • should "trustworthiness" be evaluated once and forgotten, or should we continually be given (or require) the opportunity and reminder to re-evaluate?

I agree with Mr. Tapscott that having many concurrent identities does not, contrary to Mark Zuckerberg's famous suggestion, compromise one's integrity in any way. We often make judgments about which information is appropriate for which circles within our set of relationships.

I also agree with Tapscott that most information today is not actually private but merely "practically obscure", and that this obscurity is eroding rapidly. This is at once a crisis and an opportunity - there are no good examples of systems deployed at a large, society-wide scale that do a good job of letting us allow or deny access to information about ourselves at a granular level. This is chiefly because the amount of personal information is vast, information about ourselves has never been so searchable and available, and we don't yet fully understand the implications of releasing it on such a large scale. And so we forge ahead, performing experiments with our own private information, attempting to discover empirically what to release and what not to release (ouch).

I don't believe that models that let you configure once who has which kind of access to your personal information (and then forget forevermore about the permissions you've just granted) will prove very effective in protecting us. Permissions should expire as time passes - sometimes very quickly after they are granted. If you ask a former client for a reference and they agree, is it reasonable for an interviewer to call them a year later? Some permissions should survive only the length of a single, specific usage request, under very controlled circumstances.
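To make the idea concrete, here is a minimal sketch of a permission grant that carries its own expiry or is consumed by a single use. All names here (`Permission`, `check`, the example dates) are invented for illustration, not part of any real system:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class Permission:
    """A grant of access to one piece of personal information."""
    grantee: str                    # who may access the information
    resource: str                   # which piece of information
    expires_at: Optional[datetime]  # None means the grant is single-use only
    single_use: bool = False
    used: bool = field(default=False)

    def check(self, now: datetime) -> bool:
        """Return True if the grant is still valid; a single-use grant is consumed."""
        if self.single_use:
            if self.used:
                return False
            self.used = True
            return True
        return self.expires_at is not None and now < self.expires_at

# A reference from a former client, valid for 90 days only.
reference = Permission("interviewer", "work-reference",
                       expires_at=datetime(2010, 8, 1) + timedelta(days=90))
print(reference.check(datetime(2010, 9, 1)))  # within the window -> True
print(reference.check(datetime(2011, 9, 1)))  # a year later -> False
```

The point of the sketch is simply that expiry is part of the grant itself, rather than something the owner has to remember to revoke.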

And though I hate to admit impediments to personal freedom, one need only look to the mortgage crisis to understand that sometimes people need to be protected from themselves, and that yes, sometimes too much margin on too much debt should not be allowed, even if you and the bank both agree that it's ok. Government regulation (of lending rules in the banking industry, for example) is necessary.


Note also that corporations are at liberty to change their usage policies regarding personal information according to their own whims, à la Zuckerberg. Even if the resulting backlash eventually causes the corporation to relent and comply with the public's wishes, it will be little consolation for the compromised parties. And in fact, corporations might decide not to relent at all; keep in mind that your need for privacy is often at odds with their need to make money using your private information.

While I agree with Mr. Tapscott's clear statement that "institutions should be transparent about what they do with our personal information", this information may be published deep within long, boring legal documents that we agree to but effectively never read. Further, I don't think it's always reasonable to ask people to read and understand these documents, any more than it is reasonable to expect them to internalize the ins and outs of an emergency landing in case of flight trouble. The information can be too subtle or technical. It would be ludicrous to print the pilot's manual on the back of my boarding pass in an attempt to empower us as passengers. And so pilots are licensed, insured, and made responsible. We're responsible for the oxygen mask and the seat belt.

To choose a more benign example, how many PC-owning downloaders of the Safari browser realize that Apple's EULA "allows you to install and use one copy of the Apple Software on a single Apple-labeled computer at a time"? How many wade through their (continuously updated) credit card usage agreements until they find themselves in dispute with the issuing company? Now imagine a EULA for your personal health information. How many of us believe we could even interpret one appropriately?

There exists a reasonable expectation that someone much more studied in the implications of releasing sensitive private information will educate us, arm us with the proper tools to allow us to make the right decisions, and help prevent abuse by those that don't have our best interests at heart. Don't get me wrong - I believe in a free society in which we naturally manage (and are expected to manage) much of our own personal information, but we can't be expected to effectively manage access to *all* of it - especially the very sensitive information - without being better enabled.

For instance, without really knowing the details, we expect that neither our doctors nor the provincial government will release our personal health information to private insurance companies. We don't know this because we've all read the government's usage policy on our personal information - we simply expect that this information will be protected because the implications of releasing it inappropriately are potentially disastrous.

This is to say nothing of the fact that a mental model of all of the "accessibility" information we are supposed to manage in this brave new world would probably not fit in our brains. That's not how we manage our most private information now, nor how we managed it long before it became infinitely searchable. In the past, someone might have asked us for a fine-grained piece of personal information in real time ("Can I ask you a personal question?" or "Can I have your social insurance number?"), and we'd use everything at our disposal - including what people smarter than us had said about releasing such information, as well as our up-to-the-minute judgment of how trustworthy the requester was - to decide whether or not to give it to them.


I'm a foursquare user - my phone often prompts me to ask whether it can use my location when it thinks it needs it. I almost always answer "no", reasoning to myself that I'd much prefer to wander under the safe cover of anonymity than be discovered by... goodness knows whom. However, when I eventually get hungry from walking around and start looking for a good place to eat, I'm more likely to answer "yes". In that moment I choose to relinquish my privacy only in favour of the convenience this action affords me. I believe this is a good start for defining a model that manages access to sensitive personal information, but we also need:

  • an expert who has our best interests at heart (i.e. the government) to help us categorize the risks associated with relinquishing certain kinds of sensitive information so that we can make informed decisions. This expert might also assign the information to "risk bands" for easier consumption
  • an (open source) tool that allows us to encrypt, store, and manage our information completely independently of how it will be used
  • trust authorities and key chains to help validate the identities of the requesters
  • a gatekeeper tool that notifies us in real time whenever this information is being requested, denying the request until we've explicitly allowed it

With these, we'd be better armed to make measured decisions about access to all kinds of information about ourselves, from the very benign to the very sensitive.
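As a purely hypothetical sketch of how these pieces might fit together - every name, resource, and risk band below is invented for illustration - a gatekeeper could deny every request by default, check the requester's identity against a trust authority, consult the expert-assigned risk band, and release nothing without an explicit real-time "yes" from the owner:

```python
from enum import Enum

class RiskBand(Enum):
    """Hypothetical expert-assigned risk categories for personal information."""
    BENIGN = 1      # e.g. a display name
    SENSITIVE = 2   # e.g. current location
    CRITICAL = 3    # e.g. health records

# The expert body (the government, say) would publish a mapping like this.
RISK_BANDS = {
    "display-name": RiskBand.BENIGN,
    "location": RiskBand.SENSITIVE,
    "health-record": RiskBand.CRITICAL,
}

def gatekeeper(resource: str, requester_verified: bool, ask_owner) -> bool:
    """Deny by default; release only after an identity check and explicit consent.

    `requester_verified` stands in for a trust-authority/key-chain check, and
    `ask_owner` stands in for the real-time prompt ("Can this app use your
    location?"), which receives the resource and its risk band.
    """
    if not requester_verified:
        return False  # unverifiable requesters never get through
    band = RISK_BANDS.get(resource, RiskBand.CRITICAL)  # unknown data is treated as critical
    return ask_owner(resource, band)  # nothing is released without an explicit "yes"

# Wandering anonymously: the owner answers "no" to a location request.
print(gatekeeper("location", True, lambda res, band: False))  # -> False
# Hungry and hunting for a restaurant: the owner answers "yes".
print(gatekeeper("location", True, lambda res, band: True))   # -> True
```

The design choice worth noting is that consent is gathered per request, at the moment of use, rather than configured once and forgotten.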

  1. Great article! I agree that it’s becoming harder by the day to protect our personal information and we definitely need new measures (preferably from the government) in order to improve protection against threats, and promote a global culture of security in our society.

    Gulmeena Israr
    Jul 29th, 2010

The crew behind ASOT

We're a team of interactive, software, and business intelligence experts skilled in the design, construction, and management of online enterprise systems.
