Tag: Security

De-identification, Data Security and Testing with Production Data

By | September 22nd, 2017 | Enterprise Architecture

While we know that software can expose data, we sometimes forget that writing software can expose data.

When a system gets deployed, we typically build a development environment, one or more test environments, and a production environment.  No surprises there.  However, developing software with sample data, instead of “real” data, can allow defects that are difficult to catch.  On the other hand, using “real” data (typically a subset of production data) carries considerable data security risks.  In this post, I’ll discuss the notion of building a general-purpose de-identification tool specifically for software development and DevOps purposes.  (more…)
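
To make the excerpt concrete: here is a minimal sketch, in Python, of the kind of step such a de-identification tool might apply to a row of production data before it lands in a development environment.  The column names, the HMAC key, and the helper functions are illustrative assumptions, not the design from the full post.

```python
import hmac
import hashlib

# Hypothetical secret; a real tool would keep this out of source control
# and rotate it with each data refresh.
PSEUDONYM_KEY = b"rotate-me-per-refresh"

# Assumed schema knowledge: which columns hold personal data.
PII_COLUMNS = {"name", "email", "ssn"}

def pseudonymize(value):
    """Replace a PII value with a stable, irreversible token so joins
    across tables still work but real identities do not leak."""
    digest = hmac.new(PSEUDONYM_KEY, value.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:16]

def deidentify_row(row):
    """Return a copy of the row with PII columns replaced by tokens."""
    return {k: pseudonymize(v) if k in PII_COLUMNS and v else v
            for k, v in row.items()}

print(deidentify_row({"name": "Ada Lovelace", "email": "ada@example.com",
                      "balance": "12.50"}))
```

Because the tokens are deterministic, the same person maps to the same token everywhere, which preserves referential integrity for testing while hiding real identities.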

Enterprise Architecture and Threat Modeling

By | August 29th, 2016 | Enterprise Architecture

What should an Enterprise Architect know about threat modeling?

I recently asked a question on a LinkedIn group about threat modeling and Enterprise Architecture.  My first surprise came when the first set of responses was from folks who didn’t appear to understand what threat modeling was.  So I guess the first order of business for anyone wishing to consider themselves an Enterprise Architect is to study up on what threat modeling is.

(more…)

Being Forgotten in the Internet of Things

By | June 30th, 2014 | Enterprise Architecture

We all know that Google lost a landmark legal case recently.  As of now, a citizen of Europe has the “right to be forgotten” on the Internet: they can ask Google to “forget” them, so that a search on their identity will not return embarrassing information from the past.  This allows a person to live past a mistake.  Your college indiscretion, the time you were fired for photocopying your butt, the time you got drunk and drove your car into a swamp and had to be rescued… all of that can “go away.”

However, this becomes much more difficult when we consider the emerging Internet of Things (IoT).  In the Internet of Things, the “stuff” that you own can generate streams of data that do not remain within your control.  That data is called “Information Property.”  It is the information that YOU generate, in the things that you do.  I believe that if YOU create a bit of information property, you should own it.

That information property, thousands of tiny bits of data about you or your activities, will wander out of your house, or your car, or your phone, to companies and governments running cloud-based data centers.  That swarm of data surrounds you, and it can be used to profile you, track you, predict your actions, influence your choices, and limit your ability to get “outside” the system.  Most folks will not have any problem with this cloud of data.  At least not at first.

Here is where we will first feel the pain of this cloud of data: when you want to be forgotten.

A parallel that does work

We have been dealing with “data about you” for a while.  When you apply for a loan or a credit card, the information you submit becomes the property of your creditor, and they share that data with credit reporting agencies, along with your payment history, employment history, residential history, status of property ownership, and basically any other factor that finance companies feel would influence your likelihood to pay your debts.  The US Federal Government has placed some controls on this data, but not many.  Europe has placed entirely different controls.  You have no right to be forgotten, but you do have the right to limit the agencies’ memory to a decade.  That allows you to “get past” a mistake or series of mistakes.  You are always “known,” but a mistake can be forgotten.

This is a model we can use.  Here is data about you, outside your control, that gets “forgotten” on a regular basis as it gets old.  Being “forgotten” is possible in the credit reporting world because the data is tied to you, personally.  It is ALL personal data.

This is not (yet) true in the Internet of Things.  If your car sends data to a smart roadway system, there is a great deal of information about where you go, and when, but under most circumstances, your identity is not tied to that data.  It’s the identity of the CAR that is sent, but not the identity of the driver or passenger.  That can be seen as an advantage, because it is tough to link that data to you, but it is possible, and therefore it will occur.  You will be found.  And when it does occur, you no longer have any easy mechanism to PROVE that the data from your car relates to you. This means that if any government creates a policy to allow you to be forgotten, the car data will not go away.  You can’t CLAIM that data because it is not directly linked to you.  You don’t own it.

Think this is a minor problem?  After all, your city doesn’t have a smart roadway yet, and your car doesn’t send data, so this problem is a long way off, right?  Wrong.  If we don’t think of this now, privacy will be sacrificed, possibly for decades. 

The environment of regulations sets the platform on which companies create their business models.  If we create a world where you cannot claim your data, and you cannot manage your data, other people will start claiming your data and making money from it.  Once that happens, new regulations amount to government “taking money” from a company.  The typical government response is to “grandfather” existing practices (or to protect them outright).  At that point, there is no chance of change beyond a snail’s pace.

A proposal

I propose a simple mechanism.  Every time you purchase a device on the IoT, you insert an ID into the device.  This ID is a globally unique ID (my tech friends call this a GUID), which is essentially a very large random number.  You can pick up as many as you want over your lifetime, but I’d suggest getting a new one every month.  A simple app can create and manage these GUIDs.  Every item you purchase during that month gets the ID for that month.

Every bit of data (or information property) sent by the device to the swarm of companies that will collect and work with this data will carry your GUID.
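
As a minimal sketch of how the monthly IDs and the tagging might work (the record shape, the field names, and the in-memory storage are my own illustrative assumptions, not part of the proposal):

```python
import uuid
from datetime import date

# One GUID per month, created on first use.  The "simple app" from the
# proposal would persist these so you can claim your data years later.
_monthly_ids = {}

def guid_for_month(today=None):
    """Return this month's GUID, generating a fresh one on first use."""
    month = (today or date.today()).strftime("%Y-%m")
    return _monthly_ids.setdefault(month, str(uuid.uuid4()))

def tag_record(record):
    """Attach the owner's current GUID to an outgoing IoT data record."""
    return {**record, "owner_guid": guid_for_month()}

# Example: a car emitting a reading that now carries a claimable ID.
print(tag_record({"device": "car", "reading": "location-ping"}))
```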

Note that your GUID allows those companies to link your data across devices (your phone, your car, your refrigerator, your ATM card, your medical record, etc.).  Is this allowed?  Perhaps one government or another will say “no,” but that control will be easily worked around, so let’s assume that you cannot control this.  The thing I want to point out is that this kind of linkage is POSSIBLE now; it’s just more difficult.  And that difficulty is being overcome quickly, as the number of computing devices grows geometrically.  Let’s assume that folks can do this NOW and that you will NEVER be able to control it.

Therefore inserting an ID is not giving up control.  You don’t have it now.

But it is possible, with the ID, to TAKE control.  You will be able to submit a request to a regulated data management company (a category that doesn’t yet exist, but could), and those systems can identify all the data records carrying your ID and delete them.  Only if you can claim your data can you delete it.  By inserting a GUID into your Internet of Things devices, you have gained a right… the right to claim your data, and therefore to delete it.

It will no longer be a matter of sending a single message to a single search firm like Google.  The request to delete will have to go to a broker that distributes the request, over time, to a swarm of data management companies, to remove data tagged with these IDs.
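
A minimal sketch of that broker, assuming a hypothetical “delete by GUID” interface on each data management company (no such protocol exists today):

```python
# Hypothetical: each regulated data management company exposes a way to
# delete every record tagged with a claimed GUID.
class DataCompany:
    def __init__(self, name, records):
        self.name = name
        self.records = records  # dicts tagged with an "owner_guid" field

    def delete_by_guid(self, guid):
        """Drop all records carrying this GUID; return how many were removed."""
        before = len(self.records)
        self.records = [r for r in self.records if r.get("owner_guid") != guid]
        return before - len(self.records)

# The broker fans one "forget me" request out to every registered company.
class DeletionBroker:
    def __init__(self, companies):
        self.companies = companies

    def forget(self, guids):
        """Distribute a delete request for each claimed GUID."""
        return {c.name: sum(c.delete_by_guid(g) for g in guids)
                for c in self.companies}

road_co = DataCompany("RoadCo", [{"owner_guid": "abc-123", "trip": "home-work"}])
broker = DeletionBroker([road_co])
print(broker.forget(["abc-123"]))  # {'RoadCo': 1}
```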

Some implications

Now, before anyone complains that a company, once it has data, will never let that data go, I would submit that this is nonsense.  Ninety percent of the value of information comes from samples covering less than 2% of the population.  In fact, the vast majority of data will be useless, and plenty of companies will be looking for excuses to toss data into the virtual trash bin.  If a customer asks to delete data, it costs a micro-cent to do, and that data is probably clogging things up anyway.

Getting a company to spend the money will probably require regulations from large players like the EU, the USA, China, Japan, Brazil, and India. 

The time to act is now

Now is the time to ask for these regulations, as the Internet of Things is just getting started.  Companies that build the ability to create and manage these IDs, and to respond to requests to delete information, will have a leg up on their competition.  Customers will trust these companies more, and the data will be more accurate for consumers of these data services.

You cannot delete “information property” until you can claim it.  The ID is the claim. 

Kudos to Cambridge for refusing to cover up security holes in “Chip and PIN”

By | December 26th, 2010 | Enterprise Architecture

One challenge with long-running news stories is that it is often difficult to keep track of the “current” bits.  Even important news can seem like “old” news because the problem is taking so long to be resolved, or even addressed.  What worries me is that many folks, especially here in the USA, are completely unaware of this story. 

I’m talking about the flaws in the Chip-and-PIN system for credit card validation and in the “Verified by Visa” e-commerce validation system.  It turns out that both systems, heavy investments by the credit card industry intended to reduce fraud, have not had the intended effect.  Fraud has increased despite both changes.  Security researchers at Cambridge University have pointed out these flaws for years, in paper after paper, in the open.

Here’s the kicker.  On December 1, 2010, the UK credit card industry sent a letter to Cambridge to ask them to take a research paper off its website.  Effectively, they asked the University of Sir Isaac Newton and Charles Darwin to censor the valid (yet embarrassing) research of one of their own scholars because he pointed out serious flaws in the Chip-and-PIN system.  I am not surprised by their request, nor by the response of the University… they refused.

On the other hand, at the first sign of censorship, I encourage all of us to Read Dangerous Works, Think Dangerous Thoughts, and Embrace Dangerous Ideas.  Only through the consumption of dangerous ideas can they survive.  And survive they must, because all truly innovative ideas were, at one time or another, dangerous. 

What makes an idea dangerous?  When a powerful person seeks to censor it, it is dangerous.  This goes for burned books, blasphemous websites, and, yes, for dry technical white papers that point out that the banks are pushing for a massive shift in liability, hoping to move liability for fraud from the banks to the banking customers, to the tune of hundreds of millions of dollars, by “selling” us on a security system that is not secure.

The researchers at Cambridge have been getting the media to notice.  I encourage folks to watch the BBC news broadcast covering the story, which is available on YouTube.

Now, my regular readers may be surprised to see me take a stand against censorship.  After all, just a few weeks ago, I expressed strong concern over the publication, by Wikileaks, of a list of potentially valuable targets for terrorists.  Was I not asking for censorship then?  What changed?

I walk a fine line here.  After all, what is the principle I am following that says “Cambridge is right to publish instructions for thieves, while Wikileaks was wrong to publish instructions for terrorists”?  The principle is simple: the value of human life.  If information, widely shared, has the opportunity to lead directly to the loss of human life, it should not be widely shared.  If, on the other hand, information widely shared can drive good behavior on the part of powerful people without endangering human life, it should be shared.

Falsely yelling “Fire!” in a crowded theater is not “protected free speech” because people can be injured or killed.  On the other hand, publishing a list of theaters that have inadequate fire safety protections is protected free speech, because the theater owners now have a reason to improve their safety records or face the loss of business to competing (safer) theaters.  (If this example seems a bit antiquated, especially to folks from outside the USA, I’m referring to a case decided by the US Supreme Court in 1919.)

The publication of imperfections in the security scheme of credit cards is similar to my example of publishing a list of theaters with poor fire-safety protections.  Customers who frequent merchants using the Chip-and-PIN system, and the Verified by Visa system, are not safer as a result and may, in fact, be LESS secure.  As consumers and free citizens, we have the right not only to vote with our wallets, but also to demand regulations that will drive good behavior on the part of credit card companies.  Now that the USA has a branch of the government specifically chartered with consumer protection, perhaps this is an issue that it can take up.