A Blockchain Solution For Identity?

Vinay Gupta
Mattereum - Humanizing the Singularity
44 min read · Jul 7, 2017


Video and transcript below. Slides available here.

This talk explains the use of insurance to integrate various blockchain identity solutions into a usable whole.

This talk title I basically put up kind of as a joke, because nobody really thinks about what they do as ontology, except for maybe a few librarians and knowledge managers. But I guarantee that if we sit down, every single organisation in this room will have a different idea of what we mean when we say the word “identity”, and most of the time they will use exactly the same words to describe the concepts which are used in different ways inside of their organisation. They’ll use words like “trust”, “We think identity is about trust,” and everybody will agree, “Yes, identity is about trust,” but what’s inside of their head will be a completely different thing, because when you say “trust” in medicine and when you say “trust” in aerospace and when you say “trust” in finance, they mean three different things. Trust in medicine: will the patient die or not? Have we done everything we know how to do? Trust in aerospace: you need a chain of custody for the stuff. Trust in finance: is this guy going to steal my house? Totally different models.

What I’m going to present is what I think is a simple way of getting organisations to be able to make their trust models interoperate, and the way that we’re going to do this is we’re going to bang everything flat and turn it all into money. Because all of the organisations know how to interoperate in a straight financial sense, and if we try and synchronise all of our ontologies and all the rest of that stuff, it’s going to take about 300 years before we get a thing that works, so we’re just going to bash it all flat until it looks like money, and at that point we should be able to build an identity system that actually works in the Netherlands in about a year. How does that sound for an ambitious goal? I think you could build a working system in less time than it will take to get prototypes done.

How do we clarify identity with better databases? We all know the blockchain is basically a database, and most of what I’m going to say could be done on any kind of better database: blockchains, meshes, Weaves, scaled blockchains, permissioned ledgers; it doesn’t really matter, because what I’m going to talk about is fundamental understandings of how the world works, and some ideas about how to build financial instruments that solve problems that we currently have. This is really a talk about a new financial instrument; what you use to implement that financial instrument doesn’t really matter, you could even have different institutions using their own representations of these financial instruments and interoperating. The idea is to bash everything down to the lowest common denominator, which is cash, and then you can carry that using whatever underlying technology you have.

About me: hands up anybody that has no idea who I am or why I’m here. Okay — that’s shocking. In the 1990s I was one of the cypherpunks, I was very interested in cryptography and human rights. By 2006 I was working for the Department of Defense on a very similar project: the idea was to produce an identity card standard that could be used in Iraq and Afghanistan without differentially empowering bad actors. I led the processes for the Ethereum launch, so I basically held that part of the show together, and basically, there aren’t very many experts in this stuff. This is a frontier, and nobody has deep expertise on a frontier; you just have a bunch of people with capability, wandering around and doing things, and I’m certainly one of those.

I was asked to do a very quick run-through on what a blockchain is — I’m going to make this very brief. The only reason anybody has heard of the blockchain is because you can make your own money on a blockchain. You have some bunch of lunatics that figure out how to print $60–120 billion of anonymous digital cash. Everybody stares at this in wonder and then says, “Okay — whatever they did, I want some of that.” The underlying technology from Bitcoin is called the blockchain. Ethereum, the project that I was part of, then adds a whole bunch of programmability back into that system, and, as I mentioned, this stuff is ridiculously valuable right now.

The blockchain itself is basically three components that are fairly common, taken to another level. There’s a peer-to-peer network, and the world is filled with peer-to-peer networking software: Skype, BitTorrent, Tor, all of this stuff — the underlying network technology in blockchain is not that new. A shared database, something like Google Docs or Dropbox, so there are a bunch of synchronisation algorithms that keep everything well-behaved, so one person looks at the world, somebody else looks at the world and they see the same thing. Then finally, there are some rules for updating the shared database, and this is where all the magic is packed. You could think of the entire thing as being a new software architecture, so it’s in a class with things like any of the big enterprise software abstractions: model-view-controller or cloud, all of that stuff is in the same basic architectural bucket. Usually the blockchain is seen as a platform, even though it’s actually an architecture; a running blockchain could be called a platform, but the underlying thing is actually an architecture.
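
To make those three components concrete, here is a minimal, illustrative sketch in Python, a hash-linked shared log plus an explicit rule for which updates are allowed. The block layout and the “no negative balances” rule are invented for illustration and are not any real chain’s data structures.

```python
# Toy illustration of the three ingredients: a shared log, hash-linking for
# tamper-evidence, and an explicit rule for valid updates. A sketch for
# intuition only; real chains add consensus, signatures, gossip, and much more.
import hashlib
import json
from dataclasses import dataclass, field

@dataclass
class Block:
    prev_hash: str       # link to the previous block in the shared history
    transactions: list   # e.g. [{"from": "alice", "to": "bob", "amount": 4}]

    def hash(self) -> str:
        payload = json.dumps([self.prev_hash, self.transactions], sort_keys=True)
        return hashlib.sha256(payload.encode()).hexdigest()

@dataclass
class Ledger:
    balances: dict = field(default_factory=dict)
    head: str = "genesis"

    def apply_block(self, block: Block) -> bool:
        """The 'rules for updating the shared database': reject invalid updates."""
        if block.prev_hash != self.head:
            return False                 # must extend the agreed-upon history
        new = dict(self.balances)
        for tx in block.transactions:
            new[tx["from"]] = new.get(tx["from"], 0) - tx["amount"]
            new[tx["to"]] = new.get(tx["to"], 0) + tx["amount"]
            if new[tx["from"]] < 0:
                return False             # rule: you cannot spend money you do not have
        self.balances, self.head = new, block.hash()
        return True

ledger = Ledger(balances={"alice": 10})
print(ledger.apply_block(Block("genesis", [{"from": "alice", "to": "bob", "amount": 4}])))    # True
print(ledger.apply_block(Block(ledger.head, [{"from": "bob", "to": "carol", "amount": 9}])))  # False
```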

The funny business is that normally, when we’ve got a shared database, we don’t have sophisticated ways of figuring out how the rules will be updated. In most of what we’ve seen so far from peer-to-peer networks, the unit of selection is a file: you move a file up to the Dropbox or you take a file out of the Dropbox, but there are no real instances of these things that have complicated rules for modifying the contents of a shared file. That’s an operation that we don’t do very much, and that’s where blockchain actually gets its power from.

We set up a set of rules for the shared game that constitutes Bitcoin or Ethereum. In that shared game, we have a software implementation of what amounts to a central bank. Any central bank that you’re going to see is basically a bunch of software and some government accreditation: the banks are made of software, so if somebody takes a very simple model of a bank and puts it on top of one of these blockchain platforms, it’s called Bitcoin, and it basically prints $4 million a day into existence and uses that money to pay the people that are running the platform. Now, the exact technical details of how it does that involve a whole bunch of very, very fiddly stuff about the speed of light and provable randomness and cryptography to ensure fairness and so on, but the basic operation is you have a software platform, everybody runs a little piece of the platform, the software platform implements a shared database, the shared database has a set of rules which allow you to move money around, it makes $4 million a day of funny money, and it just appears from nowhere, like a central bank of the Internet. It doesn’t actually sound that clever 10 years after we’ve had a chance to look at it, but let me tell you, five years ago we looked at this thing like, “This was left here by aliens.” I’m not kidding.

On that platform, we then come along and say, “What if we want to do something which is not a central bank but has the same kind of reliability and robustness that this central bank of the Internet has?” I’m guessing that none of you are all that interested in using currencies to solve these problems; the objective is not to make some kind of coin, but to take the underlying, enormous strength of these databases and use it to solve a different big problem: not “How do we make payments?” but “How do we manage…” I don’t even want to use the word “trust”, but we’ll call it trust for now. This is our starting point. We’re basically taking a chunk of this Promethean fire and we’re using it to make something else.

Now, this is super important in terms of really getting why this technology is here, but it’s irrelevant to everything else I’m going to talk about, so if you don’t follow this part, don’t worry about it. The speed of light is why we have all of this technology around blockchains. Light is actually quite slow. You typically think of Internet communication as being instantaneous, but if you do a call with somebody that’s literally on the other side of the world, you’ll keep clipping the beginning of their sentences and they’ll keep talking over you. That’s happening because the round-trip time on the Internet, backwards and forwards to, say, Australia, is actually something like a seventh or a tenth of a second; it’s long enough to notice, because the human systems for conversation run much faster than that. The speed of light is just a constant, you can’t really change it, it’s as fast as it goes, so we’re always going to have this delay of a tenth of a second, up to maybe even a quarter of a second by the time you get into the networking layer. That delay is just fixed, we have to live with it; the further away two things are, the longer the message will take to propagate.

Inside of your computers you have a silicon chip that does all the work. The chip is maybe a centimetre across, and light has to go backwards and forwards across that maybe 50,000 times before it can complete a transaction. If you imagine 50,000 centimetres, that gives you something like half a kilometre, and in the length of time that it takes light to travel that half-kilometre, you can do one transaction, so by the time you measure to the other side of the world and back, you’ve done hundreds of thousands of transactions at home, or with your next-door neighbour or with the guy in the same city, in the length of time that it takes a single transaction to reach the destination and come all the way back to a confirmation. You can’t get around the fact that computation has a geography, and from that fact come the possible alternative shapes that the geography can take. High-frequency trading is one shape: you pick a single centre, you put the big computer in the middle, and the traders radiate out from it in concentric rings, depending on how much rent they can pay for the land that they’re on. In New York they sell space close to the exchanges by the cubic metre, and they’re packing supercomputers into ever-smaller boxes to carry out their trading algorithms and get the trades in faster. This is a colossal waste of time and energy, it’s a crazy, crazy way to run a world, but that’s what they do.
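
The arithmetic behind that claim, treating the talk’s figures as rough assumptions (a chip about a centimetre across, on the order of 50,000 light-crossings per transaction, and an Earth-scale round trip of roughly 40,000 km of network path), works out like this:

```python
# Back-of-the-envelope only; every number here is a rough assumption from the talk.
c_km_per_s = 300_000                       # speed of light in vacuum, km/s

chip_width_km = 0.01 / 1000                # a 1 cm chip, expressed in kilometres
crossings = 50_000                         # light-crossings needed for one transaction
distance_per_tx_km = chip_width_km * crossings
print(distance_per_tx_km)                  # 0.5: light covers about half a kilometre per local transaction

round_trip_km = 40_000                     # very rough intercontinental round trip
print(round_trip_km / distance_per_tx_km)  # ~80,000 local transactions per round trip by distance alone;
                                           # slower light in fibre and real routing push this past 100,000

print(round_trip_km / (c_km_per_s * 2 / 3))  # ~0.2 s: the familiar tenth-to-quarter-second delay (fibre is ~2/3 c)
```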

Similarly, Google faces this problem, and their approach is a thing called Spanner. They basically put an atomic clock in each one of their data centres, they time how long it takes to communicate between all the data centres so they’ve got a really accurate map of the propagation delays between them, and then they have a synthetic now. Because they’ve got this map of how long the delays are, when one data centre says, “It happened now,” and a data centre on the other side of the world receives a similar transaction at almost exactly the same moment, they apply the correction for the time delay, the one that happened first becomes the real one, and the other one is thrown away. But to get that kind of synchronisation function for the entire world they need atomic clocks, they need to relativistically calibrate and compute their networks. This is crazy expensive, and it’s not something that most institutions are ever going to do. It’s also not likely to become planetary infrastructure, because how do you trust that the people calibrating your clocks have done it right and aren’t playing with time for their advantage? It’s hard to trust somebody else to manage your clock.

The final thing that you come to is blockchain, and the blockchain just comes along and says, “We’re going to get rid of the speed of light advantage by having a 10-minute block, or even a 15-second block, so the block is long enough that the speed-of-light delays don’t make much difference, and then we’re going to make it even more reliable and robust by randomly picking where in the world we’re going to use this as the centre of the world for the next transaction batch.” You randomly pick a new point in the world on every batch, all of the mining stuff is about making yourself eligible to be picked as a potential site to process the batch, you then process the batch any way you like, in the order that those messages came to you usually, then you put the thing out there and that’s the next block, and that’s your blockchain. But it’s worth understanding that this is not a trivial response to a bunch of lunatics who wanted to print their own money; it’s a fundamental approach to a hard problem in computer science that in one direction gave us HFT and in another direction gave us Google Spanner and in the third direction gives you blockchain. Anything that we do in the future is going to have to overcome these constraints, so you’re going to continue to see weird computing architectures that attempt to allow us to live in the illusion of a simultaneous now, even though the speed of light delay dominates everything to do with machines, so there is a deep thing there. If you want to understand this stuff, dig up a thing called the CAP theorem, it really runs through why this is the way it is. That’s a little bit on blockchain, and now we’re going to get into the trust stuff.

Most of the stuff you’re going to see in identity is just what I’ll refer to here as an irregular mass of edge cases, it’s just tons and tons and tons and tons of weird, difficult stuff that doesn’t quite behave the way that you wanted it to behave. You think you’re going to use DNA for everything: you’ve got identical twins to deal with; you think you’re going to use passports for everything: you discover that some failed states issued passports after the point where their government collapsed, because the guy in the passport office still had a job, but there was no state to issue the documents; or you wind up with all kinds of irregularities, and as a result it’s very difficult to sit down and take a clean, simple approach to identity, because identity is much closer to biology than it is to engineering. We’re fundamentally looking for a whole bunch of stuff which is about managing a living system called people. And if you think about how irregular and strange biology is, biology is filled with exceptions. Every rule that you could possibly have, biology will have an exception: ants that fly, fish that walk, snakes that breathe underwater — you name it, there is an exception — and this is what identity is like, it’s just always filled with exceptions, there’s no regularity that we can push on. There are really reliable averages, but unfortunately, you can’t build a system just to handle the averages; it’s like having only one size of shoes.

Let’s take a new baby. Child number one is born with a massive store of identity behind it, because the parents are known, they own houses, they’ve got jobs, you’ve got a solid record of that kid’s ancestry for three generations in the tax database; no ambiguity, the identity is completely straightforward and unproblematic. Baby number two: two refugees have a kid while they’re travelling, they arrive, they are granted asylum, but they don’t have any money so they haven’t got a fixed address, they’re basically living with friends, and the country they came from is now a failed state, and you’ve got no idea what their identity situation is over there. You can have a situation where both of those kids are fully legitimate UK citizens — one has rich identity, the other has practically no identity and just a birth certificate — and the rest of the national system has to treat them exactly the same way from that point onwards because they’re both UK citizens. And this is by no means the weirdest kind of problem that you get in an identity system; that’s a typical problem in identity. How can you make a system which will equally well represent both of these individuals, when one of them has a completely solid frame of reference and the other has nothing at all?

This is the identity problem. It’s not about the average case and figuring out how to make the average case work. If you build a system that kind of sort of works for the average case, it will be completely harassed by the things that nobody ever thought of when they built it, and then it will become an enormous maze of exceptions. This is part of why banking is the way that banking is. Banking is a sea of exception handling: every time something goes wrong, every time something gets complicated, another department, another couple of people in the department, another set of rules for those couple of people to manage; it’s a constant accumulation of edge cases. The same thing is why the US tax code is 100,000 pages long: edge cases. The real world is super complicated, and when you put money on the table, people get really, really clever, and whatever identity architecture comes out of this is going to have to deal with that.

Biometrics. Everybody loves biometrics in theory; in practice, I have never yet met an organisation that was happy with its biometric rollout. The stuff doesn’t work, not cleanly, not reliably, not for all people. If somebody comes in and they’ve mashed their hand in a car door over the weekend, they can’t get into their computer. Now what do we do? “I don’t know what my password is, I always use my thumb! — Well, your thumb is in a bandage. What do we do?” Lots of companies will make claims for their biometric systems which are largely exaggerated; they’ll tell you they have a 100% accurate eyeball scanner or whatever it is. Some percentage of the population don’t have eyes: problem. If you’re introducing it for everything, this will be an issue. Eyes change over time; the companies will tell you it is 100% reliable, but 30 years later how much has the eyeball changed? They’re not likely to be willing to pick up the pieces when you begin to get these kinds of problems, because the companies that are selling the technology are not large enough to bear the weight of fixing the problems if the problems are large.

Biometrics are unlikely to provide a kind of one-click answer to the identity problem, even before we start looking at the terrible problems of privacy and trust and people cloning biometrics. If we went to a largely biometric solution, I bet there would be a roaring trade in rubber fingers. You put your glass back on a bar, somebody lifts the fingerprints, takes a picture of your face from the security camera, sells it as a package on the Internet, “I’ve got a fake finger for this person,” and you could have tons of identity fraud even in a system that had really good biometrics, because you only have to fool the sensor on the end of the process, and the sensor on the end of the process costs 50 cents. It’s not going to provide you with the kind of bulletproof protection that people want, because it’s very hard to keep control of the biometrics, and if I know what it’s supposed to look like, I can almost certainly fool a cheap sensor. There is no solid joy down this path, not in that instance.

The other problem that we have with identity, and this is really the worst problem in the entire space, is that identity is a magnet word for bad thinking; the only worse magnet word for bad thinking that you’re going to get is religion. Everybody has a feeling inside of them that they are a unique entity that in some way is special, and the thing about this specialness is it’s ineffable. You can’t put it into words, you can’t communicate this special, magic essence inside of us to people. Maybe there are a few great artists or dancers or novelists that in some way manage to communicate this special thing about them in a concrete way that other people recognise, but for the most part, part of what makes a human a human being is that this deep, internal sense that I am something distinct from other people is almost impossible for us to express in a way that others can recognise. We hear the word “identity” and we think, “Well, I am this special snowflake, I’m unique, I have this thing which is meness,” but what we don’t understand is that we have no way of expressing that to other people in a way that they can instantly and immediately recognise.

Everybody thinks, “Identity is that special me-thing, we just need to find a way of naming that,” but in practice, it’s almost impossible to get a reliable expression of this identity concept in a way that allows other people to recognise it, never mind the machines, and this mismatch between how we feel our own identity and the problem of dealing with identity as a bulk property of enormous numbers of people means that our intuition about identity is almost always completely backwards. The constant grasping for the sense of a solid, Platonic, real self that we can then somehow tag with a machine just leads us into a whole sea of dead ends, and everybody that comes to this field goes through a phase where you just go, “Oh, it doesn’t work,” and then we have to pick up and move on from there. I can see some of the people that have been around this for a while nodding, like, “Yeah, we got there too.” Don’t worry, there is a solution! [laughter]

What I want to do now is nail down this thing called reductive trust. Reductive trust is where you make the problem simple enough that you can find the trust solution that will work for it. A good example is cryptography on mobile phones: you simplify the problem down until you can put it on a little chip, you put the chip in the phone, and the phone is now authorised to take money out of your bank account. Reductive trust is really great: if you can find a problem that is suitable for reductive trust, you can engineer systems, they work cleanly, and it’s all great.

Humans cannot be handled with reductive trust. It doesn’t matter how much you try and simplify and standardise — you’re going to tag all the kids when they’re born, you’re going to carry them through the system with a constant handoff of identity documents — you can build all of that, but the infinite variability in human life and the really surprising variability in human biology make it almost impossible to build reductive trust systems that work with humans. Even the military can’t really make reductive trust work for its own staff. You know the saying about military spec in America? You give two young Marines in a padded cell two bowling balls, and when you come back one bowling ball is missing and the other one is broken, and neither Marine knows anything. Even in completely artificial environments where we have massive control over circumstances, we can’t get reductive trust to work. Even in the military, how much equipment goes missing or is broken unaccountably or just gets lost? How often do you find that personnel have a different background from the one you thought? How often do records get lost? Even in the most controlled environment chaos is still king — entropy rules everything.

Lesson number one: we cannot simply stamp a number on the engine block of every human being. There is no way of applying reductive trust to human beings, we cannot simplify the problem in any meaningful way and expect it to behave itself, so anything that smacks of “We just reduce the problem to putting a big old number on every single human being, we’re going to enumerate the set…” Forget it, it’s just not the way it will work. In practice, 1–2% of the population will turn out to be completely immune to whatever reductive trust model was applied, because they don’t have the right kind of history or they don’t have the right kind of biology and the system will fall to pieces — it’s just not viable.

I want to suggest looking at a different question, which is not the insoluble identity problem but the consequences of identity. We have this utopian vision of how we want the world to work if the identity system is perfect. Somebody comes and says, “Hey, you want to buy my house?” you smile, you shake hands, you have a little conversation, you wave each other’s mobile phones at each other, all of the document stuff gets verified, maybe you shoot a little video of yourself agreeing to the deal, and then the deal is done — boom, everything is perfect. This is the sort of world that we want if the identity system works. A working identity system should just let us get things done. That might be a world that we could create, without having to get this kind of reductive trust model going.

Other things that are kind of hard to do that you’d like to be able to sort out: how do I make sure that if I’m in an auction, none of the other people in the auction are actually the same person? If I’m the one selling the object, it would be very advantageous to be able to bid against myself sometimes in the auction. If we’re playing poker and two of the people at the table are actually the same person, they’ve got all kinds of tricks that they can play, where one player will sacrifice short-term interest for the greater interest of the other player. Quite often we want to prove negatives, and proving negatives is really hard. Almost all of the identity stuff that we’ve discussed today is about proving positives. Is X Y? Is the person who bought this house now the person that sold the house then? But trying to prove that people are different is an equally important problem; it’s less common, but it’s incredibly key to the functioning of the economy in a lot of places: you don’t want the businessperson to also be their own auditor. We need an identity which allows us to separate as well as to prove continuity; you have to be able to make distinctions.

What I’m going to suggest is that we want a system that lets us make good decisions often enough that the weight of the good decisions outweighs the weight of the bad decisions, but what we’re really looking for is a systemic function. The system as a whole functions, the system as a whole is profitable, we are always in a position where we’ve got enough good stuff going on that it covers for whatever little problems we have when things don’t work. I’m going to suggest that the rational way of approaching this is that we need to design insurance models which allow us to take this kind of trust problem and manage it in an efficient, integrated, sensible way.

Insurers are already people that know a ton about you. They know what you’re worth, they know anything special that you own, they know where you live, and they’ve often got these data sets for years and years and years. Most people probably haven’t thought about their insurer in years: they send you a bill, you pay the bill, and it goes on. There could be somebody in this room with a 15–20-year relationship with their insurer, and you’ve almost certainly thought about it less than you think about your bank. The insurers are probably in a position where a lot of people already trust them, they’re in a position where they’re relatively good at keeping secrets, they’re in a position where they don’t really talk very much to other institutions, which is quite nice, because it means they don’t have a bunch of unfair advantage to draw from insider knowledge. They’ve got a bunch of really good attributes for doing the kind of stuff that we want them to do.

Also, insurance is something which is relatively limitless. Right now VISA is probably the biggest global access point to the financial system that we have, and VISA is choked off by its inability to manage the identity problem. They can’t get outside of basically the rich areas of the world, because you just can’t issue a VISA card to a homeless person in Bangladesh, you don’t have a foundation of trust that allows you to extend credit to them, and as a result VISA is deadlocked in its ability to do finance. Ideally, what we would like is a system that has global reach. Use the insurance paradigm to build something which allows you to get into every single corner of the world in principle. The technology and the finance and the regulation might stop you from getting there today, but you want to be able to get there in the long run. I think this is particularly relevant for a Dutch audience, because at the end of the day, you’re a trading nation. You understand that a lot of the stuff that came in the past that made your ancestors wealthy came from places where they barely understood the culture; they just showed up and did business. That ability to just show up and do business is incredibly useful, and you could be custodians of a system that allows you to do that. There is no reason that a small country can’t have global reach — this is one of the great advantages of the Internet.

Conclusions: biometrics and state documents are both unreliable. People are constantly trying to get driving licenses when they shouldn’t have one, they’re trying to forge passports, they’re making fake identities for identity theft, they’re doing all of this stuff and it’s common. Biometrics has similar problems, and this is before we’ve seen any really organised criminal attempt to subvert the biometric systems. At this point, give up all hope of getting hard identity out of the human system — not happening, not going to come. If we’re going to have to deal with probabilities where every identity is a probabilistic function… I don’t tell you that John is this particular John that has that tax number, I tell you this guy is 95.9% likely to be that guy. You think, “That’s not good enough, I need better information, I need 99.5% before we can do the deal.” All of the identity transactions that occur in the real world are probabilistic transactions, and that’s the case for the real data. The actual defect rate for almost anything is about 1% if human beings are involved; I’m sure there are organisations like banks which are trying to track down money that they’ve given people, I bet they know exactly what their misidentification rate is and I bet it’s not zero.

Active, oppositional fraud, constant exposure to other people’s risk, so you’re constantly getting countries that have less solid identity data coming and doing business with you… Not a problem to be solved but a risk to be managed. This is the kind of paradigm shift that I’m asking people to make. Can we generally agree that this sounds sensible? Are there any questions or objections at this point? Because if this part is wrong, the rest is basically a waste of time. [laughter] Does this sound about right?

Question: Is there also a probabilistic law about identity and legal compliance?

I think it has to be probabilistic. In fact, I’m going to come to compliance at the end of this, because this is a super hard problem. The law would like it to be that you as the business know exactly who you’re dealing with, but in actual fact, the law knows that there is an error rate, and somehow there has to be a negotiation about taking that error rate from something that’s pushed under the rug to saying, “This is our identity compliance rate. Is it high enough or not?”, which is exactly what they do for, say, aviation. It has to be six nines perfect, five nines perfect or four nines perfect; you figure out what the compliance rate is and then you express that probabilistically. I can’t see another way of doing this.

Okay, now some fun: blockchain guy will beat on the blockchain.

The cryptographic fallacy. It’s just a computer: if you tell it something which is wrong, it will store the wrong thing that you have told it perfectly. The cryptography doesn’t correct errors; it preserves them, it pickles them, it makes them really, really, really clear, it stores them for you, but it doesn’t correct the errors. What cryptography offers identity is lowering transaction costs, and this it can do very reliably. You got it right the first time, you store the fact that you got it right the first time in there, you can transmit that and extend that fact to other people very easily. What it does not do is help you get it right the first time. The reason that blockchains have this amazing reputation for reliability is because they don’t touch the real world, and if you have any computer system that never touches the real world, it’s incredibly reliable. It just lives in its own universe: the coins are generated on the blockchain, they’re spent on the blockchain, they’re used on the blockchain, all the assets are on the blockchain — this is where their reliability comes from.

HTTPS certificates are supposed to be the gold-standard secure backbone for processing all your credit cards online. Agencies called certificate authorities are supposed to provide the identity layer for this: you give them $15,000 and ask for a certificate, they check your paperwork and make sure that you’re working for the organisation you’re supposed to be working for, and they hand you a certificate. This goes wrong really often. There was a cracking one of these where a Microsoft Security Update certificate was mis-issued, and somebody went around basically issuing viruses that your computer was programmed to treat as Microsoft software. If even the certificate authority infrastructure is vulnerable to fraud, how are you going to have an identity system that’s equally protected? It’s just not going to happen.

Nothing is worse than bad information with digital signatures on it. Because people look at the digital signature, they usually don’t even verify it, “It’s got a digital signature so it must be true!” If they do verify it, “Oh, it was said by the person who said it, that means it must be true.” They really want to believe that the magic block of crap on the bottom of the message equals the message is the truth, and that’s a sociological problem. People are still mystified by this wizard crypto stuff, and they constantly want to overextend the trust onto those systems; we don’t want to do that. The cryptography is for reducing the transaction costs; it’s not for securing the system. These things have to be separated.

Given that we’ve got this predictable risk — 0.5%, 1%, 2%, 4% — given that we’ve got systems that are under constant attack from fraud, let’s figure out how to get the insurers to take up the slack. VISA only works because VISA insures every credit card transaction. You would not use VISA at all if every time somebody ripped you off you lost the money that they had taken. VISA is big enough and heavy enough that it generates enough extra transaction volume that it can just pay for the losses on the way. They’re taking their 1% cut of global transaction volume, or something like that, and they’re just using that to smooth out any little problems that occur, “Oh look, we lost a lot of money over there. Just pad that out,” and as a result, we use the VISA system as if it was completely reliable and bombproof. Our user experience of VISA is that VISA is perfect; VISA’s experience of VISA is that they are running around like mad people just whacking problems with a mallet, but we never see that because our risks are insured, and so the system appears to be perfect to the people who matter, who are its customers. This is what we could do for identity. We could get a system that’s 99.5% right, we could cover the remaining cracks with insurance, and then we could have a system which basically feels perfect, because it’s got a properly-designed way of handling the gap between what the computer says and what the world actually did.
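
A toy solvency calculation makes the point: as long as fee income comfortably exceeds expected fraud losses, every customer can be made whole and the system feels perfect from the outside. The rates below are invented for illustration and are not VISA’s actual figures.

```python
# Illustrative numbers only: how fee income can absorb a known fraud rate so
# that the customer-facing experience looks "perfect" despite a non-zero error rate.
volume = 1_000_000_000        # $1bn of transactions processed in some period (assumed)
fee_rate = 0.01               # the network's cut of each transaction (assumed)
fraud_rate = 0.001            # fraction of volume lost to fraud (assumed)

fee_income = volume * fee_rate          # $10,000,000 collected in fees
expected_losses = volume * fraud_rate   # $1,000,000 refunded to defrauded customers

print(fee_income, expected_losses, fee_income - expected_losses)
# The operator stays profitable while refunding every loss, which is why users
# experience the system as bombproof even though the underlying defect rate is not zero.
```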

Comment: Credit card fraud is in the billions, that’s what we talk about. The issuer of cards can offload it to the schemes and in the end the merchants pay, and so that’s how it works.

Exactly, and if we didn’t have that system, credit cards just wouldn’t be a thing. We could have a much more reliable credit card infrastructure, we could cut fraud by 90%, and you could argue that the insurers are preventing us from doing that by making us too comfortable… Pathological.

Comment: [inaudible, 34:47] Flaw in the system… but also the fact in your published documents…

Yes, absolutely. There’s no denying the credit card infrastructure should be tighter, but if it’s a choice between a somewhat loose credit card system that covers up cracks that maybe it should be fixing…

Comment: By the way, the same for MasterCard.

Yeah, absolutely. It just shows that these systems can be viable global infrastructure even with a bunch of cracks, and we’re not going to get perfect systems, so we’ve just got to figure out how to get the insurance guys involved, or we can’t do it.

This is another important pattern. If the systems are really close to perfect, they need more insurance, not less insurance. Aviation is dominated by very nearly perfect engineering and absolutely gigantic insurance settlements. With taxis, when they go wrong, they just turn the wrong way on the road and you wind up having to go 20 miles out of your way and they charge a lot more money, and there’s no insurance for that because it happens all the time. When you get closer and closer and closer to perfect systems, the need for insurance rises rather than falls, because nobody ever plans for them to go wrong. The identity system is going to be an insurance system.

The pattern here, the deep, conceptual template, is a composite like carbon fibre. With carbon fibre you have this long, stringy stuff which is the actual carbon, and then you put it into epoxy resin and it becomes incredibly strong, because you’ve got two systems that do slightly different things: different kinds of strength are combined. The cryptography gets you the hard transit: you take something, you lock it down with cryptography, it stays locked and you’re good. But at the edge of the cryptography you get this squishy, hard-to-navigate real-world thing, and into that you deploy insurance. The system is a composite of cryptography and insurance, and I think that that composite of cryptography and insurance is going to be the dominant paradigm for most of the 21st century’s financial innovation. I really believe that cryptography and insurance are perfect counterbalances for each other, because cryptography is mathematically perfect and precise, and therefore the problem will never be in the cryptography; the problem will always be where the cryptography connects to other systems. It’s always going to be weaker at the join than inside the cryptographic structure, so we go through the world deploying cryptography to solve whole new classes of problems, every single one of those will be weaker at the join with the real world, and every single one of those is going to get covered by insurance. This is the design pattern for the 21st century.

The data is soft, I think we’ve already covered that pretty well.

The blockchain gives us hard data sharing, we covered that.

Let’s actually take a crack at designing a little bit of a system that might actually do the job. This is a little vague, I didn’t intend this to be a business plan, but I think you’ll see how nicely it fits on top of the stuff that the Evernym guys were talking about. It’s actually quite a close fit, we use a lot of the same terminology; we hadn’t actually looked at each other’s stuff in about a year and a half, so it’s funny how much convergence there has been.

We start with a decision-centric model. As an organisation, I publish the fact that there is a specific decision that I make. I’m an organisation that will decide whether or not I will grant a mortgage, and I publish that. I’m a mortgage-making organisation, and here’s a list of the criteria that I use for deciding whether to give somebody a mortgage. The idea that you could publish a list of the things you want, the things that need to be in the profile before you can make a decision, seems very reasonable; we’ve got informal ways of doing that, and now it can be formalised very easily. A decision maker is a person that publishes a profile request: “Give me this information and I will make a decision for you.”
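
As a sketch of what such a published profile request might look like (field names and amounts are invented, not a proposed standard): the decision maker lists the claim types it needs and how much insurance cover it wants behind each one before it will decide.

```python
# Hypothetical published "profile request" from a mortgage-making organisation.
# Every field name and figure here is invented for illustration.
mortgage_decision_request = {
    "decision": "grant_mortgage",
    "required_claims": [
        {"type": "proof_of_identity", "min_cover_gbp": 50_000},
        {"type": "proof_of_address",  "min_cover_gbp": 10_000},
        {"type": "proof_of_income",   "min_cover_gbp": 100_000},
    ],
}
```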

A claim is an insured fact — this is where we begin to hit new territory. I go to my state’s driving license issuing body and say, “I would like you to give me a certificate, a digital certificate that proves that I have a car license,” and they say, “Sure, here’s that information.” But if I then use that information for car insurance, and that information turns out to be wrong because I gave them a fake passport and they issued me an ID off the back of it, the car insurer is now left holding the bag. That car insurer might want insurance on the certificate that’s come from the department of motor vehicle licensing. So I take my certificate from the motor vehicle guys, I go to an insurer and say, “I would like you to give me some professional indemnity insurance on this statement of fact from the motor vehicle guys,” they look at the motor vehicle thing, they say, “That’s okay,” they want a second look at my passport, they want to look at some GPS information from my car, and at the end they make a decision that for the grand fee of 75 pence they will give me £20,000 worth of additional insurance on that fact, and I then take that to my car insurer.

The idea is that we’re basically building a business model where people take statements of fact from institutions, couple them with an insurance product and then use them as ways of generating certainty, and this is exactly the same model as VISA. VISA takes a payment instrument and wraps insurance on it; we take an identity instrument and we wrap insurance on it. The insurance is what makes VISA usable; the suggestion that I’m making is the insurance is what makes digital identity usable.

A claim vendor is somebody that knows something about me and is willing to issue insurance on that fact: the claim brings the two things together, the attribute and the insurance, and we take a set of those claims and assemble them into a profile. The profile is lots and lots and lots and lots of different insured claims that other people are making about my life. “Here’s a claim from my university, here’s a claim from my power company, here’s a claim from my government,” I take all of these things, some of them came with insurance — some of them I have to buy insurance for — and then when I go to do something like get a mortgage, I slam this entire thing on the desk and say, “All of these people are willing to put their money behind my claim that I am who I say I am. If something goes wrong, you can claim against this enormous pile of insurance, and all of your losses will be made whole.” At that point, it should be fairly easy to do the deal, because the person on the other side of the desk experiences no identity risk; we’ve covered the identity risk with the insurance package.
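
Continuing the sketch from the published request above (again with invented structures, not a specification): the applicant stacks insured claims into a profile, and the decision maker only has to check that every required claim type is covered to the level it asked for.

```python
# Sketch: assemble insured claims into a profile and check it against a
# published profile request (the same shape as the earlier mortgage example).
# Illustrative structures only.
request = {
    "decision": "grant_mortgage",
    "required_claims": [
        {"type": "proof_of_identity", "min_cover_gbp": 50_000},
        {"type": "proof_of_address",  "min_cover_gbp": 10_000},
        {"type": "proof_of_income",   "min_cover_gbp": 100_000},
    ],
}

profile = [
    {"type": "proof_of_identity", "issuer": "passport_office", "insurer": "AcmeIndemnity", "cover_gbp": 60_000},
    {"type": "proof_of_address",  "issuer": "power_company",   "insurer": "AcmeIndemnity", "cover_gbp": 15_000},
    {"type": "proof_of_income",   "issuer": "employer",        "insurer": "OtherInsurer",  "cover_gbp": 120_000},
]

def satisfies(request, claims):
    """True if every required claim type is backed by at least the requested cover."""
    for need in request["required_claims"]:
        cover = sum(c["cover_gbp"] for c in claims if c["type"] == need["type"])
        if cover < need["min_cover_gbp"]:
            return False
    return True

print(satisfies(request, profile))   # True: the decision maker carries no identity risk
```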

Very simple models. You could have small-time claim insurance people: you take a passport to an office, they verify the passport, they look at it and they call the passport office, they do some basic fraud check and they make sure the picture isn’t stuck on with glue, and they write you a certificate. I then hand that to somebody, and they don’t have to see my passport, they just have to know that somebody who is an expert has seen the passport and has written a claim insurance on it. Suddenly, we begin to see how insurance can substitute for information. If I show you the insurance, you don’t have to see the data that was used to generate the insurance. The possibility is that we reclaim privacy, because we have very strong relationships with our insurers and they know an awful lot about us, but that information is not given to the people we want to do business with, only the proof of insurance is given. So there’s an opportunity here to build a business model which has the ability to generate privacy again in a way that’s advantageous for everybody in the loop. I as a bank don’t really want to know all that much about you, as long as I know that your loan will be paid; I as an insurer want to know everything about you, but I don’t want anybody else to know that stuff, because it reduces the need for my insurance. The insurers actually have a very strong vested interest in making sure that their customers’ privacy is respected.

Other people that could make claims. Universities know an awful lot about their students, particularly their academic performance. Passports we’ve discussed. Proof-of-address verification: there must be 50 entities that have a strong proof of your address, any one of them could give you a little bit of insurance, “Yes, we know that address is valid.” Lots of the claims that you want are things that somebody in your environment has really good, solid information on: your landlord, your mortgage holder, your employer — all of these people have repositories of data. You don’t want that data just being handed to third parties, but an insurance object based on that data is something that would be very useful to be able to present to people when you wanted to validate an identity.

Comment: It’s a fascinating idea. We’ve tried to solve this problem for 30 years, and now you’re introducing another middleman, which is an insurer. We give Nobel Prizes to people who have tried to understand this, and you’re… [inaudible, 43:44] …the insurer has the right incentives to behave nicely.

Precisely.

Comment: But then you mentioned business models. By introducing another middleman, this insurance entity, you solve a lot of problems, but for real-world use cases and the killer app and for uptake, which we have tried and failed at for 30 years… You’re adding friction, another person. How can you compete with the existing world by adding magic and another middleman?

This is the magic which allows you to use the blockchain for managing identity. If you want to, you could manage identity on a blockchain because you want the ultra-low transaction costs and the high security; my assertion is you’re not going to be able to do that successfully without bringing insurers along with you. I think the price of using cryptography to solve the problems is that you require insurers to smooth out the edges.

Comment: Yes, but there may not necessarily be other parties. If you have an attribute provider and the signature that he puts on his attribute is taken to mean “I will cover anything that goes wrong because of this claim, for say $200,” that only means that this party has another function, namely that he will actually have to be liable to some extent for what he is attesting. That’s another way of putting the same thing.

Exactly, exactly. All the kind of cryptography that people are talking about, with attributes and third parties and zero knowledge proofs and all the rest of that stuff, it’s a very small addition to add some cash into those. I don’t think that this requires much rethinking of the technology, but I think it provides that all-important smoothing out which will let the technology work in the real world. Because this is the problem that we all have. Everybody wants the technology to just work, but the technology doesn’t just work because of all these funny-looking edge cases. We need some way of smoothing it.

Question: Does reputation management in some way or form help out? If three parties basically attest that I am Andrew, would my insurance be lower?

Reputation to me is just… I hate reputation systems, I just hate them. The reason I hate reputation systems is that I think that they affect human society in much the same way as inappropriate surveillance by the state. Being constantly watched by your neighbours, by the people that you’re doing business with and having all that stuff published with star ratings… Uber drivers are not very happy because their business is constantly being monitored, and crazy people get into your cab and then ruin your reputation just because they’re crazy, and by the time the system has smoothed out the bumps and the reputation over some statistical average, your career is ruined.

Comment: We’re looking for more of an objective… [inaudible, 46:47]

If we assume for a moment that peer-based reputation systems are just inherently difficult, i.e. evil — I really dislike them — the other prospect is that you could use actual performance. You’ve turned up to work every day for years and your boss thinks you’re great; that should definitely lower your insurance costs if you’re doing some HR-related function. There ought to be lots and lots of places where good conduct is rewarded, but it ought to be in the form of people that are willing to put money behind your claim that you’re an okay person. Because the thing about reputation is there’s no consequence to reputation, good or bad, so it tends to be awarded on things other than performance, whereas when you put money on the line and say, “How much do you really trust your brother-in-law?”, at that point we get a much more accurate read of what the real trust structure is. There’s another version of this talk that I have not yet given, which basically breaks down the idea of trust: trust is just a proxy for accountability, and this is an accountability system. It’s not a trust system; it’s an accountability system. When something goes wrong, I will be sorted out. I took the risk, it went wrong, you will make me whole, that’s the relationship we have — in many ways it’s a non-trust system. The party doing the trusting is the insurer, and the insurer’s job is to not make mistakes, and there’s good, constant market pressure on those guys to be accurate.

Comment: There are at least five identities in any credit card payment: the citizen or consumer, the issuing bank, the card scheme — it could be VISA, MasterCard or whatever — the merchant, and the merchant acquirer or acquiring bank, and the key problem today is the identity of the merchant; that is the core of the problem. At least 20% of the merchant data is incorrect, according to a famous global consulting firm. How do you see an insurance model fixing the flaws in the system so that there is less fraud, because that’s the real problem? From a consumer perspective, the consumer is covered anyhow. However, the consumer may not get what he wants, because know-your-supplier is also a complex problem on the Internet.

Yes. VISA is a really hard case because they’ve already got an insurable thing here, but what they have is a very badly-behaved insurable thing. Because the entity that is charging the transaction fees is also acting as the insurer, there is no competitive pressure in a market of insurers to force the credit card fraud rate down.

Comment: As long as the merchant pays, there is no issue.

Precisely. So what we’ve created is a situation where we’ve taken the market pressure out with the insurance game, and at that point the incentives are misaligned.

Comment: And it’s the game of swapping the liability, which they did some years ago, instead of [inaudible, 50:05], and as a merchant you have to pay even more.

Yes. That mechanism has basically taken all of the incentives out of the insurable risk structure. Because it’s a monopoly, because they can charge what they like, they can move the weight of the fees around, the pressure towards actual technical excellence to lock down those systems has been removed, and as a result the whole thing is very slack. It’s quite problematic.

Comment: Does that mean insurance companies will take over the risk of consequence?

I am not sure that VISA will let anybody get into that game, so it may be that that structure is beyond help.

Question: What about a reinsurer?

A reinsurer… But if VISA is making so much money that it can just insure its own risk on this stuff, then the competitive market for insurance will not provide any benefit, because we have a monopoly of insurer that’s acting in a pathological way.

Question: Could we introduce prediction markets into this element? Because obviously competition in prediction could act as an insurance model.

Absolutely, yes — very, very fast.

Specialised claim insurers cover hard cases. In the same way that if you want to insure art you can go to a specialised insurer that understands the value of art and manages it appropriately, you could have specialised insurers that cover specific kinds of identity risk — a few are listed.

Claims are ideal for blockchain use. A claim is basically a contract, it’s very unambiguous, it doesn’t have much squishy stuff in it; it’s a great thing to put on a blockchain, where the ultra-low transaction cost environment is a good fit for the actual object. The facts can be ambiguous, but the identity of the insurer and the amount of insurance being offered and the policy documents, all that stuff is cut and dried. So there’s a really good possibility for having a regulated industry of information insurers; all that stuff is a good fit for the blockchain, and we can keep the actual facts off the blockchain as much as possible.
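
As a sketch of how that separation might look (record layout invented, not a standard): the on-chain claim carries only the cut-and-dried parts, the insurer’s identity, the amount of cover, and hashes of the policy document and of the underlying fact, while the facts themselves stay off-chain.

```python
# Sketch of an on-chain claim record: only the unambiguous parts go on the
# chain; the policy wording and the insured fact stay off-chain, referenced by hash.
import hashlib
from dataclasses import dataclass

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

@dataclass(frozen=True)
class OnChainClaim:
    insurer_id: str        # identity of the regulated information insurer
    cover_gbp: int         # amount of insurance offered against this fact
    policy_doc_hash: str   # hash of the policy wording, held off-chain
    fact_hash: str         # hash of the insured fact; the fact itself is never published

claim = OnChainClaim(
    insurer_id="AcmeIndemnity",
    cover_gbp=20_000,
    policy_doc_hash=digest(b"...full policy wording..."),
    fact_hash=digest(b"holder of a category B driving licence"),
)
print(claim)
```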

Profiles can be partly on a blockchain. All kinds of clever stuff about zero knowledge proofs, zk-SNARKs, all of this kind of stuff that the identity companies are working with, all of that stuff is a great idea. Exactly what model will win out probably depends on advanced mathematics, and there’s so much work being done in those areas, like homomorphic encryption, that today’s model will not be the same model we’d use in 10 years, it’s likely to keep improving.

Stacking claims is how it works in the real world. You take every little place where somebody has got a set of information about you, you ask them to turn that into an offer for a claim, you then buy as many of those claims as you need to get the job done on whatever thing it is you want to do that requires this kind of hard identity. What we’re talking about is being able to monetise knowing stuff about people in a way that the people themselves control. I as the customer pay you for information about me in the form of an insurance product, and that keeps the individual in control of the system as a whole because the individual is the payer. If we move this around so the individual is no longer the payer, you’ll get the same kind of structural corruption that you see in the VISA example. You need to make sure the individual has skin in the game.

Consequences and consequential loss. It’s just about managing consequential loss, keeping everything wrapped up into these consequential loss units. There’s really no other reason that we want all this information and all this identity stuff other than managing consequential loss.

Accurate pricing of risk is a really good idea, and for this you need competitive markets. If you wind up with pathological monopolies, the risk is no longer going to be accurately priced. It’s very important that there is market design in this kind of a thing, and this is an ideal role for regulators. Regulators need to be really on top of this thing, because if you get monopolies, you will not get a performing identity architecture.

Setting a price on the risk of wrong facts. Basically, it all comes down to how much coverage you need when you’re buying an insurance product: the more ambiguity there is, the more expensive it will be; the more cover you need, the more expensive it will be. This is a great place for people to put all their clever proprietary training algorithms. There are huge opportunities here for smart people to get really good at figuring out what the truth is, and I think we all could agree that prediction markets would be a mechanism you could use there, not the only one, but having a set of industries whose job it is to get really good at figuring out the truth is incredibly useful.
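
The pricing intuition can be written as a one-line formula: the premium scales with the cover requested and with the estimated probability that the insured fact is wrong, marked up by a loading for the insurer’s costs and margin. The error probabilities and the loading factor below are invented; the first line simply shows that an assumed error rate of a few in 100,000 reproduces the flavour of the earlier 75p-for-£20,000 example.

```python
# Toy premium formula: expected loss (cover * probability the fact is wrong),
# marked up by a loading for the insurer's costs and margin. Figures invented.
def premium(cover_gbp: float, p_wrong: float, loading: float = 1.5) -> float:
    return cover_gbp * p_wrong * loading

print(premium(20_000, 0.000025))   # ~0.75: a well-evidenced, unambiguous claim costs pennies
print(premium(20_000, 0.01))       # 300.0: the same cover on a much more ambiguous fact
```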

The identity ownership thing. The individual facts are owned by other entities; the assemblage of those facts and the rendering of that information into something that can be used is owned by me, and this nicely bridges the problem that half of the stuff is owned by one person and half of it is owned by another. “You know my driving record, I am the person that was doing the driving. We package that up into an insurance product, I pay for it and we sell it.” Very, very simple.

The hard question about regulators. To get this thing to work, regulators would need to be comfortable with companies having a probabilistic relationship with their insured risk. “We are 97.5% sure that we have everybody’s identity correct, and the edge cases would amount to no more than 1% of our portfolio by weight.” This is why you need a small country to implement this kind of thing, because right now almost all financial system regulation simply states “You’re going to know who your customers are, end of story,” and then you have to eat the costs of the risks internally. The small-country advantage is that you can have sophisticated relationships with regulators, you can show them the internal fraud data, and they can use that to manage the risk expectations.

Facts are useless, give me insurance: this is basically the story in a nutshell. That’s what we want to preserve.

Four final rules. Human systems are imperfect; don’t forget that whenever you hear somebody talking techno-nonsense about how cryptography will save the world. Humans are imperfect, and as a result human systems are imperfect. Not everything can have equal focus. Some parts of your system will be very tightly controlled and very reliable, like avionics; some parts will be incredibly unreliable, like luggage. Whenever anybody says the blockchain will make everything equally transparent: no, it won’t — that’s not how the world is. Facts in blockchains will be a mess, because the facts will be mishandled, because everybody will see that a fact was on a blockchain and will assume it to be true. When you get bad information into a blockchain, it will be much more dangerous than bad information scribbled on the back of an envelope. Be aware that people will psychologically overinflate the value of facts on a blockchain; we can’t stop that happening.

Finally, specialised knowledge. If we figure out a way of taking all the slack in these systems, all the imprecision, and selling it to people whose job it is to pick through it and fix it, you’ll get a whole bunch of new technology from people that are very, very good at figuring out what is true, and this is socially good. What we’re talking about is basically figuring out how to build new industries whose job is information assurance, and to create those new industries we need a market design, and that market design is really the challenge. If we can create a market which will allow these companies to exist, they’ll come into existence and they’ll solve our problems. Thank you very much. [applause]

[57:50 — End of Transcript]
