Why Is Certification Harder Than It Looks?

Ed Gerck
© E. Gerck and MCG, 1998, 1999.
All rights reserved, free copying and citation allowed with source and author reference.
The original version of this message was sent (unformatted) to the MCG-TALK list server, 12.May.98


This essay discusses internet certification and its necessary dependence on intersubjective variables such as names and trust, in contrast to network certification, which may depend only on objective variables such as keys and authorization. It is shown that internets are radically different from networks, since they present multiple control boundaries and lack common references. This introduces added difficulties for reliance on internet certification protocols, which are also dealt with. The essay posits that confusing internets with networks is at the root of current Internet certification and cross-certification problems for various protocols -- problems that grow in importance as the Internet expands and becomes less and less a network.

Paraphrasing the title of Bruce Schneier's well-known paper [1], this essay takes up a paradox: while certification is harder than it looks, it must be usable by the average user -- not by the average cryptographer or mathematical-science Ph.D. The discussion is rather lengthy, which itself tends to justify the title.

Public-key cryptography may give the impression that security is simple to achieve. It seems that one only has to distribute the public key at will, with no need for secrecy, and anyone can then receive private and secure messages. With the same procedure applied to each side, sender and receiver, both could immediately engage in private and secure communication. However, who is at the other side? Is that key really the sender's? Is the key still valid? Questions soon appear, and it becomes clear that public-key cryptography has indeed solved the problem of public-key security but not the problems of public-key acquisition, recognition, revocation, distribution, re-distribution, validation and, most importantly, key-binding to an identifier and/or key-attribution to a real-world entity. Communications can be verified neither for origin authentication nor for data integrity -- they can be private but not secure. Of course, a private communication with a thief is not secure just because it is private.

Clearly, without binding the key to an identifier such as a person's common name, the key is just a byte string and can be yours as well as anyone else's. But common names or identifiers are oftentimes not enough -- where legal capacities must be defined, one needs some assurance that the key can be attributed to one well-defined real-world entity, such as a person or a company. Certification is needed: a secure binding between the public key and some desired attribute, usually the entity's name and/or the entity's real-world confirmed identity. Certification, however, still carries all the previous questions -- certificate acquisition, recognition, revocation, distribution, re-distribution, validation and, most importantly, the intended senses or meanings of key-binding to an identifier and/or key-attribution to a real-world entity. Thus, certification just shifts the same key problems to a previous layer -- one where attributes are used as convenient references to differentiate one certificate from another, but only to the extent of each reference's sense, whatever that is. It is thus easy to see that certification depends on a series of hard-to-define assumptions, which include not only references but also sense.
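
The binding can be made concrete with a minimal sketch, in which a toy "certificate" is just an issuer-signed (name, key) pair. All names and keys below are invented, and an HMAC keyed with an issuer secret merely stands in for a real CA's public-key signature -- the point is only that the name and the key are vouched for as a unit:

```python
import hashlib
import hmac

# Toy model of certification: a "certificate" is a (name, key) binding
# signed by an issuer. Real CAs sign with a private key; the HMAC with
# an issuer secret below is a stand-in, for illustration only.

CA_SECRET = b"issuer-private-material"  # invented stand-in for the CA's signing key

def issue_certificate(name: str, public_key: bytes) -> dict:
    """Bind a name to a key: the issuer vouches for the pair as a unit."""
    payload = name.encode() + b"|" + public_key
    signature = hmac.new(CA_SECRET, payload, hashlib.sha256).hexdigest()
    return {"name": name, "key": public_key, "sig": signature}

def verify_certificate(cert: dict) -> bool:
    """Check that the (name, key) binding is the one the issuer signed."""
    payload = cert["name"].encode() + b"|" + cert["key"]
    expected = hmac.new(CA_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(cert["sig"], expected)

cert = issue_certificate("Alice", b"alice-public-key-bytes")
assert verify_certificate(cert)                    # the binding verifies

# Tampering with either half breaks the binding -- the key alone,
# or the name alone, carries no assurance:
forged = dict(cert, name="Mallory")
assert not verify_certificate(forged)
```

Note that the sketch says nothing about what "Alice" means or why the issuer should be believed -- precisely the questions of sense and trust raised above.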

Therefore, anyone who works with certification protocols sooner or later recognizes that certification is a difficult problem, much harder than public-key cryptography. As we will see, the main reason is that certification must be concerned not only with "correctness", as in simplified cryptographic procedures, but essentially with "effectiveness" [2].

The notion of effectiveness brings the question of trust to the forefront -- we need to trust the protocol, i.e., have some degree of reliance on the procedures and on the parameters' values and meanings. However, except for the work developed by the author and the MCG [2], there is no clear consensus and no actual working picture that explains the interplay of keys and trust in certification procedures for different scenarios and threat assumptions. This was recently highlighted by the US Director of Central Intelligence, George J. Tenet [3].

Thus, the effectiveness of keys (i.e., whether they work) depends on trust -- not only on syntactics. As Tenet put it, "Much of the public discussion and rhetoric is about encryption -- with little attention focused on what is needed to make its use trustworthy."

Since last year our work on certification has been affirming essentially the same thing -- almost verbatim [3]. Summarizing: to include trust in the design is to invite complications, but not to include it is to give up on a secure design. For a secure design, trust can be added neither by outside procedures (e.g., Practice Statements) nor afterwards -- trust must be part of the protocol. Accordingly, the development of the generalized certification model [4], [5], [6] shows clearly that certification in any form needs at least two references: "proper trust" and "proper keys". The work has also shown that these two references (trust, keys) must first be acquired by cognition and then validated (hence the word "proper") before they can be used in recognition protocols [4]. In other words, certification cannot be based on merely perfunctory actions; it needs to concern itself with difficult things such as trust and subjective names -- or give up any hope of being useful. The generalized certification model further predicates that certification can be defined in two completely different modes, called extrinsic and intrinsic -- albeit with a possible combined mode. All three modes are being implemented by the MCG in an open effort [2].
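
As an illustration of the two-reference requirement -- not of the MCG model itself; all class and method names here are invented -- one can sketch a verifier that refuses to recognize any key that has been acquired but not yet validated against an independent trust reference:

```python
# Sketch: recognition succeeds only for keys that were first acquired
# (cognition) and then validated against a trust reference. The names
# Verifier/acquire/validate/recognize are illustrative assumptions.

class Verifier:
    def __init__(self):
        self.acquired = {}      # keys merely seen; establishes nothing yet
        self.validated = set()  # names whose keys were confirmed via trust

    def acquire(self, name: str, key: bytes) -> None:
        """Cognition: receiving a key is not the same as trusting it."""
        self.acquired[name] = key

    def validate(self, name: str, trusted_reference: bytes) -> None:
        """Validation: confirm the acquired key against an independent
        trust reference (e.g., an out-of-band fingerprint check)."""
        if self.acquired.get(name) == trusted_reference:
            self.validated.add(name)

    def recognize(self, name: str, key: bytes) -> bool:
        """Recognition: usable only after acquisition AND validation."""
        return name in self.validated and self.acquired.get(name) == key

v = Verifier()
v.acquire("alice", b"key-A")
assert not v.recognize("alice", b"key-A")   # acquired, but not validated
v.validate("alice", b"key-A")               # trust reference obtained out-of-band
assert v.recognize("alice", b"key-A")       # both references now in place
assert not v.recognize("alice", b"key-B")   # a wrong key is never recognized
```

The design choice mirrors the text: neither reference alone suffices, and validation cannot be bolted on afterwards from outside the procedure.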

Trust is the "semantic vehicle" of useful information, as shown in the Trust Theory developed by the author [6]. This means that trust is essential not only for the cryptographic meaning of the certificate (i.e., providing for valid origin authentication and data-integrity authentication) but also for the non-cryptographic meaning of each of its atomic parts -- for each of the names it may contain, such as common names, keys, hashes, etc. Clearly, given the general context of the present treatment, the same applies to any communication process -- whether on the Internet, over the phone, by postal mail or even person-to-person -- where trust is likewise essential to provide for collective as well as individual meaning.

So, to bring trust to the forefront -- as done in [4], [5], [6] and as George Tenet has recently advocated -- is a natural design requirement for certification. Its neglect has consequences that cannot be left for a wishful future, as Tenet has so forcefully expressed [3].

However, trust is used here in the subjective and intersubjective sense of qualified reliance on received information [6], not as some sort of authorization or even license, as it is oftentimes used -- a usage that reduces trust to an objective property, as is usual in network security. The Internet, however, is a network of networks, and there is no common reporting of any kind that could allow objective trust to be authoritatively defined across the different networks of an internet. For internets, trust must be handled intersubjectively and also subjectively -- thus, internet security is radically different from network security. This realization is, however, ignored by Internet certification protocols such as X.509 [5], SKIP [5], IPSEC, DNSSEC and PKIX [5], which treat the Internet as if it were a network and have no mechanism to define trust except by authorization (i.e., objectively).
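
The contrast can be sketched as follows, with invented names: on an internet, each relying party carries its own trust references, so the very same certificate may be acceptable on one network and meaningless on another -- there is no global table to consult:

```python
# Sketch of intersubjective vs. objective trust. On a single network, one
# authority could define trust objectively for everyone. On an internet,
# each relying party holds its own trust references. All names and
# issuers below are invented for illustration.

certificate = {"subject": "mail.example", "issuer": "CA-1"}

# Each relying party's own (subjective) set of trusted issuers:
trust_of = {
    "network_A": {"CA-1", "CA-2"},
    "network_B": {"CA-3"},
}

def accepts(relying_party: str, cert: dict) -> bool:
    """Acceptance is relative to the relying party's own trust references."""
    return cert["issuer"] in trust_of[relying_party]

assert accepts("network_A", certificate)      # A trusts CA-1
assert not accepts("network_B", certificate)  # B does not: no common reference
```

A protocol that models trust only as authorization amounts to assuming a single `trust_of` table shared by all parties -- exactly what an internet lacks.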

The differences between networks and internets will be further discussed elsewhere, but some aspects have also been pointed out by Einar Stefferud in his Paradigm Shift series and communications [7], who notes that in an internet no one can control both ends of a connection, whether sending or receiving. Perhaps the earliest mention of internets being radically different from networks in regard to control boundaries and the lack of proper common references was made by Paul Mockapetris, the author of IETF's RFC 882 [8], in 1983 -- as we can recognize in the very first remarks of RFC 882, which also apply directly here:

    As applications grow to span multiple hosts, then networks, and
    finally internets, these applications must also span multiple
    administrative boundaries and related methods of operation
    (protocols, data formats, etc.). The number of resources (for
    example mailboxes), the number of locations for resources, and the
    diversity of such an environment cause formidable problems when we
    wish to create consistent methods for referencing particular
    resources that are similar but scattered throughout the environment.

The same reasoning also applies to names -- such as those in the Internet Domain Name System (DNS) [9], used to designate resources on the Internet, or the Distinguished Names (DNs) used in certificates [5] to denote the keyholder. In internets, such names must also be viewed as intersubjective quantities, not objective values as they can be defined in networks or in a legal trademark naming system.
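
A minimal sketch, with invented data, of why a name cannot serve as an objective reference across an internet: the byte-identical common name may designate different real-world entities under different naming authorities:

```python
# Sketch: the same common name resolves differently under different
# naming authorities, so a name in a certificate is only meaningful
# relative to the context that assigned it. All data is invented.

directories = {
    "registry_A": {"J. Smith": "Smith the accountant, London"},
    "registry_B": {"J. Smith": "Smith the engineer, Sydney"},
}

def resolve(context: str, common_name: str) -> str:
    """Resolution is always relative to a naming context."""
    return directories[context].get(common_name, "unknown")

# The byte-identical name denotes two different entities:
assert resolve("registry_A", "J. Smith") != resolve("registry_B", "J. Smith")
```

Within one network (one registry), the name behaves as an objective value; across the internet of registries, it does not.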

Therefore, certification is harder than it looks because it must involve not only objective quantities -- keys -- but also subjective and intersubjective values -- trust and names, for example -- in various linkages.

However, a question presents itself: even if the final protocols are easy to use, how can the current work on generalized certification allow the public to trust protocols they may not fully understand? Clearly, the design has increased in complexity, and this could potentially make the issues even worse than the difficulties users face today -- as when trying to evaluate the efficacy of different procedures.

One must recognize that, to the user, technical explanations of these developments are of little value. The problem is that while the technical details are unimportant to users -- and usually far outside their field of work -- the resulting protocols nonetheless need to be trusted by those same users.

But when we regard trust as qualified reliance on received information [6], we see that trust can be built in several ways -- for example, by full publication of all source code involved in the procedures, which allows the public eventually to perceive that the protocols have nothing to hide and that they work. In the same way, a plane's pilot can trust the inertial navigation system based on laser gyroscopes, even when it is foggy at night, because such systems have proved that they have nothing to hide and that they work -- even though neither the passengers nor the pilot may have any idea how a laser works or how laser light can measure rotation without an external reference (to fully understand all the reasons behind laser gyroscopes and inertial navigation would surely require a Ph.D.). In general, we learn to trust tools because they work, not because we are told they work [10] -- as the dictum goes, "Trust is earned".

Trust as qualified reliance on received information is thus needed both for certification itself and for reliance on certification protocols.

This leads to the observation, to be further developed elsewhere, that security must be primarily viewed as a form of understanding, not of confinement. It makes little sense to confine use to X if one has no justification to trust that which one is using in order to confine use to X, or if one does not know what one is confining in or out. Also, in dynamic systems such as internets, which are essentially open-ended, one cannot confine what one does not control.

Security is a form of understanding, not of confinement -- thus, confinement can be a tool to security but never its goal.


[1] B. Schneier, "Why cryptography is harder than it looks", http://www.counterpane.com/whycrypto.html

[2] MCG 1997-98 Technical Report, in http://mcwg.org/mcg-mirror/report98.htm

[3] http://mcwg.org/mcg-mirror/cgi-bin/lwg-mcg/MCG-TALK/archives/mcg/date/article-447.html

[4] http://mcwg.org/mcg-mirror/cie.htm

[5] http://mcwg.org/mcg-mirror/cert.htm

[6] http://mcwg.org/mcg-mirror/trustdef.htm

[7] http://mcwg.org/mcg-mirror/paradigm.htm - In a personal communication, Stefferud noted to the author that he was first led to this realization from the perspective of management trying to deal with the way shared computing fits into an authoritative top down structure. For example, in the early days, Universities funded computing from the campus budget, so the Computer Center was in some sense reporting to the President under a budget given to support the whole institution. But the users were at several reporting levels down, and had no feedback to the top, so there was no good way to decide how much computing would be enough, or what kinds should be provided. This predates the ARPANET and the Internet back to 1964-65 when Stefferud was at Carnegie Tech (now Carnegie Mellon) and solved the problem by forming a bottom up committee of Deans to advise the President with Computer Guidance based on the choices of Departments. They did this by dividing the useful capacity among Departments, expressed as a fraction of continuous production for each Department, and told the Departments that they could only change their shares by petition to the Guidance Council.  This forced the Departments to make value judgments and offer up budget funds to get more computing. How and where value judgments are made is the underlying key. This experience completely discredited the top down computer service management model, and showed that internal service provision cannot be managed top down, in any kind of distributed authority institution.  
Thus [author's comments], one should understand internets as general organization structures made up of networks of networks, each one independent, where internets are essentially open-ended in the way they are connected or expand -- of which the Internet as we know it is just one example, so that one can certainly consider other cases such as the Departments in a University, or even traditional commerce with producer, distributor, reseller, customer, banks, etc., as "networks".

[8]  ftp://ftp.is.co.za/rfc/rfc882.txt

[9] http://firstmonday.org/issues/issue4_4/gerck/

[10] http://mcwg.org/mcg-mirror/augustine.txt