MCG: One Year -- 100,000 Visitors

Summary of Current Technical Developments

Near-Term Perspectives for Binarily-Secure Communications

Ed Gerck
© MCG, 1998.
This message was originally sent (unformatted) to the MCG-TALK list server

It is perhaps clear that the Internet is on the verge of a transition -- from a wide-area multi-parochial network to a truly peer-to-peer international communication medium, giving birth to what is often called a cyber-society. A telltale sign is today's increasing public awareness of present certification problems, which are more and more perceived as a road-block to widespread e-commerce.

Such perception was not so clear one year ago, even to experts in the field -- which just exemplifies Moore's law. Then, the MCG was formed out of the understanding, by a handful of people, that current certification systems based on tertiarily-secure certification had basic shortcomings which could not be solved simply by yet better and cleverer implementations of today's standards, or by proposing YAPKI (Yet Another PKI) after YAPKI. The first results were published one year ago and called attention to the lack of symmetry between a peer-to-peer Internet and centralized, hierarchical certification solutions.

From the outset, certification was recognized by the MCG to be a difficult problem, much harder than cryptography. Much harder than it looks. The reason is that cryptography deals mainly with syntactics -- objective quantities such as keys, deterministic calculations and closed-form protocols. Certification, however, deals mainly with subjective and intersubjective quantities -- unknown keys, unknown assumptions, estimated behavior, open-form protocols. In other words, certification needs to include all three research areas of semiotics (i.e., the study of symbols): syntactics (form), semantics (meaning) and pragmatics (environment influences, attackers and eavesdroppers). Thus, mathematical models were needed -- a bottom-up approach imposed itself. Accordingly, certification must be concerned not only with "correctness", as cryptography is, but also with "effectiveness".

Using mathematical models whenever possible, subsequent papers and published e-mails showed why current certification systems such as X.509 and PGP could not scale and would not provide the needed security for a rapidly expanding and increasingly international Internet.

However, pinpointing problems is not enough. The MCG proposed and is developing non-proprietary solutions, in a public and open effort, protected by copyright law but waiving patent protection from the outset.

And the public has been responsive to the MCG work, for example as reflected in its Website statistics, even though the MCG Website has zero graphics and gizmos -- just technical content. In one year, the MCG grew to include participants from 25 countries and has provided more than 2.3 Gigabytes of information -- almost all text -- over the Internet. More than 100,000 visitors (excluding 5,000 machines from neighboring domains) have directly reviewed the work being presented, which has reached more than 18,000 different Internet hosts (also excluding 5,000 neighboring machines).

In a nutshell, what are the basic concepts, models and protocols being developed?

Certification is shown to depend on at least two concepts: "proper trust" and "proper keys", which must first be adequately qualified in communication-theory terms and then modeled in a useful way.

Further, trust and keys are not seen as objective quantities (not even approximately) but are treated as fully subjective -- implying that both must be transferred from one party to another while obeying distinct laws of acquisition, recognition, decay and validation. Moreover, such laws are recognized to be intersubjective in many ways, even regarding parties that may not be a visible part of the dialogue -- such as the relationship between a CA and a non-subscriber.

To develop such laws, the mathematical framework of Shannon's Information Theory was expanded with 150-year-old tools from Grassmann's "Ausdehnungslehre" and Gauss' intrinsic geometry. Basically missing in Shannon's work were the concept of meaning (on which trust depends) and the expansion of his geometrical signal model to include intrinsic geometry (from which coordinate-invariant systems can be derived).

The concepts of entity, identity, anonymity, pseudonymity and names in general were also revisited and discussed in the light of the truth conditions and truth values which they may express, because certification depends essentially on what it references. Using the distinction between "sense" and "reference", as defined by Frege in his work on semantics, and the notion of accountability, it was possible to meaningfully divide all possible entities into two classes, called Domain-Space and Image-Space -- as discussed in the slide documents, the MCS-FAQ and elsewhere, such as in the mcg-talk messages. This conceptual division was then applied to the concepts of certification levels and degrees of extension, making hitherto undefined but nonetheless useful certification levels possible for a wide variety of entities, with nine different levels already predefined.

A further central point of the work was the conceptualization of trust and the definition of its mathematical properties. Following the public discussions in the mcg-talk list, it is possible to perceive that we have come to a security standoff in the Internet world, as the Internet expands from a parochial to a planetary network for e-commerce, EDI, communication, etc. The standoff was found to hinge on the often forgotten question of trust in communication systems. How, and to what measure, can I acquire and transfer trust? Since not all parts of a public and distributed network can be supervised by me, and some parts do not even belong to me, while any part can be unwittingly shared with malicious attackers, how can unsupervised reliance be defined and evaluated? How can I rely upon an entity's declarations and acts when the entity is using an Internet link? How can two unknown parties reciprocally transfer a meaningful and reliable set of objects, such as their respective cryptographic public-keys? Thus, either we develop a real-world model of trust in communication systems -- one that can fully handle trust and all its subjective and intersubjective aspects -- or we will remain stuck with limited and fault-ridden trust models that treat trust artificially, as mostly objective. Citing "Towards a Real-World Model of Trust":

The implicit definition of trust in communication systems, given above, has also allowed an explicit definition to be obtained under some rather general assumptions, as "trust is that which an observer has estimated with quasi-zero variance at time T, about an entity's (unsupervised) behavior on matters of x". Here, the estimator is seen as a quantitative forward- and backward-predictor for the acts of an entity regarding matters of x, when that entity is not supervised by the observer. This means that trust is not auditing -- trust is that which can be relied upon without surveillance by the observer (possibly because it cannot be measured due to physical, secrecy, cost, time or other difficulties).
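That definition can be sketched numerically. The fragment below is a minimal illustration only -- the 0/1 encoding of observed behavior and the variance threshold are assumptions made here, not part of the MCG model -- but it shows the key point: trust on matters of x is granted only when the observer's estimate of the entity's unsupervised behavior has quasi-zero variance.

```python
from statistics import mean, pvariance

def trust_estimate(observations, max_variance=0.05):
    """Estimate an entity's unsupervised behavior on matters of x from
    past observations (1 = behaved as relied upon, 0 = did not).
    Trust holds only when the estimate has quasi-zero variance."""
    estimate = mean(observations)
    variance = pvariance(observations)
    return estimate, variance, variance <= max_variance

# A consistent entity: the estimate has quasi-zero variance -> trust.
print(trust_estimate([1, 1, 1, 1, 1, 1, 1, 1]))

# An erratic entity: high variance -> no trust can be placed in the
# estimate, regardless of how favorable the mean may look.
print(trust_estimate([1, 0, 1, 0, 1, 0]))
```

Note that the estimate itself is not the trust: it is the smallness of its variance that licenses unsupervised reliance.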

Further, trust was shown to be non-transitive, non-distributive (in social terms), non-associative (in mathematical terms), non-symmetric and, in general, non-boolean. However, expressing trust as qualified reliance on received information has allowed trust to be defined by mathematical operators which can represent the concept of soft-trust, whereby the truster permits (as in the real world) some degree of transitivity, distributivity and so on -- which turns out to be essential to Internet communication processes, but which opens up a series of security risks. Currently, the work is centering on the development of a proper trust algebra (using Grassmann's Algebra) that can represent soft-trust and allow its risks to be calculated with a type of propositional calculus. The trust algebra is non-boolean but begins with boolean propositions of the type "A trusts B on matters of x at time T" and unfolds into fully intersubjective calculations in n dimensions, which can be visualized by using the concept of multivector intersection.
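The starting boolean propositions, and the distinction between implicit transitivity (forbidden) and explicitly granted soft-trust, can be sketched as follows. The names, the "payment" matter, and the discount factor are hypothetical choices for this illustration, not part of the trust algebra itself.

```python
# Boolean trust propositions "A trusts B on matters of x", kept as a
# set of triples. All names and the "payment" matter are hypothetical.
trust = {("Alice", "Bob", "payment"), ("Bob", "Carol", "payment")}

def trusts(a, b, x):
    """The boolean proposition: does a trust b on matters of x?"""
    return (a, b, x) in trust

# Non-transitive: Alice trusts Bob and Bob trusts Carol, yet nothing
# licenses the inference that Alice trusts Carol.
assert trusts("Alice", "Bob", "payment")
assert trusts("Bob", "Carol", "payment")
assert not trusts("Alice", "Carol", "payment")

# Non-symmetric: Bob does not automatically trust Alice back.
assert not trusts("Bob", "Alice", "payment")

def soft_trust(a, c, x, discount=0.5):
    """Soft-trust sketch: the truster explicitly permits one hop of
    transitivity, but only at a discounted (riskier) level --
    transitivity is never implicit."""
    if trusts(a, c, x):
        return 1.0
    if any(p == a and y == x and trusts(q, c, x) for (p, q, y) in trust):
        return discount  # relied upon only at the truster's own risk
    return 0.0
```

The discounted value makes explicit that permitting transitivity is a decision of the truster, carrying a quantifiable risk, rather than a property of trust itself.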

The concept of "proper trust" can then be mathematically defined as satisfactorily as the concept of "proper keys", by allowing trust and keys to be fully described by convenient metric functions in a coordinate-invariant formulation of certificates within a seven-dimensional metric-space. As a general result, certification in communication processes was shown to be mathematically equivalent to the geometric problem of distance measurement in a metric-space -- as can be intuitively motivated by observing how key-distribution works.
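As a toy sketch of that equivalence -- plain Euclidean distance over a three-component attribute vector stands in for the actual coordinate-invariant, seven-dimensional metric, which is not reproduced here -- certification reduces to asking whether a measured reference lies within tolerance of a claimed one:

```python
import math

def distance(u, v):
    """Toy metric over certification attributes. (The MCG formulation
    uses a coordinate-invariant metric in a seven-dimensional space;
    plain Euclidean distance is an illustrative stand-in.)"""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def certify(reference, measured, tolerance=0.1):
    """Certification as distance measurement: accept only when the
    measured attributes lie within tolerance of the reference."""
    return distance(reference, measured) <= tolerance

reference = (0.9, 0.8, 1.0)    # attributes bound to the claimed identity
measured  = (0.88, 0.81, 1.0)  # attributes measured in the dialogue
print(certify(reference, measured))  # small distance -> prints True
```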

For two parties in a dialogue, all possible certification procedures are then classified into only two models, extrinsic and intrinsic, with a combined mode. All known security designs correspond to the extrinsic model, which depends on references that are extrinsic to the current dialogue, with certification relative to a third party or to past events. The intrinsic model is a new security design, which depends on references that are intrinsic to the current dialogue, with certification obtained by measurements that rely upon intrinsic proofs.

This leads to a unification of all possible certification methods into just three categories, all of which can be represented by Meta-Certificates.

Further, MCs are being designed to obey the first rule of a standard: interoperation. For example, MCs are being designed to interoperate with X.509 and PGP -- allowing not only existing systems but also existing expertise to be used. They are also being designed to enhance current procedures even without any changes to the original standards, as exemplified by the concept of X.509 epoch certs.

The intrinsic and combined certification methods offer more than just a different (i.e., binary) solution to certification. They recognize that third parties such as CAs and TTPs are actually artifacts -- i.e., they are artificially introduced into an essentially binary problem, since communication between two entities is essentially binary.

Such third parties may be useful to provide a tertiary reference, if needed and if one agrees to the corresponding legal encumbrances, but they do not have to be there in order to warrant security. In fact, security levels can be higher without a third party: the intrinsic certification model shows that arbitrarily high levels of reliability and fault-tolerance can be reached even in the presence of malicious interference, without hierarchy or central control of any kind. The need to accept cryptographic-key control by TTPs is thus avoided, even where legally mandated, because TTPs are not needed in order to warrant security.

Regarding possible certification encumbrances created by TTP legislation in some countries, there are several solutions that can complement the use of binarily-secure certification systems. The hash of a public-key can be used as a name (i.e., a reference) -- as was suggested by the MCG one year ago (without any political connotations) -- a simple way to completely and legally circumvent the privacy and international concerns caused by any of the proposed TTP legislation, while affording full security, when the key itself is not included in the certificate (as it does not need to be).
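The key-hash-as-name idea can be sketched in a few lines. The placeholder key bytes and the choice of SHA-1 (a hash function current in 1998) are assumptions made here for illustration, not MCG specifications:

```python
import hashlib

# Placeholder public-key bytes -- not a real key.
public_key = b"-----BEGIN PUBLIC KEY-----...placeholder...-----END PUBLIC KEY-----"

# The certificate can carry this name (a reference to the key) instead
# of the key itself, so the key is never disclosed in the certificate.
name = hashlib.sha1(public_key).hexdigest()

# Anyone who later obtains the actual key can verify the binding by
# recomputing the hash and comparing it with the certified name.
assert hashlib.sha1(public_key).hexdigest() == name
print(name)
```

Because the name is derived from the key but does not reveal it, the certificate remains useful as a reference while keeping the key outside any TTP-mandated disclosure.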

How do the MCG developments affect the question of a global PKI? By showing that a global PKI is possible, albeit different from what was imagined. Instead of a trajectory-invariant system of fixed CAs and TTPs which provide global references for local meaning (such as the fixed stars, which serve any trajectory in ship navigation), an intrinsic global PKI is a coordinate-invariant system which provides local references with a global meaning (such as the inertial navigation systems used in missile guidance). This answers very recent (i.e., after the initial publication of this Report) concerns about the current lack of Internet security and of proper trust management for global PKIs.

How close is that to a practical system? Following a public invitation, some companies accepted the risk and cost of developing trial applications -- which are allowing a practical Meta-Certificate Standard to be written and revised before widespread public presentation. Currently, some products which implement extrinsic MCs (with enhanced features) have successfully finished their beta-test cycles and will be announced by their developers as soon as their product wrappings are ready. Other products implement intrinsic MCs and are being beta-tested. The MCS itself is nearing its first revision phase and will also be released as a draft for public discussion -- even though much of it is already available at the MCG site, in separate papers and e-mails. Follow-on work, in 1998, is extending the model to allow ambiguous and non-unique names, with the full picture of "proper": trust, semantics, keys and paradigm -- called TSK/P.

Reviewing the year's work, it becomes apparent that the open and public MCG approach to Internet discussions has been able to provide a fruitful two-track approach to standard development: (i) tools for public peer-review and criticism from a large audience, and (ii) mechanisms for efficient professional and confidential discussions on various levels.

In the context of the definition of trust advanced in our discussions, this two-track approach has led to varying degrees of trust being transferred among the different participants -- the majority of whom have never physically met. This is perhaps another indication of the dawn of a cyber-society, to which we hope to contribute by allowing communications to be binarily secure -- in a practical and simple way, by boiling down all such considerations into an effective and easy-to-use "cert-o-meter", with full interoperation with any other standards the user may want or need, such as X.509 or PGP.