

Version 1.0, issued Jul-17-97
First issued: Jul-14-97. Comments welcome.
Last Modified: Jul-30-97, 16:15 EST
E. Gerck
Copyright © by the author, 1997
All rights reserved. Copying and partial citation allowed, with source citation.


The Internet is growing at explosive rates, yet it is constrained by security issues. Future widespread use of the Internet for commerce and business may not be able to rely worldwide on current mid-scale solutions for certification as represented by X.509, PGP and centralized control bodies based on a PKI formed by CAs and TTPs. When traffic reaches mega-scale proportions and national borders become just a .xx extension at the end of a URL, it may become more and more futile to try to enforce central control of a distributed system owned by no one.

Further, we know that naming conventions such as e-mail addresses, DNS names and IPs are just convenient mirages in the worldwide Internet. For example, it is perfectly possible for a site that ends with .jp (i.e., Japan) to be hosted in the USA -- so, from the DNS convention alone one cannot affirm anything about the site's whereabouts. The "parochial model" of the Internet breaks down easily when we recognize that all machines and addresses are essentially peers in the Internet. To make matters worse, DNS hijacking can redirect connections to a different site without anyone noticing it. Thus, adding to the problems dealt with in this paper, current standards such as X.509 and the corresponding "certificates" are already very doubtful in their meaning, purpose and validity scopes -- also because of the non-hierarchical and decentralized nature of the Internet, which is at odds with hierarchical control structures such as a PKI and TTPs.

A solution has been advanced by the MCG: a non-hierarchical and distributed certification system based on an archetypal trust model called the Meta-Certificate, or MC. This new trust model interoperates with all current models but does not depend on PKIs, CAs or TTPs, does not need CRLs and implements a generalized certification model, introducing the concepts of intrinsic and combined certification -- besides the usual certification concepts, which are unified under the denomination of extrinsic certification -- while also allowing for several enhancements.

This text is an introduction to these themes, providing a primer on metric spaces in certification, the generalized certification model, intrinsic certification, enhanced-extrinsic certification, meta-certification and other related subjects.


We must first note that the terminology itself, as currently in use -- such as "private", "secure", "certification", etc. -- is unfortunately systematically ambiguous and ambiguously used. So, it may seem natural for someone to interpret our results in her own terms -- using previous and ambiguously defined concepts without even consciously perceiving them as ambiguous.

While for some this may seem like a futile play on words, our knowledge is expressed in such terms and we must, sooner or later, view our language as a type of "programming language" -- as precise as any programming language can be and also subject to world standards. Even more so for an Internet proposal, where English is not the mother tongue of the majority.

To exemplify, and also to advance a central distinction to be made here, we quote the following paragraph, with three phrases:

taken verbatim from the newest browser, Netscape Communicator 4.01, in the Security menu, when you visit a site without https (SSL).

The first phrase is an assertion. The second phrase reflects the usual understanding of the consequences of non-encryption, i.e. everyone can read it, and is a logical consequence of the first phrase. The third phrase is wrong because it is possible to verify the identity of a site (i.e., to certify a site) that only has unencrypted pages, even if the website restricts itself to the capabilities found in the Communicator browser.

The reader should note that we have equated "verify the identity of a site" with "certify a site" -- a point that brings that third phrase into the central context of the MCG, certification.

Thus, equating "encrypted" with "certifiable" is erroneous, or ambiguous at best. Unfortunately, the public in general may not have the technical perception for this evaluation and may accept the third phrase as an "expert opinion", take it in a very broad sense, and so add to the general ambiguity.

But these are not the only words that are sometimes ambiguously or erroneously used regarding security and certification. To solve this problem, one must take care to define the basic concepts used -- which may not be what the reader uses. Who is correct? This is not an issue for us, and we do not want to take sides or take a ballot -- we just want the words we use to mean exactly what we want them to mean and nothing else (a unified Humpty-Dumpty declaration, we could say).


As discussed in the MC-FAQ, Section 8, we have identified a set of reference definitions, which have to be clear to us and to the reader. Further, we want this set to be minimal, complete, as self-contained as possible, etc., as explained in that Section.

Some of the definitions presented in the MC-FAQ have been updated in the paper Certification: Extrinsic, Intrinsic and Combined, on extrinsic/intrinsic/combined certification, hereafter called the Certification paper. The Certification paper defines the needed terms in a self-consistent and precise way, with several examples -- given in its Section 3, "Basic Definitions" -- and we suggest that Section as a first reading.

Of course, other sources and other authors may define the terms differently, but the reader only needs to perform a change of reference frames in order to see our concepts in other terminologies.


Distance is measured between two points and is always relative to the two points -- there is no "absolute distance". The measurement of distance is the subject of geometry, using metric functions.

The Certification paper explains why certification is also a metric function, a result proven in the document Is certification a metric? To motivate the result, using a geometric analogy, we observe that two certificates ("points") are equal if and only if the "distance" between them is "zero". This means that certification is a relative measurement, like distance. Further references are contained in the documents Exposition and Checking Validity.

It is important to note what this means. Like distance, certification needs a reference. If we were to say "What's the distance of your hand?" you would probably just stare at us. You would need a reference, such as implied in the question "What's the distance of your hand to the computer screen?", measuring distance between your hand and the computer screen, using a suitable metric function.

The same happens with Internet certification and we will see this point in the next topics. This will allow us to consider geometry and certification as mathematical analogues, with suitable metric functions providing the measurement rules for "distance" in each case.
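As an illustration of why the metric analogy holds, the metric axioms can be spot-checked in a few lines. The sketch below is our own illustration (not from the MCG documents): it tests the ordinary geometric metric and a trivial "discrete" metric on certificate-like byte strings, verifying in both cases that "distance zero" holds if and only if the two points are equal.

```python
import math

def euclidean(p, q):
    """The usual geometric metric: 'distance' between two points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def discrete(x, y):
    """A discrete metric on arbitrary data: 0 iff the items are identical."""
    return 0 if x == y else 1

def is_metric_on(d, points):
    """Spot-check the metric axioms on a finite sample of points."""
    for x in points:
        for y in points:
            if d(x, y) != d(y, x):          # symmetry
                return False
            if (d(x, y) == 0) != (x == y):  # zero distance iff equal
                return False
            for z in points:
                if d(x, z) > d(x, y) + d(y, z):  # triangle inequality
                    return False
    return True

certs = [b"cert-A", b"cert-B", b"cert-A'"]   # hypothetical certificate data
assert is_metric_on(discrete, certs)
assert is_metric_on(euclidean, [(0.0, 0.0), (3.0, 4.0), (1.0, 1.0)])
assert euclidean((0.0, 0.0), (3.0, 4.0)) == 5.0
```

The same axioms hold whether the "points" are geometric coordinates or certificate data; only the metric function changes.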

Of course, analogies play essential roles in Science and in Information Technology in particular. For example, genetic algorithms, simulated annealing, Boolean algebra, Markov chains, fractal compression, etc., are all familiar names in Computer Science borrowed from other scientific domains -- together with their basic concepts. This paper will present some of the most important consequences of the certification-versus-geometry analogy, with references to further published material.


Why is this analogy important? Because it proves that certification of an entity needs three attributes:

(i) a reference for the relative measurement, i.e. the certificate
(ii) a reference frame for the observers of the measurements, i.e. the issuer and each entity
(iii) a suitable rule for coordinate transformations between reference frames, i.e. a PKI (Public-Key Infrastructure)

Thus, the mathematics of reference points, reference frames and coordinate transformations can be borrowed from geometry -- that is, from notions we are used to and have known for centuries. The three items above will be used both for extrinsic and for intrinsic certification, with suitable modifications in the intrinsic case.


Now, we must understand what we mean by extrinsic certification.

Menezes et al. define certification as "endorsement of information by a trusted entity"; a discussion of this definition is given in Section 2 of the Certification paper already cited. In the usual terminology, the "trusted entity" is called the issuer and the information originates from the subject, being verified by the verifier. The information is received and signed by the issuer, with the certificate itself being delivered by the issuer to the verifier.

This definition is called "extrinsic" because it depends on knowledge which is external to the parties in the dialogue -- such as a root-key from the trusted entity and/or trust in the trusted entity. It is important to note that extrinsic certification introduces an artifact, which is the CA. To prove that, we just have to find a different CA that would allow certification to happen -- which is of course possible in principle. Thus, the CA is not an intrinsic and unique property of the dialogue, not even at the beginning.

It is also important to note that all current certification procedures are extrinsic: X.509, PGP, SKIP, etc., either in a space-like dependence such as a root-key or in a time-like dependence such as trust.

Self-signed certificates are also extrinsic, because they depend on a time-like extrinsic variable: trust in the issuer (i.e., whoever signed the certificate). In other words, such trust must precede the dialogue -- thus it is external to the dialogue. We further note that acceptance of self-signed certificates is either a leap of faith or it contradicts the general assumption that the two parties were unknown to each other.

For a discussion on the aspects of trust, the reader is referred to the document Trust Properties.


The Certification paper defines certification of a subject as "a secure process for the designation of X to the subject, within a process boundary", where X is a set of objects that the subject knows and which allows the subject to be recognized, as well as distinguished from unlike subjects.

Here, the subject is a machine-executable object and may be an identity, an authorization, a procedure, or whatever. The subject's certificate is a secure wrapper for X, that the verifier can read but not change after it was accepted. Note that this definition is general and does not involve a third-party. If the process cited in the definition only depends on the two parties in the dialogue, then we call it "intrinsic certification". Of course, if the process depends on a third-party, then the usual definition of extrinsic certification is obtained.

Here, certification is called intrinsic because it depends only on the two parties in the dialogue and may not be influenced by outside sources (called "enemies" in the Certification paper).

Further, the two parties are supposed to be previously unknown to each other, so intrinsic certification cannot be a sequence of self-signed certificates issued by the subject -- because the verifier would have no reason to trust them and no way to verify them either.

Of course, the question now is: "Does intrinsic certification even exist?" Clearly, we did define it, but we have not yet proven that it exists. That proof will come a few topics ahead, after we begin to use the mathematical similarities between geometry and certification and recall results from 150 years ago.


In extrinsic geometry, distance is measured using vectors from an external reference frame chosen "ad hoc". This means that distance is measured relative to measurements made in a reference frame. This introduces two dependencies when one measures distance from a point: a reference for the relative measurement AND a reference frame for the measurement itself.

These two dependencies are usually hidden and understood as one, by considering one of the points being measured to be coincident with the origin of the reference frame used for measurement. This is not a problem, because it can be proven and is an important result in extrinsic geometry that distance is independent of the reference frame used for measurement -- so, even though the reference frame is an artifact, it is not important which reference frame you decide to use. This is guaranteed by suitable rules of coordinate transformation between reference frames, such as the Euclidean or the Lorentz transformation. For example, we can say that distance is invariant to coordinate transformation, when the Euclidean transformation is used.

Thus, to measure the diameter of a sphere, we would use any reference frame (for example, the corner of your room) and draw vectors to this sphere, allowing the diameter to be measured as the maximum distance between points that still lie on the sphere. This means that the diameter will be measured by geometric relations calculated using the vectors that originate from your chosen reference frame -- chosen "ad hoc". As commented above, such a diameter is independent of the reference frame used for the measurement, and the extrinsic dependence on that reference frame vanishes.

Thus, independently of the reference frame, the sphere can be identified and distinguished from other spheres with unequal diameter.
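The invariance claimed above is easy to check numerically. The sketch below is our own illustration with arbitrary numbers: it rotates and translates two points into a different "ad hoc" reference frame and verifies that the measured distance is unchanged.

```python
import math

def distance(p, q):
    """Euclidean distance between two points in the plane."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def transform(p, theta, shift):
    """A Euclidean transformation: rotate by theta, then translate by shift."""
    x, y = p
    xr = x * math.cos(theta) - y * math.sin(theta)
    yr = x * math.sin(theta) + y * math.cos(theta)
    return (xr + shift[0], yr + shift[1])

a, b = (1.0, 2.0), (4.0, 6.0)        # two points; distance is 5.0
theta, shift = 0.73, (10.0, -3.0)    # an arbitrary change of reference frame
a2, b2 = transform(a, theta, shift), transform(b, theta, shift)

# The measured distance does not depend on the frame chosen "ad hoc".
assert abs(distance(a, b) - distance(a2, b2)) < 1e-12
assert abs(distance(a, b) - 5.0) < 1e-12
```

Any rotation angle and translation vector would do; the invariance is exactly the "cancellation" that makes the frame an artifact.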

In extrinsic certification, "distance" is measured relative to a "reference frame" provided by a PKI (Public-Key Infrastructure) composed of CAs and TTPs. For example, see Exposition.

In the ideal case, extrinsic certification (like the diameter of that sphere) must not depend on the CA or PKI path used, as long as it is a valid path. This is similar to the result that distance must not depend on the reference frame used to measure it.

Thus, extrinsic geometry and extrinsic certification are mathematical analogues, with suitable metric functions providing the measurement rules for "distance".

Using the results from item (4) above, extrinsic certification needs three attributes:

(i) a reference for the relative measurement, i.e. the certificate
(ii) a reference frame for each observer of the measurements, i.e. the issuer, the verifier, and the subject
(iii) a suitable rule for coordinate transformations between reference frames used for measurements, i.e. the PKI


Here, we will use the mathematical similarity between geometry and certification. Such reasoning is oftentimes called a metamathematical argument in the literature.

Our objective is to verify if the conditions (i), (ii) and (iii) above are definable for extrinsic certification and in what cases. If they are not definable for a case, then extrinsic certification cannot work in that case.

The first point to be analyzed is the reference, attributes (i) and (ii).

Of course, for geometrical distances one has the Earth as a standard reference, which means that the Earth's reference is free, worldwide, neutral, trustworthy, generally accessible, and always available. Thus, the Earth is a very good reference frame to use in (i) and (ii), with (iii) supplied by the Euclidean transformation or the Lorentz transformation (as a general case) according to the needed mathematical exactness.

One can also use one's own house, own hand, the Sun or the fixed stars for (i) and the Earth for (ii), or any mixture of them, using the corresponding transformation of reference frames with mathematical exactness.

However, each person and machine in the Internet cannot have equally free, worldwide, neutral, trustworthy, generally accessible, always available standard references to use in (i) and (ii) -- thus also negating (iii). Further, it is not possible to solve (iii) -- e.g., finding a suitable PKI rule -- if (i) and (ii) are not definable.

This result was demonstrated with the publication of the Certification paper, although already contained in the other MCG papers published since the beginning of April/97.


The results of the previous Section show that Internet extrinsic certification is undefinable for two unknown parties.

However, we know from geometry that when a generalized coordinate system cannot be built -- such that it would not be possible to define properties (i), (ii) and (iii) as given in Section 4 -- we can often still define a localized reference frame which only has properties (i) and (ii). Such a reference is called a local reference system: totally devoid of any transformation rules to other systems and thus totally isolated -- but useful by itself, just as an astronaut in an orbiting station does not need to know his position relative to the control center on Earth in order to use his notepad, but certainly needs that information if he wants to communicate over radio or go back.

Using this geometric example, the analogue result in certification would be small-scale -- though isolated -- PKIs. Indeed, as shown in the document Trust Properties and in the Certification paper, even though Internet extrinsic certification is based on undefinable references, it can be accepted within a "critical radius" of risk -- i.e., when the PKI only involves direct trust references and is also so small, trustworthy and localized that a PKI reference can be previously considered adequate by all parties in any dialogue. In other words, this holds if a prior act (which can be trust or another reference) was already established and is known to all parties. The main point here is that the PKI must not include indirect references, because indirect references would necessarily depend on indirect trust. However, indirect trust could introduce unforeseeable risks, because the trust properties are modeled as essentially non-transitive, non-distributive and non-symmetric -- which casts serious doubts on the practice of trusting an untrusted CA because it is trusted by a trusted CA.

This means that everyone in this small-scale PKI would have implicitly and previously agreed that the risks of forgery, collusion, spoofing, error, etc. are acceptable to all parties within the confines of direct trust observed in that small-scale PKI, such as in a circle of friends. This is exemplified for the case of PGP in the paper Overview of Certification Systems: X.509, CA, PGP and SKIP. We can say that direct trust defines a "hard" critical radius of risk. Of course, if the PKI grows and includes other branches, it will eventually collapse under its own weight, because this assumption breaks down. The collapse is caused by the essentially non-transitive, non-distributive and non-symmetric properties of trust, which define a "soft" critical radius of risk. The "soft" limit can depend on several factors such as the accepted risks, the number of parties, the number of parallel branches, the penalties, the insurance protection involved, the time that the PKI needs to propagate a CRL, etc.
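The non-transitive nature of trust invoked above can be made concrete with a toy model. In the sketch below (our own illustration; the party names are hypothetical), trust is recorded only as explicitly declared pairs, and a "transitive" rule is shown to manufacture a trust relation that no party ever declared -- exactly the questionable step taken when an untrusted CA is trusted because a trusted CA trusts it.

```python
# Direct trust, recorded as explicit (truster, trusted) pairs.
direct = {("Alice", "Bob"), ("Bob", "Carol")}

def trusts_directly(a, b):
    """True only if a has explicitly declared trust in b."""
    return (a, b) in direct

def trusts_if_transitive(a, b, seen=None):
    """What a PKI chain implicitly assumes: trust propagates along paths."""
    seen = seen or {a}
    if trusts_directly(a, b):
        return True
    return any(x not in seen and trusts_if_transitive(x, b, seen | {x})
               for (y, x) in direct if y == a)

# Transitivity manufactures a relation that no party ever declared:
assert not trusts_directly("Alice", "Carol")
assert trusts_if_transitive("Alice", "Carol")
# ...and note that declared trust is not symmetric either:
assert not trusts_directly("Bob", "Alice")
```

The gap between the two functions is the "soft" critical radius: each indirect hop adds a risk that none of the parties has directly assessed.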

Thus, extrinsic certification could rely upon a level of common and previous (extrinsic) trust and knowledge -- within a risk level -- in order to develop a small-scale, mutually accepted and direct reference point, in an environment which is assumed to be reasonably friendly to all parties involved, even though the parties are unknown to each other. Of course, such a solution still contains the problem of CRLs (Certificate Revocation Lists), which can never guarantee revocation of invalid certificates and further reduce the "critical radius" of risk as a function of time.

Now, regarding the validity of such small-scale domains of trust or PKIs, we must observe that such a domain may be acceptable to parties within the domain itself but may not be acceptable to other parties or domains. In general, certificates from such small-scale domains will not be acceptable outside the domain, of course. Also, parties from within a domain may trust their CRLs, but parties outside that domain have no trust in CRLs from that domain and cannot determine whether a certificate exists or was revoked. Further, such small-scale domains are like "isolated islands", which only allow communication within themselves. It is not possible to have secure references from the outside to the inside of such domains, nor to allow secure references to be established in the other direction. Each domain is akin to a "local reference frame" in geometry, as already mentioned, and is thus isolated from other reference frames.



Here, we must acknowledge that while small-scale domains of trust may be possible and could build local reference frames, they are isolated from each other and would not be generally valid given the unfriendly, fraudulent, competitive, geo-political, business, ethnic and personal diversity found in a generic exchange between two previously unknown parties located anywhere in the worldwide Internet.

As a first consequence, the Certification paper shows that Internet certification between two unknown parties, as currently used in X.509, PGP, SKIP, etc. (which depends on extrinsic references, as discussed above), is unsound and has basic flaws which cannot be solved by any implementation, however clever; it can only work in small-scale and friendly models. This is a mathematical fact and cannot be improved by any added layer of legislation, insurance or procedures -- which would still be based on undefined references and carry their weight.

As a secondary result, the Certification paper shows that if -- before certification -- the two parties have a common body of knowledge which is unique to them, then extrinsic certification as provided by X.509, PGP, SKIP, etc. can apply if and only if the unique common body of knowledge can supply the attributes for (i), (ii) and (iii). This is discussed as "combined certification" in the Certification paper. Of course, such a solution would still contain the problem of CRLs, which can never guarantee revocation of invalid combined certificates.

The bottom line is that Internet certification cannot scale if extrinsic certification is used. This means that current Internet certification procedures will fall short of commerce and business requirements for borderless and worldwide communication -- blocking easy access to large markets.


In intrinsic geometry, distance is measured using vectors from a reference frame calculated without any external reference. This means that, using intrinsic geometry, we can measure the diameter of a sphere (i.e., identify it) without using any external reference frame for the measurements -- while still using distance as a relative measurement between two points, of course. Such a possibility was first imagined by Gauss, 150 years ago, and later developed by Riemann. Intrinsic geometry is the mathematical basis for Einstein's General Theory of Relativity.

While the mathematical theory of intrinsic geometry would not fit in the space we have planned for this text, it can easily be found in standard textbooks. Here, we will just motivate the reader with a very intuitive line of reasoning, which Gauss reported as his own way of realizing it: "Why do I need an external reference frame in order to measure a property that is NOT allowed to depend on ANY external reference frame?" Thus, if the property (for example, the diameter of a sphere) does not depend on the reference frame used to measure it, what is the inner "cancellation mechanism" that eliminates such dependency? Could we define functions that would already have canceled out such dependencies, in such a way that these functions could work without any reference to an external space?

Of course, the answer to the last question is positive.

This means that distance is measured relative to measurements in an intrinsic reference frame. Of course, this still introduces the same two dependencies already seen for extrinsic geometry, when one measures distance from a point: a reference for the relative measurement and a reference frame for the measurement itself. However, the reference frame for the measurements is not open for choice: it is an intrinsic reference frame, which is unique and depends only on the chosen metric function, for each entity being measured.

For a given metric function, this eliminates the choice of reference frames given by attribute (ii) of extrinsic geometry, and reduces the attributes to two. This means that we can define a "Universal Metric Function" (UMF) and apply it to measure (and thus, identify) any sphere. Further, we can calibrate and verify the validity of such a UMF against known spheres and prove, either deterministically or probabilistically, that the UMF we have chosen is correct, within an arbitrarily high degree of reliability. This UMF will then be our "gauge-function" to identify any sphere without an external reference.
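A toy version of such calibration can be sketched numerically. In the illustration below (ours, not from the cited papers), the candidate gauge-function measures a diameter using only the pairwise distances among the sampled points themselves -- no external origin or reference frame appears anywhere -- and is then verified probabilistically against spheres of known diameter.

```python
import itertools
import math
import random

def diameter(points):
    """Intrinsic measurement: uses only distances between the measured
    points themselves; no external origin or reference frame appears."""
    return max(math.dist(p, q) for p, q in itertools.combinations(points, 2))

# Calibrate the candidate "gauge-function" against known spheres: sample
# points on spheres of known diameter and check the measured value.
random.seed(1)
for true_diameter in (2.0, 10.0):
    r = true_diameter / 2
    pts = []
    for _ in range(500):
        v = [random.gauss(0, 1) for _ in range(3)]
        n = math.sqrt(sum(c * c for c in v))
        pts.append(tuple(r * c / n for c in v))
    measured = diameter(pts)
    # Probabilistic verification: within 1% of the known diameter.
    assert abs(measured - true_diameter) / true_diameter < 0.01
```

Once the gauge-function passes such calibration to the chosen degree of reliability, it can identify any sphere of the same class without an external reference.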

The same, by metamathematical arguments, applies to certification because both problems are measurement problems in metric-spaces.

We will now investigate how it can be implemented, also referring to the certification model presented in the Certification paper.


For intrinsic certification, the "surface" to be measured is not a sphere but a "hyper-surface" in higher dimension (>3) as postulated by Shannon's Information Theory, if we identify the entity to be measured with a "signal" as done in the Certification paper.

To measure a given class of "hyper-surfaces" intrinsically (i.e., to certify any entity that belongs to a certain complexity class), we define a suitable UMF and test it, either deterministically or probabilistically, against chosen "calibration" targets (that belong to the same complexity class). The tested UMF becomes our "gauge-function" and we can then certify entities (within a given complexity class) without ANY external reference.

The "process to designate X" used in our definition of certification is then implemented as the UMF, which is certified to be a "gauge-function" in what is called the "cognition step" of certification, as given in the Certification paper.

Another type of proof and explanation, by construction, is given in the Certification paper and the reader can equate:

- "UMF" with a "green reader object",
- "gauge-function" with a "certified reader object",
- "calibration targets" with "self-calibration", and
- "hyper-surface" with the "witness objects"

as they are called in the context of the Certification paper.

As a further result, intrinsic certification completely eliminates the need for a PKI.

Using the results from item (4) above, intrinsic certification needs only two attributes:

(i) a reference for the relative measurement, i.e. the certificate
(ii) a "gauge-function" for the intrinsically observed measurement, within a given complexity class chosen by the verifier.


Here, we will use the mathematical similarity between geometry and certification and investigate two cases:

- Two unknown parties
- Two known parties

where the second case corresponds to the second certification of two previously unknown parties (i.e., the second visit, after the first case). Our first objective is to verify whether conditions (i) and (ii) of intrinsic geometry, as given in the Section above, are definable for intrinsic certification by two unknown parties.

As explained in the Certification paper, the intrinsic certification of two unknown parties involves two steps:

- cognition,
- recognition

The first step requires joint work between the subject and the verifier: the verifier receives a "green reader object" and must perform two entirely different acts:

- use the reader in self-calibration procedures that will provide the verifier with a degree of certainty, chosen by her, that the reader is "honest"; and,

- use the reader with the witnesses provided by the subject and decide within a degree of certainty chosen by her that the witnesses form a coherent set and that they allow the subject to be uniquely distinguished.

With the first act, the verifier is accepting and testing a "gauge-function" which represents a metric function adequate to the complexity class she expects to need. With the second act, the verifier is actually accepting the subject's certificate as his unique (to their jointly defined objectives) reference.

The second step (recognition) is immediate for intrinsic certification, but may include a learning function (memory) which is triggered at the second visit. The learning function will allow trust and other immaterial qualities to be objectively described, as referenced in the Certification paper. Thus, intrinsic certification of two known parties is immediate.

Note, further, that revocation lists (CRLs) are not needed.
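The two steps above can be sketched in code. The illustration below is ours and drastically simplified: a plain SHA-256 fingerprint stands in for the "green reader object", and a stored set of fingerprints stands in for the learning function; a real MC reader object is far richer than this stand-in.

```python
import hashlib

def reader(data):
    """A candidate 'reader' (UMF stand-in): a SHA-256 fingerprint.
    Purely illustrative; the real MC reader object is far richer."""
    return hashlib.sha256(data).hexdigest()

# --- Step 1: cognition (first contact, joint work) ------------------
# Act 1: self-calibration -- test the reader against inputs whose
# fingerprints the verifier can compute independently.
calibration = [b"known-input-1", b"known-input-2"]
assert all(reader(x) == hashlib.sha256(x).hexdigest() for x in calibration)

# Act 2: apply the calibrated reader to the subject's witnesses and
# store the result as the verifier's own reference.
witnesses = [b"witness: public key ...", b"witness: attribute ..."]
memory = {reader(w) for w in witnesses}   # the learning function (memory)

# --- Step 2: recognition (second visit) -- immediate, no CRL --------
returning_witness = b"witness: public key ..."
assert reader(returning_witness) in memory
forged_witness = b"witness: forged key ..."
assert reader(forged_witness) not in memory
```

Note that nothing outside the two parties enters the sketch: the verifier's own calibration and memory play the roles that a CA and a CRL would play extrinsically.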


The Certification paper classifies all possible certification modes in three types: intrinsic, extrinsic and a possible combined mode. Also, certification is divided in two steps as in the former Section, which allows the following table to be presented:
                 STEP 1             STEP 2
                 recognition 0      recognition n>=1
                 recognition n>=1   recognition 1

where "recognition 0" means recognition in zeroth order (as when one recognizes Skywalker's name to be correct because it was directly seen and heard several times, in several places that one has freely chosen), "recognition 1" means recognition in first order (as when one recognizes Skywalker's name to be correct because a friend said it was correct), and "recognition n>1" means recognition in higher order (as when one recognizes an X.509 certificate for Skywalker because Thawte said it was correct and, even though Thawte is not known, AT&T is trusted and AT&T has said in the past that Thawte was trusted by AT&T). Here, "recognition 1" can also be called "direct reference", while "recognition n>1" means "indirect reference".
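The recognition orders above can be read simply as the length of the reference chain standing between the verifier and the assertion. A minimal sketch (our own illustration, reusing the example names from the text):

```python
def recognition_order(chain):
    """Order of a recognition: 0 for direct observation, otherwise the
    number of referees between the verifier and the assertion."""
    return len(chain)

# Skywalker's name seen and heard directly, several times:
assert recognition_order([]) == 0                 # recognition 0
# A friend said it was correct:
assert recognition_order(["friend"]) == 1         # recognition 1 (direct reference)
# AT&T vouched for Thawte, which signed the X.509 certificate:
assert recognition_order(["AT&T", "Thawte"]) == 2 # recognition n>1 (indirect)
```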

It is important to note that even if one trusts the first step completely, so as to allow a local and isolated reference frame (i.e., a small-scale and friendly PKI), it is not possible even in this case to go reliably beyond "recognition 1", i.e. to use an indirect reference. As shown in Section 9, indirect references would depend on very questionable properties of trust, which do not exist in the general case (e.g., if you trust your friend, it does not mean that you must trust your friend's choice of friends). Of course, extrinsic certification also introduces the question of CRLs and the corresponding unknown time-lag to revocation -- unsolvable problems by themselves.


The Certification paper shows that certification can be either intrinsic or extrinsic, with a possible combined mode; there are no other possible modes. Further, the intrinsic/extrinsic/combined certification modes can be grouped under a generalized certification model, called the observer/observable model.

The geometric properties of extrinsic/intrinsic geometry also allow a generalized certification process to be defined, as was proved here based on metamathematical arguments. Thus, the geometric model is equivalent to the observer/observable model. Meta-Certification, or MC, on the other hand, is an implementation of the generalized certification model, which can work in all its modes.

Further, MCs allow different modes to be sequenced, layered or combined. For example, an intrinsic certification procedure can provide a certified reader, which becomes a witness for another reader that wishes to be certified. This can link objects from the subject with objects from a "delegating verifier" -- allowing authorizations to be defined and delegated in a consistent way.

Thus, MCs incorporate so-called "identity-certs", "auth-certs", "role-certs", etc. -- all represented as MC "object-certs".

Also, MCs implement the security model to be used, which is not defined in intrinsic certification. Here, MCs do not enforce a degree of security; rather, security is viewed pragmatically as a function of need. Security is not axiomatic.

So, instead of a "take it or leave it" attitude, MCs allow the subject (i.e., the MCC holder) to define a security window and the verifier (i.e., the private-MC holder) to define a security frame within the provided window -- according to the verifier's need (e.g., as defined by insurance policy coverage, by a cost/risk analysis, by an educated guess, by trust, by trivial content, etc.).
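A minimal sketch of this window/frame negotiation, under our own assumptions (security levels modeled as numbers in a range; the function name and representation are illustrative, not MC's actual data format):

```python
# The subject publishes a security "window" (the range of levels it can
# support); the verifier picks a "frame" inside that window according to
# its own need. Representation and names are assumptions for illustration.

def choose_frame(window: tuple[float, float], need: float) -> float:
    """Fit the verifier's needed security level into the subject's window."""
    low, high = window
    if need > high:
        # The subject cannot offer what the verifier requires.
        raise ValueError("subject cannot meet the verifier's required level")
    # A need below the window's floor is raised to that floor.
    return max(need, low)

window = (0.2, 0.9)               # subject's offered range (assumed numbers)
print(choose_frame(window, 0.5))  # 0.5 -- the need fits inside the window
print(choose_frame(window, 0.1))  # 0.2 -- raised to the window's floor
```

The point of the design is that neither side dictates: the subject bounds what is possible, the verifier decides what is enough.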

This can also be applied to enhanced-extrinsic certification, as it is called in the Certification paper, which allows an extrinsic certificate to be graded and accepted as a function of a degree of belief -- measured as a probability level -- on a series of assertions. So, instead of the usual "yes/no" result, an extrinsic certificate may be ranked against other certificates for the same subject, which may lead to a much improved decision process based on risk versus cost. Enhanced-extrinsic certification as implemented by MCs also allows for "life-lines" (which can make CRLs unnecessary even for extrinsic certification) and "projection" (which makes it possible, e.g., to issue X.509- or PGP-style certificates from any other standard -- interoperating between standards both for input and for output).
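To make the graded "yes/no vs. ranked" contrast concrete, here is a hedged sketch. The scoring rule (a product of per-assertion probabilities, assumed independent) and all names are our own illustration, not the MC or Certification-paper definition:

```python
# Enhanced-extrinsic grading, illustrated: each certificate carries a
# degree of belief on a series of assertions, and certificates for the
# same subject are ranked by a combined score instead of a yes/no answer.
# The product rule assumes the assertions are independent.

from math import prod

def belief(assertion_probs: list[float]) -> float:
    # Combined degree of belief over a series of assertions.
    return prod(assertion_probs)

certs = {
    "cert-A": [0.99, 0.95, 0.90],   # assumed per-assertion probabilities
    "cert-B": [0.99, 0.99, 0.97],
}
ranked = sorted(certs, key=lambda c: belief(certs[c]), reverse=True)
print(ranked)  # ['cert-B', 'cert-A']
```

A verifier could then weigh the score against the cost of being wrong -- the risk-versus-cost decision the text describes.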

We must also remark that oftentimes a low degree of security is all that is needed -- as low as that already provided by HTTP and by visually reading the website's URL in the location window, or medium-grade as provided by the SMTP handshake in the mail protocol. This is also taken into account by MCs, because the verifier may choose the level of security he accepts. At the other extreme, it is clear that 100% security is impossible, but MCs may allow for 100% risk acceptance based on 100% certified procedures.

These concepts are defined and partially exemplified in the MC-FAQ. Thus, MCs are the "real-world" implementations of a very abstract idea, the generalized certification model, which can have three modes: extrinsic, intrinsic and combined. Further, MC does not limit the model, but allows disjoint "hyper-surfaces" to join in more complex structures -- as the need arises, in a consistent way.

The reasons for the names used are:



MCs rely on three different but coherent and conjunctive tools for the two steps of certification (called cognition and recognition):
  1. Deterministic: intrinsic certification must be used for the cognition step, using Intrinsic Geometry (metric-functions) and Second-Order Cybernetics (observer/observable) as a model.
  2. Probabilistic: cognition is viewed as the recovery of a signal buried in noise, where the signal is the subject and the noise represents the adversaries, using Information Theory as a model, together with current Cybernetics definitions, to define Secure Multiple Channels (SMC) of information and their evaluation methods with gauge-functions (which incorporate both deterministic and probabilistic measures).
  3. Second-Order Cybernetics: recognition includes a learning function and learned gauge-functions, which can add different metric properties as experience allows feedback from results to be compared with expectations, using an observer/observable model.
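The probabilistic tool above can be given a toy numerical form. In this sketch -- our own illustration, with assumed numbers and an independence assumption that the text does not state -- an attacker must corrupt every one of the multiple channels at once, so the residual risk is the product of the per-channel corruption probabilities:

```python
# Toy model of Secure Multiple Channels (SMC): if channel i is corrupted
# with probability p_i, and the channels are independent, then the subject
# (the "signal") is lost only when ALL channels are corrupted together.
# Numbers and the independence assumption are illustrative only.

from math import prod

def residual_risk(corruption_probs: list[float]) -> float:
    # Probability that every channel is corrupted simultaneously.
    return prod(corruption_probs)

channels = [0.1, 0.2, 0.05]       # assumed per-channel corruption probabilities
risk = residual_risk(channels)
confidence = 1.0 - risk           # chance at least one honest channel survives
print(risk, confidence)
```

Adding an independent channel multiplies the residual risk by that channel's corruption probability -- which is why multiple channels can recover a signal that no single noisy channel could carry securely.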

The information presented in this document and in the other cited documents represents "privileged public disclosure" and is protected in its entirety by Copyright Law, with all rights reserved in all countries, in the name of E. Gerck, 1997. Copying and partial citation are allowed with source and copyright owner citation. A license is required for the commercial use of this information and the current release does not imply the granting of any such rights -- which will be negotiated and granted to ANY requesting party without ANY restriction and without ANY limit on time or usage.

In the famous Lewis Carroll passage, one can read: "When I use a word," Humpty Dumpty said, in rather a scornful tone, "it means just what I choose it to mean -- neither more nor less."