Toward Real-World Models of Trust: 
Reliance on Received Information

"A theory, ultimately, must be judged for its accord with reality."
S. Leshniewski (1886 - 1939)
Ed Gerck
Copyright © 1998 by E. Gerck and MCG, first published on Jan 23rd, 1998 in the mcg-talk list server
All rights reserved, free copying and citation allowed with source and author reference.
 

This work presents a formal and abstract definition of trust which allows any number of explicit trust definitions to be derived for different application areas, such as communication systems, digital certificates, cryptography, law, linguistics, social sciences, commerce and day-to-day living -- providing mutually compatible and useful real-world trust models or trust instances. The paper presents more than thirty such equivalent instances, and discusses their general formation rule for qualitative as well as quantitative uses of the concept of trust. The paper also compares and contrasts trust with auditing, power, belief functions, probabilistic models in the frequency and the Bayesian interpretations, fuzzy logic concepts, surveillance, open-loop control, risk, insurance, information, meaning, accountability, reasonable reliance, justified reliance, etc. From the discussion, trust emerges as the mathematics of subjective certainty and precision -- a concept to be further developed in the context of non-boolean logic over a multivector space in Grassmann Algebra.

 
Note 1: The following discussions are contained in the Appendix:

  1. Model of Trust versus Trust Models
  2. Linguistics
  3. Trust Propositions and Matters of x
  4. Internet Names, TSK/P, Uniqueness, Reference and Sense, Metrics, Biometrics, Bio-implants, Examples, etc.

Note 2: If the reader wants initial contact with examples and practical Internet questions, it is suggested to begin reading the paper at item 4 above.

Note 3: The abstract trust definition presented here is a single, implicit formal definition that depends neither on instances nor on observers. The concrete realizations of trust in real-world usage built using the abstract definition, however, can depend on references and may have many representations, as discussed.

Note 4: The terminology "real-world models of trust" is used for all particular instances that are derived from the abstract trust definition, as representations of it, and which apply to the real-world -- including the so-called virtual reality or cyberspace.

Note 5: A Summary is available at the end of the paper.

 

1. Introduction

Trust -- the next frontier? The conceptual frontier seems to be more elusive than the physical dimensions of space and time. Since time immemorial, the concept of trust has pervaded religious, philosophical and historical writings. Branded as "unscientific", trust became the ugly duckling of science -- utterly condemned as subjective, imprecise, unreliable ... even untrustworthy. Now that the Internet provides an example of important and needed yet seemingly unreachable goals, trust is often mentioned either as its savior or its nemesis. However, what does trust mean? What is trust?

This paper will initially focus on the subject of trust in communication systems -- especially the Internet -- which will allow us to view trust within the broad picture of information exchange and follow Shannon's ideas as closely as we can, building a base and a unifying concept for all other uses of the concept of trust, even in areas besides communication systems and Internet certification protocols. In fact, if there is information being sent or received, whatever the medium (e.g., by TCP/IP Internet protocol, by fax, by written messages, by oral messages, by body language, by field measurements, etc.) and whatever the communicating parties in any combination (e.g., persons, cybernetic agents, software, hardware, etc.), the formalism here described is general enough to be applied. Communication systems are thus a very intuitive framework, and we can relate several examples to day-to-day experience and the humanistic sciences -- as we investigate the abstract idea of trust and proceed to model it out of the "common denominator" of all its real-world molds, targeting a hopefully useful conceptualization of trust.

The first subject is thus "modeling of trust" and not "trust modeling" -- the second being derived from the first. What I am saying is that we must first define and understand what trust is (and, possibly, is not) in the context of communication systems before we go into cryptographic algorithms and message protocols -- which can serve well either as a means of conveying said understanding or as a means of obfuscating said ignorance!

For example, today's Internet certification protocols, such as X.509, PGP and others, take a leap of ignorance on what trust is and start by defining means to convey it. Such an attitude is not even empirical; it is indeed arbitrary. To justify this leap of ignorance, standards such as X.509 have statements to the effect that "... such will be defined in the CPS, which is not a part of this document." -- as if assumptions could be defined after the theorems that use them.

This is important for three main reasons:

  1. We want machines to use a well-developed, real-world, tested and qualified notion of trust;
  2. We want machines to be useful to us as our agents in terms of decisions that depend on trust; and
  3. We want the same notion of trust to be communicable and interoperate among humans and machines.
In short, we want trust in cyberspace (e.g., between machines) to be based on that same notion of trust, as a form of reliance, that we have been using for millennia between humans and in business. It turns out, however, that there is wide disagreement as to what a definition of trust might be -- even for us humans. Thus, as my first task, I share my investigation of what -- and what not -- trust might be. My conclusion is that even though we all use different trust models, even though we each decide to rely in a different way, we all share the same notion of trust. Using Information Theory terminology, this paper defines this notion of trust as:

"Trust is that which is essential to a communication channel but cannot be transferred from a source to a destination using that channel."

We cannot use the same channel to convey both the information and the trust in that information, whether sending or receiving. A decision to trust a set of bytes (such as someone's name, a source of a communication, a name on a certificate, a digital signature, or an electronic record) must be based on factors outside the assertion of trustworthiness that is contained in that same set of bytes. Likewise, a decision to trust someone must be based on factors outside the assertion of trustworthiness that the person makes for himself.
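As a minimal illustration (a hypothetical sketch in Python, with all names invented for the example): a message that asserts its own trustworthiness can only be checked circularly, while a reference obtained through a second, independent channel -- here, a key fingerprint -- can anchor a trust decision about what the first channel carries.

    import hashlib

    def in_band_check(message: dict) -> bool:
        # Circular: the claim of trustworthiness travels in the same
        # channel as the data it is supposed to vouch for, so this
        # check establishes nothing.
        return message.get("claim") == "trustworthy"

    def out_of_band_check(message: dict, known_fingerprint: str) -> bool:
        # The reference fingerprint was obtained through another channel
        # (e.g., in person, or in prior dialogue), so it can anchor
        # trust in what this channel carries.
        fingerprint = hashlib.sha256(message["public_key"].encode()).hexdigest()
        return fingerprint == known_fingerprint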

So how does the trust model work? -- This is the wrong question to ask here! The real question is: "What trust model would you like to use?" There is a built-in notion of meta-rules (given by the trust definition) that any trust model has to follow, but I might buy a trust model from someone and add to that model, design my own model, or even augment a model that I bought. Different trust models can be used as long as they conform to the given trust definition.

The problem today is thus basic: lack of understanding of trust's truth conditions does not allow trust's truth-values to be well-defined, try as we may. And such confusion is not a prerogative of today's Internet security protocols, as McKnight and Chervany show in their extensive study on the meanings of trust [McK96] in several other areas. It exists in all other areas where the concept of trust is used, such as management, interpersonal relationships, business relationships, security policies, etc. And it cannot be solved by just positing a behavior for trust -- since trust is a fundamental concept, both used and useful in the real world, as we can see by its widespread application in all cultures and respective law systems.

While the paper oftentimes uses humans to exemplify notions of trust, it is not relevant whether there is, or is not, a human in control of an end point, a machine or some software. It can very well be another machine. Trust is defined in such a way that its usefulness is no longer limited to human communication.
 

2. The Real-World Model of Trust

Fifty years ago, Shannon was faced with a problem: he needed to define the concept of information, but in a way which would allow its unambiguous use in communication engineering while still conserving a real-world significance. Preceded by the efforts of Szilard [Szi29], who in 1929 identified the unit or "bit" of information when dealing with entropy and the Maxwell's Demon problem in Physics, by Hartley in 1928 [Har28] and by Nyquist in 1924 [Nyq24], he took a different approach than just positing a behavior for information. Let us follow his steps in Information Theory [Sha48], which has found applications in several areas including his own ground-breaking paper on Cryptography [Sha49]. As commented in [Ger97]: "In Information Theory, information has nothing to do with knowledge or meaning. In the context of Information Theory, information is simply that which is transferred from a source to a destination, using a communication channel. If, before transmission, the information is available at the destination then the transfer is zero. Information received by a party is that which the party does not expect -- as measured by the uncertainty of the party as to what the message will be." Shannon's contribution here goes far beyond the definition (and derived mathematical consequences) that "information is what you do not expect". His zeroth contribution (so to say, in my counting) was to actually recognize that unless he arrived at a real-world model of information to be used in the electronic world, no logically useful information model could be set forth!

Now, in the Internet world, we have come to a standoff: either we develop a real-world model of trust or we continue to deal with limited and fault-ridden trust models that treat trust artificially and objectively, even as the Internet expands from a parochial to a planetary network for e-commerce, EDI, communication, etc. We must be able to fully handle trust and all its subjective and intersubjective aspects.

And what would be a "real-world model of trust" for communication systems, e.g. the Internet world? Akin to Information Theory, the concept of trust in communication systems must have nothing to do with friendship, acquaintances, employee-employer relationships, loyalty, betrayal and other overly-variable concepts. Here, trust is not to be taken in the purely subjective sense either, nor as a feeling or something purely personal or psychological -- trust is to be understood as something potentially communicable. Further, trust must bridge different instances and observers, otherwise communication would be isolated in domains; so all different subjective and intersubjective realizations of trust must depend on some common, basic and abstract idea -- an archetype in some terminologies. As used in the context of Generalized Certification Theory [Ger98a], trust is, simply:

"trust is that which is essential to a communication channel but cannot be transferred from a source to a destination using that channel".

This is a formal and abstract definition of trust. It defines trust by the properties it obeys, without citing any context, without even providing an example we could denote -- it does not provide a value, only behavior. Thus, the given definition achieves the broadest possible conceptualization of trust, since it is both environment-invariant and observer-invariant (environment and observers are abstract). But we expect it to contain the seed-thought or root-idea of trust. In other words, we expect it to contain trust's implicit truth conditions for any explicit trust application we may need, from the social scenario to automatic communication processes. This should afford a unified "gist" of trust to be perceived in all applications -- which we also expect to be close to the real-world gist of trust.

The author considers (and the paper shows) that an abstract definition is much more general, and preferable, than an explicit definition that would depend on a particular set of environment assumptions. The different environment assumptions then represent nothing more than different stances for the abstract definition of trust, not different concepts of trust. Semantically, the abstract definition of trust is a logical proposition which is assumed to contain the Fregian [17] seed-thought for the full concept of trust, which then unfolds as explicit truth-conditions when applied to each practical stance, which, in turn, may provide different truth-values to each observer. In other words, the abstract definition is fully abstract -- so it can be behind all the different truth-values one may derive from the concept of trust, for each observer and in each case. Accordingly, the abstract definition for information is "that which is transferred from a source to a destination, using a communication channel" [Ger97], which may highlight the differences and similarities with the given definition of trust, above -- and also does not use any a priori uncertainty models.

Mathematically, the author views an abstract definition as an abstract class, which can be represented by appropriate operators in almost any number of formalisms or stances, which may not be isomorphic to one another and which can be calculated in specific reference frames or observer coordinates. Such operators do not have to be transformable into one another and can directly yield final values -- operators and values which, clearly, may be very different as a function of formalism and reference frame but which, nonetheless, all result from the same abstract class. Application of the abstract definition leads to explicit definitions, each of which needs an explicit stance and explicit observers.

For example, to apply the abstract trust definition to certification, one only needs to see a certificate as a secure communication channel between the parties in the dialogue, past and present -- also including third-parties such as a CA. To apply it to other areas, one only has to recognize what the communication channel is and what is essential to it -- as viewed by an observer. In other words, we are now at liberty to define any number of concrete trust models (i.e., explicitly modeling a particular situation at hand, to the best effort) that conform to our one abstract model of trust (i.e., the abstract trust definition). For example, more than thirty different explicit trust models are derived in this work, as examples, but many more are possible.

It is important to note that the observer can be either party, both, none (i.e., can also be a third-party) or several in several combinations. If there is no communication channel from the outside to the system, then the system views itself as isolated and it is not possible for the system to have any trust besides self-trust -- i.e., an isolated system can only have self-trust because it only communicates with itself, past and present. The outside world may however receive communications from the system -- which can allow the outside world to have trust in regard to an isolated system. See the definition and discussion on self-trust.
So, using the definition of trust just given and moving toward an understanding of it through examples: when a lion communicates with a lamb, the lion does not need to receive any transfer from the lamb besides that which is communicated in the channel itself, whereas the lamb needs to know whether the lion is hungry -- which is not information and which cannot be transferred in the same channel. If such data were information, then it would be new to the lamb (sorry, ex-lamb, now food). And if such data were transferred in the same channel, how would the lamb know that the lion was not lying?

This example shows the interplay between trust and power. A very large difference in power, of one agent over another, implies that the more powerful agent can offset and control the other agent to such a degree that the other agent's actions are immaterial, even if the actions are already occurring -- hence, no trust on the less powerful agent is needed in such a case. On the other hand, the less powerful agent needs trust on the other agent's behavior, since it cannot offset or control the other agent's actions in any degree -- it needs to know with high reliance what the other agent's actions can be and, in some cases, what they cannot be, before they happen.

One example of the interplay between trust and power was observed in history when the sea explorer Vasco da Gama, circa 1498, opened the first commercial route from Europe to India by sea and used the mutual exchange of "willful-hostages" (the old version of ambassadors) to physically warrant with their lives the mutual contractual obligations in the bilateral merchant agreements [Mein98] -- which was done because there was no initial mutual trust. The current diplomatic action of recalling one's ambassador, considered diplomatically to be a strong exterior sign of disagreement between countries, also has its roots in the early use of ambassadors as willful-hostages subject to physical retaliation (ambassadors have been jailed and killed because of political/mercantile disagreements, even in the recent past) -- notwithstanding the consideration that it is deeply illogical to disrupt a communication channel (i.e., one that uses the ambassador as the trusted carrier) exactly when the channel is most needed. Further, ambassadors are an anthropomorphic example of the fact that trust is indeed the carrier of information, not the other way around, as this paper discusses in Section 4.

Thus, loosely speaking, "information is what you do not expect" and "trust is what you know". Linking both concepts, "trust is qualified reliance on received information". We have thus used the abstract definition of trust and built the first two explicit definitions of trust, albeit in a very much simplified context and using an anthropomorphic metaphor. "To make progress in understanding all this, we probably need to begin with simplified (oversimplified?) models and ignore the critics' tirade that the real world is more complex. The real world is always more complex, which has the advantage that we shan't run out of work." [Ball84]. All these considerations can now be viewed non-anthropomorphically when dealing with the concept of trust in communication engineering and security design -- i.e., using the given abstract definition of trust and applying the same reasoning to computable processes; for example, to understand how software agents could benefit from similar concepts and strategies.
Further, the importance of the anthropomorphic metaphor used in this work is to provide a class of test examples that are a priori observable (i.e., exist at least once), describable, decidable, finite and possible -- without being necessarily causal, ergodic, reversible, deterministic, probabilistic, etc. This may motivate the reader as to the engineering usefulness of some apparently philosophical passages in this work and to their direct application in several areas of work, when properly instantiated.

As a better approximation to the definition that "trust is what you know" in the anthropomorphic metaphor, consider that "trust is what you know you know you know" -- i.e., the lamb not only needs to know (i.e., be aware of, be able to spontaneously recall) that it knows the lion is not hungry, but must also know how to act upon that knowledge. At the human instance, this means that you cannot use your 'prior knowledge' (i.e., what you know you know) in order to do anything unless you also know about the applicability of that 'knowledge'. The extension to software agents is immediate; for example, a trusted mobile agent may not be trusted as a function of changes in its operating platform (the pragmatics used) -- even though the platform itself may be secure and expressive enough -- if you have no evidence of that.

Trust and information are thus to be understood as two cardinal properties of communication systems -- their interplay affording new modes of communication (to be dealt with elsewhere), for example allowing meaning (semantics in semiotics) to be relinked with names (syntactics in semiotics) at the receiver side (see [A.4.3]).

The second explicit trust definition given above will now be cast in equivalent Information Theory terms. Here, I also exemplify a second derivation method. Instead of using the abstract definition directly, it is possible to begin with the already derived expression "trust is qualified reliance on received information" and concretely specify the observer, the observed and the existence of a (yet unnamed) reliance metric -- deriving another explicit definition of trust as "that which an observer knows about an entity and can rely upon to some extent".

To proceed, we can now specify the reliance metric and its applicability. First, I note:

(i) "that which an observer knows and can rely upon to some extent"  can be modeled in Information Theory as an estimator with variance as small as desired, which estimator  by an observer (quasi-zero variance), that has measured the expected unsupervised behavior (i.e., unsupervised by the observer) of an entity and,

(ii) the abstract definition depends on an abstract temporal or event clause -- trust must be defined at some time T in relationship to the communication process itself.

A note on quasi-zero variance. The prefix "quasi" means "as if", "approximately". Thus, the term "quasi-zero" is used to illustrate two important points: first, quasi-zero means "approximately zero" and represents a positive value which is as close to zero as desired by the observer; second, the amount of "closeness" is subjective and is defined by the observer. Thus, "quasi-zero variance" means a variance that is "as if" it were zero to the observer -- i.e., so small that it is considered to be zero by that observer, even though it could be considered large by another observer. The same applies to the term "high-reliance" used later in the exposition, where "high" means as close to 100% as desired by the observer -- though possibly different for different observers. In both cases, one is enforcing the concept of high-reliability -- while high-accuracy is dealt with by the proper extent of "matters of x".

The next explicit definition is thus "trust is that which an observer has estimated with quasi-zero variance at time T, about an entity's (unsupervised) behavior on matters of x". Note that the word "estimated" does not mean estimated probabilistically, but refers to any estimation or inference process in general -- such as inference, deduction, computability, probabilistic theories, constraints, etc., alone or in combination. Hence, an observer can rely upon an estimator that it has obtained in the past in order to predict future unsupervised behavior of the entity regarding matters of x -- because the estimator has an expected quasi-zero variance.

Thus, trust can be described by a "Non-Probabilistic Inference Model of Trust" (NPIMT). Of course, "non-probabilistic" does not mean that probability is not used in the trust model -- as explained above.
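As a toy sketch of the NPIMT idea, assuming (for this example only) that the entity's behavior on some matter of x can be reduced to a numeric outcome and that the observer picks its own quasi-zero threshold:

    from statistics import variance

    class TrustEstimator:
        """Observer-side estimator of an entity's unsupervised behavior
        on some matter of x; trusted once its variance is quasi-zero."""

        def __init__(self, quasi_zero: float):
            self.quasi_zero = quasi_zero  # observer-chosen "closeness to zero"
            self.samples = []             # measured unsupervised behavior

        def observe(self, outcome: float) -> None:
            self.samples.append(outcome)

        def trusts(self) -> bool:
            # Trust at time T: the estimate's variance is as small as
            # this observer desires; another observer may demand less.
            return len(self.samples) >= 2 and variance(self.samples) <= self.quasi_zero

Note how the threshold is a parameter of the observer, not of the entity -- mirroring the subjectivity of "quasi-zero" discussed above.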

The underlying concept of this model of trust is that of justification. It is not essential to this model of trust whether a natural or logical connection exists between trust and justification. Of course, the use of probability or deductive-logic may serve to raise the level of justification for a subject matter, when compared to a natural connection that is simply observed.

Justification defines the context of trust for the truster. In other words, justification defines the context of a relying party's (RP's) reliance [9]. It is easy to say that a truster's justification of trust, or a RP's justification of reliance, is a matter of need. But this is not quantitative. If we are to make progress here, we need to define those needs as degrees or levels -- i.e., in terms that let us identify their differences and their required trust models. Thus, it all hinges on the definition of justification, as a metric function that decides what is justified and what is not. This definition can be changed as needed, even for the same person in the same trust act (this might sound strange but it is very much needed, as we usually deal with more than one person/company at a time even for one transaction -- e.g., buying, paying, delivery, maintenance, etc.). All these definitions, however, must interwork in terms of meaning. It is not enough that they interoperate syntactically.

I now introduce the concept of "best justification" as that justification level which leaves the truster with no doubts. This is consistent with the idea that trust (or distrust) is always 100%; what changes is the extent of trust that is chosen by the truster. Likewise, the extent of the doubts that must be satisfied in "best justification" is chosen by the truster.

To simplify, let us begin with the following five reliance levels, from weaker to stronger, for a truster (or RP):
 

(0) What the truster relies upon without any consideration of "why" and without any recourse. It is a subjective metric, here called "open reliance".

(1) What the truster relies upon without any consideration of "why". It is a subjective metric, also called "actual reliance", similar to the same concept in law.

(2) What the truster relies upon because it is presented by a party accepted by the truster for that purpose. It is an intersubjective metric, usually called "authorization reliance", similar to the same concept in law.

(3) What the truster, as a reasonable man, might do, with all the prudence that might be reasonable to use. It is an objective metric, also called "reasonable reliance", similar to the same concept in case law, which establishes it as an objective test given by "what would be reasonable for a prudent man to do under the circumstances".

(4) What can be justified by the truster after an examination of the facts presented. It is a subjective metric, also called "justified reliance", similar to the same concept in law.

Another example might be what a fair random process might choose, given all possibilities. This is an objective metric, here called "statistical reliance", similar to the same concept in law, used for example in lotteries, auditing and some payment systems. Yet another example might be what has been verified with some chosen technology, but only automatically. This is an objective metric, here called "process reliance", and can be useful for security-automated processes.

The following terms are defined in terms of the truster:

- "matters of x": description of expected action, action/reaction or linkage in a setting, in terms of the truster and confined to the largest extent that still fits in the "best justification" metric [A_3].

- "epoch" or only "time": space, time, events, agents, persons, objects or a set thereof in terms of the truster, which define a setting for "matters of x" and for "best justification", such as: initial date, expiration date, revalidation date, usage periods, number of times used, event trigger, distance data, environment data, language specification, network protocol specification, platform used, physical network used, location, users, etc., either as a single point or as a sequence of points.

- "entity" or "trustee": person, agent, object or a group thereof on which "matters of x" logically or naturally depends, in terms of "best justification" as seen by the truster and within the "epoch".

Further, in terms of a truster and trustee (entity), define:

- "trust-point": matters of x, for a given entity and epoch.

- "trust": (noun) a linked collective of trust-points

- "to trust": (verb) to rely on trust

- "A trusts B on matters of x at epoch T": a boolean trust-proposition, which is either true or false.

Now, I note that "to rely" is an essential aspect of trust as a verb, since A cannot trust B on matters of x at epoch T unless A actively relies on that trust-point. I also note that when the definition says that "to trust" is "to rely on trust", this does not imply circularity, because "trust" is used as a previously and independently defined noun in order to define the verb "to trust". Further, to compare with matters of law, the usual legal concept of "reasonable reliance" is understood [SCUS] to be an objective legal standard for collective evaluations (such as by a jury), while "justified reliance" is understood [SCUS] to be a subjective legal standard more generally acceptable for evaluating individual actions -- which makes "justified reliance" in law very similar to the concept of trust based on the metric of best justification, as given above.

This means that trust is not auditing -- trust is that which can be relied upon without surveillance by the observer (possibly because it cannot be measured due to physical, secrecy, cost, time or other difficulties). It is also possible for the observer to indirectly and anonymously gather sufficient information to define a suitable estimator for the entity's behavior on matters of x, without any contact with the entity itself -- as when using a trusted proxy (which, however, depends on a primary trust relationship with the proxy). Further, the observer can also use the measured estimator at time T to analyze past behavior of the entity (i.e., before time T). So, the estimator can be seen as a quantitative forward- and backward-predictor for the acts of an entity regarding matters of x, when that entity is not supervised by the observer.

The next three paragraphs touch upon a large difference between the technical use of "trust" (as defined) and "trust" in the social and linguistic domains. The difference is not in the meanings of trust, which remain equivalent under the considerations below, but in how to represent different degrees and extents of trust. The exposition presents a method (based on full atomic qualification) which is precise and compact, well suited both for technical and for non-technical use -- however, perhaps too formal for every-day use. As the second paragraph shows, such "poetic" and "every-day" use of trust also wrongly permeates security work and communication protocols -- rendering trust concepts difficult to use, because ill-defined. This may explain trust's "bad name" as a difficult concept, perhaps even as an overly-loaded terminology. The problem is not, however, in the concept of "trust" by itself, but in using wrong trust quantifiers. The third paragraph extends the method (i.e., based on full atomic qualification) to the questions of defined versus undefined trust, allowing indeterminacies to be resolved in a simple and intuitive way.

To represent degrees of trust, the estimator's (quasi-zero) variance is not allowed to change, because that would not yield a useful model (see next paragraph). Rather, without loss of generality, the estimator is kept at quasi-zero variance but its reach (e.g., as given by matters of x) is increased to reflect a higher degree of trust or decreased to reflect a lower degree. This is similar to the mathematical procedure of finding an area under a curve that represents near 100% reliance (i.e., quasi-zero variance) on matters of x. In other words, we regard the issues of reliability and accuracy as two fully independent variables: (i) high-reliability is demanded as the primary parameter and is reflected in the estimator's quasi-zero variance or high-reliance; (ii) accuracy is measured by the extent of "matters of x" that still allows high-reliability. Thus, if the observer has no trust on the observed entity, then x is the empty set (i.e., 100% reliance on nothing represents zero area under the curve -- or, no trust). As the observer increases its degree of trust on the entity, the estimator becomes more and more complex and represents an enlarged set of matters of x for which the entity can be represented with quasi-zero variance (i.e., with near 100% reliance) -- i.e., achieving more accuracy for the predictions, but without sacrificing reliability. This means that trust can also be defined explicitly as "trust is that which an observer has estimated with high-reliance at epoch T, about an entity's (unsupervised) behavior on matters of x".
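Under the same toy assumptions as before, this can be sketched as follows: the degree of trust grows only by enlarging the set of matters that still meets the fixed high-reliance threshold, never by relaxing the threshold itself.

    def trusted_extent(reliance: dict, threshold: float) -> set:
        """Return the largest set of matters of x on which the observer
        still has high-reliance. The threshold (reliability) is fixed;
        only the extent of matters (accuracy) varies with the degree
        of trust."""
        return {matter for matter, r in reliance.items() if r >= threshold}

    # No trust: nothing meets the threshold, so x is the empty set.
    # More trust: more matters qualify, with reliability unchanged.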

Oftentimes, some trust models, risk management policies and security policies try to qualify degrees of trust with concepts such as "partial trust", "marginal trust", "fully trusted", or by defining multi-level logic with rules for majority voting and precedence. That is done to try to convey the idea of how well "trust can be trusted" -- which easily leads to circular truth conditions and undefined statements. For example, does "partial trust" mean increased unreliance on the expected outcome, on the expected model for all possible outcomes, or a reduction of the model's scope? Partial in relationship to what? Such trust "qualifiers" are thus unable to address atomic qualities in the trust concept and operate, at most, as a collective qualitative indicator from that particular observer's viewpoint. The author takes the stance that no qualifiers whatsoever (i.e., partial, marginal, complete, bad, good, large, small, minimum, maximum, etc.) should be used with the word trust, because they are neither well-defined, quantitative, nor needed. Further, they introduce an additional layer of intersubjective concepts and they decrease the semantic importance of the word trust itself. Instead, it is better to recognize that the concept of trust already has such qualifiers "built-in" in its own qualifier "matters of x" -- which can moreover include the atomic qualities which are unaddressable from without.

To exemplify, compare the phrase "Bob has partial trust that Alice will not receive a ticket for speeding" with "Bob trusts Alice on matters of x", where "matters of x" is "Alice may receive tickets for speeding". The same thought is expressed in both phrases, but the latter allows a precise answer to the question "What is trusted?" and can thus be directly applied in appropriate predicate calculus. The latter phrase can also easily lead to a quantitative and atomic treatment, when Bob has more knowledge of Alice's behavior and may be able to define "matters of x" as "Alice receives tickets for speeding with a 10% chance every time she drives at night, with a +-5% absolute variance". It is not necessary to define distrust or lack of trust either, because distrust is simply the atomic negation of a particular matter of trust, which can be as extensively negated as we need -- e.g., in "Bob trusts Alice on matters of x" where x is the null set, so that effectively Bob trusts Alice on nothing. However, trust can be negative, as when you affirm that you trust someone to be untrustworthy. Further considerations on these issues are given below and in the discussion on matters of x.

Still on the question of degrees of trust, it is also customary to discuss "unqualified trust" versus "qualified trust" -- with expressions such as "Alice trusts Bob" being "unqualified trust". This paper takes the stance that such expressions are dubious, are not necessary and should not be used in technical work (albeit possibly useful in poetry) -- for example, in "Alice trusts Bob", is "matters of x" unknown, abstractly defined by Alice, or is x the Universe set? As explained in the previous paragraph, the built-in qualifier "matters of x" has to be recognized and its truth-value must be defined in the trust proposition -- e.g., by defining "x" in "Alice trusts Bob on matters of x". Thus, if one means that Alice trusts Bob on all matters, then this can be expressed by using "x=U", where U is the Universe set. Or, if one means that its value is abstractly defined by Alice, then one uses "x=Alice", meaning that Alice defines what Alice trusts on Bob. If "matters of x" is unknown, then one uses "x=0", where "0" is the null set -- i.e., trust on the unknown has a null set of trusted matters. This standard usage allows a precise statement of the trust proposition, clearly a need for precise calculations, which is natural and easy to define in the presented formalism. As a special case, it is however useful to define that "Alice trusts Bob" necessarily means the case with x=U -- which matches the intransitive usage of the verb trust, as in "In God we trust". Thus, without a "matters of x" qualifier, an unambiguous and intuitive use of the trust proposition "A trusts B" should imply that the qualifier x is equal to the Universe set.

It is instructive to view trust as an open-loop control process, in control theory terminology -- i.e., a control process which does not rely on a closed feedback loop in order to achieve its purposes. This comparison allows one to recall the advantages and disadvantages of open-loop control (e.g., trust) vis-a-vis closed-loop control (e.g., close surveillance) and apply them to the case at hand. In control theory, the basic parameter used to measure performance is position-error -- which translates here to the trustee's actual response as compared to its expected or estimated (i.e., trusted) response. In open-loop control, one method frequently used to decrease position-error is to introduce periodic checks of any convenient system variable, not necessarily the control variable. This is equivalent to the well-known dictum "trust but verify" -- implying the need for a pre-defined policy of checks and balances that can periodically adjust the trust estimator as a function of observed behavior. Further interesting qualities of trust over close surveillance can be exemplified by the mentioned control theory analogy, regarding the main advantages of open-loop control over closed-loop control: simpler systems (hence, less cost and better fault-tolerance), immediate response (i.e., nothing needs to be measured in order for it to operate), easier design (e.g., avoiding probable but unknown pitfalls of complex designs), easier interfacing (i.e., suffers and exerts less influence on the rest of the system), modular design (i.e., complete and interchangeable), lower cost, etc. Thus, trust can also be explicitly defined as "trust is an open-loop control process of an entity's response on matters of x" or, less precisely but more concisely, as "trust is to rely upon actions at a distance".
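The analogy can be sketched as follows (invented names, scalar responses): the controller runs open-loop, with no continuous feedback, and only a periodic check -- "trust but verify" -- adjusts the trust estimator when the observed behavior drifts.

    def open_loop_trust(act, expected, steps=100,
                        verify_every=10, tolerance=0.1):
        """Rely on the trusted estimate of the entity's response,
        open-loop; periodically verify and adjust, instead of closing
        a surveillance loop around every single action."""
        estimate = expected
        for t in range(steps):
            response = act(t)            # entity acts unsupervised
            if t % verify_every == 0:    # periodic check, not surveillance
                if abs(response - estimate) > tolerance:
                    estimate = response  # adjust the trust estimator
        return estimate

    # Example: an entity whose behavior slowly drifts from the estimate.
    final_estimate = open_loop_trust(act=lambda t: 1.0 + 0.002 * t, expected=1.0)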

Trust on an entity cannot be viewed as a consequence of insurance or, as often wrongly expressed, "It's not about who you trust, but who backs and indemnifies the context of the trust". The use of insurance always signals lack of knowledge -- so, clearly, it cannot replace it; it cannot replace trust. Further, there is no insurance needed for a sure event and there is no insurance possible for a sure risk. To exemplify another problem caused by such an understanding: if a truster (e.g., a CA subscriber that trusts the CA) is going to pay for insurance to cover his liabilities and the trustee's (e.g., the CA's liabilities) -- which is what it would amount to if trust were based on insurance, because the bill has to end somewhere -- then responsibility has gone full-circle and is now only in the truster's hands, both to get adequate coverage and to pay for it, while the trustee (e.g., the CA) has zero risk. However, that does not solve the risk problem for the truster either, if the trustee's acts may affect third-parties -- such as when a CA's (i.e., the trustee's) certificate is issued for a CA subscriber (i.e., the truster) but will actually be used by a generic user (i.e., a third-party) to certify the subscriber. Here, one cannot make the whole world sign up for one huge insurance policy -- so the truster and the trustee may be protected by the insurance policy that the truster has bought with their names as beneficiaries, but that does not protect a generic third-party (i.e., the rest of the world) that may rely upon the trustee's acts on behalf of the truster (e.g., the certificate issued by the CA and purportedly including the intended subscriber's correct data).

Trust is not to be confused with accountability -- as sometimes expressed: "for e-commerce, trust is pretty well irrelevant and what you need is accountability". Indeed, the interplay between trust and accountability is sometimes difficult to delineate. But here logic can help. Suppose you have the information that A is accountable on matters of x. Can this information be trusted? The information that establishes accountability must itself be relied upon before accountability can work. So, trust is the vehicle, the carrier, for accountability.

Trust is not belief, but may be expressed in terms of belief. As one can derive from the work of Dempster and Shafer [DS97], [Ger97], "belief is the probability that the evidence supports the claim". Thus, belief can indeed be used to gauge reliance on a trust-point, i.e., to verify whether "matters of x" really represents "well enough" the entity's actual behavior vis-a-vis the evidence. If one uses the concept of belief, then trust can be defined by "trust is received information which has a degree of belief that is acceptable to an observer" -- which is linked to the concept of local knowledge in [Ger97]; hence, "trust is knowledge acceptable by an observer".
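For concreteness, a small sketch of the Dempster-Shafer belief computation, Bel(A) = sum of the masses of all evidence sets wholly contained in A; the mass assignment below is made up for the example.

    def belief(masses: dict, claim: frozenset) -> float:
        # Dempster-Shafer: total mass of the evidence subsets that
        # entirely support the claim.
        return sum(m for focal, m in masses.items() if focal <= claim)

    # Evidence about an entity's behavior on matters of x:
    masses = {frozenset({"as-expected"}): 0.6,
              frozenset({"as-expected", "deviant"}): 0.3,  # uncommitted mass
              frozenset({"deviant"}): 0.1}
    print(belief(masses, frozenset({"as-expected"})))  # 0.6

An observer could then accept or reject the trust-point according to whether this degree of belief meets the observer's own threshold.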

Trust is not probability, in either the frequency or the Bayesian interpretation, but may be expressed in terms of probability in either case. The frequency interpretation suffers from the "objective" aspect it assigns to probability and from a strong dependence on past events -- while trust is subjective and may suffer an abrupt transition to zero in one event. In the Bayesian interpretation, even though one can compare Bayes-belief between different events and such belief is subjective, "Bayes-knowledge" gained from recent events and knowledge assumed from prior events cannot be treated as members of the same set of "knowledge". Thus, "new trust" would be essentially incompatible with prior trust under Bayes. For example, "new trust" would need to be binary and could not be learned unless a non-zero probability for it already existed. Difficulties in the belief revision aspects of Bayesian probability are important here, as discussed in the literature, for example by Wang [Wan93]. Further, some aspects of trust imply conceptual coherence, while probabilities only describe perceptual coherence. For example, if I have a formula that can purportedly calculate any n-th digit of pi in base-16 (the Bailey-Borwein-Plouffe pi formula), then my trust in this formula depends on its conceptual coherence with the underlying mathematics. This trust can justify my reliance on all its possible perceptual outputs even though I cannot perceptually measure all of them (i.e., I cannot verify all the infinite digits of pi that the formula can predict, to see whether they are true or not).
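The BBP series just mentioned can be summed directly, as sketched below; the actual digit-extraction use of the formula applies the same terms with modular arithmetic, which is omitted here.

    def bbp_pi(terms: int) -> float:
        """Sum the Bailey-Borwein-Plouffe series for pi:
        pi = sum over k >= 0 of 16**-k * (4/(8k+1) - 2/(8k+4)
                                          - 1/(8k+5) - 1/(8k+6))."""
        total = 0.0
        for k in range(terms):
            total += (4/(8*k+1) - 2/(8*k+4) - 1/(8*k+5) - 1/(8*k+6)) / 16**k
        return total

    print(bbp_pi(12))  # 3.141592653589793, at double precision

My reliance on every digit the formula can produce rests on the derivation of the series, not on having checked each digit -- the conceptual versus perceptual coherence distinction made above.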

As initially defined by Walley [Wal91], but modified here in order to disambiguate imprecision from randomness and uncertainty from outcome prediction, the terms uncertainty and imprecision can be used to highlight different aspects of models. I define that a model is uncertain if we cannot make statements about a single outcome, and certain otherwise. The binomial model for flipping a coin is a good example of an uncertain model, as we cannot predict a single flip, even though we are sure that half of the tosses should come up heads if we wait long enough. A model is imprecise if we cannot predict the long-run behavior, and precise otherwise. For example, we may have insufficient information about the failure rate of a component of a new car type to make a precise model. Using this terminology together with the usual definitions of objective and subjective, we can see that probability is an objective (frequency analysis) or subjective (Bayes) precise uncertain model, belief is a subjective imprecise uncertain model, fuzzy logic is an objective imprecise certain model, whereas trust is a subjective precise and certain model. Thus, probability is the mathematics of objective and subjective uncertainty, while trust as defined in this paper aims to be the mathematics of subjective certainty and precision.

Trust can be negative -- meaning that you know you cannot trust. This is a situation of "knowing with qualification that there is a definite lack of trust", exemplified by this phrase from an actual work e-mail message (names changed) by a contributor to this discussion: "As I stated to James in our Team phone call on Wednesday, Acme has now taught us to trust Acme to be untrustworthy, and we must hold that trust until Acme breaks it, since there is no basis for any other kind of trust." Of course, if we know we cannot trust, then that qualified lack of trust is trust.

Trust can be neutral, neither positive nor negative, as exemplified by case C for Phill's modem in the Appendix [A.4.10] -- corresponding to the case where one needs no trust. As explained in NOTE 2, "needs zero trust" or "needs no trust" is not the same as "has no trust". To say that "channel A has no trust for property X" is the same as to say that "channel A does not transfer trust for property X" -- so, if you need trust on property X, you cannot use channel A alone. However, when channel A "needs zero trust for property X", it means that no channel other than channel A is needed in order to transfer property X.

It is interesting also to compare trust with risk -- which is one of the counterparts of trust. Indeed, if the risk is null then anyone can be trusted; if the risk is sure then no one can be trusted. Further, more trust means less perceived risk that some piece of data, behavior, etc. will turn out to be different than expected. The comparison between risk and trust seems to have another contact point: this paper considers that trust indeed has components which must be individually "perceived" or earned -- in the same way that risk must be individually "perceived", cf. Shrader-Frechette [S-F97]. This means that neither can be just "assigned" -- because both are linked to what an observer can estimate and rely upon to some justifiable extent. In her study on risk evaluation, Shrader-Frechette explains:

"Assessors who subscribe to the "Expert-Judgment Strategy" assume that one can always make a legitimate distinction between "actual risk" calculated by experts and so-called "perceived risk"  postulated by laypersons.   They assume that experts grasp real, not perceived, risk, but that the public is able only to know perceived risk. This essay argues that all risk is perceived, even though there are criteria for showing why some risk perceptions are more objective or better than others. It argues that, although risk is not wholly relative, it is unavoidably "perceived." After showing what is wrong with the Expert-Judgment Strategy and the ethical consequences following from its use, the essay argues for an alternative approach to hazard evaluation and risk management. It describes a new, negotiated (rather than merely expert-based) account of rational risk management." thus, defending the use of subjectively centered perceived risk over expert-based objective risk and further discussing eight different reasons for it. Using this paper's terminology and Shrader-Frechette's study, risk can be explicitly defined as "that which an observer has estimated at epoch T,  about an entity's failure possibility on matters of x", which allows risk to be quantitatively used when calculating risk/cost factors as a function of trust.

However, trust should not be confused with the absence of risk. The fact that some parts (i.e., aspects) of trust may use risk to formulate a decision process does not mean that trust as a whole must be based on risk. Further, if an aspect of trust uses risk as a tool in the decision process to trust, then that part may be described by probability -- but not necessarily, as risk may not itself be ergodic.

Further comparisons can be useful, without a doubt. They are also interesting as exercises of the explicit meanings of trust which are being developed as stances of the abstract definition. However, Section 4 will provide general arguments that allow a broader understanding of the role of trust in communication systems -- making it possible to deal with several comparisons at once. Section 3 deals with the need for such comparisons, as a way of measuring if and how well the abstract definition of trust can represent reality -- as a source for useful real-world models of trust.

This paper considers trust to be essentially subjective, as one of its main truth conditions. This may present an apparent contradiction with situations where trust may be perceived by some as objective (such as trust on an objective fact -- e.g., life and death, money) or sometimes as intersubjective (such as trust on a professional ability -- which also depends on the chosen professional). The main word here is "subjective" -- which means that one needs to take a subjective or personal instance in order to evaluate an object. For example, beauty is a subjective concept ("beauty is in the eyes of the beholder"). A secondary word is "intersubjective" -- meaning that this instance can yield different results for objects of the same class. For example, a medical diagnosis for a patient is intersubjective because the diagnosis itself is a particular instance from the class of all diagnoses possible for that patient at that time, each clearly dependent on the patient's relationship to the physician and different from the others. It is interesting to note that an intersubjective concept is overly-variable in reference to a subjective concept, because it also depends on the particular instance of the class' object.

Thus, the paper considers trust to be subjective ("trust depends on the observer") because trust is similar to beauty and dissimilar to a medical diagnosis in that regard: trust and beauty are abstract objects that cannot be differently instantiated. However, even though trust is subjective, trust on a CA certificate is intersubjective, because it cannot be harmonized for all CAs or even for all similar certificates issued by a particular CA. The conclusion is clear: trust is subjective but can acquire an intersubjective dependence. Further, the subjectiveness of trust may still allow a coherent intersubjective concordance over a large population in regard to one entity -- which could lead to an impression of its objectiveness.

Therefore, the proposed definition of trust can also easily explain the oftentimes contradictory and seemingly confused behavior of objective, intersubjective and subjective perceptions of trust -- leading at one time to what seems to be "objective trust" (e.g., currency, life and death cycles, the Earth's orbit, etc.) when there is a large collective of agents that coherently trust one target, at other times to "intersubjective trust" (e.g., mother and son, certificates from a CA, etc.) when there are some collectives of agents that develop mutual trust relationships, and at still other times to "subjective trust" when the subject independently defines who or what the trustee is. All these "trust modes" can be simply explained by recognizing that they depend atomically on the collective and individual actions of a large or small sample of agents that, nonetheless, trust one another entirely subjectively. This easily explains historical difficulties such as those faced by Galileo Galilei, when he proposed to change the then "objective" trust that the Sun revolved around the Earth. Clearly, it is much more difficult to change trust when it is confused with fact -- which was the case.

As this paper shows, all trust is essentially subjective and all trust is essentially knowledge that an observer has acquired and upon which it can rely to some extent -- which means that the observer not only evaluates trust but also stores it, either directly or indirectly, with all its multiple interdependencies and relative reliabilities along a timespan. If the occurrences that we see in time are called "perceived facts" -- whether objective, subjective or intersubjective -- then trust is not the facts themselves but knowledge about the perceived facts -- which depends on each observer. Essentially, in our interactions, we compare trust -- not perceived facts and not facts. The same happens with our cyber agents, software programs and also hardware -- which can then be recognized as equally able to deal with and use "their" trust as we can with ours, even and most importantly when in interaction with us. As we can understand from the real-world models given above, this allows a common ground for process-trust and social-trust, linking the cyber and 3D worlds.
 

3. The Trust Definitions: Abstract and Explicit

To summarize the results so far, all possible "real-world models of trust" for the Internet, law, e-commerce, linguistics, etc. are postulated and defined by one abstract and formal definition, which is the seed-concept for all the other definitions:

trust: "trust is that which is essential to a communication channel but cannot be transferred from a source to a destination using that channel".

In a general communication context, trust can then be defined from the formal definition of trust given above by any of a series of combinations of different instances and observers, leading to any number of equivalent explicit definitions (where the term "entity's behavior" is to be understood as unsupervised by the observer, except possibly at epoch T), such as:
 
trust: "trust about an entity's behavior on matters of x is that which an observer has estimated at epoch T with a variance as small as desired",
or, conversely, by the equivalent explicit definition:

trust: "trust is that which an observer has estimated with high-reliance at epoch T, about an entity's behavior on matters of x",

or, by other also equivalent explicit definitions, which may convey other modes of thought when the abstract definition is placed in different contexts. The definition of trust can also be instantiated for each particular worldview, such as objective, intersubjective and subjective, from the abstract formal definition:

subjective trust: "trust is what you know you know you know" -- you know, can recall at will and know how to use.

Using the definition of belief [DS97], [Ger97], as "belief is the probability that the evidence supports the claim", one can also write:

trust: "trust is received information which has a degree of belief that is acceptable to an observer",

trust: "trust is knowledge acceptable by an observer",

and, when using the concept of "one's perception" as a filter and a gauge for reality, so that "one's perception" is actually a qualifier, it is also possible to write:

trust: "trust is knowledge about one's perception of a fact",

trust: "trust is that which provides meaning to information",

and, using other stances including the absence of trust (as discussed elsewhere):

trust: "trust is a link between a local set of truth-values and a remote set of truth-conditions",

trust: "trust is a link between reference and referent",

trust: "trust is a link between referent and sense",

trust: "trust is a link between reference and sense",

trust:  "trust is measurable by the coherence of understanding"

trust: "trust is that which absence can make any state possible",

trust: "trust is that which absence can make any state transition possible",

trust: "trust is that which absence can make a process non-ergodic",

trust: "trust is that which absence cannot justify reliance",

etc.,
 

Further, consider the rather naive but objective "definitions" of time and space as "time is what can be measured by a clock" and "space is what can be measured by a scale" -- which anyone can test by timing five seconds without a clock or measuring five feet without a scale, for example. There, the time and space measurements depend on subjective trust as "what you know you know you know" -- or, "you know, can recall at will and know how to use". (Perhaps harder if I had asked for one meter without a scale -- for the US readers.) Trust is thus perhaps as difficult to objectively define as time and space -- since we must always incur some degree of circularity in their definitions in terms of other terms. This points to the usefulness and generality of the abstract definition of trust, which only depends on formal relationships between intuitively definable objects (essential, communication channel, source, destination, transfer).

The concept of a "trust-point" for "matters of x" [A.3] is also useful, where a trust-point is the "elementary unit" of trust in a given metric, so that trust can be defined in terms of trust-points much as a molecule can be defined in terms of a linked collective of atoms. We can now distinguish well between the noun and the verb functions in reference to trust, and introduce the concept of a "trust-proposition" in boolean logic.

The above definitions can be shown (A.1 and A.2) to link well with the real-world use of the word "trust" as given in linguistics and social sciences. In the author's opinion, linguistics holds a hidden treasury for software and for behavior modeling regarding trust, risk, etc. -- especially when one views it as an anthropomorphic metaphor for software/hardware and targets also the mind/brain dichotomy as it applies to software-hardware and to what software really "is", besides the bytecode-runtime (brain). According to this view, one should recognize that many complex relationships have already been "modeled" and "coded" in each particular language, including different historical perspectives and commerce practices. This leads to a new approach to semiotics, to be pursued elsewhere in its generality, which unfolds naturally from the central concept of coherence -- coherence as a natural or logical connection. One of its applications is a redefinition of "identification" and "identity" in terms of coherence [Ger98b], with various levels of identification given as I-1, I-2, etc., and including trust at level I-2 -- as that which is measured by the coherence of understanding [Ger98c].

It must be pointed out that while many more explicit definitions of trust are possible, also as a function of pragmatics (in semiotics), the abstract definition is perhaps the most general and invariant formulation -- the seed-concept for all the other definitions of trust. As shown also in the Appendix, not only useful explicit definitions for process-trust but also for social-trust can be derived from it. Thus, whenever we refer to the "trust definition" we mean the abstract formulation in the first place and the explicit forms as secondary. It is also important to note that other abstract definitions can be derived from the given one, not just explicit definitions; but that is usually not so useful, because abstract definitions cannot be directly applied to a case without first defining the stance and the observer (i.e., without defining an appropriate explicit definition).

However, what is the use of so many different derived definitions? Here lies one of the most powerful aspects of the present treatment -- since all such definitions are equivalent, they can potentially be mixed with one another in adequate logical propositions, which may allow different trust stances and observers to be combined as necessary. This means that, for example, one may perfectly well consider in one statement a trust proposition that depends both on trust in a system (which is process-trust and acquires an objective quality) and on trust in the intentions of a person using such a system (which is social-trust and has an intersubjective quality). The different statements (i.e., social versus process trust) would simply use different trust-points to represent the different operators for matters of x.

Another question that the reader may have at this point concerns a perhaps expected polemic around the above definitions, especially the abstract definition -- "is the abstract definition of trust the right one?". Clearly, this is a right question. Paraphrasing Tarski [Tar44], I hope that nothing that I have written here will be interpreted as a claim that the abstract definition of trust is the "right" or the only possible one. However, what is "the right one"? Here, perhaps the only metric we may accept is that given by Leshniewski and quoted as this paper's motto: "A theory, ultimately, must be judged for its accord with reality". This is the reason why we have extensively looked into the question of what trust is and what it is not, when comparing the predictions made by the abstract definition with the technical and linguistic usage (see A.2) of the word "trust" in our reality -- for several stances and observer relationships. We have indeed verified that the given abstract definition produced results which were semantically equivalent to every meaning of the word trust that was investigated, with no exception. Since we covered the majority of meanings that are needed in communication systems, based on several common stances and observer relationships, the paper justifies the abstract definition by its accord with reality -- at least, for the tested part of reality. The reader is invited to test other parts of reality and to communicate the results of such tests to the author, for either positive or negative findings. Particularly interesting could be the application of the abstract trust definition to other areas besides technical communication processes, such as modeling trust for legal and social communication processes -- e.g., power relationships, managerial activities, auditing, interpersonal relationships, art evaluation, etc.

It is also useful to consider the question whether the author should have used a different word, for example "drust", instead of "trust" for the abstract definition. However, the objective -- from the outset -- is to define "trust", so that the abstract definition must be able to produce explicit definitions which are equivalent to the real-world (i.e., social, legal, etc.) uses of the word "trust", at least for the majority of useful cases. Thus, the question is not what the concept herein defined is, but whether it is equivalent to what one would expect from linguistics, social sciences, etc. -- which, indeed, is the case at hand, as already commented above.

The provided trust definition leads to several consequences, to be pursued elsewhere, but the ones we need to cite here are:

  1. "trust depends on the observer" -- or, "there is no absolute trust". What you may know can differ from what I may know.
  2. "trust only exists as self-trust". This means that only self-trust has zero information content, so trust on others always have information content (surprises or, unexpected behavior, either good or bad).
  3. "two different observers cannot equally trust any received information". Direct consequence of (1) and (2).
  4. "a self-declaration cannot convey trust to another entity when using one and the same communication channel". Direct consequence of the abstract definition.
Self-trust is what the self knows it knows. It includes everything that it knows about itself and that it knows about anything external to it (all B such that self knows B), but it does not include what the self does not know it knows. Self-trust (Merriam Webster) is equivalent to self-confidence, which means "confidence in oneself and one's powers and abilities" and dates back to 1637. In psychology, self-trust is linked to "recall memory" -- the memory you can access at any time without any prompting or clues. This is distinct from "recognition memory" -- which depends on clues or external stimuli to be accessed. Recognition memory is unsafe, as students often find out -- when they trust they know the subject but are unable to recall it without the proper stimulus when facing a blank sheet of paper... Clearly, you may have excellent powers and abilities that you spontaneously ignore -- but which may nonetheless be exploited against you, either by a semantic "denial-of-service" attack, by a semantic "man-in-the-middle" attack, etc. Not all attacks are syntactical, as we can recognize when we explore our understanding of how trust works.

The concept of self-trust depends also on communication channels, but from past to present. Thus, the entity can transmit information to itself at a later time, which is called memory. Self-trust depends on the contents of such memories, when the entity can rely upon them to some extent.

The same considerations above can also be used to understand actions that may increase or decrease network security, where "self" is the particular autonomous unit being considered (e.g., a program unit, a smart-card, a piece of hardware, etc.). In the particular case of Internet security, self-trust is concerned with spontaneous capabilities and performance, including the pragmatics (i.e., the area of semiotics that describes the environment and passive/active attackers/observers) but without any stimulus to self from the pragmatics.
 

If we accept the given trust definition then the above four consequences are as mathematically unavoidable as Shannon's Theorems and leave us in a severe predicament. If it is not self-trust then trust must be qualified by defining the extent "x" of the observer's reliance on the entity -- as given by an estimator with quasi-zero variance on matters of x -- which means that trust must be acquired somehow. However, how and to what measure can I acquire trust?  How can I communicate it? Since not all parts of a public and distributed network can be supervised by myself and some parts do not even belong to myself, while any part can be unwittingly shared with malicious attackers, how can unsupervised reliance be defined and evaluated? How can I rely upon an entity's declarations and acts when the entity is using an Internet link? How can two unknown parties reciprocally transfer a meaningful and reliable set of objects, such as their respective cryptographic public-keys?
 

4. The Mathematical Properties of Trust

"When you can measure what you are speaking about, and express it in numbers, you know something about it; but when you cannot measure it, when you cannot express it in numbers, your knowledge is of a meager and unsatisfactory kind", by  Lord Kelvin. To answer the above questions, we must now look at the mathematical properties of trust. This is also similar to Shannon's approach -- when the logarithmic function was found very useful to represent information content and allowed new insights. As in [5], trust has the following main mathematical properties: where the reader can see the first two properties exemplified online in [5].  The last property is straightforward: the fact that a lion trusts a lamb does not mean that the lamb trusts the lion.

What is then the solution?  How then and to what measure can I acquire and communicate trust?

First, trust cannot be thought of as a type of authorization loop, where trust flows from the source to the destination and back to the source, similar to a battery and electric current. [6]

Further, contrary to information, trust cannot come in by a type of add-on -- such as modulation on a carrier. Why?  Because when you modulate a carrier you are encoding information into that carrier and you suppose that the carrier is pre-existent -- so the carrier has a very low information content while the modulating signal has a very high information content. Ideally, 0% and 100%. On the other hand, according to our definition, trust must have zero information content (trust is what you know).

So, trust cannot be thought of as a modulating wave -- it is the carrier! This is the paradigm shift that the development of intrinsic certification  [Ger97] was based upon in the first place. First acquisition, then recognition.

Without the need to continue with a stepwise investigation as done in Section 2, we can now generalize. The bottom line is that trust is akin to a carrier of information -- which information can be anything we may need:  accountability, evidence, responsibility, validation, reliability, generalization, uncertainty, consistency, truthfulness, legal reliance, liabilities, warranties, ethics, monetary values, contract terms, deals, person's name, person's DNA, fingerprints, bank account number, public-keys, etc.

So, not only accountability but even truthfulness depends on trust. Trust is a basic property of communication channels, similar to information. I could say, in a very broad generalization, that "everything is information and rides on trust" -- where trust allows you to act, or not, based on that information. So, this is a second-order Information Theory -- in which we are no longer interested only in how much "surprise" data is being transferred over a channel, as measured by the uncertainty of the party as to what the message will be. Rather, I now focus on what is essential to that message but which cannot be transferred using that channel (as trust is defined here) -- which can be equally as quantitative as information, though both are subjective.

For further examples of using the abstract definition of trust, see A.4.10, A.4.11 and the mcg-talk list repository.

My following assumption, then, is to mathematically model any suitable explicit definition of trust (i.e., this is not a play on words: we have to model our real-world model of trust) as a multivector operator on information, which is parameterized by (t,d,s,...), where t=transitive, d=distributive, s=symmetric, ... plus other properties such as time (see [6]). Of course, the mathematical model may change if we change the explicit definition used to form the representation -- but all models are upward compatible with the single abstract definition of trust.

Any suitable trust model now allows us to answer the basic trust questions, as a function of cost and risk [7].

When (t=0, d=0, s=0, T=0, ...) we have "hard-trust" -- i.e., zero information content (no surprises) and no risk. But, also, as isolated as an island -- trust cannot be acquired or communicated.

When we allow the parameters (t,d,s,T, ...) to take non-zero values, then we have "soft-trust" -- i.e., non-zero information content (bad and good surprises) and ... risk. Here, trust can be acquired and communicated but always tainted with information. In other words, "hard-trust" is only applicable to self-trust -- because self-trust is untainted by information (by definition, since it is known to the observer). However, trust must be properly gauged [8] also as a function of risk/cost if it is to be properly used in the soft-trust regime.
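A minimal sketch of this parameterization (the [0,1] ranges and field names are assumptions of the sketch, not of the paper) could read:

from dataclasses import dataclass

@dataclass
class TrustOperator:
    # t = transitivity, d = distributivity, s = symmetry, T = time,
    # each taken here in [0, 1] for illustration.
    t: float = 0.0
    d: float = 0.0
    s: float = 0.0
    T: float = 0.0

    def is_hard_trust(self):
        # Hard-trust: all parameters zero -- no surprises and no risk,
        # but as isolated as an island.
        return self.t == self.d == self.s == self.T == 0.0

self_trust = TrustOperator()              # hard-trust
delegated = TrustOperator(t=0.3, T=1.0)   # soft-trust: some transitivity
print(self_trust.is_hard_trust(), delegated.is_hard_trust())  # True False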

As a final remark on the mathematical properties of trust, a cursory reading of this paper may give the impression that trust just depends on appropriate out-of-band information. For example, one may think that trust is warranted if "I, entity A, have independently verified that the certificate is in fact from CA B, by virtue of having exercised the appropriate out-of-band security procedure to confirm its authenticity". This is far from the truth, because of two main reasons already mentioned above: (i) the non-boolean properties of trust and (ii) the multivector aspects of trust as a mathematical operator. For some examples, see [5]. This highlights the importance of the study of the mathematical properties of trust, briefly sketched here and in the Appendix. Further references are contained in the mcg-talk list (under the author's number-ID 416720) and in the sci.crypt newsgroup, as well as in the lists e-carm, cert-talk, dig-sig, ssl-users, ssl-talk, spki and others (under the author's name).
 
 

5. Conclusions

From the discussion, trust is seen to emerge as the mathematics of subjective certainty and precision. Trust was defined with only one abstract definition -- "trust is that which is essential to a communication channel but which cannot be transferred from a source to a destination using that channel" -- cast in the general framework of Information Theory but without predicating any uncertainty model. Information itself was also likewise defined. The abstract definition of trust was shown to lead to any number of explicit representations of trust (of which more than thirty were cited) that can take into account the appropriate instance and observer roles in a measurement process. Thus, trust can be differently represented as needed for each instance and observer, but always with upward compatibility to one common topmost parent concept -- which may be thought of as an interoperation mechanism between different representations, built-in by inheritance.

When trust is represented as qualified reliance on received information, it allows the definition of mathematical operators which can represent the concept of soft-trust, when the truster permits (as in the real-world) some degree of transitivity, distributivity and so on. This turns out to be essential to Internet communication processes -- but it opens a series of security risks, as discussed in a broad context.
 

In practice, the theory is always more complicated.


Currently, the work proceeds on the development of a proper trust algebra (using Grassmann's Algebra) that can represent and allow soft-trust and its risks to be calculated with a type of propositional calculus. Trust algebra is non-boolean but begins with boolean propositions of the type "A trusts B on matters of x at epoch T" and unfolds into fully intersubjective calculations in n dimensions, which can be visualized by using the concept of multivector intersection in Grassmann Algebra. Trust has thus subjective, intersubjective and objective components -- as a multivector of arbitrary dimension. Trust can be shown to be a cardinal property of certification systems, as discussed in "Why is certification harder than it looks?".

The arguments presented in the paper already show several common mistakes that we must be aware of and avoid when dealing with the concept of trust in Internet certification, which are discussed in the Appendix -- especially A.1. Taking such a model of trust further, as will be presented in future papers and in the Meta-Certificate Standard, leads to what is called the "archetypical trust model", as presented in the MCG-FAQ. In the model, even though the trust operator is clearly non-Boolean (see the mathematical properties above), it can be used to construct Boolean trust propositions (A.3) that can represent not only binary but also ternary, quaternary and generic m-ary trust relationships. The concept of "critical radius of trust" is also derived from space and time considerations of differently interacting agents, where the critical radius is defined as the reach of soft-trust where risk and cost are equal.

As recognized in linguistics and semantics, words should be used within their generally accepted meanings as much as possible (in fact, some claim that this is the main difference between science and poetry...). The paper showed that the full content of the accepted social meanings of the word trust, albeit of difficult conceptualization [McK96] in its real-world and social uses, could nonetheless be well-modeled by an abstract definition of trust within the framework of Information Theory and communication processes -- thus rendering possible its scientific and technical use on a par with the social meanings. Semantically, the abstract definition of trust contains the seed-thought of the full concept of trust, which unfolds as explicit truth-conditions when applied to each practical stance, which, in turn, may provide different truth-values to each observer. The abstract and the explicit definitions of trust can thus provide a common ground when dealing with trust, in any context. Trust is also shown to be a new type of measurement: how to rely upon actions at a distance. Thus, trust affords an answer to the problem of measuring events that are important and significant but essentially unreachable -- as strongly exemplified in the Internet, but with applications in other areas of communication systems and science.

Further, since trust can be shown (see A.4.3) to be essential to allow meaning to be conveyed in communication, and not just references, this paper advances the thesis that trust is a basic property of Nature, such as time and information. Without trust, communication in Nature would have no meaning -- which is clearly not the case, and this negation supports the thesis. Further weight to the thesis comes from the very historical difficulty to define trust so far, as mentioned in the paper, which points out the basic nature of trust, as with any concept that cannot be well-defined because it is primary -- e.g., time. This shows the futility of any approach that may try to qualify trust by maiming it -- i.e., by denying some of trust's truth conditions. Of course, by artificially changing the contextual meanings of trust one cannot hope to change the need to understand it or the need to use the true richness of the concept it denotes.

The ancient Greeks, for example, defended for a long time the concept that all physical lengths were exactly measurable -- which would lead to the expectation that all numbers that represent reality must be rational. And yet, if we take a right triangle with sides equal to one, the hypotenuse is not exactly measurable -- it is equal to the square-root of two, which can easily be proved to be irrational. Further, such a triangle can easily be built and exists. Or, if we take pi, which is not only irrational but also transcendental, then we can construct any number of circles with perimeters that are not exactly measurable. Even if the Greeks had artificially changed the meaning of the word rational to mean real, still such lengths could not be exactly expressed by measurements -- no matter how precise. The same applies to trust -- because it is a basic natural concept that exists independently of the name we may assign to it and, thus, cannot be better understood if its properties are partially reduced. For example, using the name trust to denote authorization, belief or a lesser concept will not make trust's truth-values more useful or easier to use in security designs. On the contrary, the truth-values of trust will be even more difficult to grasp and use if trust's truth conditions are ignored in a design, policy, theory or measurement.
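For the record, the classical proof alluded to takes three lines: suppose sqrt(2) = p/q with p, q integers in lowest terms; then p^2 = 2q^2, so p is even, say p = 2k; then q^2 = 2k^2, so q is even as well -- contradicting "lowest terms". Hence sqrt(2) is irrational.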

Forthcoming papers will show that trust and information are necessary and sufficient properties of a generic communication system that can use pragmatics (environment) to transfer not only syntactics (reference) but also semantics (sense) from a source to a destination; some results are already available in the Appendix (see A.4.3) and in the mcg-talk or trust-ref exchanges.
 

Acknowledgments

The author acknowledges helpful hints and discussions with participants of the Meta-Certificate Group discussions and the mcg-talk and trust-ref lists, especially Einar Stefferud, Tony Bartoletti, Peter, tks, Pedro Rezende, Ricardo Dahab, Nicholas Bohm, Mike Rosing, Waldyr Rodrigues, Pedro John Meinrath, Frank O'Dwyer, and also participants of several discussion lists such as e-carm, ssl-talk, ssl-users, dig-sig, dig-cert, itanet, cert-talk, the SPKI list, IETF S/MIME, Usenet newsgroups such as talk.politics.crypto, comp.security.misc, comp.security.pgp.discuss, sci.crypt, and the Internet community in general. However, this list does not imply their endorsement of or responsibility for this work, which is the sole responsibility of the author, reflecting his viewpoints -- not the viewpoints of any corporation, company, agency or Government.
 


Appendix

(This section contains material from recent messages)

1. Model of Trust versus Trust Models


The title is "Toward a Real-World Model of Trust" -- which has two sides:

1. The model of trust or what should we understand by the word "trust" in communication processes,
2. The trust models we can use, which will allow us to represent our understanding of the word "trust" as defined.
These are two entirely different viewpoints. Let us initially investigate possible models of trust that could be used, and compare them with the model of trust defined in this work by the explicit and abstract definitions already presented.

First, some think that one cannot compare the "digital"  and "emotional" concepts of trust -- the "digital" concept being the technical use of the word trust as in communication processes, root-keys, digital signatures, certificates, etc. and the "emotional" concept being the social understanding of the word trust in commercial, legal and personal dealings.

Clearly, and to fix notation, the term digital trust is inappropriate when applied to a communication process -- which can also be analogue. Similarly, technical trust is also misleading, e.g. a technical argument in law is quite different from a technical argument in engineering. The best term here might be "process trust", which allows not only the protocol but also the software, hardware, etc. to be included in the trust concept -- e.g., a modem can also be trusted in the communication technical sense. Similarly, "social trust" might be a better term to represent the emotional, real-world, 3D or personal aspects of trust. So, I will preferentially use both terms below: process trust and social trust.

The concept of process trust has several definitions; I have located the following, and there are possibly more.

1. NSA: "a trusted system or component is one with the power to break one's security policy" [10].

Comment: While some may consider that this definition chimes in well with the relationship between a Trusted Third-Party and a TTP-subscriber, it does have the merit that it considers trust to be subjective. However, it includes any number of subjective, intersubjective and objective dependencies into the concept of trust, which may not be trust -- such as auditing. It also confuses the whole security policy of the truster with that part which the trusted system can influence.

2. X.509: "Generally, an entity can be said to "trust" a second entity when it (the first entity) makes the assumption that the second entity will behave exactly as the first entity expects. This trust may apply only for some specific function. The key role of trust in the authentication framework is to describe the relationship between an authenticating entity and a certification authority; an authenticating entity shall be certain that it can trust the certification authority to create only valid and reliable certificates." [11]

3. ABA Digital Signature Guidelines (ABADSG) I: trust is not defined per se, but indirectly, by defining "trustworthy systems" (or, systems that deserve trust) as "Computer hardware, software, and procedures that: (1) are reasonably secure from intrusion and misuse; (2) provide a reasonably reliable level of availability, reliability and correct operation; (3) are reasonably suited to performing their intended functions; and (4) adhere to generally accepted security principles." [12]

Comment: This definition is unfortunate in that it confuses trust with fault-tolerance and other unrelated matters, especially so because (for example) fault-tolerance is objective and can be quantitatively measured by friends and foes alike -- whereas trust is the opposite.

4. ABADSG II: the ABADSG uses the word trust also in the legal sense of something held in trust -- i.e., a property interest held by one person for the benefit of another -- which has nothing to do with the issues here, but may confuse the reader in a phrase such as "private key trust service", which is later on defined to be a legal trust concept in the ABADSG document. [12]

Comment: Perhaps a better wording for such use of the word trust in the ABADSG would result from rephrasing everything in order to highlight the expression "in trust" for this legal concept, such as using "private key service in trust" instead of "private key trust service".

5. PGP: even though PGP uses the word trust extensively, such as in web-of-trust, the concept of trust is not explicitly defined by PGP and one has the impression that PGP uses the social concept of trust.

Comment: In fact, this would be appropriate, because PGP was intended to be e-mail security software for a close group of friends, and the friends themselves would provide for the trust management issues -- in their own socially acceptable way. However, the trust concepts developed in the paper point out some basic inconsistencies in PGP [9], e.g. when PGP enforces a model of "hard-trust" with "trust is intransitive" to set up entries in the web-of-trust but uses "soft-trust" to upkeep entries, without discussing its validity/gauge nor allowing for time factors such as lack of synchronism.

6. Real-world or Social: The concept of social trust can be obtained from dictionaries, such as Merriam Webster: "1 a : assured reliance on the character, ability, strength, or truth of someone or something b : one in which confidence is placed. 2 a : dependence on something future or contingent : HOPE b : reliance on future payment for property (as merchandise) delivered : CREDIT 3 a : a property interest held by one person for the benefit of another b : a combination of firms or corporations formed by a legal agreement; especially : one that reduces or threatens to reduce competition 4 archaic : TRUSTWORTHINESS 5 a (1) : a charge or duty imposed in faith or confidence or as a condition of some relationship (2) : something committed or entrusted to one to be used or cared for in the interest of another b : responsible charge or office c : CARE, CUSTODY <the child committed to her trust>"

Having presented the various definitions found for "process trust" and "social trust", we can easily observe that the definitions within each group are not even concordant among themselves -- much less with those of the other group.

However, it is perhaps clear that they should all be equivalent, even though different in their own domains. In other words, it should be possible to find definitions of trust in each domain that would carry over to one another as a matter of proper focus.

Thus, in this view, the two "types" of trust are not apples and speedboats. Communication protocols can and should indeed be based on social trust concepts -- i.e., real-world concepts -- and not on some ad hoc and academically unrealistic models, for example a security design that considers trust just a synonym for authorization. Further, the author considers it already a bad sign if one is using a model of trust that divorces the digital-world or communication concept of "process trust" from the emotional, personal or 3D-world concept of "social trust". Instead of a "feature" of such a model, it is a bug.

In fact, the social and communication aspects of trust must be well integrated if a socially useful communication protocol is to be defined. This will also be very important for the coming intermingling of cyberspace with our 3D-world, as discussed in A.4.7. As commented before, one must recognize (see A.4.10) that unless one arrives at a real-world or social model of trust to be used in the electronic world, no logically useful communication trust model can be set forth.

This idea is not entirely new. Besides Shannon, who used it successfully 50 years ago when modeling information,  Phill Hallam-Baker declared the following in Nov/94:

"We have two options either we can attempt to define wonderful academic forms of trust model de novo. Or we can observe the real world and attempt to model the trust mechanisms that allow it to function. Since we do not see a hierarchical trust model it is not the solution. We do not see anarchy either, or at least in places where it has taken hold it is disaster. What we see is binary interpersonal relationships heavily qualified in many ways. The approach that has always seemed most promising to me is to replicate those relationships allowing them full color with respect to the areas for which trust is granted (financial, notary, reliability etc), the extent of such trust and the confidence with which that trust is allowed." (SIC) [13] Indeed, the reader can verify that the new abstract and explicit definitions of trust in Information Theory terms can represent both the social and the communication process aspects of trust  in a single model -- that essentially represents an abstract model for trust's core properties in the real-world.

This can allow us to cross over different trust domains, so that a unique model of trust can represent "social trust" (i.e., 3D world, emotional, personal) when applied to a communication process (i.e., digital world) as well as represent "process trust"  (i.e., digital or technical trust) when applied to a social situation.

In law, the designation "reasonable man" is a legal standard and applies to the understanding that a judge must develop for a jury in its decisions -- i.e., a "trust model", as the concept is defined in this work. For example, if a judge trusts that a "reasonable man" would have no doubt on the issues of fact involved, he may send the case for summary judgment.

A "reasonable man" is also a metric for the "reasonable reliance" trust model that a judge or jury can apply by themselves to a case.

I note that "reasonable reliance" is a legal objective trust model which is being increasingly abandoned in favor of "justified
reliance" -- a legal subjective trust model. This can be explained by the technical arguments outlined in this paper, to the extent that a person always bases its acts on subjective trust -- besides the legal arguments.

Of course, technical arguments have a broad worldwide application, whereas legal arguments depend on country, state and even time. However, legal arguments are interesting also -- as arguments cannot be played in isolation. Some of the legal arguments and case law history can be found in a decision by the SUPREME COURT OF THE UNITED STATES, where Mr. Justice Souter delivered the opinion of the Court in case No. 94-967 of WILLIAM FIELD and NORINNE FIELD, PETITIONERS v. PHILIP W. MANS, on November 28, 1995 [SCUS].

2. Linguistics

It is important to recognize the linguistic value of the proposed trust definition for communication systems -- or, "is it really what we would use the word trust for, in some circumstances, or should we use something else as a name for the definition?"

Clearly, we would not (as cited above) use the words: assumption, knowledge, belief or information.

As to the word trust itself, it was chosen exactly on semantic grounds for the English language. Linguistically, "Trust" is akin to "true" and "faithful", with a usual first dictionary meaning of  "1 a : assured reliance on the character, ability, strength, or truth of someone or something; b : one in which confidence is placed."

So, in common English usage trust is what you place your confidence in or expect to be truthful -- that which you can rely upon. Of course, this is a subjective metric, since what a reasonable man may need to consider in order to rely upon something is quite different from what a naive truster may require. Perhaps a naive truster will only require an indication as a reason for reliance, and perhaps a reasonable man might do the same if what is at stake has a low value.

Thus, the explicit and abstract definitions of "trust" given here -- albeit technically directed to the terminology of Information Theory -- have a strong resemblance to everyday use, also when trust simply means reliance on an indication.

It is also important to realize the subjectiveness of the definition. Who defines what is "essential" in the formal abstract definition, or "high-reliance" in the explicit definitions? Who defines what is true? The truster. The truster defines the metric used to justify what it can rely upon and what it considers to be true -- which can be subjective, intersubjective, objective or a mixture of such types, as we can see in the real-world.
 

3. Trust Propositions, Matters of x  and Metric-Functions

The explicit definition of trust leads to the concept of a "trust proposition" as a Boolean representation of a trust act. A trust act is seen as an encounter (e.g., a "collision") between A (the truster) and B (the trustee) at time T, during which encounter A gathers information on B, possibly unbeknownst to B.

Comment: Clearly, the less B knows about A's measuring actions, the better for A's reliance on the estimator as a valid representation of B's acts which are unsupervised by A. This is similar to Heisenberg's Uncertainty principle and will be pursued elsewhere. Further, while trust is not auditing, B can clearly be supervised by other entities instead of A. This can lead to complex ternary, quaternary and general m-ary trust relationships -- which can either increase or decrease security, as a function of the different estimators for each entity and their logical relationships.

A binary trust proposition is of the form "A trusts B on matters of x at time T", which evaluates either to true or false. Binary trust propositions can be combined into m-ary expressions using the framework of Grassmann's Algebra, as pursued elsewhere.
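A minimal sketch in code (the recorded facts are hypothetical) shows the Boolean character of such propositions and how they compose -- note that while the composition below is a proposition about A and C, it does not by itself make trust transitive:

# A binary trust proposition "A trusts B on matters of x at epoch T"
# evaluates to True or False against a record of trust facts.
facts = {("A", "B", "signing", 10),
         ("B", "C", "signing", 10)}

def P(truster, trustee, x, T):
    return (truster, trustee, x, T) in facts

# A ternary relationship built from binary propositions with the usual
# boolean connectives (the m-ary Grassmann treatment is beyond this sketch):
chain = P("A", "B", "signing", 10) and P("B", "C", "signing", 10)
print(chain)                       # True
print(P("A", "C", "signing", 10))  # False: the chain does not create trust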

However, the question is: what is "x"?

First, "x" is a scalar, a trust-point. Trust-points represent behaviors which are either known or can be predicted with quasi-zero variance. This is not to be interpreted as saying that there is no room for "maybes" or even unpredictability regarding the outcome of x, of course. The point here is that the expression "B trusts C on matters of x" means "B knows that C is predictable with quasi-zero variance on matters of x" -- and therefore B expects no surprises on such "matters of x".

Trust-points can also be absolute or objective, such as "pi is 3.141592...", "SHA-1 is defined in file F...", etc.

In each of these examples, I observe that we have actually defined an equivalence class for matters that represent the "same" behavior "x" -- the same trust-point in its interactions.

But, what is "same"?  Indeed, if we want to model trust, risk, certification, privacy or even the security of cryptographic protocols, one of the first questions we must ask is:

In other words, we need a notion of distance between two quantities -- for example between "trusted data" and "input data". How "close" is the input data to data that can be trusted? When are they the "same"? This is, of course, a basic question which we would like to answer as quantitatively as possible.

Further, we need such notion of "distance" to satisfy some requirements:

(i) the distance must be the same if I just exchange the two quantities, so that the distance "looking into" one must be the same as "looking into" the other.

(ii) the distance must be invariant under some class of transformations, so that I can change reference frames under that class of transformations and still meaningfully refer to that same "distance".

(iii) the distance must correspond to a meaningful reference that I can express, order and compare  -- e.g., a number (even in cardinal form -- i.e. as a measure over set equivalence).

(iv) if the distance is zero I would like the quantities to be considered "equal".

(v) Conversely, if two quantities are "equal" I would like their "distance" to be zero.

(vi) If I have three quantities which are distinguishable from each other, then I would like to define the notion that a direct path between two quantities is shorter than, or equal to, an indirect path that also includes the third quantity -- like a triangle.

Now, I note that the notion of "equality" expressed above is not simply that given by the "=" sign, but rather contains the idea of "equivalence" -- for example, 2 and 4 are equivalently even numbers, though one cannot say that "2=4". This is also often (and I will follow such usage) expressed as indistinguishability -- 2 and 4 are both even numbers and are indistinguishable from one another in regard to being an even number, i.e., they cannot be distinguished from one another in that aspect.

In looking thus to a general framework to express how "close" or how "far" quantities are -- in other words, how distinguishable they are -- we realize that we have just stumbled into metric functions!

Indeed, a metric function d(x,y) is a positive-definite function (i.e., d(x,y) >= 0) that needs to satisfy four properties, which contain our "wish list" above:

  (1) d(x,y) = d(y,x)  (symmetry);
  (2) d(x,z) <= d(x,y) + d(y,z)  (triangle inequality);
  (3) if x = y, then d(x,y) = 0;
  (4) if d(x,y) = 0, then x = y.

Metric functions are used to define a concept of "distance" between points x and y, such distance being d(x,y). In particular, property (3) says that if the points are equal (i.e., indistinguishable from each other) then their distance is zero. Conversely, property (4) says that if the distance is zero then the points are equal. The "triangle inequality" given by property (2) is the familiar statement that the sum of the lengths of two sides of a triangle is larger than the length of the third side, being equal if the sides are collinear. Last, property (1) says that the same distance is obtained whether you are looking from x or from y.

How is that applied to "matters of x" and why is it useful?

Because "matters of x" defines what is trusted by the entity, one can thing of "matters of x" as a function of two arguments: the first one is x (trusted) and the second is any input y (to be tested). The output of the function indicates whether y is trusted or not:

matters-of(x,y) = 0 if trusted, a "number" > 0 otherwise.

It is easy to show that matters-of(x,y) can be defined so as to satisfy all 4 properties of a metric function.

Now, if matters-of(x,y) is zero, we can say that x is indistinguishable from y -- i.e., they are "equal" in regard to trust. So, matters of x allows us to define how "close" or how "far" some input is from being trusted -- and can even provide us *paths* to move "closer to trust".
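A minimal sketch (the parity classifier is a hypothetical stand-in for a real "matters of x") shows how such a matters-of(x,y) can be built from an equivalence class and checked against the metric properties; strictly, it is a metric on the classes themselves -- on raw elements it is a pseudo-metric, with "equal" meaning indistinguishable, exactly as defined above:

# matters-of(x,y) over the equivalence class "being an even number".
def equivalence_class(v):
    return v % 2

def matters_of(x, y):
    return 0 if equivalence_class(x) == equivalence_class(y) else 1

print(matters_of(2, 4))   # 0: indistinguishable in regard to being even
print(matters_of(2, 3))   # 1: distinguishable

# Spot-checks of the metric properties on a small sample:
pts = range(5)
assert all(matters_of(a, b) == matters_of(b, a)          # property (1)
           for a in pts for b in pts)
assert all(matters_of(a, c) <= matters_of(a, b) + matters_of(b, c)
           for a in pts for b in pts for c in pts)       # property (2)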

This is already very useful because we can represent all the various degrees of "quasi-trust" that we also follow in our reasoning -- but now in software.

However, there is more, in two basic results from mathematics.

The first one is that metric functions are rather easy to find, and we are free to define whatever suits us and the modeling we have. For example, we can use the notion of probability of error, Shannon mutual information, Kolmogorov complexity, Bhattacharyya coefficient,  least square error, etc.

The second one is that if we formulate these concepts into metric functions that obey the 4 properties, then it is irrelevant which one we use. It is largely a matter of convenience.

Of course, there is much more to be said about the (in)distinguishability problem and the use of metric functions, especially in potential uses of this theory of trust. But the above comments may already show the general principles involved, their usefulness and relative ease-of-use -- besides the extreme flexibility they provide.

The structure of x, while a scalar, is that of an operator that represents a process (see the specific definition of process in [Ger97]). This operator can be shown to obey the properties of a division ring in mathematics, also called a skew-field. This allows the trust-points "x" to be used as "elementary units" to construct multivectors in Grassmann's Algebra, allowing very complex m-ary trust relationships to be represented and affording an intuitive geometric vision. See the mcg-talk postings on this subject.

The concept of  "proper trust" can then be mathematically defined as satisfactorily as the concept of  "proper keys", by allowing trust and keys to be fully described by convenient metric functions in a coordinate-invariant formulation of certificates within a seven-dimensional metric-space [14]. As a general result, certification in communication processes is shown to be mathematically equivalent [15] to the geometric problem of distance measurement in a metric-space -- as can be intuitively motivated by observing how key-distribution [16] works.

For two parties in a dialogue, all possible certification procedures are then classified in only two models: extrinsic and intrinsic, with a combined mode [Ger97]. All known security designs correspond to the extrinsic model -- which depends on references that are extrinsic to the current dialogue, with certification relative to a third-party or past events. The intrinsic model is a new security design -- which depends on references that are intrinsic to the current dialogue, with certification obtained by measurements that rely upon intrinsic proofs.

4. Internet Names, TSK/P, Uniqueness, Reference and Sense, Metrics, Biometrics, Bio-implants, Examples, etc.

The above discussion on trust can be used to investigate several timely questions. Questions 1 and 2 are common Internet discussion items, nowadays answered in the affirmative. Questions 3, 4, 5 and 6 were supplied by Nicholas Bohm. Questions 7, 8 and 9 were supplied by a MCG participant. Questions 10 and 11 were asked by Phill Hallam-Baker in the SPKI list.

The first question -- does anyone need a unique name over the Internet in order to be uniquely identified? No. And, surprisingly, the solution may solve another historical flaw in public-carrier communications.

No one needs a unique name over the Internet, nor a unique e-mail address, nor even an unambiguous name, in order to be uniquely identified -- neither globally nor locally. Everyone can use their own common names if they so wish, or any pseudonym they desire. This note shows that this is not an issue for identification or security -- while it is a recurring subject, an Internet myth, a mistaken security dogma.

Before we begin, it is important to comment that the method to be proposed allows name and address collisions to decrease, not increase -- since it is in the best interest of every user to have fewer collisions, and users are free to implement any name change that they may desire in order to do so. This is similar to a social effect recognized in Economics, but where I take the stance of recognizing the possibility of a naturally occurring and autonomous virtuous process that can avoid what is called the "tragedy of the commons" -- arising when a public resource is degraded by over-use from a group of "commons" -- because the onset of degradation can itself regulate the over-use, by calling attention to the fact.

The solution is semantic addressing. It depends on two well-established developments, logical semantics and public-key cryptography, plus the current work by the author on qualified reliance (trust) in Information Theory. Using the terminology of semiotics (see item A.4.3), it is hereafter called TSK/P (i.e., Trust, Semantics, Keys, over Pragmatics).

Logical semantics, albeit not very well-known, was pioneered by Frege (see item A.4.3) and recognizes that a common name has two quite independent components: reference (i.e., the symbol itself, the byte string) and sense (i.e., the symbol's meaning), where the name's reference is its syntactic value and the name's sense is its semantic value. In other words, a name is viewed as a logical proposition which has two independent attributes, the name's sense representing the name's truth conditions and the name's reference representing the name's truth values. Thus, the semantic theory advanced by Frege shows that an unlimited number of entities can share the same reference (i.e., the same syntactic expression, such as "John Smith") and yet each one can be uniquely identified by their sense (i.e., each referent can be uniquely reached if and only if each referent has a unique sense).

In other words, the apparently "intuitive" referential theory of meaning is wrong (see item A.4.3) and meaning can never be derived from references, no matter how many -- it is impossible to derive meaning from a name. So, any person can choose at will any symbol to be represented by -- and, per se, none will be better or worse for identifying the person than any other... in fact, they will all be equally meaningless.

To exemplify the point, suppose I would ask you:

If all the people named "John Smith" could choose whatever symbol  they would want (ASCII, own photo, dog's photo, etc.) to be one of  their "names" in a certificate, what do you think they would choose:

(a) John Smith
(b) something useful and unique as decided by them
(c) John Smith plus something useful and unique as decided by them
(d) something utterly unrelated to anything that John Smith may be, know, possess or live nearby

What would be your answer?

My answer, in the case of the proposed method, is that it could be whatever John desires: (a), (b), (c), (d) or even all of the above at the same time. And security would not suffer, neither regarding John's interests nor regarding a third party's interests.

This motivates two very important points, that should be allowed in the system:
  1. Referent-Centered: Clearly, the referent himself is the closest person to himself and the best one to know his own sense and references -- which means that each person is best able to define his own references so that they maximally aid the connection between sense and reference and do not hamper it. For example, choosing one's own common name is helpful because it allows that name to be naturally linked to the legal capacities associated with it. And, conversely, the referent himself is also best able to define a 100% uncorrelated pseudonym, if he so desires to preserve his privacy by anonymity.
  2. Self-Assigned Names: Pseudonyms can be useful to allay privacy concerns in some cases. Artists and authors are known to use many different names. In some countries such as the UK, one can change names at will and none is less legal than another [7]. This does not speak against self-assigned names (such as we are considering here) but supports them. After all, it only depends on you to change your self-assigned name -- or nickname (not some key that you may have to keep because gazillions of people have it).
Thus, it is a mathematical fact that entities can share any number of like references and yet each one can be uniquely identified by their sense.  The question is, how to convey the different senses?

To show how that  is possible, one first needs two Lemmas:

- Lemma 1: item A.4.3 proves that certificates can fully carry references, but not sense -- not even partially and however minute. While this provides an irrefutable  mathematical reason for the total uselessness of certificates to convey sense, it also shows that certificates can wholly contain the name's reference -- securely and as detailed as needed.

- Lemma 2: item A.4.3 proves further that the link between reference and sense is provided by "proper trust", an essential mathematical property in communication systems (as defined by the author in Information Theory terms).

Thus, as mathematically proved by the two Lemmas above (even though already intuitively felt by many), a certificate is only meaningful (i.e., has meaning or sense) when there is some degree of trust associated with its signature and, further, each datum in the certificate is meaningful inasmuch as it is atomically trusted to some extent. This points out the key role played by trust in certification, in spite of the rhetoric usually being centered on the syntactic aspects of its encoding, cryptography and name schemes.

So, an entity's name can be ambiguous while the sense is not. References can be wholly and securely transported by cryptographic certificates. However, any reference, including what may be referenced in the certificate's legal "four corners", cannot be linked to sense unless one uses "proper trust". However, how can that be deemed useful, when contacting different referents that have the same reference?

This question leads to the essential role played by crypto in TSK/P to provide for reliable communications, which is not only a basis for certification but is also needed for encryption/decryption.

The final step is simple. Clearly, one hundred people could share exactly the same name and e-mail address and yet each could receive and send unique and private messages by using different crypto keys.

Regarding the issue of key uniqueness, it is well-known that a public-key of sufficient length is usually considered to be statistically unique with a very comfortable margin. For example, the number of prime numbers of length 512 bits or less is about 10^150, i.e., a 1 followed by 150 zeros. So, public-keys of 1024 bits, which depend on the product of two random prime numbers of 512 bits each, can generally be considered to be unique, even if asynchronously issued by a large number of independent entities. Now, even though common names are just references, they are however good hooks for those keys. And if you go to the wrong hook by mistake or because of name overloading... no problem, the key will differ.
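A short sketch can make the point concrete (it assumes the third-party Python "cryptography" package; the directory, hints and message are hypothetical). Two entities share the reference "John Smith", yet a message encrypted to one key is readable by that referent alone:

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

directory = {}   # reference -> list of (sense hint, private key)
for hint in ("the butcher", "the baker"):
    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    directory.setdefault("John Smith", []).append((hint, key))

# Address by key, not by name: the shared reference plays no role in
# deciding who can read the message.
hint, key = directory["John Smith"][0]
ciphertext = key.public_key().encrypt(b"for the butcher only", oaep)
print(hint, key.decrypt(ciphertext, oaep))

other_key = directory["John Smith"][1][1]
try:
    other_key.decrypt(ciphertext, oaep)
except ValueError:
    print("the other John Smith cannot read it")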

So, the bottom line is: with TSK/P, each person is free to improve upon his own visibility or... switch on an invisibility cloak. The TSK/P method allows any user to control the syntax of his own names. This is valid for any "symbol" or "name", such as common names, e-mail addresses, DNS addresses, keys, key-hashes, etc. -- without any adverse effect on security in regard to a third-party, but with several beneficial effects regarding one's own security and privacy.

Thus, contrary to widespread belief, there is no reason to demand unique common names or addresses in order to afford identification or Internet security, because names do not identify. The world can continue to use its historical practices. Clearly, if something or someone has a globally unique name then that is advantageous, just like a globally trademarked name is useful -- by providing zero collisions. But, as above, any number of name collisions can be handled by proper semantics, proper trust and proper cryptography.

Attacks:
  Regarding the issue of unique keys, what could happen in the case of errors, collusion, virus attack, simple theft, etc. in which one's private-key is compromised without anyone noticing it?  Since a key must always be treated as a name (i.e., a symbol in semiotics) then the same reasoning used above for common names applies to keys -- in TSK/P one must always suppose to be properly dealing with an unknown number of n-plicated keys, common names, key-hashes, etc. In other words, TSK/P's security must neither depend on keys being  unique nor on any other reference being  unique -- by hypothesis.

This is a very important point. The security of TSK/P is semantic -- which presupposes that its three parts must work in cooperation: "proper semantics", "proper trust" and "proper keys" -- and not just one part (e.g., keys or unique common names) or even two parts (e.g., trusted keys, trusted common names). A TSK/P user should be able to detect a key-collision (e.g., duplicated by mistake or crime) by using that key together with "proper semantics" and "proper trust" -- as long as properly allowed by the protocol (see item A.4.6), of course. A more subtle problem is that of avoiding eavesdropping, e.g. which may occur after an unnoticed security breach that compromises one's private-key. This question should also be solved by a combination of the three components of TSK/P -- for example, by periodically renewing proper trust and keys as a function of cost and risk. Thus, the TSK/P method does not treat keys as the one and only security barrier, nor assume keys to be unique and valid a priori. In fact, since the method is based on an interplay between <semantics, trust, keys, pragmatics>, key uniqueness must also be subject to the method's proofs and management.

Clearly, the TSK/P method also presents a side benefit of enforcing by protocol at least some minimum form of point-to-point cryptographic certification and encryption in day-to-day communications -- which would tend to make it essential and thus to be accepted by law and granted worldwide as everyone's basic right to be identifiable, since there is no other technical solution (the paper proves in items A.4.3 and A.4.6 that biometrics and even bio-implants cannot provide a solution either). The effect is that privacy and security can come as a bonus from the technology, allowing communication engineering to correct telephony's mistake of providing easy access to security and privacy breaches. This solves the historical flaw in public-carrier communications: they are also content-public, with eavesdropping built-in.

There are other benefits to this approach, not the least being the "household effect" -- where crypto can become a household word and thus deserving to be widely accepted without the psychological blocks that derive from its historical use by criminals, spies, and other despicable abuses.

As an example of technology's reach by the household effect, not long ago possession of a simple radio receiver had to be registered with the proper authorities in some countries, and possession of even weak radio transmitters demanded a license -- possession of a transmitter was viewed with suspicion, criminalized. But with transistors it became evident that five dollars in parts could allow anyone to make either a receiver or a transmitter, which led the way to their present better, non-criminal status. The same can happen with crypto, as it can cost less than $5.00 and can be as essential to day-to-day life.

It is up to the technical community to show that to the general public, communication companies, e-businesses and governments. Crypto is in everyone's best interest and, when linked with "proper trust", can completely solve the current name and address ambiguity that plagues the Internet and e-business, while providing both an irrefutable reason and a good argument to restore privacy to one's private communications.

To those that may argue that "proper trust" is not so easy to grasp and is a weak point, it is easy to point out that this is not a feature of the method, but a feature of sense. Sense cannot be transported in certificates, even if the certificate includes a thousand references and even if you have a thousand certificates, all from different issuers. The paper provides a full, mathematically rigorous discussion of why the referential theory of meaning fails, as initially proved by Frege, and why certificates can only transport references, never sense. Thus, certificates have no meaning per se -- even with so-called unique names, whether those names are local or global. And, clearly, when we compare the names we have in the 3D-world (i.e., common names) with the ones we have in the cyber-world (i.e., keys, e-mail addresses, etc.), we notice that the link between sense and reference is missing in both worlds -- not just in the cyber-world, as often expressed.

To finalize:

Of course, the same thoughts can clearly be applied to any other situation that needs identification -- routing, e-commerce, credit-card protocols, rights management, etc. Thus, they can be applied to DNS addresses, for example, allowing WWW sites to be reached by sense and not by reference. This will be discussed elsewhere.
  No.

The initial question is not whether common names or keys are temporary, nor whether they are equivalent because both are temporary -- but what their time scales are. The Earth is temporary ... but that does not bother us at all, because we live on a different time scale. Keys live on the time scale of weeks, even days or hours or minutes -- while common names last more than a lifetime (as testaments show), can be back-traced, remain legally valid even if legally changed, and can be useful for centuries. Thus, key-hashes (or keys) and common names are not similar regarding their lifetimes.

But, there are further reasons to consider common names and key-hashes (or keys) as different concepts altogether and, thus, not interchangeable.

Common names are traceable for generations (even first names and whole names, as families usually repeat first names, adding I, II, III, etc.), so names tend to become somewhat better if repeated, whereas hashes and keys are very ephemeral and we certainly do not expect them to be better if repeated. Thus, a repeated common name gives confidence (i.e., conveys trust) over time, but a key or key-hash repeated after some time immediately flags rejection.

Further, reliance on a cert should increase if the cert contains a common name that is old (i.e., known beforehand) to you and that you can independently contact, thereby confirming the cert. However, reliance on the cert's validity must decrease with time, as already discussed. Besides, common names carry an inherent legal value which is essential in some cases -- such as credit-card transactions, wills, etc. -- whereas keys or key-hashes do not represent any inherent legal capacity of their holder.
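As a minimal sketch (the decay laws and the constants are illustrative assumptions, not results of the paper), the opposite lifetime behaviors could be modeled as:

    import math

    def name_reliance(years_known):
        # A common name known beforehand gains confidence with age:
        # reliance grows with repetition, saturating toward 1.
        return 1.0 - math.exp(-years_known / 10.0)

    def key_reliance(days_since_issue, lifetime_days=90.0):
        # A key or key-hash is ephemeral: reliance on the cert's validity
        # decays with time, and a key repeated for too long flags rejection.
        if days_since_issue > 2 * lifetime_days:
            return 0.0                       # repeated after its time: reject
        return math.exp(-days_since_issue / lifetime_days)

    print(name_reliance(30.0))   # ~0.95: an old name conveys trust
    print(key_reliance(365.0))   # 0.0: a year-old key flags rejection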

Another point, using the influential work of Leshniewski (ibid.), is that common names can be seen as members of a collective class, while keys and key-hashes can be seen as members of a distributive class.

This distinction sets {common names} and {keys, key-hashes} fully apart as to their logical properties and the logical rules they obey. It is a basic result, and the lack of its observance leads to strong paradoxes, such as Russell's Paradox. Thus, it is not logically allowed to suppose that common names, keys and key-hashes have equivalent properties. One cannot treat them at par with one another in logical expressions, nor substitute one (e.g., common names) for another (e.g., keys or key-hashes) interchangeably.

Thus, keys or key-hashes cannot be considered at par with common names or in lieu of them -- they are objects, but of a different sort. They have different communication purposes, different lifetimes, different trust conditions and they belong to different logical classes.
 

(For an application of these concepts to Internet semantic addressing, called the TSK/P system, see item A.4.1. However, the needed theoretical background is provided here.) Perhaps one's tentative conclusion is that when one exchanges communications with an entity that uses a common name, one generally relies on being able to find behind that name either a particular mind or particular assets. This thought implies a referential model of meaning, similar to Plato's view of referential forms.

To investigate it, suppose we express the general concept of a name as a sign or a symbol -- e.g., my name is a symbol for myself. Then, for example, if you see footsteps on the sand (i.e., a symbol, a name), you generally rely on the existence of someone that walked by (which is the meaning, or cause, of the footsteps); or, if you see smoke (i.e., a symbol, a name), you rely on the existence of fire; and so on. Or, as in the above question, you expect to find a particular mind or particular assets that have a causal relationship to the name and which provide meaning to your communication.

However, this model breaks down, as I exemplify later on and as Frege [17] showed around 1892 in Germany. He began his reasoning by asking the simple question: "why is it that a=b is informative, whereas a=a is a truth of logic and can be known a priori?"

Frege's solution was the distinction between sense (Sinn) and reference (Bedeutung). The names "a" and "b" above have the same reference but differ in sense. Paraphrasing one of Frege's examples: if I tell you "I will photograph the Morning Star" or if I tell you "I will photograph the Evening Star", then, clearly, the two phrases have the same reference (i.e., the planet Venus), but one describes it as the last celestial body to disappear at dawn and the other as the first one to appear at dusk -- thus, they have different senses, or meanings.

In general, these concepts are defined in an interdisciplinary area pioneered by Frege, which exists between philosophy, logic, mathematics and linguistics -- usually called either semiotics or semantics. The main definitions we need here are:

  1. reference (Bedeutung): that which a name or phrase denotes -- for a phrase, its truth value;
  2. sense (Sinn): the conditions under which a phrase is true -- its truth conditions;
  3. meaning: used here as a synonym for sense.

These definitions lead to several properties, some of them proved here. Now, to use an Internet example, it is possible to have phrases that contain a precise reference but which do not have meaning, such as "John's public-key is A56B..". This phrase has a definite truth value (i.e., reference) and expresses a reference for John's public-key. However, from that phrase alone we do not know the conditions under which it is true (i.e., sense), because we do not know to which John it refers, or even whether there is such a person. Thus, the phrase has no meaning by itself, regarding Internet communication. In other words, knowing the truth value of a phrase (i.e., its reference) is not sufficient to understand it (i.e., be informed about its sense). It is also possible to prove that it is not necessary either: "knowing all possible references of a phrase is neither necessary nor sufficient in order to understand its meaning".

Applying this to certification and using Shannon's concepts from 1948, I point out that the difference between "a=b" and "a=a" is simply that the first represents a transfer in a communication channel that links past to present, whereas the second represents zero transfer. Since information is what is transferred from source to destination (i.e., information is what you do not expect), the first statement is informational and the second statement is clearly always expected. But Frege did not know this in 1892.
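In Shannon's terms [Sha48], this can be put in one line: the self-information of a message is -log2 of its probability, so a statement expected with certainty carries zero bits. A toy computation (the prior probability assigned to "a=b" is an assumption for illustration):

    import math

    def self_information(p):
        # Shannon self-information in bits: what you do not expect.
        return 0.0 if p == 1.0 else -math.log2(p)

    # "a=a" is a truth of logic, expected with probability 1:
    print(self_information(1.0))    # 0.0 bits -- zero transfer

    # "a=b" (e.g., "the Morning Star is the Evening Star") is not expected
    # a priori; suppose we had judged it only 25% likely beforehand:
    print(self_information(0.25))   # 2.0 bits -- informational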

However, if we ask the question "what is a name in a digital certificate?", we can see that the name is a reference (i.e., Frege's Bedeutung) which can be wholly contained in the certificate, even down to any desired minute data in any language or form ... and thus wholly transferred in the cert -- which corresponds to "a=b", and the cert is informational regarding such reference. However, the name's sense (i.e., Frege's Sinn, or meaning) cannot be contained in the certificate at all and thus cannot be transferred. If it could, then the cert would be self-referential to that referent, which is not informational and represents "a=a" -- "I am myself". Thus, sense cannot even be partially contained in a cert; otherwise, that part would provide a self-reference to some part of the referent.

This points out the futility of trying to devise name schemes, however clever, with biometrics, bio-implants, GPS satellite-data and so on -- schemes which would try to allow a name's sense to be contained in a certificate. Further, it is not possible to transfer either the sense of a globally unique name (e.g., X.500) or the sense of a local name (e.g., PGP, etc.) -- because the problem is not the name being local or global, but the lack of capacity to transfer sense in general.

The same reasoning can be applied to keys or any other symbol which may be contained in a certificate. One can never, even partially, transfer sense (i.e., meaning) in a certificate but one may wholly transfer references.

Thus, we must recognize that certificates can contain reference information in varying degrees of completeness, but not the corresponding sense information that would allow such references to be meaningful -- sense information which would hence be essential to the receiving party, if some degree of reliance is to be placed on the usage of the transferred references. There is a missing essential connection between sense and reference.

However, since certificates represent a communication channel between entities, past and future, it can be recognized that the missing connection between sense and reference can then be provided by "that which is essential to the communication channel but which cannot be transferred through that channel" -- as trust is defined (both in the context of social trust as well as process trust) and hereafter understood to be qualified as "proper trust".

Further, proper trust (process or social) allows one to invert the process and use the references transferred in the cert in order to reach back to sense -- thus making the received data not only cryptographically secure but also meaningful.
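A minimal sketch of this inversion (the store, its entries and the tuple encoding of references are hypothetical): references arrive over the wire, while sense lives only in what the receiver already knows and trusts:

    # Hypothetical local store, built out-of-band (prior correspondence,
    # meetings, introductions): it maps received references to the truth
    # conditions (sense) the receiver is willing to rely on.
    trust_store = {
        ("John Doe", "A56B"): "the John I have exchanged signed e-mail with since 1997",
    }

    def relink(reference):
        # The cert transfers only the reference; trust, acquired through
        # other channels, is the glue that reaches back to sense.
        return trust_store.get(reference)

    print(relink(("John Doe", "A56B")))  # meaningful: sense relinked
    print(relink(("John Doe", "FFFF")))  # None: a bag of bytes, no sense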

Moreover, I can perhaps say that the sense of a complete certificate is the proposition or "thought" expressed by it -- paraphrasing Frege -- where "thought" is not something private or psychological, but something essentially communicable, which must include sense and reference.

As in [18]:

 "Speakers of the same natural language communicate with one another
-- they trade contents, not uninterpreted strings of symbols --  i.e.,
 there must be communicable content which is conveyed in discourse.
 The proposition expressed by a sentence, Frege maintained, is
 explicable in terms of the conditions under which it is true - its
 truth conditions.  To grasp the literal content of a sentence I must
 know under what conditions it is or would be true.",
which exemplifies that communication (i.e., "thought") is not just information (i.e., "what you do not expect") of symbols (i.e., "references") without meaning (i.e., "sense"), but needs also trust (i.e., "what you know") in order to connect reference (i.e., form, syntactics) to sense (i.e., content, semantics).

To practically illustrate these concepts, the reader may re-read the former paragraph in its various possible meaning combinations (as given inside the parentheses), forming different phrases which need to be seen as a whole in order to convey the intended "thought" -- which could not possibly be done in just one modal form, with a singular choice for each possible set of references. For example, a few derived phrases are:

  1. "thought" is not just "what you do not expect" of "references" without "sense", but needs also "what you know" in order to connect form to content;
  2. communication is not just information of symbols without meaning, but needs also trust in order to connect syntactics to semantics.

The last phrase points out that Shannon's Information Theory fails to provide a theory for communication, and explains why Shannon's 10th Theorem [Ger97] breaks down when the abstract trust definition is applied to a communication process -- which is why I prefer to call Shannon's remarkable work "Information Theory" and not "Mathematical Theory of Communication" [Sha48], as he himself called it. The present trust theory therefore belongs to a larger "Communication Theory", which is then able to model and represent actual communication -- that process which is able to trade contents between parties, "thoughts" in Frege's words. Such a Theory (with all its syntactic and semantic parts) finds its need and expression in the Internet, as a prime medium to allow virtual synapses and virtual memory -- a living virtual collective brain, if we follow the autopoietic definition.

It was once said: "there is nothing more theoretical than a good practical problem". The practical problem of Internet certification is showing that. For example, we may go further into our metaphor of the Internet as a virtual macro-brain and consider how "macro-thoughts" can be represented and recognized in such a structure, what is the collective "mind" that produces such "thoughts", how the "mind" is linked to the "brain", and so on. This may allow cognitive theories to be tested and help improve the Internet -- as a medium for communication, and not just for information transfer between cybernetic agents. Nowadays, cybernetic agents are beginning to wake up to sense acquisition -- autonomous vision and control, learning in neural nets, natural language processing, biometric interaction, etc. -- but they still need to develop mechanisms for sense transfer between different cybernetic agents, human agents and their respective environments -- which will possibly depend on the present concepts of trust in communication systems and their natural interplay with social trust (see items A.4.7 and A.4.1).

To summarize all results obtained in this item: trust is not only essential for the cryptographic meaning of the certificate (i.e., providing for valid origin authentication and data integrity authentication), but also for the non-cryptographic meaning of each of its atomic parts -- for each of the names that it may contain, such as common names, keys, hashes, etc. Clearly, given the general context of the present treatment, the same applies to any communication process, whether on the Internet, over the phone, by postal mail or even person-to-person -- where trust is likewise essential to provide for collective as well as individual meaning. For an application of these concepts to Internet semantic addressing, called the TSK/P system, see item A.4.1.
 
  No. This is sense, not conveyable in a cert. This question exemplifies a common situation in a lawyer's office, and is particularly useful to motivate how one can distinguish between the Morning Star and the Evening Star (different lawyers in sense, but equal in reference to the client; see Frege's example in A.4.3) who work in the same office and may legally sign a letter with the other's text -- however, with one's own key. This is similar to the confidence-leak problem mentioned in [5] -- which has no solution besides trust.
  No. These are also all sense.
  We need to abandon the referential model of meaning, as I commented above. So, a common name is a reference which may have varying degrees of meaning, even multiple meanings and even none -- which is all perfectly fine and has to be handled, not artificially ironed out. Reference and sense must be treated as essentially variable quantities, from the start. To suppose otherwise is to fall prey to a series of fallacies.

The protocol issue is also important because a protocol can be seen as a means of expression -- a language, which includes syntactics and semantics, but for complex expressions and not just for atomic names. Thus, both the protocol's syntactics and semantics must be expressive enough -- i.e., must allow all possible variations and needs of sense and reference, in a fair and secure way. The certificate itself, as a complex object formed by various names, can be seen both as a result of and as an input to the protocol, i.e., as being expressible and intelligible in that language. This connects directly with the question above, because one must be very careful as to what the protocol's biases or limitations might be -- as expressed in the protocol language. For example, if we invent a language in which one can count only up to 5, anything beyond being just indicated as "many" (e.g., as in some tribes), then, clearly, some actions might not be auditable at all in that language, as the sketch below illustrates.
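A minimal sketch of that example (the function is hypothetical):

    def tribal_count(n):
        # A protocol "language" able to express only 1..5; everything
        # above is flattened to "many" -- the expressive limit at work.
        return str(n) if n <= 5 else "many"

    # An audit written in this language cannot distinguish 7 from 70:
    print(tribal_count(3), tribal_count(7), tribal_count(70))  # 3 many many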

Protocol issues represent a further influence of pragmatics (the branch of semiotics that deals with the relation between references and observers, including the environment), besides the interpretive value associated with the observers (i.e., the dialogue parties), which defines the semantical values of the expressions. So, the above question, and others in the same vein, have their roots in one question: are the intended meanings (i.e., from the designer, from the issuer, from the standards, etc.) equivalent to the perceived meanings?

This question, clearly, has no knowable answer, nor a unique approximate answer. It is heavily intersubjective, in many linkages that include semiotics (syntactics, semantics, pragmatics), trust, cryptography, information theory, law, psychology, etc. However, as the original question motivates, we need to approach its subject -- otherwise we just have a bag of bytes, references without sense.

This can lead us to consider "effectiveness" in contrast to "correctness" (IPSEC) in a new light: trust and semantic effectiveness must be taken as first design considerations, not added on as a modulation on a carrier. They make up the vehicle for information -- the carrier itself. They are not the final spices in a recipe. They are the assumptions.

But, some may ask, how can we objectively deal with something we can't completely measure?

In the same way that we deal with fingerprints, voice recognition, noise cancellation, and control in general. It can be argued that we can never precisely measure any control variable, not only because the act of measuring itself interferes with it, but also because of the finite time that must pass between a measurement and its actual use.

In that case, while some may wonder what would be the BEST token (e.g., biometrics, smart-cards, PINs, challenge-response queries, etc.) to be used for certification, we need to understand BEST as "By-Example Some Token" -- and not that any token is more significant than any other by itself. It all depends:

  1. on the meanings a token may have on both sides of the communication system,
  2. how similar both meanings are, as measured by some commonly agreed metric function, and
  3. how tamperproof and (possibly) private items (1) and (2) are, in the environment and under the presumed attacks.
In all that, the token itself was not cardinal.
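A minimal sketch of the three dependencies above (the meaning sets, the Jaccard metric and the numeric scores are illustrative assumptions; any commonly agreed metric function would do):

    def meaning_distance(sender_meaning, receiver_meaning):
        # Item 2: similarity of the token's meanings on both sides, here a
        # simple Jaccard distance over sets of truth conditions (item 1).
        a, b = set(sender_meaning), set(receiver_meaning)
        return 1.0 - len(a & b) / len(a | b) if a | b else 0.0

    def token_fitness(sender_meaning, receiver_meaning,
                      tamper_resistance, privacy):
        # Items 1-3 combined: only the meanings, their agreement and
        # their protection matter -- the token itself is not cardinal.
        agreement = 1.0 - meaning_distance(sender_meaning, receiver_meaning)
        return agreement * tamper_resistance * privacy

    # A biometric read identically on both sides but easy to replay may
    # score worse than a humble PIN kept truly secret:
    print(token_fitness({"is_alice"}, {"is_alice"}, 0.3, 0.2))    # ~0.06
    print(token_fitness({"knows_pin"}, {"knows_pin"}, 0.9, 0.9))  # ~0.81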

The solution to the points mentioned in the original question is thus to recognize that, while one needs both reference and sense in order to communicate one's thoughts, names and certificates can only convey reference. Thus, sense must come from another channel, which can be a tertiary channel such as a CA, or a binary channel such as that provided by the MCS.

The main points are:

Application of the trust concepts to names shows further that common names can be repeated at will in certificates for different referents, as long as the recipient is able to calculate the right truth conditions for the name, not just the right truth values.

This further shows the irrelevance of the present discussions on how to guarantee unique names, on whether global names are needed or desirable vis-a-vis privacy concerns, or on whether local names are better or worse than global names.
 

(The author acknowledges contributions from other MCG participants in this item)

The news is old: cyber-world and 3D-world will just converge -- like everything else!

Regarding cyber-world misconceptions, some think that by escaping names one can escape reality. Others think that credit-card deals would not need names or any real-life id, just assets. Surely, the merchant gets paid regardless, even if you use a false name. But this is not the end of id fraud. The bank still goes after the money ... and uses the law against fraudulent practices to enforce the cardholder agreement, or criminal statutes. If Mr. X uses his wife's credit-card, Mr. X is technically committing id fraud, and wire-fraud. Of course, it works most of the time ... But when it does not, and someone comes enforcing, someone will ask: did you, Mr. X, use Mrs. X's credit-card, and thereby represent yourself as Mrs. X?

Some claim: Oh, but this is a brave new world! It's cyber-world! New life! However, history has taught us over and over again that the new has an uncanny resemblance to the old ... The basic mechanisms will converge first, and then the overlap will increase. Those on one side that defend yet newer laws, and those on the other side that defend yet newer escapes from reality, will see that truth lies in the middle. Don't people cry over fictional love-dramas on TV? Don't people get angry over e-mail? Do we know of any medium that is as emotional as the Internet? Much more than phone conferences? Funnily enough, when emotions are lacking, we fill them in ... perhaps to a larger extent. We overshoot the control target, fearing that our voice might not carry enough strength to the other side!

The author's opinion is that success in cyber-space, either as a user or as a developer, will depend on our success in re-using and re-cycling what we have already done -- and, when that is not possible, in shamelessly mimicking what we already know.

And, this is easy to see.

In the cyber-world, sense is linked to reference by process trust. In the 3D-world, sense is linked to reference by social trust [19]. Thus, in the cyber-world, transactions are based on your reliance on three quantities and their metric relationships, <crypto-strength, law, process trust>, while in the 3D-world transactions are based on your reliance on three quantities as well, and their metric relationships: <genetic-strength, law, social trust>.

It is perhaps to be expected that these separated spaces will gradually intermingle, as our lives move on to cyber-space and people get used to cyber-life in the same way that people got used to voice mail, for example. This means that the above-mentioned metric relationships will also intermingle their dependencies, depending now on enlarged sets of four quantities each, <crypto-strength, law, process trust, social trust> and <genetic-strength, law, process trust, social trust>, so that it will become more and more important to have compatible models for the cyber-world and the 3D-world -- because the borderline between a social physical encounter and a social cyber encounter will become very thin, very thin indeed.

The point here is that, yes, we have strong credentials in both worlds ... but such credentials are "names" (i.e., symbols in semiotics) and have no pre-defined sense for their references. Bottom line: the link between sense and reference is missing in both worlds -- not just in the cyber-world, as often expressed. In other words, even outside the Internet your body characteristics per se are not useful -- you still need trust to link that "body of data" (literally) to you ... even if you do not have a twin brother, or a clone. This (see also item A.4.3) represents an eloquent warning sign against the indiscriminate use of biometrics, today touted as a future self-secure certification method. Biometrics will still only provide references, not sense -- so biometrics still needs trust to link sense to reference, like any other certification system. Biometrics is not self-secure. It will become less and less important to you whether a deal was finalized in 3D-space or cyber-space -- as long as, and insofar as, you have enough <process trust, social trust> to allow you to rely on <law> to enforce the credentials provided by either <crypto> or <genetics>, or both.
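As a sketch of the quantitative side of the four-quantity sets above (the component values, assumed in [0,1], are illustrative), the sets can be compared component-wise with a reflexive, anti-symmetric and transitive ">=" as in [19], yielding a partial ordering in which some transactions are simply not comparable:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Reliance:
        # The enlarged four-quantity set, for either world.
        strength: float        # crypto-strength or genetic-strength
        law: float
        process_trust: float
        social_trust: float

        def __ge__(self, other):
            # Component-wise ">=": reflexive, anti-symmetric, transitive --
            # a PARTIAL ordering, so some pairs are simply not comparable.
            return (self.strength >= other.strength
                    and self.law >= other.law
                    and self.process_trust >= other.process_trust
                    and self.social_trust >= other.social_trust)

    cyber = Reliance(0.9, 0.6, 0.8, 0.2)     # strong crypto, weak social trust
    threed = Reliance(0.5, 0.6, 0.3, 0.9)    # strong social trust, weak process
    print(cyber >= threed, threed >= cyber)  # False False: not comparable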

In other words, it is not so important whether the credentials are social (i.e., reliable person-to-person contact, based on secure genetics such as physical appearance, voice intonation, etc.) or process (i.e., reliable entity-to-entity contact, based on secure crypto keys, unforgeable certs, etc.) in their origin. Far more important is whether you can trust those "secure keys" and (often forgotten) whether you can trust that entity for the purpose you have in mind (e.g., business). The entity may even be a machine or a pool of machines, a person or a pool of persons, for all you need to care.

This is what is meant by the perspective of both worlds intermingling. It is perhaps not far-fetched to imagine that this is similar to the intermingling seen between the different worlds exemplified in a lawyer's office, where trainees answer letters and lawyers sign them -- and where clients trust such letters from "their" lawyer.

The main point is that, as the cyber and 3D worlds converge, the differences will decrease ... not increase ... which will make it easier just to mimic what we already have, as much as possible.

And what do we see in the 3D-world? Do we see one giant ID? No, we see several specific IDs, from county libraries to passports, several credit-cards, several phone cards, etc. -- all locally issued. And there is good reason for it, privacy being not the most important one in most cases -- but sheer need. That is why centralized certificate issuance (i.e., CAs) and centralized data control (i.e., hierarchical PKIs) run counter to our experience of what works. That is also why it is more intuitive to think that certificates are trusted because they certify (the subjective stance), and not that certificates certify because they are trusted (the objective stance, which is wrong).

Of course, any transition is difficult, painful. Indeed, this seems  to be much more than just a question of investment strategy. However, the evolution of law and process will, perhaps, be swamped by the evolution of need.

Recently, to provide for much-needed DNS real estate, one country decided to create first-level Internet DNS domains that would "express the owner's professional title" by three-letter abbreviations -- as if local three-letter DNS syntactics could self-certify the semantics of a DNS user to the whole world. Further, since DNS names are not a taxonomy but a mereology, DNS is hierarchical only in the sense that the totality of names in the tree is a hierarchy; there need not be any meaningful relationship between names at any level of the DNS name tree. Thus, this just demonstrates a common parochial view, where technical DNS details are ignored, a local context is believed to be global, and where spoofing, forgery, collusion, error, revocation, viruses and other global maladies are considered to be deterred by local law. We all share the same network. However, we have no global law. Thus, we have, at least, to preserve global processes and semantics.

Semantic security (i.e., meaning assurance) will perhaps take the heaviest blows in this transition to a global cyber-life -- as different parochial needs start crying for solutions. However, if we allow semantic security to decrease, no amount of syntactic security (e.g., certification) will do.

This is why one may think that <process trust, social trust> will become increasingly important -- because they provide for meaning. Not only which certified key you have, but ... what does it mean? Not only whether the certificate is valid, but whether it has data which you can rely upon for some purpose you want.

Further, the author takes the stance that, as the Internet experiences increasing media and protocol convergence, people and machines will suffer massive security risks!

What was safe in separated and sanitized environments will become a security risk when those environments are not separated anymore. The number of interfaces will increase, so that different systems that were never thought to be in communication will be. A hacker in Israel will find his way into a US Pentagon computer -- which was otherwise safe inside the Pentagon. The files in your "trusted" computer will suddenly be snatched and sold for money, to information harvesters.

Thus, certificates, for persons and machines, will be needed for a great number of deals. Your data will have to be at least origin-authenticated -- not to mention data integrity authentication and encryption -- for several things worth doing. The level of indirection will not decrease, and semantic addressing with encryption can also provide a solution to ambiguous addresses and names (see A.4.1).

Bottom line: yes, more certificates, more issuers,  more subjects!  Certification needs will escalate and current systems cannot provide an answer.
 
 

What permits the operations to span both worlds is the real-world model of trust, whereby you can seamlessly move from one domain to the other and keep the same concepts, though with a different focus. That is why it was important to have process and social trust as isomorphic concepts. Regarding the question: yes, it is correct to talk about a reference for the metrication process, and that is provided by the real-world model of trust -- while such reference has two different senses, which are expressed by one metric for each world (i.e., the cyber- and the social-world).

Anything can be characterized by the distinction between sense and reference, not just names and metric functions. Especially interesting for the Internet are signatures, keys, authorizations and methods (i.e., algorithms and protocols) in general. So, it is important to discuss the sense of an authorization, for example, and not just its reference as expressed in a certificate.
 

In a nutshell, certs can transfer reference, not sense. Even though sense may be dormant in a cert, so to say, it cannot be self-referenced in certs -- not even partially, not even for a very small part. So, there must be another connection between sense and reference -- which must be knowledge, not information (see the lamb story in Item 4 of the main section). Such knowledge must be relied upon to some extent/purpose/time, and can be proved to be trust (see item A.4.3 for the proof). So, trust allows sense to be linked to reference within some extent/purpose/time assumptions -- e.g., as defined by your trust multivector.

Regarding the TSK/P process (see item A.4.1), for Internet communication irrespective of ambiguous names and addresses, one can summarize the main  issues in a few points:

    1. Names contain reference and sense -- names are "common names", "keys", "key-hashes", anything in a cert, including the cert itself,
    2. Sense does not like to travel. Once you receive a name over the wire (e.g., a cert) then you have lost the sense -- you just have the reference,
    3. To relink the sense, references will not do -- no matter how excellent-looking and how many,
    4. Trust is the only glue that allows you to relink reference to sense,
    5. How you acquire trust is entirely up to you -- but trust also does not like to travel,
    6. Since names in certs are just references without sense, then it is meaningless to insist on their pretended uniqueness and spend efforts on "clever and better" schemes which will nonetheless never work.
    7. Names in certs cannot identify you because they have no sense. To say otherwise is to contradict a mathematical fact known and accepted since 1892.
    8. By a mixture of "proper semantics", "proper trust" and "proper keys"  (TSK/P) it is possible to use any common name or e-mail address to achieve reliable communications irrespective of name collisions, with security and privacy.
    9. Crypto plays an essential role in TSK/P, by affording both certification tools (e.g., cryptographic signatures that provide for origin and data integrity authentication) and encryption -- which shows that crypto is a basic component of communication systems, not an optional luxury.
    10. Individuals can be identifiable worldwide, irrespective of name or address collisions. Pseudonyms are freely allowed.
    11. The TSK/P method does not treat keys as the one and only security barrier, nor does it assume keys to be unique and valid a priori -- in fact, security is provided by an interplay between <semantics, trust, keys>, and key uniqueness is also subject to the method's proofs and management.
Ye spake a truth!
 

Let's see three cases, each one chosen to illustrate an important aspect of such truth:

CASE A:

Consider a communication channel that includes your modem and that needs, as an essential part of it, a property X [1] which cannot be transferred through it. Then the definition applies equally well to your modem, and your modem has trust -- i.e., you are using your "trusted modem with property X" [2]. What does "trusted modem with property X" mean? It means that, for information transfer, the other party and/or you will need out-of-band information on property X of your modem for some essential property, as evaluated according to him and/or you.

Note: it is important to see that trust is always relative to the observer. So, if you use a trusted modem, it means that it may be trusted by you and/or by the receiving party, with possibly different connotations. In other words: who is to decide "what is essential to a communication channel but which cannot be transferred from a source to a destination using that channel"? -- the observer, who can be the source and/or the recipient.

[1] X: your modem and its properties, which can be anything that cannot be transferred using that channel and that is essential to it as judged by any or all of the parties (source and/or recipient), such as: the guarantee that your modem itself was used (not Peter Williams's, for example), the guaranteed noise limit levels of your modem, etc.

[2] i.e.,  your modem that has property X, in that channel, according to the source and/or the recipient.

CASE B:

Now, if your communication channel (we have to be a bit flexible here as to what one considers communication -- after all, neither Shannon nor the author has limited communication to using only electrons as carriers, or photons, etc.) uses doves as carriers, then probably the modem will not be essential to that communication channel. However, suppose that the other party decides that the doves need your modem's nice heat in order to always be warm and ready to fly to him on demand, without delay, and that he cannot rely on anything else for that function but that modem, for that channel. In that case, he can also call your modem his "trusted modem with property X" [2], where now X is defined by [3].

[3] X: your modem that can reliably -- as judged by the recipient -- keep the doves warm and ready to fly on demand, without delay, at the source, for that channel.

Note: the example above was important also in that it highlighted the observer's role in trust: it was the recipient who needed trust in your modem -- not you!
 

CASE C:

Now consider the case where you need to communicate with a recipient in the next building, which happens to have a window that is 2 meters (7 feet) distant from your window. Suppose next that your only means of communication is to write your message on your modem and toss it over to the other side. Now, even though your modem is an essential part of that communication channel (unfortunately, you may say -- but this is just a Gedankenexperiment), it can (indeed, it must) nonetheless be transferred from source to destination using that channel. So, your modem needs zero trust regarding that channel -- i.e., no trust is needed for your modem.

NOTE 1: This case is important not only for the fun of it (after all, the modem is not the author's ...) but because it includes an example where no trust is needed. What does "your modem needs zero trust regarding that channel" mean? Here, it means that when the modem arrives at the destination, the recipient can rely 100% upon its arrival and does not need any other channel to tell him that the modem has arrived.

NOTE 2: "Needs zero trust" or "needs no trust" is not the same as "has no trust". To say that "channel A has no trust for property X" is the same as to say that "channel A does not transfer trust for property X" -- so, if you need trust on property X, you cannot use channel A alone. However, when channel A "needs zero trust for property X", it means that no other channel but channel A is needed in order to transfer property X.

NOTE 3: There are two important facts here: (i) your modem is an objective reality and its subjective values are not important and, (ii) your modem had to be transferred. These facts eliminated all need for trust on your modem, which perhaps further illustrates the definition of trust.
 
 

Interesting experiment. I will answer in three scenarios: the first two with trust evaluated by you (the source), and the third with trust evaluated by the other person (the recipient). The examples were also chosen with a purpose -- to help illustrate how the trust definition can be applied.

Scenario A:

Trust being "that which is essential to a communication channel  but which cannot be transferred from a source to a destination using that channel",  then you must view the channel as a tool and first evaluate three things:

- what is your communication channel?

- what do you consider "essential" for that channel? This could be  mathematically defined by you as an expression of the relative certainty desired for your specific security problem and application context, given all available knowledge you have of the operational  vulnerabilities.

- what is essential for you and yet cannot be transferred using that channel?

So, suppose  (respectively):

- you verify that the e-mail channel goes over a direct point-to-point fiber-optic link between your computer and the computer of the other person, whom you never met before, and that this person presents you a VeriSign Class 1 certificate, which you always verify as valid against a 100% effective CRL and successfully challenge every time you send e-mail, always using S/MIME encryption with RSA/TripleDES.

- you consider it essential that the channel transfers private information, that is, information which cannot be eavesdropped within TripleDES limits. Who the other party actually is, or whether it is only one party, or whether it is a machine or a person, is of no concern to you. Anyone that has the private-key associated with the certificate is the same for you.

Then, in this case, there is nothing you consider essential that is not being transferred.

To answer your question: for you, this channel needs zero trust for privacy. This is a good thing -- no surprises, as commented in the main section and above for CASE C. (Note, again, that "needs zero trust" is not the same as "has no trust" or "has zero trust".)

IMPORTANT: In this case you objectively know that the information you send in that channel is private within TripleDES limits, even in the case of a TEMPEST attack, so such trust does not need to be transferred to you out-of-band. In general, "If property X is essential to a channel, a party needs no trust for property X in that channel if and only if the party has self-trust on X".
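As a set-theoretic sketch of the trust definition used throughout (the property names are illustrative assumptions), the trust a party needs is exactly what it deems essential minus what the channel itself transfers:

    def needed_trust(essential, transferable):
        # Trust: that which is essential to the channel but which cannot be
        # transferred from source to destination using that channel.
        return essential - transferable

    # Scenario A: everything the source deems essential (privacy within
    # TripleDES limits) is transferred in-band by the channel itself:
    print(needed_trust({"privacy"}, {"privacy", "integrity"}))
    # -> set(): the channel needs zero trust for privacy

    # A property deemed essential but never transferable in-band (e.g., a
    # DNA pattern, as in Scenario B below) leaves a residue that only
    # out-of-band channels can supply:
    print(needed_trust({"privacy", "dna_pattern"}, {"privacy"}))
    # -> {'dna_pattern'}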

Scenario B:

In the above example, if you considered that trust for that channel would be your recipient's DNA pattern, then you would not have trust in that channel even after ten years.

Scenario C:

When you are exchanging e-mails you are using one communication channel. But you actually have more than one communication channel -- you also have memory channels, a memory being that special case of a channel in which the sender transmits signals to itself at a later point in time (such as a 10-year mailbox). Memory channels can be used to provide for "learning" capabilities, as in [Ger97], and that is what I will use them for here.

Let us take the same case as above, but from the viewpoint of the other person. Suppose that the recipient considers it "essential" that the party at the source writes and reads English with proficiency. Of course, this property cannot be transferred using that channel, because that channel transfers information, and information in Information Theory has nothing to do with knowledge or meaning -- it needs trust.

This is an example of the paradoxical breakdown of Shannon's Tenth Theorem when we consider the properties of trust in communication channels, which leads to an enhanced Communication Theory (some parts are discussed here, but it is to be fully published elsewhere). During those ten years the person will use many channels (i.e., memory channels of different messages) and test the source's English proficiency for reading and writing (e.g., by using double negatives, different verbal tenses, a wide vocabulary, etc.). He will then develop trust that the source has English proficiency for reading and writing. The source could be a machine, you, another person, a group of persons or a visitor from Mars -- this is irrelevant to his desired trust.

This example is also interesting in that it shows that trust did not exist in the beginning but could be built up using multiple channels.
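A minimal sketch of this build-up (the update rule and its weight are assumptions; the scenario requires only that trust start at zero and grow with evidence accumulated over the memory channels):

    def update_trust(trust, passed, weight=0.1):
        # Each memory channel (an old message re-read today) is one test of
        # the source's English proficiency; evidence accumulates slowly.
        if passed:
            return trust + weight * (1.0 - trust)  # move toward certainty
        return trust * (1.0 - weight)              # discount on failure

    trust = 0.0                        # no trust existed in the beginning
    for month in range(120):           # ten years of monthly exchanges
        trust = update_trust(trust, passed=True)
    print(round(trust, 3))             # ~1.0: proficiency now relied upon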
 


References

[Ba27] Bacon, Sir F. in Sylva Sylvarum 337, 1627; [Bax98]

[Ball84] Ball, J. "Memes as Replicators", Ethology and Sociobiology, vol. 5, p.159, 1984; [Bax98]

[Bax98] Baxter, R. website with a collection of quotations pertaining to Occam's Factor, Complexity, Simplicity, and Inference, at http://www.cs.monash.edu.au/~rohan/occam.html

[Che90] Cheeseman, P.  "Finding the Most Probable Model", p.91, 1990; [Bax98]

[Har28] Hartley, R. V. L. "Transmission of Information", Bell Syst. Tech J., July 1928, p. 535.

[Nyq24] Nyquist, H. "Certain Factors Affecting Telegraph Speed", Bell Syst. Tech. J., April 1924, p. 324.

[Sha48] Shannon, C. A Mathematical Theory of Communication. Bell Syst. Tech. J., vol. 27, pp. 379-423, July 1948.
See also http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html for a WWW copy.

[Sha49] Shannon, C. Communication Theory of Secrecy Systems. Bell Syst. Tech. J., vol. 28, pp. 656-715, 1949.
See also http://www3.edgenet.net/dcowley/docs.html for readable scanned images of the complete original paper.

[Szi29] Szilard, L. "Über die Entropieverminderung in einem thermodynamischen System bei Eingriffen intelligenter Wesen." Zeitschrift für Physik 53 (1929): 840-856. "On the Decrease of Entropy in a Thermodynamic System by the Intervention of Intelligent Beings." Translation in Behavioral Science 9 (1964): 301-310.

[Ger97] Gerck, E. Certification: Intrinsic, Extrinsic and Combined. MCG, http://mcwg.org/mcg-mirror/cie.htm 1997.

[Ger98a] Gerck, E. Generalized Certification Theory. Yet to be published. 1998.

[Ger98b] Gerck, E.  "What is identification, that we can identify it?", MCG , http://mcwg.org/mcg-mirror/coherence.txt, 1998.

[Ger98c] Gerck, E.  "What is identification, that we can identify it?, Part II", MCG , http://mcwg.org/mcg-mirror/coherence2.txt, 1998.

[5] Gerck, E. Trust Properties. MCG http://mcwg.org/mcg-mirror/trustprop.txt 1997.

[6] Gerck, E. Re: On the Nature of Trust. MCG http://mcwg.org/mcg-mirror/cgi-bin/lwg-mcg/MCG-TALK/archives/mcg/date/article-334.html 1997.

[7] Bohm, N. Authentication, Reliability and Risks. MCG http://mcwg.org/mcg-mirror/auth_b1.htm 1997.

[8] 111229 Checking Validity. MCG http://mcwg.org/mcg-mirror/pub9x.txt 1997.

[9] Gerck, E., Overview of Certification Systems: X.509, CA, PGP and SKIP. MCG, http://mcwg.org/mcg-mirror/cert.htm 1997.

[10]  reportedly, in http://www.cl.cam.ac.uk/Research/Security/Trust-Register/book.html

[11] ftp://ftp.bull.com/pub/OSIdirectory/Certificates/

[12] http://www.abanet.org/scitech/ec/isc/dsgfree.html

[13] www-security mailing list archive, Re: Syncytial trust?  Sat, 12 Nov  94 03:31:06 -0500

[14]  http://mcwg.org/mcg-mirror/cgi-bin/lwg-mcg/MCG-TALK/archives/mcg/date/article-436.html

[15] http://mcwg.org/mcg-mirror/intrinsic.htm

[16] http://mcwg.org/mcg-mirror/exposition.txt

[17] W. Carl, "Frege's Theory of Sense and Reference", ISBN 0-521-39135-0, Cambridge University Press. More information at http://www.cup.org/Titles/39/0521391350.html

[18] http://www.mdx.ac.uk/wwww/ai/samples/nlp/semantics.html

[19] Here, the metric relationships can provide for a partial ordering of the sets, i.e., when you define an operation ">=" (i.e., larger or equal) which is reflexive, anti-symmetric and transitive -- allowing one to quantitatively compare different truth conditions (i.e., sense) by ordering and comparing the values of their references, as both are linked by trust.

[S-F97] Kristin S. Shrader-Frechette, "Perceived Risks Versus Actual Risks: Managing Hazards Through Negotiation", in http://www.fplc.edu/RISK/vol1/fall/shraderF.htm

[McK96] McKnight, D. Harrison and Chervany, Norman L., "The Meanings of Trust", in http://www.misrc.umn.edu/wpaper/wp96-04.htm

[DS97] The objective here is not to apply Dempster-Shafer Theory or the Dempster Rule, but to point out that the definition of degree of belief is in general similar to DS Theory. For a review of concepts and difficulties with DS Theory, see http://yoda.cis.temple.edu:8080/UGAIWWW/lectures95/uncertainty/dempster.html

[SCUS] available at http://supct.law.cornell.edu/supct/html/94-967.ZO.html

[Tar44] Alfred Tarski, "The Semantic Conception of Truth and the Foundations of Semantics", Phil. and Phenom. Res., vol. 4, 1944, pp. 341-376.

[Mein98] Meinrath, P.J. personal communication to the author, based on the study of original historical documents.

[Wal91] Walley, P. "Statistical Reasoning with Imprecise Probabilities". Chapman and Hall, 1991. See also the webpage maintained by Russel Almond, in http://bayes.stat.washington.edu/almond/gb/bel.html, with the original definitions.

[Wan93] Wang, P. " Belief Revision in Probability Theory", in Technical Report No. 74 of CRCC. A revised version appears in Proceedings of the Ninth Conference of Uncertainty in Artificial Intelligence , 519-526. Eds. David Heckerman and Abe Mamdani. (San Mateo, CA: Morgan Kaufmann), 1993. A PostScript file of the paper is available at http://www.cogsci.indiana.edu/farg/peiwang/papers.html



 

Summary:

The concept of trust dates back to the beginnings of history. It is recognized by many to be cardinal to information security, security policies, accountability, reliability, corporate management models, business relationships, interpersonal relationships, etc. However ... what is trust? What are the conditions under which trust exists, its truth conditions? What does it denote, what are its truth-values? Still today, there are no satisfactory answers, no consensus and no well-defined models. The paper shows first that the main problem is a lack of understanding of trust's truth conditions -- which does not allow trust's truth-values to be well-defined, notwithstanding current efforts, especially in the area of Internet communication and certification, such as X.509 and PGP.

The paper initially focuses on the subject of trust in communication systems, beginning with Shannon's Information Theory framework. A new abstract definition of trust is presented, for a generic communication system, which is shown to lead to useful and upward-compatible meanings of trust when used in communication processes as well as in social contexts, using several examples. The paper shows that trust can afford an answer to the problem of measuring events that are important and significant but unreachable -- as strongly exemplified in the Internet -- and which may have applications in other areas of communication systems and science. From the discussion, trust emerges as the mathematics of subjective certainty and precision -- a concept to be further developed in the context of non-boolean logic over a multivector space in Grassmann Algebra.

The exposition emphasizes Internet applications and exemplifies the developed trust theory with a series of new results, also linked to cryptography and certification -- such as semantic addressing; the TSK/P system, with applications to intrinsic- and meta-certification; names versus cryptographic-key classes; a quantitative model for the transition from separated 3D and cyber worlds to an intermingled social-cyber-society; the strong role played by Internet protocols as means of expression akin to languages -- which may severely and even intentionally limit such expression; the fallacies of considering biometrics a self-secure certification method; the use of trust to relink sense to reference and its application to certificates; what is needed for a general solution to the global PKI problem; etc.



WORK DOCUMENT: This is a draft. This essay discusses a subject which summarizes and references some of the points mentioned by myself in the mcg-talk and in other fora, being also a result of discussions with several colleagues. This is a discussion paper, not a final work -- but very up-to-date. It is based on an initial e-mail reply, which the author has expanded with recent material from his other papers, e-mail messages and exchanges; the original text and the additions were not significantly edited, so some of them still retain their e-mail style ... or lack thereof. The initial message is available at http://nma.com/mcg-mirror/trustdef.txt

Copyright © Dr.rer.nat. E. Gerck, 1998, All rights reserved worldwide.
ed@gerck.com


Meta-Certificate Group: http://nma.com/mcg-mirror/