The notion of trust in the context of this call relates to the notions of integrity, harmlessness/innocuousness, fitness for purpose, … Can I trust this data enough to act on it? Can I trust this treatment enough to let it “execute” in my system or on my data? Can I trust this entity enough to let it access those services and data? Can I (still) trust a subsystem (potentially my own, and potentially only a communication channel) enough to rely on it to run my operations and handle my data?
In the “good old days” of atomic, enclosed, and guarded information systems, trust issues were (very) roughly reduced to the following question: are you (or your initiator) already in the system, or are you still out? Any entity inside the system (or process initiated from inside) was implicitly trusted to have the legitimate right to access, act on, act on behalf of, or support the system. Every entity composing the system (hardware or software) was “vetted” through your procurement process involving some (varying) level of evaluation; data in your system was mostly produced by yourself; processes in your system were executed under your control; and access to your system was mostly a (trusted) physical control problem (not an IT one), except for some well-identified points such as (early days) websites and email servers. You had (nearly) full control over (nearly) everything within a clearly defined perimeter. The game was to maintain trust inside this perimeter by keeping untrusted entities or “resources” outside of it. This approach to securing such systems is called the Castle Security Model.
Since then, information systems have evolved considerably and are becoming more and more decentralized. For the “simple” case of an information system made of multiple fully controlled and interconnected enclaves, Virtual Private Networks (VPN) allow getting back to a setting compatible with the Castle Security Model (although it may no longer be adequate against today’s attacks, which, among other differences, involve more lateral movement than in the “good old days”). However, today’s information systems are usually more decentralized than that and have ceded more control over their defenses and dependencies. They may have weaker physical control of their enclaves’ perimeters, as in the case of remote work / work from home and the Internet of Things (IoT). They rely more and more heavily on the cloud and, from Infrastructure as a Service (IaaS) to Platform as a Service (PaaS), lose more and more control over part of their interconnections, their isolation from neighboring processes, and their execution stack, losing even control over their payload in the case of Software as a Service (SaaS). They may even accept that some of their “supporting components” are not administered at all, or at least not at an enterprise level, as with the Bring Your Own Device (BYOD) trend. The decentralization process itself may not even be fully controlled, as in the case of Shadow IT, which is one of the main cybersecurity risks according to 44% of respondents to a recent cybersecurity survey. Even when usage of the cloud is controlled, it raises trust issues: lack of control over the access of the cloud provider’s administrators for 45% of the respondents, and no visibility into the cloud provider’s supply chain for 51% of the respondents. Overall, 86% of companies estimate that the tools provided by cloud providers do not suffice to secure data and that other, specific tools are required.
Zero Trust is a security model that addresses part of the cybersecurity issues resulting from the decentralization of information systems. It is gaining more and more traction in the real world and is being deployed in industry as well as in public institutions. Rather than a specific architecture or a set of methods and technologies, Zero Trust is a set of cybersecurity design principles and management strategies. Its main principle is to never rely on implicit trust. In particular, authorizations (not only for access but for any transaction) should never be granted solely based on the location of the requester (i.e., the network from which the request comes). This does not mean that the system should not rely on trust, but that trust must be gained and renewed. “[T]rust is never granted implicitly but must be continually evaluated” prior (control) and posterior (audit) to granting it. This principle is not new and can be traced back to the Jericho Forum in 2004. Other principles, such as the least privilege principle, are even older but have become more salient with decentralization and easier to enforce with modern technologies. Another important principle of Zero Trust is to refine the granularity of controls toward a per-transaction basis. The goal is to authorize the least privileges needed, just in time of need.
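These principles — no implicit trust from network location, continually re-evaluated trust, and per-transaction least privilege — can be illustrated by a minimal policy-evaluation sketch. All names here (`Request`, `PolicyEngine`, the attributes checked) are hypothetical and not drawn from any specific Zero Trust product or standard; the sketch only shows the shape of a per-transaction decision that never consults the requester’s network origin.

```python
from dataclasses import dataclass

@dataclass
class Request:
    user: str
    device_attested: bool   # has the device passed a posture/attestation check?
    mfa_age_s: float        # seconds elapsed since last strong authentication
    resource: str
    action: str

class PolicyEngine:
    """Evaluates every single transaction; network origin is never consulted."""

    def __init__(self, entitlements, mfa_ttl_s=300):
        self.entitlements = entitlements  # {user: {(resource, action), ...}}
        self.mfa_ttl_s = mfa_ttl_s        # trust must be renewed, not kept forever

    def authorize(self, req: Request) -> bool:
        # Least privilege: only explicitly granted (resource, action) pairs pass.
        if (req.resource, req.action) not in self.entitlements.get(req.user, set()):
            return False
        # Trust is continually evaluated: stale authentication is rejected.
        if req.mfa_age_s > self.mfa_ttl_s:
            return False
        # Devices are assumed compromised until attested.
        return req.device_attested

engine = PolicyEngine({"alice": {("payroll-db", "read")}})
engine.authorize(Request("alice", True, 60.0, "payroll-db", "read"))   # granted
engine.authorize(Request("alice", True, 60.0, "payroll-db", "write"))  # denied
```

Note that the decision is recomputed for each request, so revoking an entitlement or letting the authentication age out takes effect on the very next transaction, rather than at the edge of a perimeter.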
Not all of the principles of Zero Trust are covered by C&ESAR 2022. Exact definitions of Zero Trust vary, but the NSA summarizes it in four main points: a) coordinated and aggressive system monitoring, system management, and defensive operations capabilities; b) assuming all requests for critical resources and all network traffic may be malicious; c) assuming all devices and infrastructure may be compromised; d) accepting that all access approvals to critical resources incur risk, and being prepared to perform rapid damage assessment, control, and recovery operations. Within the scope of this Zero Trust definition, C&ESAR 2022 focuses on points b and c in a highly decentralized setting: at a fine granularity level, how can one gain trust in requests for resources, network traffic, devices, and infrastructure? Implied by this question, but not equivalent to it, is the problem of authentication, which is one of the main concerns for Zero Trust, as well as in general.
Note that some of the issues covered by C&ESAR 2022, though useful for addressing trust in a decentralized system, may or may not be included in Zero Trust depending on the definition used.
Related to Zero Trust are the problems of transitive trust and trust propagation. For example, consider a developer in a controlled enclave who pushes code to a version control SaaS, which pushes this code to a Continuous Integration / Continuous Deployment (CI/CD) SaaS of another provider, which pushes the resulting “binaries” to a web server SaaS of yet another provider. What are the potential solutions for the developer to trust (control and audit) these SaaS providers not to abuse their privileges and push something different on the developer’s behalf? What are the potential solutions for the SaaS providers to trust other providers to faithfully act on behalf of the developer, including and beyond signature-preserving versioning and compilation? More generally, how can one trust a previously unknown or unvetted entity that starts to interact with one’s system? How can one rely on the trust of others to trust an interaction?
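The idea of signature-preserving hand-offs along such a chain can be sketched as follows. Each stage attests its output together with a digest of its input, so an auditor can verify transitively that the final artifact derives from the developer’s original source. This is an illustrative sketch only: HMAC with shared keys stands in for real asymmetric signatures, and the stage names and key handling are hypothetical, not taken from any actual CI/CD product.

```python
import hashlib
import hmac

# Hypothetical per-stage keys; a real deployment would use asymmetric key pairs.
KEYS = {"vcs": b"key-vcs", "ci": b"key-ci"}

def sign(stage: str, payload: bytes) -> bytes:
    return hmac.new(KEYS[stage], payload, hashlib.sha256).digest()

def attest(stage: str, output: bytes, input_digest: bytes) -> dict:
    """A stage signs its output together with the digest of its input,
    binding each hand-off so trust can be audited transitively."""
    return {"stage": stage, "output": output,
            "input_digest": input_digest,
            "sig": sign(stage, output + input_digest)}

def verify_chain(source: bytes, chain: list) -> bool:
    """Check that each link consumed the previous link's output and that
    its attestation signature is genuine."""
    prev_digest = hashlib.sha256(source).digest()
    for link in chain:
        if link["input_digest"] != prev_digest:  # output not derived from prior stage
            return False
        expected = sign(link["stage"], link["output"] + link["input_digest"])
        if not hmac.compare_digest(link["sig"], expected):  # forged attestation
            return False
        prev_digest = hashlib.sha256(link["output"]).digest()
    return True

code = b"print('hello')"
pushed = attest("vcs", code, hashlib.sha256(code).digest())
built = attest("ci", b"binary-bytes", hashlib.sha256(code).digest())
verify_chain(code, [pushed, built])  # chain is intact
```

A link whose input digest does not match the previous output, or whose signature fails, breaks verification for the whole chain, which is precisely the auditability the developer needs against a provider pushing something different on their behalf.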
On a different subject, trust evaluation requires (meta)data. In a highly geographically decentralized system that may move payloads between enclaves, how can the dissemination and synchronization of this (meta)data be ensured in a secure way, compatible with the timing constraints of the system and with the laws applicable to the owner of the (meta)data, the owner of the payload, and the location where the executing enclave resides?
 ANSSI, “Système d’Information Hybride et Sécurité : un Retour à la Réalité,” ANSSI, Note Blanche, Aug. 2021.
 ECSO’s Users Committee, “Survey Analysis Report: Chief Information Security Officers’ (CISO) Challenges & Priorities,” Apr. 2021.
 J. H. Saltzer, “Protection and the Control of Information Sharing in Multics,” Commun. ACM, vol. 17, no. 7, pp. 388–402, Jul. 1974, doi: 10.1145/361011.361067.