Confidential computing is the protection of data in use by performing computation in a hardware-based Trusted Execution Environment (TEE). Confidential computing can provide integrity and confidentiality for users who want to run applications and process data in that environment. When confidential computing is used in scenarios where a network is needed to provision user data and applications, the TEEP architecture and protocol can be used. This use case illustrates the steps for deploying applications, containers, VMs, and data on different confidential computing hardware in a network. This document is a use case and extension of the TEEP architecture and can provide guidance for cloud computing, MEC, and other scenarios that use confidential computing in a network.¶
This Internet-Draft is submitted in full conformance with the provisions of BCP 78 and BCP 79.¶
Internet-Drafts are working documents of the Internet Engineering Task Force (IETF). Note that other groups may also distribute working documents as Internet-Drafts. The list of current Internet-Drafts is at https://datatracker.ietf.org/drafts/current/.¶
Internet-Drafts are draft documents valid for a maximum of six months and may be updated, replaced, or obsoleted by other documents at any time. It is inappropriate to use Internet-Drafts as reference material or to cite them other than as "work in progress."¶
This Internet-Draft will expire on 20 June 2025.¶
Copyright (c) 2024 IETF Trust and the persons identified as the document authors. All rights reserved.¶
This document is subject to BCP 78 and the IETF Trust's Legal Provisions Relating to IETF Documents (https://trustee.ietf.org/license-info) in effect on the date of publication of this document. Please review these documents carefully, as they describe your rights and restrictions with respect to this document. Code Components extracted from this document must include Revised BSD License text as described in Section 4.e of the Trust Legal Provisions and are provided without warranty as described in the Revised BSD License.¶
The Confidential Computing Consortium defines confidential computing as the protection of data in use by performing computation in a hardware-based Trusted Execution Environment [CCC-White-Paper]. In detail, a computing unit with confidential computing capability can create an isolated, hardware-protected area in which data and applications are protected from unauthorized access or tampering. When a network is used to provision confidential computing, users need to choose the appropriate steps to deploy their data and applications. This network could be a cloud, MEC [MEC], or any other network that provides confidential computing resources to users. For example, in MEC an autonomous vehicle could deploy private applications and data on a confidential computing device to process on-vehicle and destination road information without exposing that information to the MEC platform.¶
The TEEP WG defines an architecture and a protocol for managing the lifecycle of trusted applications running inside a TEE. In confidential computing, the TEE can likewise be provisioned and managed with the TEEP architecture [I-D.ietf-teep-architecture] and protocol [I-D.ietf-teep-protocol]. By following the TEEP architecture and protocol, applications and data can be provisioned into a confidential process, a confidential container, or a confidential VM on different hardware architectures. The intended audience for this use case is network users and operators who are interested in using confidential computing in a network.¶
The following terms are used in this document.¶
The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119 [RFC2119].¶
Figure 1 shows the architecture of confidential computing in a network. Two new components, the Network User and the Network M/OC, are introduced in this document. The connection between the Network User and the Network M/OC depends on the implementation of the specific network. The connection between the Network User and the UA (Untrusted Application) or TA depends on the implementation of the application. The connections among the TAM, the TEEP Broker, and the TEEP Agent follow the TEEP protocol. The interactions of all components in this scenario are described in the Use Case section.¶
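These relationships can be summarized in the non-normative sketch below; the connection notes simply restate the prose above, and the Python representation is illustrative only.¶

   # Non-normative summary of the components and connections described above.
   CONNECTIONS = [
       ("Network User", "Network M/OC", "depends on the specific network"),
       ("Network User", "UA / TA",      "depends on the application"),
       ("TAM",          "TEEP Broker",  "TEEP protocol"),
       ("TEEP Broker",  "TEEP Agent",   "TEEP protocol"),
   ]
   for src, dst, note in CONNECTIONS:
       print(f"{src} <-> {dst}: {note}")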
The basic process by which a Network User utilizes confidential computing is shown below. At present, the main confidential instance types in the industry are the confidential process, the confidential container, and the confidential VM. The definitions of these instances can be found in [CCC-White-Paper]. Since confidential computing is a hardware-based technology, different hardware supports different confidential instances. This document gathers the main hardware architectures that support confidential computing: [TrustZone], [SGX], [SEV-SNP], [CCA], [TDX], and [CSV]. The following use cases describe possible packaging models and how to deploy them on different hardware architectures. In the following tables, a brace denotes the operation steps used to deploy a package, an arrow denotes deploying a package to a destination, and "att" denotes an attestation challenge of the target. All of the actions in the following use cases can be expressed with the TEEP protocol.¶
In this case, the UA, TA, and PD are bundled as one package. This package is assembled by the Network User and sent to the TAM over the specific network. When the TAM deploys this package on a confidential computing device, the TEEP process is as follows (a non-normative sketch in Python appears after the steps).¶
The Network User requests confidential computing resources from the Network M/OC.¶
The M/OC orchestrates a confidential computing device to fulfill the request.¶
The TAM requests remote attestation from the TEEP Agent, and the TEEP Agent sends its evidence to the TAM. The TAM acts as the Verifier defined in [RFC9334].¶
After verification, the Network User, acting as the Relying Party, receives the attestation result. If the result is positive, the Network User establishes a secure channel [NIST-Special-Publication-800-133-V2] with the TEEP Agent and transfers the package to it.¶
The TEEP Agent deploys the TA and the personalization data in the TEE, and then deploys the UA in the REE.¶
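The following fragment is a non-normative sketch of this flow. The objects and methods (user, moc, tam, agent, open_secure_channel, install_in_tee, and so on) are assumptions made for illustration and are not defined by the TEEP protocol.¶

   # Non-normative sketch of this use case: UA, TA, and PD bundled in one
   # package. All object and method names are hypothetical.
   def deploy_bundled_package(user, moc, tam, agent, package):
       request = user.request_cc_resource()          # request CC resources
       moc.orchestrate(request)                      # M/OC selects a device
       evidence = agent.attest(tam.challenge())      # TAM acts as Verifier (RFC 9334)
       result = tam.verify(evidence)
       if not user.accept(result):                   # Network User is the Relying Party
           raise RuntimeError("attestation result rejected")
       channel = user.open_secure_channel(agent)     # secure channel to the TEEP Agent
       channel.send(package)                         # transfer the bundled package
       agent.install_in_tee(package.ta, package.pd)  # TA and PD go into the TEE
       agent.install_in_ree(package.ua)              # UA goes into the REE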
To inform Network Users how to develop their applications and data, the mapping between the UA, the TA, and the implementations is shown in Figure 2.¶
In this use case, the PD is a separate package, while the UA and TA are integrated into one package. If the Network User provides packages in this form, the TEEP process is as follows (a non-normative sketch in Python appears after the steps).¶
The Network User requests confidential computing resources from the Network M/OC.¶
The M/OC orchestrates a confidential computing device to fulfill the request.¶
The Network User transfers the UA and TA to the confidential computing device via the TAM. The TAM then deploys the two applications in the REE and the TEE, respectively. (In SGX, the UA must be deployed first; the UA then loads the TA into SGX.)¶
The TAM requests remote attestation from the TEEP Agent, and the TEEP Agent sends its evidence to the TAM. The TAM acts as the Verifier in the RATS architecture.¶
After verification, the Network User, acting as the Relying Party, receives the attestation result. If the result is positive, the Network User establishes a secure channel with the TA and deploys the personalization data to the TA.¶
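A non-normative sketch of this flow is shown below; the object and method names are assumptions made for illustration only.¶

   # Non-normative sketch of this use case: UA and TA in one package, PD
   # delivered separately. All object and method names are hypothetical.
   def deploy_ua_ta_then_pd(user, moc, tam, agent, app_package, pd):
       moc.orchestrate(user.request_cc_resource())  # request and orchestration
       tam.deploy_ua(app_package.ua)                # UA into the REE
       tam.deploy_ta(app_package.ta)                # TA into the TEE (in SGX, the UA loads the TA)
       evidence = agent.attest(tam.challenge())     # TAM acts as Verifier
       result = tam.verify(evidence)
       if not user.accept(result):                  # Network User is the Relying Party
           raise RuntimeError("attestation result rejected")
       channel = user.open_secure_channel_to_ta()   # secure channel terminates inside the TA
       channel.send(pd)                             # PD delivered directly to the TA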
The mapping between the UA, the TA, and the implementations is shown in Figure 3.¶
In this case, the Network User provides the TA and PD as separate packages, with or without a UA. The TEEP process in this case is as follows (a non-normative sketch in Python appears after the steps).¶
The Network User requests confidential computing resources from the Network M/OC.¶
The TAM in the M/OC orchestrates a confidential computing device to fulfill the request.¶
The Network User transfers the UA to the TAM, and the TAM then deploys the UA in the REE.¶
The Network User transfers the TA to the TAM, and the TAM then transfers the TA to the TEEP Agent.¶
The TAM requests remote attestation from the TEEP Agent, and the TEEP Agent sends its evidence to the TAM. The TAM acts as the Verifier in the RATS architecture.¶
After verification, the Network User, acting as the Relying Party, receives the attestation result. If the result is positive, the Network User establishes a secure channel with the TA and transfers the PD to it.¶
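A non-normative sketch of this flow follows; the object and method names are illustrative assumptions.¶

   # Non-normative sketch of this use case: TA and PD as separate packages,
   # with an optional UA. All object and method names are hypothetical.
   def deploy_separate_packages(user, moc, tam, agent, ta, pd, ua=None):
       moc.orchestrate(user.request_cc_resource())  # request and orchestration (TAM in the M/OC)
       if ua is not None:
           tam.deploy_ua(ua)                        # UA into the REE
       tam.forward_to_agent(ta)                     # TA handed over to the TEEP Agent
       evidence = agent.attest(tam.challenge())     # TAM acts as Verifier
       result = tam.verify(evidence)
       if not user.accept(result):                  # Network User is the Relying Party
           raise RuntimeError("attestation result rejected")
       channel = user.open_secure_channel_to_ta()   # secure channel terminates inside the TA
       channel.send(pd)                             # PD delivered directly to the TA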
In this case, the TEEP process is as follows (a non-normative sketch in Python appears after the steps).¶
The Network User requests confidential computing resources from the Network M/OC.¶
The TAM in the M/OC orchestrates a confidential computing device to fulfill the request.¶
If there is a UA, the Network User deploys the UA in the REE.¶
The TAM requests remote attestation from the TEEP Agent, and the TEEP Agent sends its evidence to the TAM. The TAM acts as the Verifier in the RATS architecture.¶
After verification, the Network User, acting as the Relying Party, receives the attestation result. If the result is positive, the Network User establishes a secure channel with the TEEP Agent and transfers the TA and PD package to the TEEP Agent.¶
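A non-normative sketch of this flow follows; the object and method names are illustrative assumptions.¶

   # Non-normative sketch of this use case: TA and PD bundled in one package,
   # with an optional UA. All object and method names are hypothetical.
   def deploy_ta_pd_bundle(user, moc, tam, agent, ta_pd_package, ua=None):
       moc.orchestrate(user.request_cc_resource())  # request and orchestration (TAM in the M/OC)
       if ua is not None:
           user.deploy_ua(ua)                       # Network User deploys the UA in the REE
       evidence = agent.attest(tam.challenge())     # TAM acts as Verifier
       result = tam.verify(evidence)
       if not user.accept(result):                  # Network User is the Relying Party
           raise RuntimeError("attestation result rejected")
       channel = user.open_secure_channel(agent)    # secure channel to the TEEP Agent
       channel.send(ta_pd_package)                  # TA and PD delivered to the TEEP Agent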
In this case, the TEEP process is as follows (a non-normative sketch in Python appears after the steps).¶
The Network User requests confidential computing resources from the Network M/OC.¶
The TAM in the M/OC orchestrates a confidential computing device to fulfill the request.¶
The TAM requests remote attestation from the TEEP Agent, and the TEEP Agent sends its evidence to the TAM. The TAM acts as the Verifier in the RATS architecture.¶
After verification, the Network User, acting as the Relying Party, receives the attestation result. If the result is positive, the TAM deploys the TA in the TEE.¶
The TA decrypts the PD and UA package inside the TEE.¶
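A non-normative sketch of this flow follows; the object and method names are illustrative assumptions, including how the encrypted PD and UA package reaches the device.¶

   # Non-normative sketch of this use case: the TA decrypts an encrypted PD/UA
   # package inside the TEE after a positive attestation result. All object and
   # method names are hypothetical.
   def deploy_encrypted_bundle(user, moc, tam, agent, ta, encrypted_pd_ua):
       moc.orchestrate(user.request_cc_resource())  # request and orchestration (TAM in the M/OC)
       evidence = agent.attest(tam.challenge())     # TAM acts as Verifier
       result = tam.verify(evidence)
       if not user.accept(result):                  # Network User is the Relying Party
           raise RuntimeError("attestation result rejected")
       tam.deploy_ta(ta)                            # TAM deploys the TA in the TEE
       agent.run_ta(ta, encrypted_pd_ua)            # the TA decrypts PD and UA inside the TEE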
This document does not require actions by IANA.¶
Beyond the security considerations of the TEEP architecture, this document introduces no additional security or privacy issues.¶
Many thanks to Dave Thaler, Eric Voit, Hannes Tschofenig, the CCC TAC meeting, and all others who provided comments and suggestions.¶
The original design of TEEP only includes the TEEP Agent and the TA inside the TEE. In confidential computing implementations, however, other submodules may also be present in the TEE. In TEEP, these submodules can be covered by the TEEP Agent.¶
In SGX-based confidential computing, a submodule can provide a convenient environment or API so that the TA does not have to modify its source code to fit the SGX instruction set. Submodules such as Gramine and Occlum are examples that could be included in the TEEP Agent. If there is no such submodule in the TEEP Agent, the TA and UA need to be customized applications built for the SGX architecture.¶
In SEV and other architectures that support a whole guest VM as a TEE, the TEEP Agent does not need an extra submodule to act as middleware or an API layer. However, with submodules such as Enarx, which works as a runtime JIT compiler, the TA can be deployed in a hardware-independent way. In this scenario, the TA can be deployed on different hardware architectures without recompiling.¶
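As an illustration, a deployer might select a submodule based on the target hardware, as in the non-normative sketch below. The mapping is an assumption that only mirrors the examples above and does not cover all products.¶

   # Non-normative sketch: choosing a submodule (runtime) for a TA based on the
   # target hardware. The mapping is illustrative only.
   def select_submodule(hardware: str, ta_is_sgx_aware: bool) -> str:
       if hardware == "SGX":
           # A library OS such as Gramine or Occlum lets an unmodified TA run in SGX.
           if ta_is_sgx_aware:
               return "native SGX enclave, no extra submodule"
           return "library OS submodule (e.g., Gramine or Occlum)"
       if hardware in ("SEV-SNP", "TDX", "CCA", "CSV"):
           # Whole-VM TEEs need no extra middleware; a runtime such as Enarx can
           # additionally make the TA hardware independent (no recompiling).
           return "whole-VM TEE, optionally a runtime such as Enarx"
       return "unsupported hardware"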