Module blindai.client

Functions

def connect(self, addr: Optional[str] = None, server_name: str = 'blindai-srv', policy: Optional[str] = None, certificate: Optional[str] = None, simulation: bool = False, untrusted_port: int = 50052, attested_port: int = 50051, debug_mode=False, api_key: Optional[str] = None)

Connect to the server with the specified parameters. If you are using hardware mode, you must specify the expected policy (server identity, configuration…) and the server TLS certificate.

If you want to use Mithril Security Cloud, you don't need to specify the address, the policy or the certificate: this information will be retrieved automatically.

If you're using simulation mode, you don't need to provide a policy or certificate, but keep in mind that this mode should NEVER be used in production, as it lacks most of the security guarantees provided by hardware mode.

Security & confidentiality warnings:
policy: Defines the rules under which enclaves are accepted (after quote data verification). It contains the MRENCLAVE hash, which identifies the code and data of an enclave. If this file leaks, data and model confidentiality are not affected, since the information only serves as a verification check: the attestation info from the quote is verified against the policy, and code and data inside the secure enclave remain inaccessible.
certificate: The certificate file, also generated on the server side, is used to assign the claims the policy is checked against. It identifies the server when creating the secure channel and beginning the attestation process.

Args

addr : str
The address of the BlindAI server you want to reach. If you don't specify one, you will automatically be connected to Mithril Security Cloud.
server_name : str, optional
Contains the CN expected by the server TLS certificate. Defaults to "blindai-srv".
policy : Optional[str], optional
Path to the toml file describing the policy of the server. Generated on the server side. Defaults to None. Ignored in simulation mode or when connecting to Mithril Security Cloud. If left as None in hardware mode, the built-in policy (matching version 0.5 of the server, or the Mithril Security Cloud enclave) will be used.
certificate : Optional[str], optional
Path to the public key of the untrusted inference server. Generated on the server side. Defaults to None. Ignored in simulation mode or when connecting to Mithril Security Cloud. If left as None in hardware mode, certificate verification will be disabled.
simulation : bool, optional
Connect to the server in simulation mode. If set to True, the args policy and certificate will be ignored. Defaults to False.
untrusted_port : int, optional
Untrusted connection server port. Defaults to 50052.
attested_port : int, optional
Attested connection server port. Defaults to 50051.
debug_mode : bool, optional
Prints debug messages and also turns on gRPC log messages. Defaults to False.
api_key : str, optional
Key to upload and use your models on Mithril Security Cloud. This parameter is not needed if you want to use the public models, or if you want to deploy the server yourself.

Raises

AttestationError
Will be raised if the policy doesn't match the server configuration, or if the attestation is invalid.
NotAnEnclaveError
Will be raised if the enclave claims are not validated by the hardware provider, meaning that the claims cannot be verified using the hardware root of trust.
IdentityError
Will be raised if the enclave code signature hash does not match the signature hash provided in the policy.
DebugNotAllowedError
Will be raised if the enclave is in debug mode but the provided policy doesn't allow debug mode.
HardwareModeUnsupportedError
Will be raised if the server is in simulation mode but a hardware-mode attestation was requested from it.
ConnectionError
Will be raised if the connection with the server fails.
VersionError
Will be raised if the version of the server is not supported by the client.
FileNotFoundError
Will be raised if the policy file or the certificate file is not found (in hardware mode).
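
For example, a minimal connection sketch. This is a hedged illustration: the address and file paths are placeholders, and it assumes connect is exposed at the module level and returns a Connection.

import blindai.client

# Hardware mode: verify the enclave against the policy and TLS certificate
# generated on the server side (paths below are placeholders).
client = blindai.client.connect(
    addr="localhost",
    policy="policy.toml",
    certificate="host_server.pem",
)

# Simulation mode: no policy or certificate needed. NEVER use in production.
client = blindai.client.connect(addr="localhost", simulation=True)
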
def dtype_to_numpy(dtype: ModelDatumType) ‑> str
def dtype_to_torch(dtype: ModelDatumType) ‑> str
def raise_exception_if_conn_closed(f)

Decorator which raises an exception if the Connection is closed before the decorated method is called.
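
A minimal sketch of what such a decorator could look like; the implementation and error message below are assumptions, chosen to match the ValueError documented in the Raises sections of the Connection methods.

from functools import wraps

def raise_exception_if_conn_closed(f):
    # Hypothetical sketch: check the connection state before each call.
    @wraps(f)
    def wrapper(self, *args, **kwargs):
        if self.closed:
            raise ValueError("Connection is closed")
        return f(self, *args, **kwargs)
    return wrapper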

def translate_dtype(dtype)
def translate_tensors(tensors, dtypes, shapes)

Classes

class Connection (addr: Optional[str] = None, server_name: str = 'blindai-srv', policy: Optional[str] = None, certificate: Optional[str] = None, simulation: bool = False, untrusted_port: int = 50052, attested_port: int = 50051, debug_mode=False, api_key: Optional[str] = None)

Connect to the server with the specified parameters. If you are using hardware mode, you must specify the expected policy (server identity, configuration…) and the server TLS certificate.

If you want to use Mithril Security Cloud, you don't need to specify the address, the policy or the certificate: this information will be retrieved automatically.

If you're using simulation mode, you don't need to provide a policy or certificate, but keep in mind that this mode should NEVER be used in production, as it lacks most of the security guarantees provided by hardware mode.

Security & confidentiality warnings:
policy: Defines the rules under which enclaves are accepted (after quote data verification). It contains the MRENCLAVE hash, which identifies the code and data of an enclave. If this file leaks, data and model confidentiality are not affected, since the information only serves as a verification check: the attestation info from the quote is verified against the policy, and code and data inside the secure enclave remain inaccessible.
certificate: The certificate file, also generated on the server side, is used to assign the claims the policy is checked against. It identifies the server when creating the secure channel and beginning the attestation process.

Args

addr : str
The address of the BlindAI server you want to reach. If you don't specify one, you will automatically be connected to Mithril Security Cloud.
server_name : str, optional
Contains the CN expected by the server TLS certificate. Defaults to "blindai-srv".
policy : Optional[str], optional
Path to the toml file describing the policy of the server. Generated on the server side. Defaults to None. Ignored in simulation mode or when connecting to Mithril Security Cloud. If left as None in hardware mode, the built-in policy (matching version 0.5 of the server, or the Mithril Security Cloud enclave) will be used.
certificate : Optional[str], optional
Path to the public key of the untrusted inference server. Generated on the server side. Defaults to None. Ignored in simulation mode or when connecting to Mithril Security Cloud. If left as None in hardware mode, certificate verification will be disabled.
simulation : bool, optional
Connect to the server in simulation mode. If set to True, the args policy and certificate will be ignored. Defaults to False.
untrusted_port : int, optional
Untrusted connection server port. Defaults to 50052.
attested_port : int, optional
Attested connection server port. Defaults to 50051.
debug_mode : bool, optional
Prints debug messages and also turns on gRPC log messages. Defaults to False.
api_key : str, optional
Key to upload and use your models on Mithril Security Cloud. This parameter is not needed if you want to use the public models, or if you want to deploy the server yourself.

Raises

AttestationError
Will be raised if the policy doesn't match the server configuration, or if the attestation is invalid.
NotAnEnclaveError
Will be raised if the enclave claims are not validated by the hardware provider, meaning that the claims cannot be verified using the hardware root of trust.
IdentityError
Will be raised if the enclave code signature hash does not match the signature hash provided in the policy.
DebugNotAllowedError
Will be raised if the enclave is in debug mode but the provided policy doesn't allow debug mode.
HardwareModeUnsupportedError
Will be raised if the server is in simulation mode but a hardware-mode attestation was requested from it.
ConnectionError
Will be raised if the connection with the server fails.
VersionError
Will be raised if the version of the server is not supported by the client.
FileNotFoundError
Will be raised if the policy file or the certificate file is not found (in hardware mode).

Ancestors

  • contextlib.AbstractContextManager
  • abc.ABC

Class variables

var attestation : Optional[untrusted_pb2.GetSgxQuoteWithCollateralReply]
var client_info : securedexchange_pb2.ClientInfo
var closed : bool
var enclave_signing_key : Optional[bytes]
var input_specs : Optional[List[List[Any]]]
var output_specs : Optional[List[ModelDatumType]]
var policy : Optional[Policy]
var server_version : Optional[str]
var simulation_mode : bool

Methods

def close(self)

Close the connection between the client and the inference server. This method has no effect if the connection is already closed.
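
Since Connection is a context manager (see Ancestors above), closing can also be handled by a with block. A hedged sketch, assuming connect returns a Connection and that leaving the block closes it:

with blindai.client.connect(addr="localhost", simulation=True) as client:
    ...  # use the connection here
# the connection is closed automatically when the block exits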

def delete_model(self, model_id: str) ‑> DeleteModelResponse

Delete a model on the inference server. This may be used to free up some memory. If you did not ask for your model to be saved on the server, note that the model is only present in memory and will disappear when the server closes.

Security & confidentiality warnings: model_id: If you are using this on the Mithril Security Cloud, you can only delete models that you uploaded. Otherwise, deleting a model relies only on the model_id; no session token or other credential is involved, so anyone who knows the model_id can delete the model.

Args

model_id : str
The id of the model to remove.

Raises

ConnectionError
Will be raised if the client is not connected or if an error happens during the connection.
ValueError
Will be raised if the connection is closed.

Returns

DeleteModelResponse
The response object.
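
A hedged usage sketch (the model id below is a placeholder):

client.delete_model("my-model-id")  # hypothetical id of a model you uploaded
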
def predict(self, model_id: str, tensors: Union[List[List[Any]], List[Any]], dtype: Union[List[ModelDatumType], ModelDatumType, None] = None, shape: Union[List[List[int]], List[int], None] = None, sign: bool = False) ‑> PredictResponse

Send data to the server to make a secure inference.

The data provided must be in a list, as the tensor will be rebuilt inside the server.

Security & confidentiality warnings: model_id: hash of the uploaded Onnx model. The hash is returned via gRPC through the proto files; it is a SHA-256 hash generated each time a model is uploaded. tensors: protected in transit and while running inside the secure enclave. Even with a compromised OS, the data remains isolated and confidential by SGX design. sign: enabling sign turns on DCAP attestation to verify the SGX attestation model, which relies on the Elliptic Curve Digital Signature Algorithm (ECDSA).

Args

model_id : str
The id of the model to run.
tensors : Union[List[Any], List[List[Any]]]
The input data: a list of numpy arrays, tensors, or flat lists whose element type matches the datum_type specified in upload_model.
dtype : Union[List[ModelDatumType], ModelDatumType], optional
The datum type of the data you want to upload. Only required when uploading flat lists; ignored when uploading numpy arrays or tensors (the information is extracted directly from them).
shape : Union[List[List[int]], List[int]], optional
The shape of the data you want to upload. Only required when uploading flat lists; ignored when uploading numpy arrays or tensors (the information is extracted directly from them).
sign : bool, optional
Get signed responses from the server or not. Defaults to False.

Raises

ConnectionError
Will be raised if the client is not connected.
SignatureError
Will be raised if the response signature is invalid.
ValueError
Will be raised if the connection is closed.

Returns

PredictResponse
The response object.
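
A hedged usage sketch for a model taking a single flat float32 input; the model id and shape are placeholders, and it assumes ModelDatumType is importable from blindai.client:

from blindai.client import ModelDatumType

response = client.predict(
    "my-model-id",             # hypothetical model id
    [0.0] * 784,               # flat input list
    dtype=ModelDatumType.F32,  # required for flat lists
    shape=[1, 784],
)
print(response.output[0].as_flat())
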
def upload_model(self, model: str, input_specs: Optional[List[Tuple[List[int], ModelDatumType]]] = None, output_specs: Optional[List[ModelDatumType]] = None, shape: Tuple = None, dtype: ModelDatumType = None, dtype_out: ModelDatumType = None, sign: bool = False, model_id: Optional[str] = None, save_model: bool = True) ‑> UploadModelResponse

Upload an inference model to the server. The provided model needs to be in the Onnx format.

Security & confidentiality warnings: model: The model, sent in Onnx format, is encrypted in transit via TLS (as are all connections). It may be subject to inference attacks if an adversary is able to query the trained model repeatedly to determine whether a particular example was part of the training dataset. sign: enabling sign turns on DCAP attestation verification for the SGX attestation model, which relies on the Elliptic Curve Digital Signature Algorithm (ECDSA).

Args

model : str
Path to Onnx model file.
input_specs : List[Tuple[List[int], ModelDatumType]], optional
The list of (shape, datum type) pairs, one per model input, describing the different inputs of the model. Can usually be left as None, as the server will retrieve this information directly from the model, if available.
output_specs : List[ModelDatumType], optional
The list of datum types describing the different outputs of the model. Can usually be left as None, as the server will retrieve this information directly from the model, if available.
shape : Tuple, optional
The shape of the model input. Ignored for models with multiple inputs. Can usually be left as None, as the server will retrieve this information directly from the model, if available.
dtype : ModelDatumType, optional
The type of the model input data (f32 by default). Ignored for models with multiple inputs. Can usually be left as None, as the server will retrieve this information directly from the model, if available.
dtype_out : ModelDatumType, optional
The type of the model output data (f32 by default). Ignored for models with multiple outputs. Can usually be left as None, as the server will retrieve this information directly from the model, if available.
sign : bool, optional
Get signed responses from the server or not. Defaults to False.
model_id : Optional[str], optional
Name of the model. By default, the server will assign a random UUID. You can call the model with the name you specify here.
save_model : bool, optional
Whether or not the model will be saved to disk in the server. The model will be saved encrypted (sealed) so that only the server enclave can load it afterwards. Defaults to True.

Raises

ConnectionError
Will be raised if the client is not connected.
FileNotFoundError
Will be raised if the model file is not found.
SignatureError
Will be raised if the response signature is invalid.
ValueError
Will be raised if the connection is closed.

Returns

UploadModelResponse
The response object.
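
A hedged usage sketch (the file path and model id below are placeholders):

response = client.upload_model(
    model="model.onnx",      # path to an Onnx file
    model_id="my-model-id",  # optional; otherwise the server assigns a UUID
    sign=True,               # ask for a signed response
)
print(response.model_id)
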
class DeleteModelResponse
class PredictResponse (input_tensors: Union[List[List[Any]], List[Any]] = None, input_datum_type: Union[List[ModelDatumType], ModelDatumType] = None, input_shape: Union[List[List[int]], List[int]] = None, response: securedexchange_pb2.RunModelReply = None, sign: bool = True, attestation: Optional[untrusted_pb2.GetSgxQuoteWithCollateralReply] = None, enclave_signing_key: Optional[bytes] = None, allow_simulation_mode: bool = False)

Contains the inference calculated by the server, alongside the data needed to verify the integrity of the data sent.

Ancestors

  • SignedResponse

Class variables

var inference_time : int

Time spent to do the inference on the server. Will be set to 0 if the server does not share this data.

var model_id : str

Model id of the model on the server.

var output : List[Tensor]

Contains the inference calculated by the server. Acts as an array: to extract the first prediction, use [0]. Each element can be converted to a Torch tensor, a numpy array, or a flat list.

Methods

def validate(self, model_id: str, tensors: Union[List[List[Any]], List[Any]], dtype: Union[List[ModelDatumType], ModelDatumType] = None, shape: Union[List[List[int]], List[int]] = None, policy_file: Optional[str] = None, policy: Optional[Policy] = None, validate_quote: bool = True, enclave_signing_key: Optional[bytes] = None, allow_simulation_mode: bool = False)

Validates this response. This is meant for responses you have saved as bytes or in a file. It will raise an error if the response is not signed or if it is not valid.

Security & confidentiality warnings:
validate_quote and enclave_signing_key: when quote validation is enabled, the enclave_signing_key is derived directly from the certificate and the policy file; otherwise the key you provide is used. The hash of the enclave_signing_key is then checked as the MRSIGNER hash. When simulation mode is off, the attestation is verified, and only in that case is the data processed.

Args

model_id : str
The model id to check against.
tensors : List[Any]
Input used to run the model, to validate against.
policy_file : Optional[str], optional
Path to the policy file. Defaults to None.
policy : Optional[Policy], optional
Policy to use. Use policy_file to load from a file directly. Defaults to None.
validate_quote : bool, optional
Whether or not the attestation should be validated too. Defaults to True.
enclave_signing_key : Optional[bytes], optional
Enclave signing key in case the attestation should not be validated. Defaults to None.
allow_simulation_mode : bool, optional
Whether or not simulation mode responses should be accepted. Defaults to False.

Raises

AttestationError
Attestation is invalid.
SignatureError
Signed response is invalid.
FileNotFoundError
Will be raised if the policy file is not found.
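
A hedged offline-validation sketch; the file names, model id, and inputs are placeholders, and it assumes PredictResponse is importable from blindai.client:

from blindai.client import PredictResponse

res = PredictResponse()
res.load_from_file("response.proof")  # hypothetical saved response
res.validate(
    "my-model-id",  # model id to check against
    [0.0] * 784,    # the inputs that were sent
    policy_file="policy.toml",
)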

Inherited members

  • SignedResponse: as_bytes, is_signed, is_simulation_mode, load_from_bytes, load_from_file, save_to_file

class ResponseProof (*args, **kwargs)

A ProtocolMessage

Ancestors

  • google.protobuf.pyext._message.CMessage
  • google.protobuf.message.Message

Class variables

var DESCRIPTOR

Instance variables

var attestation

Field proof_files.ResponseProof.attestation

var payload

Field proof_files.ResponseProof.payload

var signature

Field proof_files.ResponseProof.signature

class SignedResponse

Subclasses

  • PredictResponse
  • UploadModelResponse

Class variables

var attestation : Optional[untrusted_pb2.GetSgxQuoteWithCollateralReply]

Contains the attestation provided by the enclave, if connected to a server in hardware mode.

var payload : Optional[bytes]

Raw protobuf object of the response from the server. Used to verify that the response was not altered by a third party.

var signature : Optional[bytes]

Signature of the payload made by the server. Allows verifying that the object was not changed by a third party.

Methods

def as_bytes(self) ‑> bytes

Save the response as bytes. The response can later be loaded with:

res = SignedResponse()
res.load_from_bytes(data)

Returns

bytes
The data.
def is_signed(self) ‑> bool
def is_simulation_mode(self) ‑> bool
def load_from_bytes(self, b: bytes)

Load the response from bytes.

Args

b : bytes
The data.
def load_from_file(self, path: str)

Load the response from a file.

Args

path : str
Path of the file.
def save_to_file(self, path: str)

Save the response to a file. The response can later be loaded with:

res = SignedResponse()
res.load_from_file(path)

Args

path : str
Path of the file.
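
For example, a hedged round-trip sketch (the path is a placeholder): save a signed response now and reload it later, e.g. for offline validation or auditing.

response.save_to_file("response.proof")
# ... later, possibly on another machine ...
res = SignedResponse()
res.load_from_file("response.proof")
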
class Tensor (info: TensorInfo, bytes_data: bytes)

Class variables

var bytes_data : bytes
var info : TensorInfo

Instance variables

var datum_type : ModelDatumType
var shape : tuple

Methods

def as_flat(self) ‑> list

Convert the prediction calculated by the server to a flat Python list.

def as_numpy(self)

Convert the prediction calculated by the server to a numpy array.

def as_torch(self)

Convert the prediction calculated by the server to a Torch Tensor.
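
A hedged conversion sketch, assuming response is a PredictResponse returned by Connection.predict:

tensor = response.output[0]
flat = tensor.as_flat()   # plain Python list
arr = tensor.as_numpy()   # numpy array
t = tensor.as_torch()     # Torch tensor (requires torch to be installed)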

class TensorInfo (dims: List[int], datum_type: ModelDatumType, index: int, index_name: str)

Class variables

var datum_type : ModelDatumType
var dims : List[int]
var index : int
var index_name : str
class UploadModelResponse

Ancestors

  • SignedResponse

Class variables

var model_id : str

Methods

def validate(self, model_hash: bytes, policy_file: Optional[str] = None, policy: Optional[Policy] = None, validate_quote: bool = True, enclave_signing_key: Optional[bytes] = None, allow_simulation_mode: bool = False)

Validates this response. This is meant for responses you have saved as bytes or in a file. It will raise an error if the response is not signed or if it is not valid.

Security & confidentiality warnings:
validate_quote and enclave_signing_key: when quote validation is enabled, the enclave_signing_key is derived directly from the certificate and the policy file; otherwise the key you provide is used. The hash of the enclave_signing_key is then checked as the MRSIGNER hash.

Args

model_hash : bytes
Hash of the model to verify against.
policy_file : Optional[str], optional
Path to the policy file. Defaults to None.
policy : Optional[Policy], optional
Policy to use. Use policy_file to load from a file directly. Defaults to None.
validate_quote : bool, optional
Whether or not the attestation should be validated too. Defaults to True.
enclave_signing_key : Optional[bytes], optional
Enclave signing key in case the attestation should not be validated. Defaults to None.
allow_simulation_mode : bool, optional
Whether or not simulation mode responses should be accepted. Defaults to False.

Raises

AttestationError
Attestation is invalid.
SignatureError
Signed response is invalid.
FileNotFoundError
Will be raised if the policy file is not found.
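
A hedged offline-validation sketch; the file names are placeholders, and it assumes model_hash is the raw SHA-256 digest of the uploaded Onnx file and that UploadModelResponse is importable from blindai.client:

import hashlib

from blindai.client import UploadModelResponse

with open("model.onnx", "rb") as f:
    model_hash = hashlib.sha256(f.read()).digest()

res = UploadModelResponse()
res.load_from_file("upload.proof")  # hypothetical saved response
res.validate(model_hash, policy_file="policy.toml")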

Inherited members

  • SignedResponse: as_bytes, is_signed, is_simulation_mode, load_from_bytes, load_from_file, save_to_file