Module blindai.client
Functions
def connect(addr: str, unattested_server_port: int = 9923, attested_server_port: int = 9924, model_management_port: int = 9924, hazmat_manifest_path: Optional[pathlib.Path] = None, hazmat_http_on_unattested_port=False, simulation_mode: bool = False, use_cloud_manifest: bool = False) ‑> BlindAiConnection
-
Connect to a BlindAI server.
Args
    addr (str): The address of the BlindAI server you want to connect to. It can be a domain (such as "example.com" or "localhost") or an IP address.
    unattested_server_port (int, optional): The unattested server port number. Defaults to 9923.
    attested_server_port (int, optional): The attested server port number. Defaults to 9924.
    model_management_port (int, optional): The model management port. Needs to be specified if the server only accepts model upload/deletion locally. Defaults to 9924.
    hazmat_manifest_path (Optional[pathlib.Path], optional): Path to the Manifest.toml which describes which enclaves are to be accepted. Defaults to the built-in Manifest.toml provided by Mithril Security as part of the Python package. You can override the default by providing a path to your own Manifest.toml. Caution: changing the manifest can impact the security of the solution.
    hazmat_http_on_unattested_port (bool, optional): If set to True, the client will request the attestation elements of the server over plain HTTP instead of the more secure HTTPS. Defaults to False. Caution: this parameter should never be set to True in production. Using an HTTPS connection is critical to get a graceful degradation in case of a failure of the Intel SGX attestation.
    simulation_mode (bool, optional): If set to True, BlindAI will work in simulation mode. Caution: in simulation mode, BlindAI provides no security, since there is no SGX enclave. This mode should never be enabled in production. Defaults to False (production mode).
    use_cloud_manifest (bool, optional): If set to True, the manifest for the local model management version (i.e. the cloud version) will be used.
Raises
    requests.exceptions.RequestException: If a network or server error occurs.
    ValueError: Raised when input sanity checks fail.
    IdentityError: Raised when the enclave signature does not match the signature expected in the manifest.
    EnclaveHeldDataError: Raised when the expected enclave held data does not match the one in the quote.
    QuoteValidationError: Raised when the returned quote is invalid (TCB outdated, not signed by the hardware provider, etc.).
    AttestationError: Raised when the attestation is not valid (enclave settings mismatch, debug mode not allowed, etc.).
Returns
    BlindAiConnection: An object representing an active connection to a BlindAI server.
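A minimal usage sketch is shown below, assuming a BlindAI server reachable on localhost with the default ports. Since BlindAiConnection inherits from contextlib.AbstractContextManager, it can be used in a with statement:

```python
from blindai.client import connect

# Production: attest the enclave using the built-in Manifest.toml,
# and use the connection as a context manager so it is closed cleanly.
with connect(addr="localhost") as client:
    ...  # client.upload_model(...) / client.run_model(...) calls go here

# Local testing only: simulation mode skips SGX attestation entirely
# and therefore provides no security guarantees.
client = connect(addr="localhost", simulation_mode=True)
client.close()
```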
def deserialize_tensor(data: bytes, type: ModelDatumType) ‑> numpy.ndarray
def dtype_to_numpy(dtype: ModelDatumType) ‑> str
-
Convert a ModelDatumType to a numpy type.
Raises
    ValueError: If numpy doesn't support dtype.
def dtype_to_torch(dtype: ModelDatumType) ‑> str
-
Convert a ModelDatumType to a torch type.
Raises
    ValueError: If torch doesn't support dtype.
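For illustration, a sketch of how these two helpers might be used. The exact strings returned are assumptions; the signatures only guarantee a str naming the numpy/torch type:

```python
from blindai.client import ModelDatumType, dtype_to_numpy, dtype_to_torch

# Map a BlindAI datum type to the name of the corresponding numpy type.
np_name = dtype_to_numpy(ModelDatumType.F32)     # presumably "float32"

# Same mapping, but for torch types.
torch_name = dtype_to_torch(ModelDatumType.I64)  # presumably "int64"

# Both raise ValueError for a datum type the target library can't represent.
```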
def serialize_tensor(tensor: numpy.ndarray, type: ModelDatumType) ‑> bytes
def translate_dtype(dtype: Any) ‑> ModelDatumType
-
Convert torch, numpy, or literal types to a ModelDatumType.
Raises
    ValueError: If dtype is erroneous or not supported.
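A sketch of the reverse direction, assuming numpy dtypes are among the accepted "torch, numpy, or literal types":

```python
import numpy as np

from blindai.client import ModelDatumType, translate_dtype

# A numpy dtype should map to the matching ModelDatumType member.
dt = translate_dtype(np.float32)  # expected: ModelDatumType.F32

# Unsupported or nonsensical inputs raise ValueError.
try:
    translate_dtype("not-a-dtype")
except ValueError:
    pass
```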
def translate_tensor(tensor: Any, or_dtype: ModelDatumType, or_shape: Tuple, name=None) ‑> Tensor
-
Wrap a flat, numpy, or torch tensor in a Tensor object.
Args
    tensor: A flat, numpy, or torch tensor.
    or_dtype: dtype of the tensor. Ignored if the tensor isn't flat.
    or_shape: Shape of the tensor. Ignored if the tensor isn't flat.
Raises
    ValueError: If the tensor format is not one of flat/numpy/torch.
    ValueError: If the tensor's dtype is not supported.
Returns
    Tensor: The serialized tensor.
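A sketch under the rules above (or_dtype and or_shape are only consulted for flat lists; the None placeholders for numpy input are an assumption):

```python
import numpy as np

from blindai.client import ModelDatumType, translate_tensor

# numpy input: dtype and shape come from the array itself, so the
# or_* hints are ignored and can be left as None.
t_np = translate_tensor(np.zeros((2, 3), dtype=np.float32), None, None)

# flat-list input: dtype and shape must be provided explicitly.
t_flat = translate_tensor([1.0, 2.0, 3.0, 4.0], ModelDatumType.F32, (2, 2))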
def translate_tensors(tensors, dtypes, shapes) ‑> List[dict]
-
Wrap flat, numpy, or torch tensors in a list of Tensor objects.
Args
    tensors: List or dict of flat, numpy, or torch tensors.
    dtypes: List or dict of dtypes of the tensors. Ignored if the tensors aren't flat.
    shapes: List or dict of shapes of the tensors. Ignored if the tensors aren't flat.
Returns
    List[dict]: The serialized tensors as a list of dicts.
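A sketch mirroring translate_tensor, with one dtype and one shape per tensor (the None placeholders for numpy input are an assumption):

```python
import numpy as np

from blindai.client import ModelDatumType, translate_tensors

# numpy inputs: dtypes/shapes are inferred, so None placeholders suffice.
payload = translate_tensors([np.ones((1, 4), dtype=np.float32)], None, None)

# flat-list inputs: one dtype and one shape per tensor must be supplied.
payload = translate_tensors(
    [[1.0, 2.0, 3.0, 4.0]],
    [ModelDatumType.F32],
    [[1, 4]],
)
```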
Classes
class BlindAiConnection (addr: str, unattested_server_port: int, attested_server_port: int, model_management_port: int, hazmat_manifest_path: Optional[pathlib.Path], hazmat_http_on_unattested_port: bool, simulation_mode: bool, use_cloud_manifest: bool)
-
A class to represent a connection to a BlindAI server.
Connect to a BlindAI service.
Please refer to the connect function for documentation.
Ancestors
- contextlib.AbstractContextManager
- abc.ABC
Methods
def close(self)
def delete_model(self, model_id: str)
-
Delete a model on the inference server.
This may be used to free up some memory. If you did not specify that you wanted your model to be saved on the server, note that the model is only kept in memory and will disappear when the server shuts down.
Security & confidentiality warnings:
    model_id: The deletion of a model relies only on the model_id. It does not rely on a session token or anything else, so anyone who knows the model_id can delete the model.
Args
    model_id (str): The id of the model to remove.
Raises
    HttpError: Raised by the requests lib to relay server-side errors.
    ValueError: Raised when input sanity checks fail.
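A sketch, assuming client is an open BlindAiConnection and reusing the model_id returned by upload_model (the file path is illustrative):

```python
# Upload a model, run inferences, then free the server-side memory.
uploaded = client.upload_model("path/to/model.onnx")
# ... inferences with client.run_model(model_id=uploaded.model_id, ...) ...
client.delete_model(uploaded.model_id)
```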
def run_model(self, model_id: str = '', model_hash: str = '', input_tensors: Union[List, Dict, ForwardRef(None)] = None, dtypes: Optional[List[ModelDatumType]] = None, shapes: Union[List[List[int]], List[int], ForwardRef(None)] = None) ‑> RunModelResponse
-
Send data to the server to make a secure inference.
The data provided must be in a list, as the tensors will be rebuilt inside the server.
Security & confidentiality warnings:
    model_id: Hash of the uploaded ONNX model. The hash is returned via gRPC through the proto files. It is a SHA-256 hash that is generated each time a model is uploaded.
    tensors: Protected in transit and protected while running on the secure enclave. In the case of a compromised OS, the data is isolated and kept confidential by SGX design.
Args
    model_id (str): If set, will run a specific model.
    model_hash (str): Hash of the uploaded ONNX model. If no uuid was provided, the server will try to find a model matching this hash.
    input_tensors (Union[List, Dict], optional): The input data. It must be a list of numpy arrays, torch tensors, or flat lists of the same type as the datum_type specified in upload_model.
    dtypes (Union[List[ModelDatumType], ModelDatumType], optional): The type of the data you want to upload. Only required if you are uploading flat lists; ignored if you are uploading numpy arrays or torch tensors (this info is extracted directly from them).
    shapes (Union[List[List[int]], List[int]], optional): The shape of the data you want to upload. Only required if you are uploading flat lists; ignored if you are uploading numpy arrays or torch tensors (this info is extracted directly from them).
Raises
    HttpError: Raised by the requests lib to relay server-side errors.
    ValueError: Raised when input sanity checks fail.
Returns
    RunModelResponse: The response object.
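A sketch, assuming client is an open BlindAiConnection, uploaded is the UploadResponse from upload_model, and a model taking a (1, 4) float32 input (the shape is purely illustrative):

```python
import numpy as np

from blindai.client import ModelDatumType

# numpy input: dtypes/shapes are extracted from the array itself.
response = client.run_model(
    model_id=uploaded.model_id,
    input_tensors=[np.zeros((1, 4), dtype=np.float32)],
)
prediction = response.output[0].as_numpy()

# flat-list input: dtypes and shapes must be passed explicitly.
response = client.run_model(
    model_id=uploaded.model_id,
    input_tensors=[[0.0, 0.0, 0.0, 0.0]],
    dtypes=[ModelDatumType.F32],
    shapes=[[1, 4]],
)
```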
def upload_model(self, model: str, model_name: Optional[str] = None, optimize: bool = True) ‑> UploadResponse
-
Upload an inference model to the server.
The provided model needs to be in the ONNX format.
Security & confidentiality warnings:
    model: The model, sent in ONNX format, is encrypted in transit via TLS (as are all connections). It may be subject to inference attacks if an adversary is able to query the trained model repeatedly to determine whether or not a particular example was part of the training dataset.
Args
    model (str): Path to the ONNX model file.
    model_name (Optional[str], optional): Name of the model. Used for you to identify the model, but won't be used by the server (a random UUID will be assigned to your model for the inferences).
    optimize (bool): Whether tract (our inference engine) should optimize the model or not. Optimizing should only be turned off when you are encountering issues loading your model.
Raises
    HttpError: Raised by the requests lib to relay server-side errors.
    ValueError: Raised when input sanity checks fail.
Returns
    UploadResponse: The response object.
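A sketch, assuming client is an open BlindAiConnection and a local ONNX file (the path and name are illustrative):

```python
# Upload an ONNX file; the server assigns a random UUID as model_id.
uploaded = client.upload_model("path/to/model.onnx", model_name="my-model")
print(uploaded.model_id)

# If tract has trouble loading the model, retry with optimization off.
uploaded = client.upload_model("path/to/model.onnx", optimize=False)
```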
class DeleteModel (model_id)
-
DeleteModel(model_id)
Class variables
var model_id : str
class ModelDatumType (value, names=None, *, module=None, qualname=None, type=None, start=1)
-
An enumeration.
Ancestors
- enum.IntEnum
- builtins.int
- enum.Enum
Class variables
var Bool
var F32
var F64
var I16
var I32
var I64
var I8
var U16
var U32
var U64
var U8
class RunModel (model_id, model_hash, inputs, client_info=None)
-
RunModel(model_id, model_hash, inputs, client_info=None)
Class variables
var client_info : Optional[blindai.client._ClientInfo]
var inputs : List[Tensor]
var model_hash : str
var model_id : str
class RunModelReply (**entries)
-
RunModelReply(**entries)
Class variables
var outputs : List[Any]
class RunModelResponse (output: List[Tensor])
-
RunModelResponse(output: List[blindai.client.Tensor])
Class variables
var output : List[Tensor]
class SendModelReply (**entries)
-
SendModelReply(**entries)
Class variables
var hash : bytes
var model_id : str
class SimulationModeWarning (*args, **kwargs)
-
Warning issued when the client connects in simulation mode, which provides no SGX security guarantees.
Ancestors
- builtins.Warning
- builtins.Exception
- builtins.BaseException
class Tensor (info: Union[TensorInfo, dict], bytes_data: bytes)
-
Tensor class to convert serialized tensors into convenient objects.
Class variables
var bytes_data : bytes
var info : Union[TensorInfo, dict]
Instance variables
var datum_type : ModelDatumType
var shape : tuple
Methods
def as_flat(self) ‑> list
-
Convert the prediction calculated by the server to a flat python list.
def as_numpy(self)
-
Convert the prediction calculated by the server to a numpy array.
def as_torch(self)
-
Convert the prediction calculated by the server to a Torch Tensor.
As torch is heavy, it's an optional dependency of the project and is imported only when needed.
Raises
    ImportError: If torch isn't installed.
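A sketch of the three conversions, assuming response is a RunModelResponse obtained from run_model:

```python
out = response.output[0]          # a Tensor

flat = out.as_flat()              # plain Python list
array = out.as_numpy()            # numpy.ndarray
torch_tensor = out.as_torch()     # torch.Tensor; raises ImportError if
                                  # torch isn't installed
print(out.shape, out.datum_type)  # metadata from the serialized tensor
```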
class TensorInfo (fact, datum_type, node_name=None)
-
Class variables
var datum_type : ModelDatumType
var fact : List[int]
var node_name : str
class UploadModel (model, length, client_info, model_name='', optimize=True)
-
UploadModel(model, length, client_info, model_name='', optimize=True)
Class variables
var client_info : blindai.client._ClientInfo
var length : int
var model : List[int]
var model_name : str
var optimize : bool
class UploadResponse (model_id: str, hash: bytes)
-
UploadResponse(model_id: str, hash: bytes)
Class variables
var hash : bytes
var model_id : str