Example of BlindAI deployment with Facenet¶
This example shows how you can run a Facenet model to perform Facial Recognition with confidentiality guarantees.
By using BlindAI, people can send their biometric data to be analyzed by the AI without having to fear privacy leaks.
Facenet is a state-of-the-art ResNet-based model for Facial Recognition. You can learn more about it on the Facenet repository.
Installing dependencies¶
Install the dependencies this example needs.
!pip install -q transformers[onnx] torch
Install the Facenet-pytorch library.
!pip install facenet-pytorch
Install the latest version of BlindAI.
!pip install blindai
Preparing the model¶
The first step here is to prepare the model to perform facial recognition.
To make it simpler, we will do an example where we hardcode the database of biometric templates in the neural network itself. This works if the database of people to identify is fixed. For more dynamic workloads, BlindAI can be adapted to suit this use case, but we will not cover that here.
First we load the pretrained Facenet model.
from facenet_pytorch import InceptionResnetV1
import torch
resnet = InceptionResnetV1(pretrained='vggface2').eval()
We then download the pictures of the people who will serve as our biometric database. The goal here is to use a neural network to see if a new person to be identified matches one of the three registered people.
!wget https://raw.githubusercontent.com/mithril-security/blindai/master/examples/facenet/woman_0.jpg
!wget https://raw.githubusercontent.com/mithril-security/blindai/master/examples/facenet/woman_1.jpg
!wget https://raw.githubusercontent.com/mithril-security/blindai/master/examples/facenet/woman_2.jpg
We can have a look at our dataset.
from PIL import Image
from IPython.display import display
files = [f"woman_{i}.jpg" for i in range(3)]
display(Image.open(files[0]), Image.open(files[1]), Image.open(files[2]))
Here we will do the enrollment phase, i.e. extract a template from each person, and store it. Those templates will be used as references to compute a similarity score when someone new comes in to be identified.
import numpy as np
embeddings = []
for file in files:
    # We open each file and preprocess it
    im = Image.open(file)
    im = torch.tensor(np.asarray(im)).permute(2,0,1).unsqueeze(0) / 128.0 - 1
    # We make the tensor go through the ResNet to extract a template
    embedding = resnet(im)
    embeddings.append(embedding.squeeze(0))

# We stack everything in a matrix
embeddings = torch.stack(embeddings)
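As a quick sanity check (not part of the original pipeline), we can verify that each of the three registered people yields one 512-dimensional template, and that the embeddings are unit-norm (which facenet-pytorch produces by default), so the dot product used below behaves like a cosine similarity.
# Sanity check: one 512-dimensional template per registered person,
# with unit-norm embeddings so the dot product acts as a cosine similarity.
print(embeddings.shape)        # expected: torch.Size([3, 512])
print(embeddings.norm(dim=1))  # expected: values close to 1.0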
Because the scoring will be done through a dot product of a new candidate template with the registered templates, we can implement this scoring as a matrix multiplication between the registered templates and the new template:
import torch.nn as nn
# Create the scoring layer with a matrix multiplication
scoring_layer = nn.Linear(512, 3, bias=False)
# Store the computed embeddings inside
scoring_layer.weight.data = embeddings
full_network = nn.Sequential(
    resnet,
    scoring_layer
)
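To make the construction concrete, here is an optional check (a sketch, not part of the original tutorial): a bias-free nn.Linear computes x @ W.T, so applying full_network to an image is the same as extracting its template with resnet and taking the dot product with each registered template. We reuse the first registered image as a probe.
# Optional equivalence check: the bias-free Linear layer computes x @ W.T,
# so the network's output is the dot product of a new template with each
# registered template. We probe it with the first registered image.
with torch.no_grad():
    probe = torch.tensor(np.asarray(Image.open(files[0]))).permute(2,0,1).unsqueeze(0) / 128.0 - 1
    manual_scores = resnet(probe) @ embeddings.T
    print(torch.allclose(full_network(probe), manual_scores, atol=1e-5))  # expected: True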
Before sending our model to BlindAI, we will see how it performs in practice.
Let's download a test set, containing a different picture of the second woman we registered.
!wget https://raw.githubusercontent.com/mithril-security/blindai/master/examples/facenet/woman_test.jpg
We can see below that the two pictures are indeed from the same person.
test_im = Image.open("woman_test.jpg")
display(test_im, Image.open("woman_1.jpg"))
We can now apply our full network, which will extract a template from the test image and compute a dot product between the new template and the registered templates.
test_im = torch.tensor(np.asarray(test_im)).permute(2,0,1).unsqueeze(0) / 128.0 - 1
scores = full_network(test_im)
We can see that the scores reflect the truth: the dot products of the test image's embedding with the templates of the first and third women are low, while the score is high for the second woman. This makes sense, as the neural network was trained to provide a high score for pictures of the same person and a low score for different people.
scores
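To turn these raw similarity scores into an identification decision, one would typically pick the registered identity with the highest score and compare it against a rejection threshold. The sketch below illustrates this; the 0.7 threshold is an arbitrary value chosen for illustration, not a tuned one.
# Illustrative decision rule (the threshold value is arbitrary, for illustration only).
best = int(scores.argmax())
best_score = scores[0, best].item()
if best_score > 0.7:
    print(f"Best match: woman_{best} (score {best_score:.3f})")
else:
    print("No registered identity matched confidently enough")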
Now we can export the model to be fed to BlindAI to deploy it with privacy guarantees.
torch.onnx.export(
    full_network,        # model being run
    test_im,             # model input (or a tuple for multiple inputs)
    "facenet.onnx",      # where to save the model (can be a file or file-like object)
    export_params=True,  # store the trained parameter weights inside the model file
)
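Optionally, before uploading, we can check that the exported graph reproduces the PyTorch scores. This sketch assumes onnxruntime is installed (e.g. with pip install onnxruntime); it is not required by the rest of the tutorial.
# Optional check (assumes onnxruntime is installed): run the exported model
# locally and compare its output with the PyTorch scores computed above.
import onnxruntime as ort

sess = ort.InferenceSession("facenet.onnx", providers=["CPUExecutionProvider"])
input_name = sess.get_inputs()[0].name
onnx_scores = sess.run(None, {input_name: test_im.numpy()})[0]
print(onnx_scores)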
Deployment on BlindAI¶
Now we can upload the model to BlindAI Cloud. To upload the model, make sure you have an API key.
You can get one on the Mithril Cloud.
You might get an error if the name you want to use is already taken, as models are uniquely identified by their model_id. We will implement namespaces soon to avoid this. In the meantime, you will have to choose a unique ID. We provide an example below to upload your model with a unique name:
import blindai
import uuid
api_key = "YOUR_API_KEY" # Enter your API key here
model_id = "facenet-" + str(uuid.uuid4())
# Upload the ONNX file to the remote enclave
with blindai.Connection(api_key=api_key) as client:
    response = client.upload_model("facenet.onnx", model_id=model_id)
This securely uploads the model to the Mithril Cloud. If you wish to run this example on premise, you should read the Deploy on Hardware documentation page.
Sending data for confidential prediction¶
Now it's time to check it's working live!
We will just prepare some input for the model inside the secure enclave of BlindAI to process it.
First we prepare our input data, the test image we used before.
from PIL import Image
import numpy as np
import torch
test_im = Image.open("woman_test.jpg")
test_im = torch.tensor(np.asarray(test_im)).permute(2,0,1).unsqueeze(0) / 128.0 - 1
Now we can send the biometric data to be processed confidentially!
with blindai.Connection() as client:
    response = client.predict(model_id, test_im)
As we can see below, the results are quite similar to those of the regular inference.
response.output[0].as_flat()
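If you want to compare the confidential prediction with the local scores computed earlier, a small check like the one below can be used (a sketch, assuming as_flat() returns the three similarity scores as a flat list of floats).
# Compare the enclave's output with the local PyTorch scores; small numerical
# differences between runtimes are expected.
remote_scores = np.array(response.output[0].as_flat())
local_scores = scores.detach().numpy().flatten()
print(remote_scores)
print(np.allclose(remote_scores, local_scores, atol=1e-3))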