πŸ‘‹ Welcome to BlindAI!


An AI model deployment solution which ensures users' data remains private every step of the way.

What is BlindAI?


BlindAI is an AI inference server with an added privacy layer, protecting the data sent to models.

BlindAI facilitates privacy-friendly AI model deployment by letting AI engineers upload models to, and delete them from, their secure server instance using our Python API. Clients can then connect to the server, upload their data, and run models on it without compromising on privacy. A sketch of this workflow is shown below.
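
The snippet below illustrates that workflow with the Python client. It is a minimal sketch only: the method names (connect, upload_model, run_model, delete_model), their parameters, and the model file are illustrative assumptions, not the exact SDK signatures. See the API Reference for the real calls.

```python
# Minimal sketch of the BlindAI workflow.
# NOTE: connect(), upload_model(), run_model() and delete_model() are
# hypothetical names used for illustration; check the API Reference
# for the actual client API.
import blindai

# --- AI engineer side: deploy a model to the secure server instance ---
owner = blindai.connect(addr="localhost")            # connect to the enclave server
model = owner.upload_model(model="resnet18.onnx")    # example ONNX model file

# --- Client side: send private data and run inference inside the enclave ---
client = blindai.connect(addr="localhost")
my_data = [0.0] * (3 * 224 * 224)                    # dummy input tensor, flattened
prediction = client.run_model(model_id=model.model_id, input_tensors=my_data)

# --- AI engineer side: remove the model once it is no longer needed ---
owner.delete_model(model_id=model.model_id)
```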

Data sent by users to the AI model is kept confidential at all times. Neither the AI service provider nor the cloud provider (if applicable) can see the data. Confidentiality is ensured by hardware-enforced Trusted Execution Environments. We explain how they keep data and models safe in detail here.

BlindAI is an open-source project consisting of:

  • A privacy-friendly server coded in Rust πŸ¦€ using Intel SGX (Intel Software Guard Extensions) πŸ”’ to ensure your data stays safe.
  • An easy-to-use Python client SDK 🐍.

You can check out the code on our GitHub.

We’ll update the documentation as new features come in, so dive in!

Getting started


Getting help


How is the documentation structured?


  • Getting Started takes you by the hand to install and run BlindAI. We recommend you start with the Quick tour and then move on to the installation guide!

  • API Reference contains technical references for BlindAI's API machinery. It describes how the API works and how to use it, but assumes you already have a good understanding of the key concepts.

  • Security guides contain technical information for security engineers. They explain the threat models and other cybersecurity topics required to audit BlindAI's security standards.

  • Advanced guides are intended for developers who want to dive deep into BlindAI and eventually contribute to the open-source code.

Who made BlindAI?

BlindAI was developed by Mithril Security. Mithril Security is a startup focused on confidential machine learning based on Intel SGX technology. We provide an open-source AI inference solution, allowing easy and fast deployment of neural networks. Confidential computing provides its strong security properties by performing the computation in a hardware-based Trusted Execution Environment (TEE), also called an enclave.