encloud is pleased to announce the imminent release of version 1 of its confidential ML software. With this release, our clients can trial our software to leverage LLMs without leaking data or intellectual property. As AI regulation approaches, there is an increasing focus on privacy and security, and this release will enable businesses to securely generate new proprietary insights and business optimizations while ensuring regulatory compliance.
Clients can get a sneak preview by requesting a demo, a video of which will be uploaded to the website soon.
Version 1 of the software allows users to run a “batch” job inside a secure enclave. Batch processing is a method of running high-volume, repetitive data jobs: it handles large amounts of non-continuous data quickly, minimizing or eliminating the need for user interaction. Because it automates most or all components of a processing job, batch processing improves both security and simplicity. It is well suited to managing database updates, transaction processing, and converting files from one format to another.
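To make the idea concrete, the following minimal sketch shows a batch job of the file-conversion kind described above: records are processed in fixed-size chunks with no user interaction. The job, data, and function names here are purely illustrative and are not encloud’s API.

```python
import csv
import io
import json

def run_batch_job(csv_text, chunk_size=2):
    """Convert CSV records to JSON lines in fixed-size chunks, with no
    user interaction. Illustrative only; not encloud's real interface."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    results = []
    for start in range(0, len(rows), chunk_size):
        chunk = rows[start:start + chunk_size]
        # Each chunk is handled as a unit; a real job might write each
        # chunk to storage or submit it to a secure enclave.
        results.extend(json.dumps(row) for row in chunk)
    return results

records = "id,name\n1,alpha\n2,beta\n3,gamma\n"
print(run_batch_job(records))
```

The whole job runs unattended from start to finish, which is what makes batch workloads a natural first fit for enclave execution.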
Stream processing, by contrast, is appropriate for continuous data and makes sense for systems or processes that depend on having access to data in real-time. If timeliness is critical to a process, stream processing is likely the best option. For example, companies working with connected devices such as medical equipment rely on stream processing to deliver real-time data.
With this first release, clients can query an LLM privately. This enables organizations to query an LLM with their own data without exposing that data or allowing it to be used to train the model. Multiple queries can be run efficiently and securely using batch processing.
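A sketch of how multiple private queries might be batched is shown below. The `batch_query` helper and the stand-in LLM function are assumptions for illustration; encloud’s actual client interface is not shown here.

```python
from typing import Callable, List

def batch_query(prompts: List[str],
                query_fn: Callable[[List[str]], List[str]],
                batch_size: int = 8) -> List[str]:
    """Submit prompts in fixed-size batches to a query function
    (e.g. a call into a secure enclave). Names are illustrative."""
    answers: List[str] = []
    for i in range(0, len(prompts), batch_size):
        answers.extend(query_fn(prompts[i:i + batch_size]))
    return answers

# Stand-in for a private LLM call executing inside the enclave.
def fake_enclave_llm(batch: List[str]) -> List[str]:
    return [f"answer to: {p}" for p in batch]

print(batch_query(["q1", "q2", "q3"], fake_enclave_llm, batch_size=2))
```

Batching queries this way amortizes the cost of setting up the secure execution environment across many prompts.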
Additionally, for larger and more complex data, organizations can use encloud’s software in a two-stage batch process. In the first stage, the data is converted into “embeddings”: numerical “vectors” that capture the relationships and insights contained in the data. These embeddings are stored in a vector database, which can then be used to query an LLM. Both the generation and storage of embeddings in a vector database and the subsequent querying of the LLM can be done with complete privacy assurance using encloud’s software.
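The two stages can be sketched as follows: embed documents and store the vectors (stage one), then retrieve the closest match for a query before handing it to the LLM (stage two). The toy character-frequency embedding and in-memory “vector database” below are illustrative assumptions; a real pipeline would use an embedding model and a proper vector store inside the enclave.

```python
import math

def embed(text):
    """Toy embedding: normalized character-frequency vector.
    A real pipeline would use an embedding model inside the enclave."""
    alphabet = "abcdefghijklmnopqrstuvwxyz"
    counts = [text.lower().count(c) for c in alphabet]
    norm = math.sqrt(sum(c * c for c in counts)) or 1.0
    return [c / norm for c in counts]

def cosine(u, v):
    # Vectors are pre-normalized, so the dot product is cosine similarity.
    return sum(a * b for a, b in zip(u, v))

# Stage 1 (batch): embed documents and store the vectors.
docs = ["contract renewal terms", "quarterly revenue figures", "employee handbook"]
vector_db = [(doc, embed(doc)) for doc in docs]

# Stage 2 (batch): retrieve the closest document for a query; the retrieved
# text would then be passed to the LLM as private context (call omitted).
def retrieve(query, db):
    return max(db, key=lambda item: cosine(embed(query), item[1]))[0]

print(retrieve("revenue for the quarter", vector_db))
```

Because both stages run as batch jobs, each can be executed end to end inside the secure enclave without the data ever leaving it.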
In a subsequent release of encloud’s software, stream (continuous) processing will be supported, enabling real-time querying and inference. In addition to the benefits highlighted above, this will enable multi-stage query refinement, for example through a chatbot. Both batch and stream processing will be supported when encloud releases its privacy-assured LLM fine-tuning and, later, training capabilities.