About
Security for Machine Learning in the Cloud
We're a CS theory (security/AI) research team focused on designing and engineering new verifiable computation protocols that secure the training and inference of machine learning models in the cloud.
We engineer these protocols from the ground up, optimizing them specifically for securing cloud-based machine learning.
We've published two papers chock-full of theory, proofs, and empirical results, with more on the way.
Our ultimate goal is to scale these protocols from theory to production-level environments.
Whenever we can, we embody the principles of open-source software and open-access research.
We're very grateful to have received a $15,000 seed grant from
Emergent Ventures.
This initial funding allows us to acquire powerful compute resources for our experiments,
better understand how to scale to production, and cover the operational costs of our research.
Interested in funding opportunities, talking theory/code, or using our protocols in production?