Adaptable for sensitive workflows
1B-parameter size enables efficient training of custom models that are private by design
An open model trained from the ground up with differential privacy to prevent memorization and leakage of training examples
Trained with a sequence-level differential privacy guarantee of (ε ≤ 2.0, δ ≤ 1.1e-10)
Implements novel scaling-law research that balances the tradeoffs between compute, privacy, and utility
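To make the (ε, δ) guarantee above concrete: (ε, δ)-differential privacy bounds how much any single training sequence can shift the distribution of the model's outputs. The model itself is trained with DP at the sequence level; the sketch below is only a toy illustration, using a pure ε-DP Laplace mechanism (δ = 0) on a scalar query, of what an ε = 2.0 bound means. All names and values here are illustrative assumptions, not part of the model's training pipeline.

```python
import math

def laplace_logpdf(x, mu, b):
    # Log-density of Laplace(mu, b): (1 / 2b) * exp(-|x - mu| / b)
    return -math.log(2 * b) - abs(x - mu) / b

eps = 2.0          # privacy budget, matching the epsilon in the guarantee above
sensitivity = 1.0  # assumed max change in the query when one record changes
b = sensitivity / eps  # Laplace noise scale for eps-DP

# Two adjacent datasets whose query answers differ by exactly `sensitivity`
mu_d, mu_d_prime = 10.0, 11.0

# eps-DP promises: for every output x, the log-likelihood ratio between the
# two noisy answers is at most eps. Check it empirically over a grid.
worst = max(
    abs(laplace_logpdf(x, mu_d, b) - laplace_logpdf(x, mu_d_prime, b))
    for x in (i / 10 for i in range(-100, 300))
)
assert worst <= eps + 1e-9  # no output is more than e^eps times likelier
```

With ε = 2.0, observing any single model output makes an adversary's odds about any one training sequence at most e² ≈ 7.4 times better, plus the negligible δ slack.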