We introduce VaultGemma 1B, a 1 billion parameter model in the Gemma family, trained from scratch with differential privacy. Pretrained on the same data mixture used for the Gemma 2 series, VaultGemma 1B represents a significant step forward in privacy-preserving large language models. We openly release this model to the community.