Announcing Support for Google's New Open-Source Gemma Models
February 22, 2024

Meryem Arik

We are excited to announce support for Google's newly released open-source Gemma natural language models within TitanML's Takeoff inference server!


The Gemma models come in two sizes: a 2 billion parameter model and a 7 billion parameter model. Because they are open-source, both can be freely used and distributed.

Our customers can now leverage Google's state-of-the-art natural language capabilities within their own secure environments. Whether they are a healthcare organization, a financial institution, or any other team handling sensitive data, they can deploy Gemma locally in their VPC, on-prem data center, or other private cloud infrastructure through Takeoff.
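As a rough illustration of what calling a locally deployed model looks like, the sketch below sends a prompt to an inference server running inside your own infrastructure. The endpoint path, port, and payload schema here are assumptions for the example, not Takeoff's documented API; consult the Takeoff documentation for the real interface.

```python
import json
import urllib.request

# Hypothetical local endpoint -- Takeoff's actual route, port, and
# payload fields may differ; check the Takeoff docs.
TAKEOFF_URL = "http://localhost:3000/generate"

def build_request(prompt: str, max_new_tokens: int = 128) -> dict:
    """Assemble a generation request payload (illustrative schema)."""
    return {"text": prompt, "max_new_tokens": max_new_tokens}

def generate(prompt: str) -> str:
    """Send the prompt to the locally hosted Gemma model and
    return the generated text."""
    payload = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        TAKEOFF_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["text"]

if __name__ == "__main__":
    # Data never leaves your private environment.
    print(generate("Summarize the key findings in this report: ..."))
```

Because the server runs inside your VPC or data center, prompts and completions never cross your network boundary.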

The two Gemma model sizes let you match deployment cost to your application's performance needs. The smaller 2 billion parameter model is more affordable to run and delivers lower latency for real-time use cases, while the 7 billion parameter model offers higher accuracy and capability for more complex applications.

We look forward to seeing what our customers build using Google's new Gemma models for conversational AI chatbots, content generation, search, analytics, and more through Takeoff's flexible deployment options.

Please reach out to explore how TitanML can power your AI applications.
