It’s been really great seeing what people are starting to build with Doubleword - and a lot of the feedback we’ve received so far has directly shaped what we’re shipping next.
This is a quick update on a few things we’ve just released.
New model support: Qwen3-14B, Qwen3.5 & GPT-OSS 20B
We’ve added support for two new Qwen models and OpenAI’s GPT-OSS 20B, expanding the range of capability levels teams can choose from.
Qwen3.5-397B-A17B
Pricing: $0.30 / $1.80 and $0.15 / $1.20
Qwen3.5 is a powerful step up for multimodal background agents and OpenClaw-style workflows. It handles richer reasoning and mixed-input tasks well, which makes it a strong choice for asynchronous agents that need to process context, images, or longer chains of work without requiring real-time responses.
One thing that’s been especially interesting here is the economics. Running Qwen3.5 via async inference dramatically changes the cost profile for large asynchronous workloads while maintaining the same model quality.
For equivalent workloads, this can be:
- Up to 91% cheaper vs Anthropic
- Up to 83% cheaper vs OpenAI
Async inference with Doubleword changes the economics of agentic and asynchronous workloads - making frontier-level intelligence far more accessible.
This is a big part of why we believe async inference is becoming such an important design choice for production systems.
Qwen3-14B
Pricing: $0.03 / $0.30 and $0.02 / $0.20
This model has been performing extremely well for classification tasks, lightweight reasoning, and smaller sub-agent workflows. If you’re running background processes where speed and cost efficiency matter more than frontier-level reasoning, this is a strong default option. It’s especially useful for workloads that involve large volumes of structured decisions or simple task routing.
GPT-OSS 20B
Pricing: $0.03 / $0.20 and $0.02 / $0.15
This model is designed for powerful reasoning, agentic tasks, and versatile developer use cases, expanding the suite of models within Doubleword suited to agentic workloads.
Our goal has been to offer best-in-class models at every meaningful price point - so builders can choose the right capability level for each part of their system rather than overpaying across the board.
Let us know which models you'd like to see next - email support@doubleword.ai for your suggestions.
Webhooks + notifications
We’ve also shipped webhooks and notifications.
Until now, many users were polling to check whether a batch had completed. With webhooks, you can now provide an endpoint and we’ll automatically POST updates when a batch completes or fails.
In practice, this means:
- No more polling loops
- Cleaner async workflows
- Easier integration into existing pipelines
- Simpler orchestration for agent or background systems
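To make this concrete, here is a minimal sketch of a webhook receiver, using only the Python standard library. The payload fields (`batch_id`, `status`) and status values are assumptions for illustration, not Doubleword's documented schema — check the API docs for the actual body shape.

```python
# Minimal webhook receiver sketch (stdlib only).
# NOTE: the payload fields "batch_id" and "status" are assumed for
# illustration; consult the Doubleword docs for the real schema.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def handle_event(event: dict) -> str:
    """Route a webhook event; returns a description of the action taken."""
    status = event.get("status")
    batch_id = event.get("batch_id")
    if status == "completed":
        return f"fetch results for batch {batch_id}"
    if status == "failed":
        return f"retry or alert for batch {batch_id}"
    return "ignored"


class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        handle_event(event)
        # Acknowledge quickly; do any heavy work (downloading results,
        # kicking off downstream jobs) asynchronously after responding.
        self.send_response(200)
        self.end_headers()


if __name__ == "__main__":
    HTTPServer(("", 8080), WebhookHandler).serve_forever()
```

The key design point is in the comment: respond with a 200 as soon as the event is parsed, and defer any slow follow-up work, so the sender doesn't treat a slow handler as a delivery failure.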
Try it out
If you haven’t already, try running something with the new models and see how they perform on your real workloads. We’d love to hear what works well, what doesn’t, and which models you’d like to see us support next.


