27/2 Weekly Update: Qwen3.5-35B-A3B (Higher Quality, Lower Cost)
February 27, 2026

Meryem Arik

It’s been great seeing more teams run real workloads on Doubleword over the past week.

Last week, we added Qwen3.5-397B-A17B, Qwen3-14B, and GPT-OSS-20B to the platform - and lots of users have already been putting them to work across agent workflows, extraction pipelines, and async evaluation jobs.

We’ve seen very strong early results, in particular from:

  • Qwen3.5-397B-A17B for multimodal background agents and OpenClaw-style systems
  • Qwen3-14B for high-volume classification, routing, and lightweight reasoning workloads

It’s been exciting to see how quickly these models are being tested in production-style pipelines.

This week, we’re expanding that lineup further.

New model support: Qwen3.5-35B-A3B

Pricing: $0.07 / $0.30 (High Priority) and $0.05 / $0.20 (Standard Priority)

Qwen3.5-35B-A3B is a high-intelligence, mid-sized model that hits a very compelling price/performance point for async workloads.

In Qwen’s published benchmarks, this model outperformed GPT-5-mini, GPT-OSS-120B, and Claude Sonnet 4.5.

Even more interesting: it delivers higher quality than our previous largest model, Qwen3-235B - at a fraction of the cost.
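To make the cost difference concrete, here is a quick sketch of estimating a batch job's spend from the listed rates. It assumes the two numbers are input/output rates quoted per million tokens (a common convention, but an assumption here; confirm against the pricing page), and the job sizes are hypothetical:

```python
def job_cost_usd(input_tokens: int, output_tokens: int,
                 in_rate: float, out_rate: float) -> float:
    """Estimate cost in USD; rates are per 1M tokens (assumed convention)."""
    return input_tokens / 1e6 * in_rate + output_tokens / 1e6 * out_rate

# Hypothetical async batch: 50M input tokens, 10M output tokens
standard = job_cost_usd(50_000_000, 10_000_000, 0.05, 0.20)  # Standard Priority
high = job_cost_usd(50_000_000, 10_000_000, 0.07, 0.30)      # High Priority
print(f"Standard: ${standard:.2f}  High: ${high:.2f}")  # Standard: $4.50  High: $6.50
```

For throughput-heavy async pipelines, even small per-token differences compound quickly at this scale, which is why the Standard Priority tier is worth considering for non-urgent jobs.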

In practical terms, this means:

  • Stronger reasoning
  • More robust outputs
  • Better multimodal performance
  • High efficiency for async and batch jobs

Qwen3.5 represents a meaningful architectural step forward - combining improved reasoning, multimodal capability, and efficiency in a way that’s particularly well suited to large-scale background agents and evaluation workflows.

🥳 I’m also very proud to say Doubleword is offering the best pricing for these models on the market.

Should you migrate?

If you’re currently using Qwen3-30B or Qwen3-235B, we’d strongly recommend testing:

  • Qwen3.5-35B-A3B
  • Qwen3.5-397B-A17B

Both offer higher intelligence and improved reasoning quality for the price - particularly for:

  • Multi-step agents
  • Extraction pipelines
  • Large-scale evals
  • Background async processing

Qwen3-30B and Qwen3-235B remain supported. But for most users, the new Qwen3.5 series should now be the better default starting point.

Try it out

As always, the best evaluation is against your own workload.

Benchmark Qwen3.5-35B-A3B against your current setup and see how it performs. If you notice meaningful differences - positive or negative - we’d love to hear what you see.
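One lightweight way to run that comparison is a small harness that times each model on the same prompts. The `ask` callable below is an injection point for whatever client you use; the commented OpenAI-compatible wiring, base URL, and model IDs are placeholders I'm assuming for illustration, not confirmed values, so check the docs for the real ones:

```python
import time
from typing import Callable, Dict, List

def benchmark(models: List[str], prompts: List[str],
              ask: Callable[[str, str], str]) -> Dict[str, dict]:
    """Time each model on each prompt; ask(model, prompt) returns the completion."""
    results = {}
    for model in models:
        timings, outputs = [], []
        for prompt in prompts:
            t0 = time.perf_counter()
            outputs.append(ask(model, prompt))
            timings.append(time.perf_counter() - t0)
        results[model] = {"avg_s": sum(timings) / len(timings),
                          "outputs": outputs}
    return results

# Wiring `ask` to an OpenAI-compatible endpoint (base URL and model IDs
# below are placeholders, not confirmed values):
#
# from openai import OpenAI
# client = OpenAI(base_url="https://api.doubleword.ai/v1", api_key="YOUR_KEY")
# def ask(model: str, prompt: str) -> str:
#     r = client.chat.completions.create(
#         model=model, messages=[{"role": "user", "content": prompt}])
#     return r.choices[0].message.content
#
# benchmark(["Qwen3-30B", "Qwen3.5-35B-A3B"], your_prompts, ask)
```

Comparing outputs side by side on your own prompts matters more than wall-clock timings for async jobs, where quality per dollar is usually the deciding factor.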

We’ll continue expanding model coverage across performance tiers while keeping pricing aggressively competitive for async and batch inference.

More to come next week.
