The Cheapest Embedding Model in 2026 Still Comes From OpenAI on List Price
Based on the public pricing sheets checked on March 15, 2026 for our broader AI token pricing comparison, the short answer is straightforward: OpenAI's text-embedding-3-small still has the lowest clearly exposed list price in the lineup.
That does not make it the universal best buy. It makes it the cleanest answer to one narrow question: which provider currently owns the raw text-embedding list-price floor. The distinction matters because many teams still confuse the cheapest model row with the cheapest production stack.
The short answer
If you look only at public list price, OpenAI text-embedding-3-small at $0.02 per 1M tokens is cheaper than Mistral Embed at $0.10, Codestral Embed at $0.15, Gemini Embedding 001 at $0.15, and Gemini Embedding 2 Preview at $0.20 for text.
That is the clean price answer. It is not the whole retrieval answer, because embedding economics also depend on your vector store, re-embedding frequency, query latency, and whether you want the easiest future model migration.
The pricing rows that matter
| Embedding model | Price (per 1M tokens) | Notes |
| --- | --- | --- |
| OpenAI text-embedding-3-small | $0.02 | Lowest clear text list price in the comparison. |
| OpenAI text-embedding-3-large | $0.13 | Higher-quality OpenAI tier. |
| Mistral Embed | $0.10 | Exportable and portable. |
| Gemini Embedding 001 | $0.15 | Batch halves the price. |
OpenAI wins the headline list-price comparison because its low-end embedding row is materially below the nearest clearly exposed rivals. For teams doing huge document ingestion, that price floor is not trivial.
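To make the price floor concrete, here is a minimal sketch that turns the per-1M-token list prices above into ingestion costs for a bulk corpus. The 5-billion-token volume is an illustrative assumption, not a benchmark; the model names and prices come from the table above.

```python
# Sketch: compare ingestion cost at the list prices quoted above.
# Prices are USD per 1M tokens; the corpus size is an assumed example.

PRICE_PER_1M = {
    "text-embedding-3-small": 0.02,
    "text-embedding-3-large": 0.13,
    "mistral-embed": 0.10,
    "gemini-embedding-001": 0.15,
}

def ingestion_cost(tokens: int, price_per_1m: float) -> float:
    """Cost in USD to embed `tokens` tokens at a given per-1M-token price."""
    return tokens / 1_000_000 * price_per_1m

# Embedding an assumed 5-billion-token corpus once:
corpus_tokens = 5_000_000_000
for model, price in PRICE_PER_1M.items():
    print(f"{model}: ${ingestion_cost(corpus_tokens, price):,.2f}")
```

At that volume the gap is roughly $100 for text-embedding-3-small versus $500 to $750 for the nearest rivals, which is why the floor matters for heavy ingestion.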
Why the headline can mislead
Embeddings are only partly portable in practice. You can export vectors, but many teams re-embed when they change retrieval models so query and document embeddings stay aligned.
So the buying choice is not only “who is cheapest?” It is also “who gives me the retrieval quality, export posture, and operational simplicity I actually want?” Cheap ingestion is real. So is migration cost.
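One way to see the migration-cost point is a naive total-cost model that counts full re-embeds alongside the initial ingestion pass. Everything here is an illustrative assumption (corpus size, re-embed frequency, time horizon), not a vendor figure:

```python
# Sketch: total embedding spend including periodic full re-embeds.
# All inputs are assumptions for illustration, not measured data.

def total_embedding_cost(
    corpus_tokens: int,
    price_per_1m: float,
    reembeds_per_year: int,
    years: int,
) -> float:
    """One initial ingestion pass plus a full re-embed each time
    the retrieval model changes."""
    one_pass = corpus_tokens / 1_000_000 * price_per_1m
    return one_pass * (1 + reembeds_per_year * years)

# A cheap model you re-embed twice a year can cost more over 3 years
# than a pricier model you never migrate off:
cheap = total_embedding_cost(5_000_000_000, 0.02, reembeds_per_year=2, years=3)
stable = total_embedding_cost(5_000_000_000, 0.10, reembeds_per_year=0, years=3)
print(cheap, stable)  # 700.0 vs 500.0 under these assumed inputs
```

The crossover point depends entirely on how often your retrieval stack forces a re-embed, which is why the headline row alone cannot settle the decision.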
When this is the right pick
- you want the lowest text-embedding list price you can verify publicly
- your main cost driver is bulk ingestion volume
- you are comfortable keeping other retrieval infrastructure outside the model vendor
When to ignore the headline
- you are treating embedding price as a proxy for retrieval quality
- you want the easiest long-term portability story
- your real spend sits in hosted search, not embedding creation
Bottom line
If the question is raw list price, OpenAI still leads. If the question is the best retrieval architecture to live with for years, the answer gets more interesting very quickly.
If you want the wider market context, start with the full provider-by-provider pricing breakdown and, for media-specific workloads, the separate image and video generation API comparison.
