Check Price and Token Limits in ZapGPT with OpenAI Provider

ZapGPT is a CLI and API tool for interacting with LLMs from providers such as OpenAI, Groq, Claude, Perplexity, and more. If you're using the OpenAI provider, you can easily check the price per token for each supported model, which is critical information if you're optimizing for cost or want to switch models dynamically based on available context or budget.

You can also use the OpenRouter provider to check each model's context length, which is handy when your input is large, such as a big file.

📊 How to Check Prices and Token Limits

ZapGPT maintains a built-in price reference for most LLM providers, including OpenAI (this information is static and may become outdated). For OpenRouter, the API is queried for this information, so it is more up to date and accurate.

Step 1: List All Models

Run:

zapgpt -lm -p openai
zapgpt -lm -p openrouter

to list the models for the openai and openrouter providers, respectively.

This will show all models available via the active provider (the default is github) along with their:

  • ID
  • Created date
  • Ctx Len (context limit, OpenRouter only)
  • Modality

Example output:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃ ID                                                   ┃ Created             ┃ Ctx Len ┃ Modality         ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ mistralai/mistral-small-3.2-24b-instruct:free        │ 2025-06-20 23:40:16 │ 96 K    │ text+image->text │
│ minimax/minimax-m1                                   │ 2025-06-18 04:16:54 │ 1000 K  │ text->text       │
│ minimax/minimax-m1:extended                          │ 2025-06-18 04:16:54 │ 128 K   │ text->text       │
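If you are feeding a big file as input, the context length tells you whether it will fit. A common rule of thumb is roughly 4 characters per token for English text; the sketch below uses that heuristic (an assumption on my part, not something ZapGPT computes for you), so treat the result as a rough estimate and use a real tokenizer for exact counts.

```python
def fits_context(text: str, ctx_limit_tokens: int, chars_per_token: float = 4.0) -> bool:
    """Rough check: does this text fit within a model's context window?

    Uses the ~4 characters/token heuristic for English text. Real
    tokenizers (e.g. tiktoken) give exact counts; this only estimates.
    """
    estimated_tokens = len(text) / chars_per_token
    return estimated_tokens <= ctx_limit_tokens

big_file = "word " * 100_000  # ~500,000 characters, ~125K estimated tokens
print(fits_context(big_file, 96_000))     # too big for a 96K-context model
print(fits_context(big_file, 1_000_000))  # fits a 1000K-context model
```

With the example output above, this file would not fit mistral-small's 96 K window but would fit minimax-m1's 1000 K window.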

To check the prices for the models, run:

zapgpt -lp -p openai
zapgpt -lp -p openrouter

Sample output:

┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━┓
┃ Model                                                ┃ Prompt Cost (1K) ┃ Output Cost (1K) ┃ Total (1K)       ┃
┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━┩
│ openrouter/auto                                      │ -1000.0000000000 │ -1000.0000000000 │ -1000.0000000000 │
│ mistralai/mistral-small-3.2-24b-instruct:free        │ 0.0000000000     │ 0.0000000000     │ 0.0000000000     │
│ moonshotai/kimi-dev-72b:free                         │ 0.0000000000     │ 0.0000000000     │ 0.0000000000     │
│ deepseek/deepseek-r1-0528-qwen3-8b:free              │ 0.0000000000     │ 0.0000000000     │ 0.0000000000     │
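Once you have the per-1K-token prices, estimating the cost of a single request is simple arithmetic. A minimal sketch, assuming the price columns are USD per 1,000 tokens; the prices and token counts below are made-up illustrations, not real quotes:

```python
def estimate_cost(prompt_tokens: int, output_tokens: int,
                  prompt_cost_per_1k: float, output_cost_per_1k: float) -> float:
    """Estimate the dollar cost of one request from per-1K-token prices."""
    return (prompt_tokens / 1000) * prompt_cost_per_1k \
         + (output_tokens / 1000) * output_cost_per_1k

# Hypothetical prices for illustration: $0.0005/1K prompt, $0.0015/1K output
cost = estimate_cost(1200, 400, 0.0005, 0.0015)
print(f"${cost:.6f}")  # 1.2 * 0.0005 + 0.4 * 0.0015 = $0.0012
```

Multiplying this out per request makes it easy to compare a :free model (zero cost) against a paid one before committing to a large batch job.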

Step 2: Choose a Model Based on Budget/Context

Based on your budget or the size of documents you’re working with:

  • Use gpt-4o for a good balance of cost and performance
  • Use gpt-3.5-turbo for high-volume, low-cost tasks
  • Use gpt-4-turbo only if you need advanced reasoning and don’t mind higher cost

You can switch models using:

zapgpt --model gpt-3.5-turbo
zapgpt -m mistralai/mistral-small-3.2-24b-instruct:free "Your query here"

Authored By Amit Agarwal

Amit Agarwal. Linux and photography are my hobbies. Licensed under the Creative Commons Attribution 4.0 International License.
