How do I make sure that my model uses gpt-4.1 (long context) instead of the standard gpt-4.1 that has lower TPMs?
There’s no option in the individual PickAxe tools/settings to differentiate the two.
Hey @WorldMasterClass, you get a little over a 1-million-token context window with the current version of 4.1.
Thanks for the reply, Ned. We are supposed to have 200,000 TPM for long context on our tier, but it seems to be hitting the lower 30,000 TPM limit.
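One way to see which TPM bucket a request is actually hitting is to read OpenAI's rate-limit response headers. A minimal sketch, assuming you have your own OpenAI API key and the official `openai` Python package installed; note this only reports the limits attached to your own key, not whatever key Pickaxe uses behind the scenes:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# with_raw_response exposes the HTTP response so the headers are accessible
raw = client.chat.completions.with_raw_response.create(
    model="gpt-4.1",
    messages=[{"role": "user", "content": "ping"}],
    max_tokens=1,
)

# OpenAI attaches rate-limit headers to every response
print("model used:", raw.parse().model)
print("TPM limit: ", raw.headers.get("x-ratelimit-limit-tokens"))
print("TPM left:  ", raw.headers.get("x-ratelimit-remaining-tokens"))
```

If `x-ratelimit-limit-tokens` comes back as 30,000, the request is being metered against the standard bucket rather than the long-context one.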
I have a similar question. The new models offer huge context windows, but when setting the input, output, and context memory limits, Pickaxe seems to cap our total token capacity well below what the current models support. How can I give my users good response length, flexibility on inputs, and a generous context memory window without exceeding what appears to be a Pickaxe cap set below what GPT-5 and Claude Opus can handle?
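For intuition, the constraint behind those three settings is just that input, conversation memory, and output together have to fit inside the model's total context window. A rough illustration in Python; the numbers are hypothetical examples, not actual Pickaxe defaults or recommendations:

```python
# Illustrative budget math only. The single hard constraint is:
#   input + context memory + output <= model context window
MODEL_CONTEXT_WINDOW = 400_000   # e.g. GPT-5's documented total window (per OpenAI's model docs)
MAX_OUTPUT_TOKENS    = 16_000    # how long you want responses to be able to run
INPUT_ALLOWANCE      = 32_000    # room for the user's prompt and any attached material

context_memory_budget = MODEL_CONTEXT_WINDOW - MAX_OUTPUT_TOKENS - INPUT_ALLOWANCE
print(f"Tokens left for conversation memory: {context_memory_budget:,}")
```

Raising any one of the three settings necessarily shrinks the room available for the other two within the same window.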
This one is for the support team. Hey @abhi-support @lindsay_support, check this out.
@abhi-support @lindsay_support my users are running out of memory in chats. Will there be token features tailored for new models’ unique capabilities?
@lbdesign thanks for the question! To clarify, we do not artificially limit token capacity. The configuration limits vary based on the model selected and reflect the upper limits each provider publishes (for GPT-5, for example, see https://platform.openai.com/docs/models/gpt-5). If you can point us to provider documentation showing higher token limits than what we offer, please follow up either here or in the support inbox (info@pickaxeproject.com) and we will look into it.
As to the initial question from @WorldMasterClass, we don't currently offer 4.1 long context, just the standard version, but you can always submit it as a feature request!