I'm trying to use a personal Pickaxe with only two knowledge base documents.
My question, which was relatively basic, was the following:
Here is a job offer:
“160 words”
Please help draft a short email of intention for the proposal.
The model then returned:
“This model’s maximum context length is 8192 tokens. However, your messages resulted in 11540 tokens (11021 in the messages, 519 in the functions). Please reduce the length of the messages or functions.”
I switched the model from GPT-4 to GPT-4o and it worked, but I would like to understand why it wasn't working with GPT-4, which I thought was superior.
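For what it's worth, here is a minimal sketch of how I'd estimate the token count that triggers the error, assuming the count roughly matches what the tiktoken library produces with GPT-4's encoding. The file names are hypothetical stand-ins for my two knowledge base documents, and the result is only an estimate since the server-side count also includes message and function-definition overhead.

```python
import tiktoken

# Rough token count for the prompt plus knowledge base documents.
# Assumes tiktoken's GPT-4 encoding; treat the result as an estimate,
# not the exact number reported in the error message.
enc = tiktoken.encoding_for_model("gpt-4")

prompt = (
    "Here is a job offer: ... "
    "Please help draft a short email of intention for the proposal."
)

# Hypothetical file names standing in for the two knowledge base documents.
docs = [open(p, encoding="utf-8").read() for p in ("kb_doc_1.txt", "kb_doc_2.txt")]

total = len(enc.encode(prompt)) + sum(len(enc.encode(d)) for d in docs)
print(f"Estimated input tokens: {total}")

# GPT-4's 8k variant rejects the request once this total (plus function
# definitions) exceeds 8192 tokens, whereas GPT-4o's much larger context
# window accepts the same payload.
```

If my understanding is right, the issue isn't that GPT-4o is "smarter" here, just that its context window is large enough to hold the prompt plus the injected knowledge base documents, while the GPT-4 variant being used caps out at 8,192 tokens.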