Token Limits when adding in info

I just came across an instance where I was trying to get a bunch of info back based on an input.

I got an error saying that the input was too large and exceeded the token limit.

Is there somewhere this can be adjusted up?

@ihmunro You can adjust the token input/output length in the Configure tab of the Pickaxe.

That said, be aware of the underlying model's context limit: GPT-4-turbo and GPT-4o both have a context length of 128k tokens, and your input plus output has to fit inside it.
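If it helps, here's a rough way to sanity-check an input before sending it. This is a minimal sketch using the commonly cited heuristic of roughly 4 characters per token for English text (not Pickaxe's actual tokenizer, and the exact count varies by content; a library like tiktoken gives precise counts):

```python
# Rough token-budget check for a 128k-context model (GPT-4-turbo / GPT-4o).
# Heuristic only: ~4 characters per English token. Real tokenizers vary.
CONTEXT_LIMIT = 128_000  # total context window, in tokens

def estimate_tokens(text: str) -> int:
    """Very rough token estimate based on character count."""
    return max(1, len(text) // 4)

def fits_in_context(prompt: str, max_output_tokens: int = 4_096) -> bool:
    """Check that the prompt plus reserved output space stays under the limit."""
    return estimate_tokens(prompt) + max_output_tokens <= CONTEXT_LIMIT

print(fits_in_context("a short prompt"))      # small input fits easily
print(fits_in_context("x" * 600_000))         # ~150k tokens: too large
```

Note that the output length you configure counts against the same window, which is why an oversized input can fail even when it's under 128k tokens on its own.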


Thanks AB.

Will take a look