Controls randomness in responses. Lower values make output more deterministic; higher values produce more creative but potentially less accurate output.
Maximum number of tokens the model will generate in a response.
Maximum number of conversation-history tokens to include as context.
Add and manage your own Large Language Model providers. For best results, ensure the provider's API is compatible with OpenAI's chat completions format.
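For illustration, an OpenAI-compatible provider accepts a POST to a `/v1/chat/completions` path with a JSON body like the sketch below. The base URL and model name are placeholders, not values from this app:

```python
import json

# Hypothetical custom provider endpoint (placeholder, not a real URL).
BASE_URL = "https://example.com/v1"

# Minimal OpenAI-style chat completions request body.
payload = {
    "model": "my-custom-model",  # provider-specific model identifier
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "temperature": 0.7,  # randomness control (see setting above)
    "max_tokens": 1024,  # cap on generated tokens (see setting above)
}

# Serialize for the request; a client would POST this to
# f"{BASE_URL}/chat/completions" with an Authorization header.
body = json.dumps(payload)
```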
No custom LLMs added yet.