Chat

Headers

X-Api-Key (string, Required)

Request

This endpoint expects an object.
messages (list of objects, Required)
List of messages specifying the conversation so far.
model (string, Required)
See Available Models for possible values.

frequency_penalty (double or null, Optional)
Penalises tokens based on their frequency in the model's output so far; larger values penalise more strongly. 0.0 means no frequency penalty. Defaults to 0.0.
max_tokens (integer or null, Optional)
The maximum number of new tokens to be generated by the model. Note that this is limited by the model's context length. Defaults to 1024.
presence_penalty (double or null, Optional)
Penalises tokens that have already appeared in the model's output so far; larger values penalise more strongly. 0.0 means no presence penalty. Defaults to 0.0.
seed (integer or null, Optional)
Random seed used for generation. Reusing the same value forces the model to sample the same output.
stop (list of strings or null, Optional)
A list of stop strings used to control generation. If the model generates one of these, it will stop.
stream (boolean, Optional, defaults to false)
Set to true to enable streaming; see Chat Streaming. A minimal streaming sketch appears at the end of the Response section below.

temperature (double or null, Optional)
Positive number representing the temperature to use for generation. Higher values make the output more uniformly random, i.e. more creative; 0.0 means greedy decoding. Defaults to 0.4.

tool_choice (enum or null, Optional)
Controls how the model may use the provided tools. Set to 'auto' to let the model decide whether to invoke a tool, 'none' to disable tool use, or 'tool' to force the model to invoke a tool.
Allowed values: auto, none, tool
tools (list of objects or null, Optional)
List of tools the model has access to.
top_k (integer or null, Optional)
Forces the model to consider only the tokens with the top_k highest probabilities at each step. Defaults to 1024.

top_p (double or null, Optional)
Parameter used for nucleus sampling, i.e. only the tokens comprising the top_p probability mass of the next-token distribution are considered. Defaults to 0.95.

use_search_engine (boolean, Optional, defaults to false)
Whether to consider using a search engine to complete the request. Note that even if this is set to true, the model may decide not to use search.
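
For illustration, a minimal non-streaming request is sketched below. The base URL and model name are placeholders (substitute your actual endpoint and a value from Available Models), and the message shape (role/content fields) is an assumption rather than a schema taken from this page.

```python
import os

import requests  # third-party HTTP client

# Placeholder endpoint; substitute the real Chat endpoint URL.
API_URL = "https://api.example.com/chat"

payload = {
    "model": "my-model",  # placeholder; see Available Models
    "messages": [
        # Assumed message shape; the exact schema is defined elsewhere.
        {"role": "user", "content": "Summarise nucleus sampling in one line."},
    ],
    "temperature": 0.4,   # the default, shown explicitly
    "max_tokens": 256,
    "seed": 42,           # same seed => same sampled output
    "stop": ["\n\n"],     # stop generation at a blank line
}

response = requests.post(
    API_URL,
    headers={"X-Api-Key": os.environ["MY_API_KEY"]},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json())  # a ChatResponse object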

Response

Newest response from the model.
ChatResponse (object)
OR
ChunkChatResponse (object)
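
When stream is true, the endpoint returns incremental ChunkChatResponse objects instead of a single ChatResponse. The exact wire format is covered on the Chat Streaming page; the sketch below assumes one JSON object per line, which is a common convention and not a confirmed detail of this API. The endpoint URL and model name are again placeholders.

```python
import json
import os

import requests

API_URL = "https://api.example.com/chat"  # placeholder endpoint

with requests.post(
    API_URL,
    headers={"X-Api-Key": os.environ["MY_API_KEY"]},
    json={
        "model": "my-model",
        "messages": [{"role": "user", "content": "Hello!"}],
        "stream": True,  # ask for incremental ChunkChatResponse objects
    },
    stream=True,  # tell requests not to buffer the whole body
    timeout=60,
) as response:
    response.raise_for_status()
    for line in response.iter_lines():
        if line:  # skip keep-alive blank lines
            chunk = json.loads(line)  # assumed: one JSON chunk per line
            print(chunk)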

Errors