This documentation outlines the overall behavior of the endpoint.
/api/endpoint
This endpoint processes a chat/completion request using a specified AI model. It handles authentication, checks for sufficient user credit, logs the request, executes the call, and returns the generated response along with token usage and pricing details.
Once you have obtained an API key, you can access the API as shown in the following example. See the parameter descriptions below for details.
```shell
curl https://www.rebootml.com/api/endpoint \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer <YOUR_REBOOTML_API_KEY>" \
  -d '{
    "model": "<YOUR_CHOSEN_MODEL>",
    "messages": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Hello!"}
    ],
    "options": {"max_tokens": 250, "stream": false}
  }'
```
Method: Bearer Token
Header: Authorization: Bearer YOUR_REBOOTML_API_KEY
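The same authenticated call can be made from code. A minimal Python sketch using only the standard library; the `build_request` helper is illustrative, not part of the API:

```python
import json
import urllib.request

API_URL = "https://www.rebootml.com/api/endpoint"

def build_request(api_key, model, messages, options=None):
    """Build an authenticated POST request for the chat endpoint."""
    payload = {"model": model, "messages": messages}
    if options:
        payload["options"] = options
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_request(
    "YOUR_REBOOTML_API_KEY",
    "openai-gpt-4o-mini",
    [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    options={"max_tokens": 250, "stream": False},
)
# To actually send the request:
# with urllib.request.urlopen(req) as resp:
#     result = json.loads(resp.read())
```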
Send the request payload as JSON. The following fields are supported.
Field | Type | Required | Description
---|---|---|---
`model` | String | Yes | The name of the AI model to use (e.g., `"openai-gpt-4o"`). Must be a valid model available in the system (see the list of models below).
`messages` | Array of objects | Yes | An array of message objects that make up the conversation. Each message includes a `role` (e.g., `"system"`, `"user"`, `"assistant"`) and a `content` string.
`options` | Object | No | Optional parameters, described below.
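A quick client-side check of the required fields can catch malformed payloads before sending. This is a sketch only; the server performs its own validation:

```python
def check_payload(payload):
    """Return a list of problems with a request payload (client-side sketch)."""
    problems = []
    # model: required, must be a string naming an available model
    if not isinstance(payload.get("model"), str):
        problems.append("model must be a string")
    # messages: required, non-empty array of {role, content} objects
    msgs = payload.get("messages")
    if not isinstance(msgs, list) or not msgs:
        problems.append("messages must be a non-empty array")
    else:
        for i, m in enumerate(msgs):
            if not isinstance(m, dict) or "role" not in m or "content" not in m:
                problems.append(f"messages[{i}] needs role and content")
    return problems

ok = {"model": "openai-gpt-4o", "messages": [{"role": "user", "content": "Hi"}]}
bad = {"messages": []}
```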
The following models are available. In your request, use the model name as shown in the first column below.
Name | Provider | Actual model
---|---|---
openai-gpt-4o | OpenAI | gpt-4o-2024-08-06
openai-gpt-4o-rt-preview | OpenAI | gpt-4o-realtime-preview-2024-12-17
openai-gpt-4o-mini | OpenAI | gpt-4o-mini-2024-07-18
openai-gpt-4o-mini-rt-preview | OpenAI | gpt-4o-mini-realtime-preview-2024-12-17
openai-o1 | OpenAI | o1-2024-12-17
openai-o3-mini | OpenAI | o3-mini-2025-01-31
openai-o1-mini | OpenAI | o1-mini-2024-09-12
a21-jamba-1.5-large | AI21 | ai21.jamba-1-5-large-v1:0
a21-jamba-1.5-mini | AI21 | ai21.jamba-1-5-mini-v1:0
a21-jurassic-2-mid | AI21 | ai21.j2-mid-v1
a21-jurassic-2-ultra | AI21 | ai21.j2-ultra-v1
a21-jamba-instruct | AI21 | ai21.jamba-instruct-v1:0
amazon-nova-micro | Amazon | amazon.nova-micro-v1:0
amazon-nova-lite | Amazon | amazon.nova-lite-v1:0
amazon-nova-pro | Amazon | amazon.nova-pro-v1:0
anthropic-claude-3-5-haiku | Anthropic | anthropic.claude-3-5-haiku-20241022-v1:0
anthropic-claude-3-5-sonnet-v2 | Anthropic | anthropic.claude-3-5-sonnet-20241022-v2:0
anthropic-claude-3-5-sonnet | Anthropic | anthropic.claude-3-5-sonnet-20240620-v1:0
anthropic-claude-3-sonnet | Anthropic | anthropic.claude-3-sonnet-20240229-v1:0
anthropic-claude-3-opus | Anthropic | anthropic.claude-3-opus-20240229-v1:0
anthropic-claude-3-haiku | Anthropic | anthropic.claude-3-haiku-20240307-v1:0
anthropic-claude-2-1 | Anthropic | anthropic.claude-v2:1
anthropic-claude | Anthropic | anthropic.claude-v2
anthropic-claude-instant | Anthropic | anthropic.claude-instant-v1
amazon-titan-text-v1 | Amazon | amazon.titan-text-premier-v1:0
amazon-titan-text-express-v1 | Amazon | amazon.titan-text-express-v1
amazon-titan-text-lite-v1 | Amazon | amazon.titan-text-lite-v1
cohere-command-text-v1 | Cohere | cohere.command-text-v14
cohere-command-text-light-v1 | Cohere | cohere.command-light-text-v14
cohere-command-rplus-v1 | Cohere | cohere.command-r-plus-v1:0
cohere-command-r | Cohere | cohere.command-r-v1:0
meta-llama3-3-70b | Meta | meta.llama3-3-70b-instruct-v1:0
meta-llama3-2-1b | Meta | us.meta.llama3-2-1b-instruct-v1:0
meta-llama3-2-3b | Meta | us.meta.llama3-2-3b-instruct-v1:0
meta-llama3-2-11b | Meta | us.meta.llama3-2-11b-instruct-v1:0
meta-llama3-2-90b | Meta | us.meta.llama3-2-90b-instruct-v1:0
meta-llama3-1-8b | Meta | us.meta.llama3-1-8b-instruct-v1:0
meta-llama3-1-70b | Meta | us.meta.llama3-1-70b-instruct-v1:0
meta-llama3-8b | Meta | meta.llama3-8b-instruct-v1:0
meta-llama3-70b | Meta | meta.llama3-70b-instruct-v1:0
mistral-mistral-7b | Mistral | mistral.mistral-7b-instruct-v0:2
mistral-mistral-8x7b | Mistral | mistral.mixtral-8x7b-instruct-v0:1
mistral-mistral-small | Mistral | mistral.mistral-small-2402-v1:0
mistral-mistral-large | Mistral | mistral.mistral-large-2402-v1:0
deepseek-chat | Deepseek | deepseek-chat
deepseek-reasoner | Deepseek | deepseek-reasoner
google-gemini-2.0-flash | Google | gemini-2.0-flash
google-gemini-2.0-flash-lite | Google | gemini-2.0-flash-lite
google-gemini-1.5-flash | Google | gemini-1.5-flash
google-gemini-1.5-flash-8b | Google | gemini-1.5-flash-8b
google-gemini-1.5-pro | Google | gemini-1.5-pro
The options object supports the following fields. All fields are optional.

Field | Type | Description
---|---|---
`stream` | Boolean | Whether the output should be streamed.
`stats` | Boolean | Whether the output should include stats (i.e., token count and price).
On success, the API returns a JSON object with status `1` and a `result` object containing details of the processed request.
Field | Type | Description
---|---|---
`response` | String | The generated response text from the model.
`tokens_input` | Number | The number of input tokens (estimated or as provided by the model).
`tokens_output` | Number | The number of tokens used in the generated output.
`price` | Number | The cost calculated for processing the request.
`model` | String | The name of the model that processed the request.
`request` | Array | The original messages sent in the request.
```json
{
  "status": 1,
  "result": {
    "response": "Here's a joke: Why did the chicken cross the road? To get to the other side!",
    "tokens_input": 20,
    "tokens_output": 15,
    "price": 0.05,
    "model": "gpt-3.5-turbo",
    "request": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Tell me a joke."}
    ]
  }
}
```
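The success payload can be consumed as ordinary JSON. A short sketch using the example response above (Python, standard library only):

```python
import json

# Sample success response (copied from the example above)
raw = """
{
  "status": 1,
  "result": {
    "response": "Here's a joke: Why did the chicken cross the road? To get to the other side!",
    "tokens_input": 20,
    "tokens_output": 15,
    "price": 0.05,
    "model": "gpt-3.5-turbo",
    "request": [
      {"role": "system", "content": "You are a helpful assistant."},
      {"role": "user", "content": "Tell me a joke."}
    ]
  }
}
"""

data = json.loads(raw)
if data["status"] == 1:
    result = data["result"]
    text = result["response"]
    # Token counts are reported separately for input and output
    total_tokens = result["tokens_input"] + result["tokens_output"]
```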
If the options object includes `"stream": true`, the response is streamed incrementally rather than returned as a single JSON object.