chat-bot Ollama AI Chat

Ollama API Documentation

1. List Models

Endpoint: GET /api/tags

Description: Returns a list of available models

Response:
{
  "models": [
    {
      "name": "llama2",
      "modified_at": "2024-01-01T12:00:00Z",
      "size": 4563416
    }
  ]
}
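
A minimal TypeScript sketch of this call using the built-in fetch API (Node 18+ or a browser). The listModels helper and ModelEntry type are illustrative names, and the default base URL assumes a local Ollama server on its standard port 11434.

// List the models available on the local Ollama server.
interface ModelEntry {
  name: string;
  modified_at: string;
  size: number;
}

async function listModels(baseUrl = "http://localhost:11434"): Promise<ModelEntry[]> {
  const res = await fetch(`${baseUrl}/api/tags`);
  if (!res.ok) throw new Error(`tags request failed: ${res.status}`);
  const body = (await res.json()) as { models: ModelEntry[] };
  return body.models;
}

// Usage: listModels().then((models) => models.forEach((m) => console.log(m.name, m.size)));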

2. Generate Response

Endpoint: POST /api/generate

Description: Generate a response from the model

Request:
{
  "model": "llama2",
  "prompt": "What is artificial intelligence?",
  "stream": true,
  "images": ["base64_encoded_image"] // Optional
}
Response (stream of newline-delimited JSON objects; the final object has "done": true):
{
  "model": "llama2",
  "created_at": "2024-01-01T12:00:00Z",
  "response": "Artificial intelligence...",
  "done": false
}
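
The stream can be consumed from TypeScript by reading the response body and splitting it into newline-delimited JSON objects. A rough sketch, with generateStream and onToken as illustrative names and the default local server assumed at http://localhost:11434:

async function generateStream(
  prompt: string,
  onToken: (text: string) => void,
  model = "llama2",
  baseUrl = "http://localhost:11434",
): Promise<void> {
  const res = await fetch(`${baseUrl}/api/generate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt, stream: true }),
  });
  if (!res.ok || !res.body) throw new Error(`generate request failed: ${res.status}`);

  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let buffered = "";
  for (;;) {
    const { value, done } = await reader.read();
    if (done || !value) break;
    buffered += decoder.decode(value, { stream: true });
    const lines = buffered.split("\n");
    buffered = lines.pop() ?? ""; // keep any partial trailing line for the next read
    for (const line of lines) {
      if (!line.trim()) continue;
      const chunk = JSON.parse(line); // one JSON object per line
      if (chunk.response) onToken(chunk.response);
      if (chunk.done) return; // the final object carries done: true
    }
  }
}

// Example (Node): generateStream("What is artificial intelligence?", (t) => process.stdout.write(t));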

3. Model Information

Endpoint: POST /api/show

Description: Get details about a specific model

Request:
{
  "name": "llama2"
}
Response:
{
  "license": "MIT",
  "modelfile": "FROM llama2\nPARAMETER temp 0.7",
  "parameters": "temp 0.7",
  "template": "{{ .Prompt }}",
  "system": "You are a helpful AI assistant."
}
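
A sketch of the call in the same style; showModel is an illustrative helper name, and the request body follows the { "name": ... } form shown above.

async function showModel(name: string, baseUrl = "http://localhost:11434") {
  const res = await fetch(`${baseUrl}/api/show`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name }),
  });
  if (!res.ok) throw new Error(`show request failed: ${res.status}`);
  return res.json(); // { license, modelfile, parameters, template, system, ... }
}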

4. Create Model

Endpoint: POST /api/create

Description: Create a new model from a Modelfile

Request:
{
  "name": "custom_model",
  "modelfile": "FROM llama2\nSYSTEM You are a helpful assistant",
  "stream": true
}
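
A sketch of creating a model from code; createModel is an illustrative name, and stream is set to false here so the server answers with a single final status object rather than a stream of progress lines.

async function createModel(name: string, modelfile: string, baseUrl = "http://localhost:11434") {
  const res = await fetch(`${baseUrl}/api/create`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name, modelfile, stream: false }),
  });
  if (!res.ok) throw new Error(`create request failed: ${res.status}`);
  return res.json(); // e.g. { "status": "success" }
}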

5. Copy Model

Endpoint: POST /api/copy

Description: Create a copy of a model

Request:
{
  "source": "llama2",
  "destination": "llama2-copy"
}
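
For completeness, the copy call as a one-function sketch (copyModel is an illustrative name); success is indicated by the HTTP status.

async function copyModel(source: string, destination: string, baseUrl = "http://localhost:11434") {
  const res = await fetch(`${baseUrl}/api/copy`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ source, destination }),
  });
  if (!res.ok) throw new Error(`copy request failed: ${res.status}`);
}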

6. Delete Model

Endpoint: DELETE /api/delete

Description: Delete a model

Request:
{
  "name": "model_name"
}
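
Deletion follows the same pattern; note that the JSON body travels on a DELETE request, which fetch supports. deleteModel is an illustrative name.

async function deleteModel(name: string, baseUrl = "http://localhost:11434") {
  const res = await fetch(`${baseUrl}/api/delete`, {
    method: "DELETE",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name }),
  });
  if (!res.ok) throw new Error(`delete request failed: ${res.status}`);
}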

7. Pull Model

Endpoint: POST /api/pull

Description: Download a model from a registry

Request:
{
  "name": "llama2",
  "stream": true
}
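
A sketch of pulling a model from code; pullModel is an illustrative name. Setting stream to false waits for a single final status object, while the stream: true form shown above delivers progress updates line by line (they can be read just like the generate stream sketch in section 2).

async function pullModel(name: string, baseUrl = "http://localhost:11434") {
  const res = await fetch(`${baseUrl}/api/pull`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name, stream: false }),
  });
  if (!res.ok) throw new Error(`pull request failed: ${res.status}`);
  return res.json(); // e.g. { "status": "success" }
}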

8. Push Model

Endpoint: POST /api/push

Description: Upload a model to a registry

Request:
{
  "name": "username/model:latest",
  "stream": true
}
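
Pushing mirrors the pull sketch above, using a namespaced model name; it also assumes the local Ollama instance is authorized against the target registry. pushModel is an illustrative name.

async function pushModel(name: string, baseUrl = "http://localhost:11434") {
  const res = await fetch(`${baseUrl}/api/push`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ name, stream: false }),
  });
  if (!res.ok) throw new Error(`push request failed: ${res.status}`);
  return res.json();
}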

9. Embeddings

Endpoint: POST /api/embeddings

Description: Generate embeddings from a model

Request:
{
  "model": "llama2",
  "prompt": "Here is some text to generate embeddings for"
}
Response:
{
  "embedding": [0.1, 0.2, 0.3, ...]
}
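
A sketch of requesting an embedding vector; embed is an illustrative name, and the response is assumed to carry a single numeric vector as documented above.

async function embed(prompt: string, model = "llama2", baseUrl = "http://localhost:11434") {
  const res = await fetch(`${baseUrl}/api/embeddings`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ model, prompt }),
  });
  if (!res.ok) throw new Error(`embeddings request failed: ${res.status}`);
  const body = (await res.json()) as { embedding: number[] };
  return body.embedding;
}

// Usage: embed("Here is some text to generate embeddings for").then((v) => console.log(v.length));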