Module ollama


Ollama (local LLM) AI provider adapter

Implements the AIProvider port using Ollama’s HTTP API for local inference. Works with any model installed in the local Ollama instance; JSON output is enforced via the format: "json" request parameter.

Example

use stygian_graph::adapters::ai::ollama::{OllamaProvider, OllamaConfig};
use stygian_graph::ports::AIProvider;
use serde_json::json;

let provider = OllamaProvider::new();
let schema = json!({"type": "object", "properties": {"title": {"type": "string"}}});
// `extract` is async, so it must be awaited inside an async context:
// let result = provider.extract("<html>Hello</html>".to_string(), schema).await.unwrap();

Structs§

OllamaConfig
Configuration for the Ollama provider
OllamaProvider
Ollama local LLM provider adapter