oasis

package module
v0.9.0 Latest
Published: Feb 25, 2026 License: AGPL-3.0 Imports: 22 Imported by: 0

README

Oasis

Build AI agents in Go that actually compose. Single agents, multi-agent networks, DAG workflows, graph-powered RAG, code execution — all as recursive primitives. No LLM SDKs. No vendor lock-in. Just interfaces.

import oasis "github.com/nevindra/oasis"

Why Oasis?

Most agent frameworks are wrappers around LLM SDKs with hardcoded abstractions. Oasis is different:

  • Everything is an interface. LLM providers, storage, tools, memory — swap any component without touching the rest. Write your own in 20 lines.
  • Agents compose recursively. An LLMAgent is an Agent. A Network of agents is an Agent. A Workflow containing both is an Agent. Nest them arbitrarily.
  • No LLM SDKs. Every provider uses raw net/http. You control the bytes. Zero vendor lock-in, minimal dependencies.
  • Go-native concurrency. Parallel tool dispatch, background agents via Spawn(), DAG workflows with automatic wave execution — all using goroutines and channels.
  • Production primitives, not demos. Rate limiting, retry with backoff, batch processing, persistent Graph RAG, semantic memory with decay, suspend/resume, code execution with tool bridge.

Quick Start

package main

import (
    "context"
    "fmt"

    oasis "github.com/nevindra/oasis"
    "github.com/nevindra/oasis/provider/gemini"
    "github.com/nevindra/oasis/tools/knowledge"
    "github.com/nevindra/oasis/tools/search"
)

func main() {
    ctx := context.Background()

    // Use any provider — Gemini, OpenAI, Groq, Ollama, DeepSeek, Mistral, vLLM, etc.
    // apiKey, braveKey, store, and memoryStore are initialized elsewhere.
    llm := gemini.New(apiKey, "gemini-2.5-flash")
    // Or: llm := openaicompat.NewProvider("sk-xxx", "gpt-4o", "https://api.openai.com/v1")
    // Or: llm := openaicompat.NewProvider("", "llama3", "http://localhost:11434/v1")
    embedding := gemini.NewEmbedding(apiKey, "text-embedding-004", 768)

    agent := oasis.NewLLMAgent("assistant", "Helpful research assistant", llm,
        oasis.WithTools(
            knowledge.New(store, embedding),
            search.New(embedding, braveKey),
        ),
        oasis.WithPrompt("You are a helpful research assistant."),
        oasis.WithConversationMemory(store, oasis.CrossThreadSearch(embedding)),
        oasis.WithUserMemory(memoryStore, embedding),
    )

    result, err := agent.Execute(ctx, oasis.AgentTask{Input: "What is quantum computing?"})
    if err != nil {
        panic(err)
    }
    fmt.Println(result.Output)
}

Features

Agent Primitives
  • LLMAgent — single LLM with tools. Runs a tool-calling loop until the model produces a final response. Multiple tool calls execute in parallel automatically.
  • Network — coordinates multiple agents via an LLM router. Subagents appear as callable tools (agent_<name>). Networks nest recursively.
  • Workflow — deterministic DAG-based orchestration with Step, AgentStep, ToolStep, ForEach, DoUntil/DoWhile. Steps without dependencies run concurrently. Compile-time validation (cycles, missing deps, duplicates).
  • Background agents — Spawn() launches agents in goroutines with AgentHandle for lifecycle tracking, cancellation, and select-based multiplexing.
Intelligence
  • Code execution — LLM writes and runs Python code in a sandboxed subprocess with full tool bridge access (call_tool, call_tools_parallel, set_result). Complex logic, loops, conditionals, error handling via try/except.
  • Plan execution — LLM batches multiple tool calls in a single turn via execute_plan. All steps run in parallel without re-sampling. Reduces latency and tokens for fan-out patterns.
  • Dynamic configuration — per-request resolution of prompt, model, and tool set via WithDynamicPrompt, WithDynamicModel, WithDynamicTools. Multi-tenant personalization, tier-based model selection, role-based tool gating.
  • Structured output — WithResponseSchema enforces JSON output at the agent level. SchemaObject typed builder for compile-time safety.
  • Suspend/Resume — pause agent or workflow execution to await external input, then continue from where it left off.
Memory & RAG
  • Conversation memory — load/persist history per thread with MaxHistory and MaxTokens trimming.
  • Cross-thread recall — semantic search across all threads with cosine similarity filtering.
  • User memory — LLM-extracted facts with semantic deduplication, confidence decay, and contradiction supersession. Runs automatically after each turn.
  • Graph RAG — LLM-based graph extraction during ingestion discovers 8 relationship types between chunks. GraphRetriever combines vector search with multi-hop BFS traversal. Persistent GraphStore in all three backends.
  • Hybrid retrieval — HybridRetriever fuses vector search + FTS keyword search with Reciprocal Rank Fusion, parent-child chunk resolution, and optional LLM re-ranking.
  • Semantic chunking — embedding-based topic boundary detection alongside recursive and markdown-aware chunkers.
  • Skills — database-persisted instruction packages with semantic search. Agents discover and create skills for each other.
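The Reciprocal Rank Fusion step used by hybrid retrieval is easy to see in isolation. The sketch below is an illustrative, dependency-free implementation of the standard RRF formula, not the package's actual code; the constant k = 60 is the conventional default and an assumption here:

```go
package main

import "sort"

// rrfFuse merges ranked result-ID lists using Reciprocal Rank Fusion:
// score(d) = sum over lists of 1/(k + rank(d)), with 1-based ranks.
// Documents appearing high in multiple lists accumulate the largest scores.
func rrfFuse(k float64, lists ...[]string) []string {
	scores := map[string]float64{}
	for _, list := range lists {
		for rank, id := range list {
			scores[id] += 1.0 / (k + float64(rank+1))
		}
	}
	ids := make([]string, 0, len(scores))
	for id := range scores {
		ids = append(ids, id)
	}
	// Sort by fused score, best first.
	sort.Slice(ids, func(i, j int) bool { return scores[ids[i]] > scores[ids[j]] })
	return ids
}
```

A document ranked first by both the vector list and the keyword list outranks one that appears in only a single list, which is the property RRF is chosen for.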
Streaming & Events
  • Structured streaming — StreamEvent with 5 typed events: TextDelta, ToolCallStart, ToolCallResult, AgentStart, AgentFinish. Full visibility into agent execution.
  • Execution traces — every AgentResult includes Steps []StepTrace with per-tool timing, token usage, and input/output. No OTEL setup required.
  • SSE helper — ServeSSE streams agent responses as Server-Sent Events with zero boilerplate.
Resilience
  • Retry — WithRetry wraps any provider with exponential backoff on 429/503.
  • Rate limiting — WithRateLimit with sliding-window RPM and TPM accounting. Blocks requests until budget allows.
  • Batch processing — BatchProvider and BatchEmbeddingProvider for async offline jobs at reduced cost.
  • Processor pipeline — PreProcessor, PostProcessor, PostToolProcessor hooks for guardrails, PII redaction, logging. ErrHalt short-circuits execution.
  • Human-in-the-loop — InputHandler lets agents pause and ask humans for input, both LLM-driven (ask_user tool) and programmatic.
Observability
  • Deep tracing — Tracer and Span interfaces in the root package. Span hierarchy: agent.execute → agent.memory.load / agent.loop.iteration → agent.memory.persist. Zero overhead when no tracer is configured.
  • Structured logging — all framework logging uses slog. Pass WithLogger(*slog.Logger) to any agent.
  • OTEL integration — observer.NewTracer() backed by the global TracerProvider.

Agents in Depth

LLMAgent
researcher := oasis.NewLLMAgent("researcher", "Searches the web", llm,
    oasis.WithTools(searchTool, knowledgeTool),
    oasis.WithPrompt("You are a research specialist."),
    oasis.WithMaxIter(5),
    oasis.WithCodeExecution(runner),     // let the LLM write and run code
    oasis.WithPlanExecution(),           // let the LLM batch tool calls
    oasis.WithTracer(observer.NewTracer()),
)
Network
researcher := oasis.NewLLMAgent("researcher", "Searches for information", llm,
    oasis.WithTools(searchTool),
)
writer := oasis.NewLLMAgent("writer", "Writes polished content", llm)

team := oasis.NewNetwork("team", "Research and writing team", router,
    oasis.WithAgents(researcher, writer),
    oasis.WithTools(knowledgeTool),
)

// Networks compose recursively — a Network is just another Agent
org := oasis.NewNetwork("org", "Full organization", ceo,
    oasis.WithAgents(team, opsTeam),
)
Workflow
pipeline, err := oasis.NewWorkflow("research-pipeline", "Research and write",
    oasis.Step("prepare", func(ctx context.Context, wCtx *oasis.WorkflowContext) error {
        wCtx.Set("query", "Research: "+wCtx.Input())
        return nil
    }),
    oasis.AgentStep("research", researcher, oasis.InputFrom("query"), oasis.After("prepare")),
    oasis.AgentStep("write", writer, oasis.InputFrom("research.output"), oasis.After("research")),
    oasis.WithOnError(func(step string, err error) { log.Printf("%s failed: %v", step, err) }),
)

result, err := pipeline.Execute(ctx, oasis.AgentTask{Input: "Go error handling"})

Step types: Step (function), AgentStep (delegate to Agent), ToolStep (call a tool), ForEach (iterate with concurrency), DoUntil/DoWhile (loop). Workflows can also be defined from JSON at runtime via FromDefinition for visual workflow builders.

Streaming
if sa, ok := agent.(oasis.StreamingAgent); ok {
    ch := make(chan oasis.StreamEvent)
    go func() {
        for event := range ch {
            switch event.Type {
            case oasis.EventTextDelta:
                fmt.Print(event.Content)
            case oasis.EventToolCallStart:
                fmt.Printf("\n[calling %s]\n", event.Name)
            case oasis.EventToolCallResult:
                fmt.Printf("[%s returned]\n", event.Name)
            }
        }
    }()
    result, err := sa.ExecuteStream(ctx, task, ch)
}
Background Agents
h := oasis.Spawn(ctx, agent, task)

fmt.Println(h.State()) // Running, Completed, Failed, Cancelled
result, err := h.Wait()
h.Cancel()

Core Interfaces

Interface          Purpose
Provider           LLM backend — Chat, ChatStream
EmbeddingProvider  Text-to-vector embedding
Store              Persistence with vector search, keyword search, graph storage
MemoryStore        Long-term semantic memory (facts, confidence, decay)
Tool               Pluggable capability for LLM function calling
Agent              Composable work unit — LLMAgent, Network, Workflow, or custom
StreamingAgent     Token streaming with structured events
InputHandler       Human-in-the-loop — pause and request human input
Tracer / Span      Tracing abstraction (zero OTEL imports in your code)
Retriever          Composable retrieval with re-ranking
CodeRunner         Sandboxed code execution with tool bridge

Included Implementations

Component      Packages
Providers      provider/gemini (Google Gemini), provider/openaicompat (OpenAI, Groq, Together, DeepSeek, Mistral, Ollama, vLLM, LM Studio, OpenRouter, Azure, and any OpenAI-compatible API)
Storage        store/sqlite (local, pure-Go), store/libsql (Turso/remote), store/postgres (PostgreSQL + pgvector). All three support Store, MemoryStore, GraphStore, and KeywordSearcher
Tools          tools/knowledge (RAG), tools/remember, tools/search (web), tools/schedule, tools/shell, tools/file, tools/http, tools/data (CSV/JSON transform), tools/skill (agent skill management)
Code           code (sandboxed Python subprocess with tool bridge)
Retrieval      HybridRetriever (vector + FTS + RRF), GraphRetriever (multi-hop BFS), ScoreReranker, LLMReranker
Ingestion      ingest (HTML, Markdown, CSV, JSON, DOCX, PDF extractors; recursive, markdown, semantic chunkers; parent-child strategy)
Observability  observer (OpenTelemetry-backed Tracer implementation)

Installation

go get github.com/nevindra/oasis

Requires Go 1.24+.

Project Structure

oasis/
|-- types.go, provider.go, tool.go     # Core interfaces and domain types
|-- store.go, memory.go
|-- agent.go, llmagent.go, network.go   # Agent primitives
|-- workflow.go                         # DAG orchestration
|-- processor.go                        # Processor pipeline
|-- input.go                            # Human-in-the-loop
|-- retriever.go                        # Retrieval pipeline
|-- handle.go                           # Spawn() + AgentHandle
|
|-- provider/gemini/                    # Google Gemini provider
|-- provider/openaicompat/              # OpenAI-compatible provider
|-- store/sqlite/                       # Local SQLite (pure-Go, no CGO)
|-- store/libsql/                       # Remote Turso store
|-- store/postgres/                     # PostgreSQL + pgvector
|-- code/                               # Sandboxed code execution
|-- observer/                           # OTEL observability
|-- ingest/                             # Document chunking pipeline
|-- tools/                              # Built-in tools
|
|-- cmd/bot_example/                    # Reference application

Configuration

Config loading order: defaults -> oasis.toml -> environment variables (env vars win).

See docs/configuration/reference.md for the full reference.

Documentation

  • Getting Started — installation, quick start, reference app
  • Concepts — architecture, interfaces, and primitives
  • Guides — how-to guides for building custom components
  • Configuration — all config options and environment variables
  • API Reference — complete interface definitions, types, and options
  • Contributing — engineering principles and coding conventions
  • Deployment — Docker, cloud deployment for the reference bot

MCP Docs Server

Oasis ships an MCP (Model Context Protocol) server that exposes framework documentation to AI assistants. Connect it to Claude Code, Cursor, Windsurf, or any MCP-compatible tool.

{
  "mcpServers": {
    "oasis": {
      "type": "stdio",
      "command": "go",
      "args": ["run", "github.com/nevindra/oasis/cmd/mcp-docs@latest"]
    }
  }
}

All docs are embedded at build time via //go:embed. No network access, no API keys — runs as a local subprocess.

License

AGPL-3.0 — commercial licensing available, contact nevindra for details

Documentation

Overview

Package oasis is an AI assistant framework for building conversational agents in Go.

It provides modular, interface-driven building blocks: LLM providers, embedding providers, vector storage, long-term memory, a tool execution system, a document ingestion pipeline, and messaging frontend abstractions.

Quick Start

Create an agent using the LLMAgent primitive:

provider := gemini.New(apiKey, model)
embedding := gemini.NewEmbedding(apiKey, "text-embedding-004", 768)
store := sqlite.New("oasis.db")
memoryStore := sqlitemem.New("memory.db")

agent := oasis.NewLLMAgent(
	"assistant",
	"Helpful general-purpose assistant",
	provider,
	oasis.WithPrompt("You are a helpful assistant."),
	oasis.WithTools(
		knowledge.New(store, embedding),
		search.New(embedding, braveKey),
	),
	oasis.WithConversationMemory(store, oasis.CrossThreadSearch(embedding)),
	oasis.WithUserMemory(memoryStore, embedding),
)

result, err := agent.Execute(ctx, oasis.AgentTask{Input: "What's the weather like?"})

Core Interfaces

The root package defines the contracts that all components implement:

Included Implementations

Providers: provider/gemini (Google Gemini), provider/openaicompat (OpenAI-compatible APIs). Storage: store/sqlite (local), store/libsql (Turso/remote). Tools: tools/knowledge, tools/remember, tools/search, tools/schedule, tools/shell, tools/file, tools/http.

See the cmd/bot_example directory for a complete reference application.

Index

Constants

This section is empty.

Variables

var ErrMaxIterExceeded = errors.New("step reached max iterations without meeting exit condition")

ErrMaxIterExceeded is returned by DoUntil/DoWhile steps when the loop cap is reached without the exit condition being met.

Functions

func ComputeNextRun

func ComputeNextRun(schedule string, nowUnix int64, tzOffset int) (int64, bool)

ComputeNextRun calculates the next UTC unix timestamp for a schedule string.

Schedule format is "HH:MM <recurrence>" where recurrence is one of:

  • once — fires once, then the action is disabled
  • daily — fires every day at the specified time
  • custom(mon,wed,fri) — fires on specific days of the week
  • weekly(monday) — fires once a week on the given day
  • monthly(15) — fires once a month on the given day number

The time component is in the user's local timezone. tzOffset is the offset from UTC in whole hours (e.g., 7 for WIB/Asia Jakarta, -5 for EST). The returned timestamp is always in UTC.
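The daily case can be sketched with the standard library alone. This mirrors the documented semantics (local wall-clock time, whole-hour tzOffset, result in UTC) but is an illustration, not the package's implementation:

```go
package main

import "time"

// nextDaily returns the next UTC unix timestamp at hh:mm in the user's
// timezone (tzOffset whole hours from UTC), strictly after nowUnix.
func nextDaily(hh, mm int, nowUnix int64, tzOffset int) int64 {
	loc := time.FixedZone("user", tzOffset*3600)
	now := time.Unix(nowUnix, 0).In(loc)
	next := time.Date(now.Year(), now.Month(), now.Day(), hh, mm, 0, 0, loc)
	if !next.After(now) {
		next = next.AddDate(0, 0, 1) // today's slot already passed: roll to tomorrow
	}
	return next.UTC().Unix()
}
```

For example, at nowUnix = 0 (1970-01-01 00:00 UTC) with tzOffset 7, a "00:00 daily" schedule has already fired locally (it is 07:00 in the user's zone), so the next run is local midnight of Jan 2, i.e. 17:00 UTC on Jan 1.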

func ForEachIndex added in v0.2.0

func ForEachIndex(ctx context.Context) (int, bool)

ForEachIndex retrieves the current iteration index (0-based) inside a ForEach step function. Returns the index and true if called from within a ForEach step, or -1 and false otherwise.

func ForEachItem added in v0.2.0

func ForEachItem(ctx context.Context) (any, bool)

ForEachItem retrieves the current iteration element inside a ForEach step function. Returns the element and true if called from within a ForEach step, or nil and false otherwise. The element is carried via context.Context (not WorkflowContext), so it is safe for concurrent iterations — each goroutine sees its own element.

func FormatLocalTime

func FormatLocalTime(unix int64, tzOffset int) string

FormatLocalTime formats a UTC unix timestamp as "YYYY-MM-DD HH:MM" in the timezone specified by tzOffset (hours from UTC).

func NewID

func NewID() string

NewID generates a globally unique, time-sortable UUIDv7 (RFC 9562).

func NowUnix

func NowUnix() int64

NowUnix returns current time as Unix seconds.

func ParseRetryAfter added in v0.7.0

func ParseRetryAfter(value string) time.Duration

ParseRetryAfter parses a Retry-After header value into a duration. Supports both delay-seconds ("120") and HTTP-date ("Wed, 21 Oct 2015 07:28:00 GMT") formats per RFC 9110 §10.2.3. Returns zero on empty or unparseable values.

func ResumeData added in v0.3.0

func ResumeData(wCtx *WorkflowContext) (json.RawMessage, bool)

ResumeData retrieves resume data from the WorkflowContext. Returns the data and true if this step is being resumed, or nil and false on first execution. Safe to call with a nil WorkflowContext (returns nil, false).

func Suspend added in v0.3.0

func Suspend(payload json.RawMessage) error

Suspend returns an error that signals the workflow or network engine to pause execution. The payload provides context for the human (what they need to decide, what data to show).

func WithInputHandlerContext added in v0.2.0

func WithInputHandlerContext(ctx context.Context, h InputHandler) context.Context

WithInputHandlerContext returns a child context carrying the InputHandler.

func WithTaskContext added in v0.6.0

func WithTaskContext(ctx context.Context, task AgentTask) context.Context

WithTaskContext returns a child context carrying the AgentTask. Called automatically by LLMAgent and Network at Execute entry points. Tools and processors can retrieve the task via TaskFromContext.

func WriteSSEEvent added in v0.7.0

func WriteSSEEvent(w http.ResponseWriter, eventType string, data any) error

WriteSSEEvent writes a single Server-Sent Event to w and flushes. It validates that w implements http.Flusher, JSON-marshals data into the SSE data field, and flushes immediately. eventType is the SSE event name (e.g. "text-delta", "done").

Use this to compose custom SSE loops with [StreamingAgent.ExecuteStream]:

ch := make(chan oasis.StreamEvent, 64)
go agent.ExecuteStream(ctx, task, ch)
for ev := range ch {
    oasis.WriteSSEEvent(w, string(ev.Type), ev)
}
oasis.WriteSSEEvent(w, "done", customPayload)

Types

type Agent

type Agent interface {
	// Name returns the agent's identifier.
	Name() string
	// Description returns a human-readable description of what the agent does.
	// Used by Network to generate tool definitions for the routing LLM.
	Description() string
	// Execute runs the agent on the given task and returns a result.
	Execute(ctx context.Context, task AgentTask) (AgentResult, error)
}

Agent is a unit of work that takes a task and returns a result. Implementations range from single LLM tool-calling agents (LLMAgent) to multi-agent coordinators (Network).

type AgentHandle added in v0.2.0

type AgentHandle struct {
	// contains filtered or unexported fields
}

AgentHandle tracks a background agent execution. All methods are safe for concurrent use.

func Spawn added in v0.2.0

func Spawn(ctx context.Context, agent Agent, task AgentTask) *AgentHandle

Spawn launches agent.Execute(ctx, task) in a background goroutine. Returns immediately with a handle for tracking, awaiting, and cancelling. The parent ctx controls the agent's lifetime — cancelling it cancels the agent.

func (*AgentHandle) Agent added in v0.2.0

func (h *AgentHandle) Agent() Agent

Agent returns the agent being executed.

func (*AgentHandle) Await added in v0.2.0

func (h *AgentHandle) Await(ctx context.Context) (AgentResult, error)

Await blocks until the agent completes or ctx is cancelled. Returns the agent's result and error on completion. Returns zero AgentResult and ctx.Err() if ctx is cancelled before completion.

func (*AgentHandle) Cancel added in v0.2.0

func (h *AgentHandle) Cancel()

Cancel requests cancellation. Non-blocking. The agent receives a cancelled context. State transitions to StateCancelled once Execute returns.

func (*AgentHandle) Done added in v0.2.0

func (h *AgentHandle) Done() <-chan struct{}

Done returns a channel closed when execution finishes (any terminal state). Composable with select for multiplexing multiple handles.
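The Done-channel pattern behaves like any Go completion channel. A dependency-free miniature of the Spawn/Done mechanics (stand-in types, not the real AgentHandle) shows the select-based multiplexing the docs describe:

```go
package main

type handle struct {
	done   chan struct{}
	result string
	err    error
}

// spawn runs fn in a goroutine and closes done when it finishes,
// mirroring how AgentHandle.Done() signals any terminal state.
func spawn(fn func() (string, error)) *handle {
	h := &handle{done: make(chan struct{})}
	go func() {
		h.result, h.err = fn()
		close(h.done) // result/err are written before the close
	}()
	return h
}

// firstDone blocks until either handle reaches a terminal state
// and returns that handle's result.
func firstDone(a, b *handle) string {
	select {
	case <-a.done:
		return a.result
	case <-b.done:
		return b.result
	}
}
```

The channel close happens-after the result fields are written, so reading them after receiving from done is race-free, which is the same guarantee AgentHandle documents for Result() after Done().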

func (*AgentHandle) ID added in v0.2.0

func (h *AgentHandle) ID() string

ID returns the unique execution identifier (xid-based, time-sortable).

func (*AgentHandle) Result added in v0.2.0

func (h *AgentHandle) Result() (AgentResult, error)

Result returns the result and error. Only meaningful after Done() is closed. Before completion, returns zero AgentResult and nil error.

func (*AgentHandle) State added in v0.2.0

func (h *AgentHandle) State() AgentState

State returns the current execution state. If execution has just reached a terminal state, State briefly blocks (on the order of nanoseconds) until Done() is closed, guaranteeing that Result() returns valid data whenever State().IsTerminal() is true.

type AgentOption

type AgentOption func(*agentConfig)

AgentOption configures an LLMAgent or Network.

func WithAgents

func WithAgents(agents ...Agent) AgentOption

WithAgents adds subagents to a Network. Ignored by LLMAgent.

func WithCodeExecution added in v0.6.0

func WithCodeExecution(runner CodeRunner) AgentOption

WithCodeExecution enables the built-in "execute_code" tool that lets the LLM write and execute Python code in a sandboxed subprocess. The code has access to all agent tools via call_tool(name, args) and call_tools_parallel(calls).

This complements WithPlanExecution: use execute_plan for simple parallel fan-out, use execute_code for complex logic (conditionals, loops, data flow).

func WithCompressModel added in v0.9.0

func WithCompressModel(fn ModelFunc) AgentOption

WithCompressModel sets a per-request provider for context compression. When the message history exceeds the compress threshold, older tool results are summarized using this provider. Falls back to the agent's main provider when nil.

func WithCompressThreshold added in v0.9.0

func WithCompressThreshold(n int) AgentOption

WithCompressThreshold sets the rune count at which context compression triggers. When the total message content exceeds this threshold, older tool results are summarized via an LLM call. Default is 200,000 runes (~50K tokens). Negative value disables compression.

func WithConversationMemory added in v0.2.1

func WithConversationMemory(s Store, opts ...ConversationOption) AgentOption

WithConversationMemory enables conversation history on the agent. When set and task.Context["thread_id"] is present, the agent loads recent messages before the LLM call and persists the exchange afterward.

Optional ConversationOption values enable additional features:

oasis.WithConversationMemory(store)                                                  // history only
oasis.WithConversationMemory(store, oasis.MaxHistory(30))                            // custom history limit
oasis.WithConversationMemory(store, oasis.CrossThreadSearch(embedding))              // + cross-thread recall
oasis.WithConversationMemory(store, oasis.CrossThreadSearch(embedding, oasis.MinScore(0.7))) // + custom threshold

func WithDynamicModel added in v0.6.0

func WithDynamicModel(fn ModelFunc) AgentOption

WithDynamicModel sets a per-request model selection function. When set, the function is called at the start of every Execute/ExecuteStream call, and its return value is used as the LLM provider for that execution. Overrides the construction-time provider when set.

func WithDynamicPrompt added in v0.6.0

func WithDynamicPrompt(fn PromptFunc) AgentOption

WithDynamicPrompt sets a per-request prompt resolution function. When set, the function is called at the start of every Execute/ExecuteStream call, and its return value is used as the system prompt for that execution. Overrides WithPrompt when set. If the function returns "", no system prompt is used (same as omitting WithPrompt).

func WithDynamicTools added in v0.6.0

func WithDynamicTools(fn ToolsFunc) AgentOption

WithDynamicTools sets a per-request tool selection function. When set, the function is called at the start of every Execute/ExecuteStream call, and its return value REPLACES the construction-time tools for that execution. To remove all tools for a request, return nil or an empty slice.

func WithInputHandler added in v0.2.0

func WithInputHandler(h InputHandler) AgentOption

WithInputHandler sets the handler for human-in-the-loop interactions. When set, the agent gains an "ask_user" tool (LLM-driven) and processors can access the handler via InputHandlerFromContext(ctx).

func WithLogger added in v0.6.0

func WithLogger(l *slog.Logger) AgentOption

WithLogger sets the structured logger for the agent. When set, replaces all log.Printf calls with structured slog output. If not set, a no-op logger is used (no output).

func WithMaxAttachmentBytes added in v0.9.0

func WithMaxAttachmentBytes(n int64) AgentOption

WithMaxAttachmentBytes sets the maximum total bytes of attachments accumulated from tool results during the execution loop. Default is 50 MB. Zero means use the default.

func WithMaxIter

func WithMaxIter(n int) AgentOption

WithMaxIter sets the maximum tool-calling iterations.

func WithMaxTokens added in v0.9.0

func WithMaxTokens(n int) AgentOption

WithMaxTokens sets the maximum output tokens for this agent. Passed to the provider on every LLM call via ChatRequest.GenerationParams.

func WithPlanExecution added in v0.5.0

func WithPlanExecution() AgentOption

WithPlanExecution enables the built-in "execute_plan" tool that batches multiple tool calls in a single LLM turn. The LLM can call execute_plan with an array of steps (each specifying a tool name and arguments), and the framework executes all steps in parallel without re-sampling the LLM between each call. Returns structured per-step results.

This reduces latency and token usage for fan-out patterns where the LLM needs to call the same or different tools multiple times with known inputs.

func WithProcessors

func WithProcessors(processors ...any) AgentOption

WithProcessors adds processors to the agent's execution pipeline. Each processor must implement at least one of PreProcessor, PostProcessor, or PostToolProcessor. Processors run in registration order at their respective hook points during Execute().

func WithPrompt

func WithPrompt(s string) AgentOption

WithPrompt sets the system prompt for the agent or network router.

func WithResponseSchema added in v0.5.0

func WithResponseSchema(s *ResponseSchema) AgentOption

WithResponseSchema sets the response schema for structured JSON output. When set, the provider enforces structured output matching the schema. Providers translate this to their native mechanism (e.g. Gemini responseSchema, OpenAI response_format).

func WithSuspendBudget added in v0.9.0

func WithSuspendBudget(maxSnapshots int, maxBytes int64) AgentOption

WithSuspendBudget sets per-agent limits on concurrent suspended snapshots. maxSnapshots caps the number of active suspensions. maxBytes caps total estimated memory held by snapshot closures. Defaults: 20 snapshots, 256 MB. When either limit is exceeded, new suspensions are rejected (the underlying processor error is returned instead of ErrSuspended).

func WithTemperature added in v0.9.0

func WithTemperature(t float64) AgentOption

WithTemperature sets the LLM sampling temperature for this agent. Passed to the provider on every LLM call via ChatRequest.GenerationParams. Nil (omitting this option) means "use provider default".

func WithTools

func WithTools(tools ...Tool) AgentOption

WithTools adds tools to the agent or network.

func WithTopK added in v0.9.0

func WithTopK(k int) AgentOption

WithTopK sets the top-K sampling parameter for this agent. Passed to the provider on every LLM call via ChatRequest.GenerationParams.

func WithTopP added in v0.9.0

func WithTopP(p float64) AgentOption

WithTopP sets the nucleus sampling probability for this agent. Passed to the provider on every LLM call via ChatRequest.GenerationParams.

func WithTracer added in v0.6.0

func WithTracer(t Tracer) AgentOption

WithTracer sets the tracer for the agent. When set, the agent emits spans for execution, memory, and loop operations. Use observer.NewTracer() for an OTEL-backed implementation.

func WithUserMemory added in v0.2.1

func WithUserMemory(m MemoryStore, e EmbeddingProvider) AgentOption

WithUserMemory enables the full user memory pipeline: read + write.

Read (every Execute call): embeds the input, retrieves relevant facts via BuildContext, and appends them to the system prompt.

Write (after each turn, background): uses the agent's own LLM to extract durable user facts from the conversation exchange and persists them via UpsertFact. Write requires WithConversationMemory — without it, extraction is silently skipped (logged as a warning at construction time).

type AgentResult

type AgentResult struct {
	// Output is the agent's final response text.
	Output string
	// Thinking carries the LLM's reasoning/chain-of-thought from the final response.
	// Populated when the provider returns thinking content (e.g. Gemini thought parts).
	// Empty when the provider does not support thinking or thinking is disabled.
	Thinking string
	// Attachments carries optional multimodal content (images, audio, etc.) from the LLM response.
	// Populated when the provider returns media alongside or instead of text.
	Attachments []Attachment
	// Usage tracks aggregate token usage across all LLM calls.
	Usage Usage
	// Steps records per-tool and per-agent execution traces in chronological order.
	// Populated by LLMAgent (tool calls) and Network (tool + agent delegations).
	// Nil when no tools were called.
	Steps []StepTrace
}

AgentResult is the output of an Agent.

func ServeSSE added in v0.5.0

ServeSSE streams an agent's response as Server-Sent Events over HTTP.

It validates that w implements http.Flusher, sets SSE headers, creates a buffered StreamEvent channel, runs the agent in a background goroutine, and writes each event as:

event: <event-type>
data: <json-encoded StreamEvent>

On completion it sends a final "done" event. If the agent returns an error, it is sent as an "error" event before returning.

Client disconnection propagates via ctx cancellation to the agent. Callers typically pass r.Context() as ctx.

type AgentState added in v0.2.0

type AgentState int32

AgentState represents the execution state of a spawned agent.

const (
	// StatePending indicates the agent has been spawned but Execute has not started.
	StatePending AgentState = iota
	// StateRunning indicates Execute is in progress.
	StateRunning
	// StateCompleted indicates Execute finished successfully.
	StateCompleted
	// StateFailed indicates Execute returned an error.
	StateFailed
	// StateCancelled indicates the agent was cancelled via Cancel() or parent context.
	StateCancelled
)

func (AgentState) IsTerminal added in v0.2.0

func (s AgentState) IsTerminal() bool

IsTerminal reports whether the state is a final state (completed, failed, or cancelled).

func (AgentState) String added in v0.2.0

func (s AgentState) String() string

String returns the state name.

type AgentTask

type AgentTask struct {
	// Input is the natural language task description.
	Input string
	// Attachments carries optional multimodal content (photos, PDFs, documents, etc.) to pass to the LLM.
	// Providers that support multimodal input will attach these to the user message as inline data.
	// Providers that don't support it will ignore this field.
	Attachments []Attachment
	// Context carries optional metadata (thread ID, user ID, etc.).
	// Use the With*ID builder methods to set values and the Task*ID accessors to read them.
	Context map[string]any
}

AgentTask is the input to an Agent.

func TaskFromContext added in v0.6.0

func TaskFromContext(ctx context.Context) (AgentTask, bool)

TaskFromContext retrieves the AgentTask from ctx. Returns the task and true if present, or zero AgentTask and false if not. Use this in Tool.Execute to access task metadata (user ID, thread ID, etc.) without changing the Tool interface.

func (AgentTask) TaskChatID added in v0.2.1

func (t AgentTask) TaskChatID() string

TaskChatID returns the chat ID from task context, or "" if absent.

func (AgentTask) TaskThreadID added in v0.2.1

func (t AgentTask) TaskThreadID() string

TaskThreadID returns the thread ID from task context, or "" if absent.

func (AgentTask) TaskUserID added in v0.2.1

func (t AgentTask) TaskUserID() string

TaskUserID returns the user ID from task context, or "" if absent.

func (AgentTask) WithChatID added in v0.9.0

func (t AgentTask) WithChatID(id string) AgentTask

WithChatID sets the chat/channel ID on the task and returns the updated copy.

func (AgentTask) WithThreadID added in v0.9.0

func (t AgentTask) WithThreadID(id string) AgentTask

WithThreadID sets the conversation thread ID on the task and returns the updated copy.

func (AgentTask) WithUserID added in v0.9.0

func (t AgentTask) WithUserID(id string) AgentTask

WithUserID sets the user ID on the task and returns the updated copy.
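The With*/Task* pairs form a small value-type builder over Context. A minimal sketch of that behavior, using a simplified AgentTask and an assumed context key name ("user_id" is illustrative, not necessarily the package's real key):

```go
package main

import "fmt"

// AgentTask is a simplified copy of the documented struct.
type AgentTask struct {
	Input   string
	Context map[string]any
}

// WithUserID sketches the documented builder: set a context value,
// return the updated task. The key name "user_id" is an assumption.
func (t AgentTask) WithUserID(id string) AgentTask {
	if t.Context == nil {
		t.Context = make(map[string]any)
	}
	t.Context["user_id"] = id
	return t
}

// TaskUserID sketches the matching accessor: "" when absent.
func (t AgentTask) TaskUserID() string {
	if v, ok := t.Context["user_id"].(string); ok {
		return v
	}
	return ""
}

func main() {
	task := AgentTask{Input: "summarize the report"}.WithUserID("u-42")
	fmt.Println(task.TaskUserID()) // u-42
}
```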

type Attachment added in v0.2.1

type Attachment struct {
	MimeType string `json:"mime_type"`
	URL      string `json:"url,omitempty"`
	Data     []byte `json:"-"`

	// Deprecated: use Data for inline bytes or URL for remote references.
	Base64 string `json:"base64,omitempty"`
}

Attachment represents binary content (image, PDF, audio, video, etc.) sent to a multimodal LLM. The MimeType determines how the provider interprets the data.

Populate URL for remote references (pre-uploaded to storage/CDN) or Data for transient inline bytes. Providers resolve the best transport: URL > Data > Base64.

func (Attachment) HasInlineData added in v0.6.0

func (a Attachment) HasInlineData() bool

HasInlineData reports whether inline bytes are available (Data or Base64).

func (Attachment) InlineData added in v0.6.0

func (a Attachment) InlineData() []byte

InlineData returns raw bytes from whichever inline source is populated. Priority: Data > Base64 (decoded). Returns nil if only URL is set.

type BatchEmbeddingProvider added in v0.3.2

type BatchEmbeddingProvider interface {
	// BatchEmbed submits multiple embedding requests as a single batch job.
	BatchEmbed(ctx context.Context, texts [][]string) (BatchJob, error)

	// BatchEmbedStatus returns the current state of a batch embedding job.
	BatchEmbedStatus(ctx context.Context, jobID string) (BatchJob, error)

	// BatchEmbedResults retrieves embedding vectors for a completed batch job.
	// Returns one vector per input text group.
	BatchEmbedResults(ctx context.Context, jobID string) ([][]float32, error)
}

BatchEmbeddingProvider extends EmbeddingProvider with batch embedding capabilities. Each element in the texts slice passed to BatchEmbed is a group of strings to embed.

type BatchJob added in v0.3.2

type BatchJob struct {
	ID          string     `json:"id"`
	State       BatchState `json:"state"`
	DisplayName string     `json:"display_name,omitempty"`
	Stats       BatchStats `json:"stats"`
	CreateTime  time.Time  `json:"create_time"`
	UpdateTime  time.Time  `json:"update_time"`
}

BatchJob represents an asynchronous batch processing job. Use BatchStatus (or BatchEmbedStatus for embedding jobs) to poll for state changes and BatchChatResults or BatchEmbedResults to retrieve completed output.

type BatchProvider added in v0.3.2

type BatchProvider interface {
	// BatchChat submits multiple chat requests as a single batch job.
	// Returns the created job with its ID for status tracking.
	BatchChat(ctx context.Context, requests []ChatRequest) (BatchJob, error)

	// BatchStatus returns the current state of a batch job.
	BatchStatus(ctx context.Context, jobID string) (BatchJob, error)

	// BatchChatResults retrieves chat responses for a completed batch job.
	// Returns error if the job has not yet succeeded.
	BatchChatResults(ctx context.Context, jobID string) ([]ChatResponse, error)

	// BatchCancel requests cancellation of a running or pending batch job.
	BatchCancel(ctx context.Context, jobID string) error
}

BatchProvider extends Provider with asynchronous batch chat capabilities. Batch requests are processed offline at reduced cost. Use BatchStatus to poll job progress and BatchChatResults to retrieve completed responses.

type BatchState added in v0.3.2

type BatchState string

BatchState represents the lifecycle state of a batch job.

const (
	BatchPending   BatchState = "pending"
	BatchRunning   BatchState = "running"
	BatchSucceeded BatchState = "succeeded"
	BatchFailed    BatchState = "failed"
	BatchCancelled BatchState = "cancelled"
	BatchExpired   BatchState = "expired"
)

type BatchStats added in v0.3.2

type BatchStats struct {
	TotalCount     int `json:"total_count"`
	SucceededCount int `json:"succeeded_count"`
	FailedCount    int `json:"failed_count"`
}

BatchStats holds aggregate counts for a batch job's requests.

type ChatMessage

type ChatMessage struct {
	Role        string          `json:"role"` // "system", "user", "assistant", "tool"
	Content     string          `json:"content"`
	Attachments []Attachment    `json:"attachments,omitempty"`
	ToolCalls   []ToolCall      `json:"tool_calls,omitempty"`
	ToolCallID  string          `json:"tool_call_id,omitempty"`
	Metadata    json.RawMessage `json:"metadata,omitempty"` // provider-specific (e.g. Gemini thoughtSignature)
}

func AssistantMessage

func AssistantMessage(text string) ChatMessage

func SystemMessage

func SystemMessage(text string) ChatMessage

func ToolResultMessage

func ToolResultMessage(callID, content string) ChatMessage

func UserMessage

func UserMessage(text string) ChatMessage
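These constructors build ChatMessage values with the role strings listed in the ChatMessage field comment. A self-contained sketch of what they plausibly do (the bodies are assumptions, not the package source):

```go
package main

import "fmt"

// ChatMessage is trimmed to the fields the constructors touch.
type ChatMessage struct {
	Role       string
	Content    string
	ToolCallID string
}

// Role strings come from the ChatMessage field comment:
// "system", "user", "assistant", "tool".
func SystemMessage(text string) ChatMessage    { return ChatMessage{Role: "system", Content: text} }
func UserMessage(text string) ChatMessage      { return ChatMessage{Role: "user", Content: text} }
func AssistantMessage(text string) ChatMessage { return ChatMessage{Role: "assistant", Content: text} }
func ToolResultMessage(callID, content string) ChatMessage {
	return ChatMessage{Role: "tool", Content: content, ToolCallID: callID}
}

func main() {
	msgs := []ChatMessage{
		SystemMessage("You are terse."),
		UserMessage("What is 2+2?"),
	}
	fmt.Println(msgs[0].Role, msgs[1].Role) // system user
}
```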

type ChatRequest

type ChatRequest struct {
	Messages         []ChatMessage     `json:"messages"`
	Tools            []ToolDefinition  `json:"tools,omitempty"`
	ResponseSchema   *ResponseSchema   `json:"response_schema,omitempty"`
	GenerationParams *GenerationParams `json:"generation_params,omitempty"`
}

type ChatResponse

type ChatResponse struct {
	Content     string       `json:"content"`
	Thinking    string       `json:"thinking,omitempty"`
	Attachments []Attachment `json:"attachments,omitempty"`
	ToolCalls   []ToolCall   `json:"tool_calls,omitempty"`
	Usage       Usage        `json:"usage"`
}

type Chunk

type Chunk struct {
	ID         string     `json:"id"`
	DocumentID string     `json:"document_id"`
	ParentID   string     `json:"parent_id,omitempty"`
	Content    string     `json:"content"`
	ChunkIndex int        `json:"chunk_index"`
	Embedding  []float32  `json:"-"`
	Metadata   *ChunkMeta `json:"metadata,omitempty"`
}

type ChunkEdge added in v0.6.0

type ChunkEdge struct {
	ID          string       `json:"id"`
	SourceID    string       `json:"source_id"`
	TargetID    string       `json:"target_id"`
	Relation    RelationType `json:"relation"`
	Weight      float32      `json:"weight"`
	Description string       `json:"description,omitempty"`
}

ChunkEdge represents a directed, weighted relationship between two chunks.

type ChunkFilter added in v0.5.0

type ChunkFilter struct {
	Field string
	Op    FilterOp
	Value any
}

ChunkFilter restricts which chunks are considered during vector search. Field names: "document_id", "source", "created_at", or "meta.<key>" for JSON metadata fields (e.g. "meta.section_heading", "meta.page_number").

func ByDocument added in v0.5.0

func ByDocument(ids ...string) ChunkFilter

ByDocument returns a filter matching chunks belonging to the given document IDs.

func ByExcludeDocument added in v0.9.0

func ByExcludeDocument(docID string) ChunkFilter

ByExcludeDocument returns a filter that excludes chunks belonging to the given document.

func ByMeta added in v0.5.0

func ByMeta(key, value string) ChunkFilter

ByMeta returns a filter matching chunks where metadata key equals value. Key corresponds to a ChunkMeta JSON field (e.g. "section_heading", "page_number").

func BySource added in v0.5.0

func BySource(source string) ChunkFilter

BySource returns a filter matching chunks from documents with the given source.

func CreatedAfter added in v0.5.0

func CreatedAfter(unix int64) ChunkFilter

CreatedAfter returns a filter matching chunks from documents created after unix timestamp.

func CreatedBefore added in v0.5.0

func CreatedBefore(unix int64) ChunkFilter

CreatedBefore returns a filter matching chunks from documents created before unix timestamp.

type ChunkMeta added in v0.3.0

type ChunkMeta struct {
	PageNumber     int     `json:"page_number,omitempty"`
	SectionHeading string  `json:"section_heading,omitempty"`
	SourceURL      string  `json:"source_url,omitempty"`
	Images         []Image `json:"images,omitempty"`
}

ChunkMeta holds optional chunk-level metadata produced during extraction. Stored as JSON in the database. Zero values are omitted.

type CodeFile added in v0.9.0

type CodeFile struct {
	// Name is the filename (e.g. "chart.png", "data.csv").
	Name string `json:"name"`
	// MIME is the media type (e.g. "image/png"). Set on output files.
	MIME string `json:"mime,omitempty"`
	// Data holds inline file bytes. Tagged json:"-" to avoid double-encoding;
	// wire format uses base64 in a separate field.
	Data []byte `json:"-"`
	// URL is an alternative to Data: the sandbox downloads via HTTP GET.
	// Not yet implemented by the reference sandbox; reserved for future use.
	URL string `json:"url,omitempty"`
}

CodeFile represents a file transferred between app and sandbox.

For input: Name + Data (inline bytes) or Name + URL (sandbox downloads via HTTP GET). For output: Name + MIME + Data (always inline).

type CodeRequest added in v0.6.0

type CodeRequest struct {
	// Code is the source code to execute.
	Code string `json:"code"`
	// Runtime selects the execution environment ("python", "node").
	// Empty defaults to "python".
	Runtime string `json:"runtime,omitempty"`
	// Timeout is the maximum execution duration. Zero means use runner default.
	Timeout time.Duration `json:"-"`
	// SessionID enables workspace persistence across executions.
	// Same session ID = same workspace directory. Empty = isolated per execution.
	SessionID string `json:"session_id,omitempty"`
	// Files are placed in the workspace before execution.
	// For input: populate Name + Data (inline) or Name + URL (sandbox downloads).
	Files []CodeFile `json:"files,omitempty"`
}

CodeRequest is the input to CodeRunner.Run.

type CodeResult added in v0.6.0

type CodeResult struct {
	// Output is the structured result set via set_result() in code.
	Output string `json:"output"`
	// Logs captures print() output and stderr from the code execution.
	Logs string `json:"logs,omitempty"`
	// ExitCode is the process exit code (0 = success).
	ExitCode int `json:"exit_code"`
	// Error describes execution failure (timeout, syntax error, etc.).
	Error string `json:"error,omitempty"`
	// Files are explicitly returned by the code via set_result(files=[...]).
	Files []CodeFile `json:"files,omitempty"`
}

CodeResult is the output of CodeRunner.Run.

type CodeRunner added in v0.6.0

type CodeRunner interface {
	// Run executes code and returns the result. The dispatch function
	// allows code to call agent tools via call_tool() from within the code.
	Run(ctx context.Context, req CodeRequest, dispatch DispatchFunc) (CodeResult, error)
}

CodeRunner executes code written by an LLM in a sandboxed environment. Implementations control the runtime (HTTP sandbox, container, Wasm). The dispatch function bridges code back to the agent's tool registry, enabling code to call any tool the agent has access to.

type ContentGuard added in v0.6.0

type ContentGuard struct {
	// contains filtered or unexported fields
}

ContentGuard enforces character length limits on input and output content. Implements PreProcessor (input check) and PostProcessor (output check). Returns ErrHalt when limits are exceeded. Safe for concurrent use.

A zero value for a limit means that check is skipped:

NewContentGuard(MaxInputLength(5000))  // only checks input
NewContentGuard(MaxOutputLength(10000)) // only checks output

func NewContentGuard added in v0.6.0

func NewContentGuard(opts ...ContentOption) *ContentGuard

NewContentGuard creates a guard that enforces content length limits.

func (*ContentGuard) PostLLM added in v0.6.0

func (g *ContentGuard) PostLLM(_ context.Context, resp *ChatResponse) error

PostLLM checks the LLM response length against maxOutputLen.

func (*ContentGuard) PreLLM added in v0.6.0

func (g *ContentGuard) PreLLM(_ context.Context, req *ChatRequest) error

PreLLM checks the last user message length against maxInputLen.

type ContentOption added in v0.6.0

type ContentOption func(*ContentGuard)

ContentOption configures a ContentGuard.

func ContentResponse added in v0.6.0

func ContentResponse(msg string) ContentOption

ContentResponse sets the halt response message. Default: "Content exceeds the allowed length."

func MaxInputLength added in v0.6.0

func MaxInputLength(n int) ContentOption

MaxInputLength sets the maximum rune count for the last user message. Zero (default) disables the input length check.

func MaxOutputLength added in v0.6.0

func MaxOutputLength(n int) ContentOption

MaxOutputLength sets the maximum rune count for LLM responses. Zero (default) disables the output length check.

type ConversationOption added in v0.2.1

type ConversationOption func(*agentConfig)

ConversationOption configures conversation memory behavior. Pass to WithConversationMemory to enable optional features like cross-thread search.

This is a separate type from AgentOption and SemanticOption to provide compile-time scoping: ConversationOption values are only accepted by WithConversationMemory, preventing accidental misuse in other contexts. The same pattern applies to SemanticOption (scoped to CrossThreadSearch) and SemanticTrimmingOption (scoped to WithSemanticTrimming).

func AutoTitle added in v0.7.0

func AutoTitle() ConversationOption

AutoTitle enables automatic thread title generation. When set, the agent generates a short title from the first user message and stores it on the thread. Titles are only generated once per thread (skipped if the thread already has a title). Runs in the background alongside message persistence.

func CrossThreadSearch added in v0.2.1

func CrossThreadSearch(e EmbeddingProvider, opts ...SemanticOption) ConversationOption

CrossThreadSearch enables semantic recall across all conversation threads. When the agent receives a message, it embeds the input and searches all stored messages for semantically similar content from other threads. The embedding provider is required (compile-time enforced) and is also used to embed messages before storing them for future recall.

Optional SemanticOption values tune recall behavior:

oasis.CrossThreadSearch(embedding)                    // default threshold (0.60)
oasis.CrossThreadSearch(embedding, oasis.MinScore(0.7)) // custom threshold

func MaxHistory added in v0.3.0

func MaxHistory(n int) ConversationOption

MaxHistory sets the maximum number of recent messages loaded from conversation history before the LLM call. The zero value (or omitting this option) uses a built-in default of 10.

func MaxTokens added in v0.5.0

func MaxTokens(n int) ConversationOption

MaxTokens sets a token budget for conversation history loaded before the LLM call. Messages are trimmed oldest-first until the total estimated tokens fit within n. Composes with MaxHistory — both limits apply, whichever triggers first. The zero value (or omitting this option) disables token-based trimming.

func WithSemanticTrimming added in v0.9.0

func WithSemanticTrimming(e EmbeddingProvider, opts ...SemanticTrimmingOption) ConversationOption

WithSemanticTrimming enables relevance-based history trimming. When the conversation exceeds MaxHistory or MaxTokens, instead of dropping oldest messages first, older messages are scored by cosine similarity to the current query. Lowest-scoring messages are dropped first. The most recent N messages (default 3) are always preserved regardless of score.

Requires an EmbeddingProvider. Falls back to oldest-first trimming if embedding fails (degrade, don't crash).

If CrossThreadSearch is also enabled, the query embedding is reused — no extra API call.

type DefinitionRegistry added in v0.5.0

type DefinitionRegistry struct {
	// Agents maps names to Agent implementations (for LLM nodes).
	Agents map[string]Agent
	// Tools maps names to Tool implementations (for Tool nodes).
	Tools map[string]Tool
	// Conditions maps names to custom condition functions (escape hatch for
	// complex logic that can't be expressed as a simple comparison).
	Conditions map[string]func(*WorkflowContext) bool
}

DefinitionRegistry maps string names in a WorkflowDefinition to concrete Go objects. Pass to FromDefinition.

type DispatchFunc added in v0.6.0

type DispatchFunc func(ctx context.Context, tc ToolCall) DispatchResult

DispatchFunc executes a single tool call and returns the result. LLMAgent provides one that calls ToolRegistry.Execute + ask_user. Network provides one that also routes to subagents via the agent_* prefix.

type DispatchResult added in v0.7.0

type DispatchResult struct {
	Content     string
	Usage       Usage
	Attachments []Attachment
	// IsError signals that Content represents an error message rather than
	// a successful tool result. This enables structural error detection
	// without relying on string-prefix heuristics.
	IsError bool
}

DispatchResult holds the result of a single tool or agent dispatch.

type Document

type Document struct {
	ID        string `json:"id"`
	Title     string `json:"title"`
	Source    string `json:"source"`
	Content   string `json:"content"`
	CreatedAt int64  `json:"created_at"`
}

type EdgeContext added in v0.9.0

type EdgeContext struct {
	FromChunkID string       `json:"from_chunk_id"`
	Relation    RelationType `json:"relation"`
	Description string       `json:"description"`
}

EdgeContext describes a graph edge that led to a chunk's discovery. Populated by GraphRetriever for graph-discovered (non-seed) chunks.

type EmbeddingProvider

type EmbeddingProvider interface {
	// Embed returns embedding vectors for the given texts.
	Embed(ctx context.Context, texts []string) ([][]float32, error)
	// Dimensions returns the embedding vector size.
	Dimensions() int
	// Name returns the provider name.
	Name() string
}

EmbeddingProvider abstracts text embedding.

type ErrHTTP

type ErrHTTP struct {
	Status     int
	Body       string
	RetryAfter time.Duration // parsed from Retry-After header; zero = not set
}

func (*ErrHTTP) Error

func (e *ErrHTTP) Error() string

type ErrHalt

type ErrHalt struct {
	Response string
}

ErrHalt signals that a processor wants to stop agent execution and return a specific response to the caller. The agent loop catches ErrHalt and returns AgentResult{Output: Response} with a nil error.

func (*ErrHalt) Error

func (e *ErrHalt) Error() string
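A sketch of the documented halt behavior: a processor returns ErrHalt, and the caller converts it into a normal response with a nil error. The guard and run functions here are illustrative, not package code:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrHalt mirrors the documented type.
type ErrHalt struct{ Response string }

func (e *ErrHalt) Error() string { return "halted: " + e.Response }

// guard stands in for a PreProcessor hook that refuses long input.
func guard(input string) error {
	if len(input) > 20 {
		return &ErrHalt{Response: "Content exceeds the allowed length."}
	}
	return nil
}

// run sketches the documented agent-loop behavior: ErrHalt becomes a
// normal output with a nil error instead of propagating as a failure.
func run(input string) (string, error) {
	if err := guard(input); err != nil {
		var halt *ErrHalt
		if errors.As(err, &halt) {
			return halt.Response, nil
		}
		return "", err
	}
	return "ok: " + input, nil
}

func main() {
	out, err := run("this input is definitely too long to pass")
	fmt.Println(out, err) // Content exceeds the allowed length. <nil>
}
```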

type ErrLLM

type ErrLLM struct {
	Provider string
	Message  string
}

func (*ErrLLM) Error

func (e *ErrLLM) Error() string

type ErrSuspended added in v0.3.0

type ErrSuspended struct {
	// Step is the name of the step or processor hook that suspended.
	Step string
	// Payload carries context for the human (what to show, what to decide).
	Payload json.RawMessage
	// contains filtered or unexported fields
}

ErrSuspended is returned by Execute() when a workflow step or network processor suspends execution to await external input. Inspect Payload for context, then call Resume() or ResumeStream() with the human's response.

Retention: ErrSuspended holds closures that capture the full conversation message history (including tool call arguments, results, and attachments). This data remains in memory until Resume()/ResumeStream() is called, Release() is called, the TTL expires, or the ErrSuspended value is garbage-collected.

To prevent memory leaks in server environments, a default TTL of 30 minutes is applied when the framework creates ErrSuspended; use WithSuspendTTL to override it. When the TTL elapses without Resume(), the snapshot is released automatically. Call Release() explicitly when the resume window has passed earlier than the TTL (e.g. timeout, user abandonment). After release (manual or automatic), Resume()/ResumeStream() returns an error.

func (*ErrSuspended) Error added in v0.3.0

func (e *ErrSuspended) Error() string

func (*ErrSuspended) Release added in v0.9.0

func (e *ErrSuspended) Release()

Release nils out the resume closure, eagerly freeing the captured message snapshot and all referenced data (tool arguments, attachments, etc.). Call this when the suspend will not be resumed (timeout, user abandonment). After Release(), Resume() returns an error. Safe to call multiple times.

func (*ErrSuspended) Resume added in v0.3.0

func (e *ErrSuspended) Resume(ctx context.Context, data json.RawMessage) (AgentResult, error)

Resume continues execution with the human's response data. The data is made available to the step via ResumeData(). Resume is single-use: calling it more than once is undefined behavior. Returns an error if called on a released, expired, or externally constructed ErrSuspended.

func (*ErrSuspended) ResumeStream added in v0.9.0

func (e *ErrSuspended) ResumeStream(ctx context.Context, data json.RawMessage, ch chan<- StreamEvent) (AgentResult, error)

ResumeStream continues execution with the human's response data, emitting StreamEvent values into ch throughout. Like Resume but with streaming support. The channel is closed when streaming completes. Returns an error if called on a released, expired, or externally constructed ErrSuspended, or if the suspend was created in a non-streaming context (resumeStream is nil).

func (*ErrSuspended) WithSuspendTTL added in v0.9.0

func (e *ErrSuspended) WithSuspendTTL(d time.Duration)

WithSuspendTTL sets an automatic expiry on the suspended state. When the TTL elapses without Resume() being called, the resume closure is released automatically, freeing the captured message snapshot.

A default TTL of 30 minutes is applied automatically when ErrSuspended is created by the framework. Call this to override with a custom duration.

var suspended *oasis.ErrSuspended
if errors.As(err, &suspended) {
    suspended.WithSuspendTTL(5 * time.Minute)
    // ... store suspended for later resume ...
}

type ExtractedFact added in v0.2.1

type ExtractedFact struct {
	Fact       string  `json:"fact"`
	Category   string  `json:"category"`
	Supersedes *string `json:"supersedes,omitempty"`
}

ExtractedFact is a user fact extracted from a conversation turn. Returned by the auto-extraction pipeline and persisted to MemoryStore.

type Fact

type Fact struct {
	ID         string    `json:"id"`
	Fact       string    `json:"fact"`
	Category   string    `json:"category"`
	Confidence float64   `json:"confidence"`
	Embedding  []float32 `json:"-"`
	CreatedAt  int64     `json:"created_at"`
	UpdatedAt  int64     `json:"updated_at"`
}

type FilterOp added in v0.5.0

type FilterOp int

FilterOp is a comparison operator for chunk filters.

const (
	// OpEq matches when field equals value.
	OpEq FilterOp = iota
	// OpIn matches when field is in a set of values. Value must be []string.
	OpIn
	// OpGt matches when field is greater than value.
	OpGt
	// OpLt matches when field is less than value.
	OpLt
	// OpNeq matches when field does not equal value.
	OpNeq
)
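To make the operator semantics concrete, here is a toy in-memory matcher. Real stores evaluate filters in their own query layer; this matching logic is an assumption for illustration, not the package's implementation:

```go
package main

import "fmt"

// FilterOp and ChunkFilter mirror the documented types.
type FilterOp int

const (
	OpEq FilterOp = iota
	OpIn
	OpGt
	OpLt
	OpNeq
)

type ChunkFilter struct {
	Field string
	Op    FilterOp
	Value any
}

// matches sketches the documented operator semantics against a flat
// field map (numeric fields use int64 here for simplicity).
func matches(fields map[string]any, f ChunkFilter) bool {
	v := fields[f.Field]
	switch f.Op {
	case OpEq:
		return v == f.Value
	case OpNeq:
		return v != f.Value
	case OpIn:
		set, _ := f.Value.([]string) // OpIn requires []string per its doc
		for _, s := range set {
			if v == s {
				return true
			}
		}
		return false
	case OpGt:
		a, _ := v.(int64)
		b, _ := f.Value.(int64)
		return a > b
	case OpLt:
		a, _ := v.(int64)
		b, _ := f.Value.(int64)
		return a < b
	}
	return false
}

func main() {
	chunk := map[string]any{"document_id": "doc-1", "created_at": int64(1700000000)}
	fmt.Println(matches(chunk, ChunkFilter{Field: "document_id", Op: OpIn, Value: []string{"doc-1", "doc-2"}})) // true
	fmt.Println(matches(chunk, ChunkFilter{Field: "created_at", Op: OpGt, Value: int64(1800000000)}))           // false
}
```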

type GenerationParams added in v0.9.0

type GenerationParams struct {
	Temperature *float64 `json:"temperature,omitempty"`
	TopP        *float64 `json:"top_p,omitempty"`
	TopK        *int     `json:"top_k,omitempty"`
	MaxTokens   *int     `json:"max_tokens,omitempty"`
}

GenerationParams controls LLM generation behavior. All fields are pointers — nil means "use provider default". A Temperature of 0.0 is a valid setting, so nil (not zero) signals "unset".

type GraphRetriever added in v0.6.0

type GraphRetriever struct {
	// contains filtered or unexported fields
}

GraphRetriever combines vector search with knowledge graph traversal. It performs an initial vector search to find seed chunks, then traverses stored chunk edges to discover contextually related content. If the Store does not implement GraphStore, it falls back to vector-only retrieval.

func NewGraphRetriever added in v0.6.0

func NewGraphRetriever(store Store, embedding EmbeddingProvider, opts ...GraphRetrieverOption) *GraphRetriever

NewGraphRetriever creates a Retriever that combines vector search with graph traversal for multi-hop contextual retrieval.

func (*GraphRetriever) Retrieve added in v0.6.0

func (g *GraphRetriever) Retrieve(ctx context.Context, query string, topK int) ([]RetrievalResult, error)

Retrieve searches the knowledge base using vector search + graph traversal.

type GraphRetrieverOption added in v0.6.0

type GraphRetrieverOption func(*graphRetrieverConfig)

GraphRetrieverOption configures a GraphRetriever.

func WithBidirectional added in v0.6.0

func WithBidirectional(b bool) GraphRetrieverOption

WithBidirectional enables traversal of both outgoing and incoming edges (default false).

func WithGraphFilters added in v0.6.0

func WithGraphFilters(filters ...ChunkFilter) GraphRetrieverOption

WithGraphFilters sets metadata filters passed to the initial vector search.

func WithGraphRetrieverLogger added in v0.6.0

func WithGraphRetrieverLogger(l *slog.Logger) GraphRetrieverOption

WithGraphRetrieverLogger sets the structured logger for a GraphRetriever.

func WithGraphRetrieverTracer added in v0.6.0

func WithGraphRetrieverTracer(t Tracer) GraphRetrieverOption

WithGraphRetrieverTracer sets the Tracer for a GraphRetriever.

func WithGraphWeight added in v0.6.0

func WithGraphWeight(w float32) GraphRetrieverOption

WithGraphWeight sets the weight for graph-derived scores in the final blend (default 0.3).

func WithHopDecay added in v0.6.0

func WithHopDecay(factors []float32) GraphRetrieverOption

WithHopDecay sets the score decay factor per hop level (default {1.0, 0.7, 0.5}). Length implicitly caps max hops.

func WithMaxHops added in v0.6.0

func WithMaxHops(n int) GraphRetrieverOption

WithMaxHops sets the maximum number of graph traversal hops (default 2).

func WithMinTraversalScore added in v0.6.0

func WithMinTraversalScore(s float32) GraphRetrieverOption

WithMinTraversalScore sets the minimum edge weight to follow during traversal (default 0).

func WithRelationFilter added in v0.6.0

func WithRelationFilter(types ...RelationType) GraphRetrieverOption

WithRelationFilter restricts graph traversal to the specified relationship types.

func WithSeedKeywordWeight added in v0.9.0

func WithSeedKeywordWeight(w float32) GraphRetrieverOption

WithSeedKeywordWeight sets the keyword search weight in the seed RRF merge (default 0, disabled). When > 0 and the Store implements KeywordSearcher, keyword results are merged with vector results to produce a more diverse seed set for graph traversal.

func WithSeedTopK added in v0.6.0

func WithSeedTopK(k int) GraphRetrieverOption

WithSeedTopK sets the number of seed chunks from initial vector search (default 10).

func WithVectorWeight added in v0.6.0

func WithVectorWeight(w float32) GraphRetrieverOption

WithVectorWeight sets the weight for vector scores in the final blend (default 0.7).

type GraphStore added in v0.6.0

type GraphStore interface {
	StoreEdges(ctx context.Context, edges []ChunkEdge) error
	GetEdges(ctx context.Context, chunkIDs []string) ([]ChunkEdge, error)
	GetIncomingEdges(ctx context.Context, chunkIDs []string) ([]ChunkEdge, error)
	PruneOrphanEdges(ctx context.Context) (int, error)
}

GraphStore is an optional Store capability for chunk relationship graphs. Store implementations that maintain a knowledge graph can implement this interface; callers discover it via type assertion.

type HybridRetriever added in v0.3.0

type HybridRetriever struct {
	// contains filtered or unexported fields
}

HybridRetriever composes vector search, keyword search (FTS), parent-child resolution, and optional re-ranking into a single Retrieve call.

func NewHybridRetriever added in v0.3.0

func NewHybridRetriever(store Store, embedding EmbeddingProvider, opts ...RetrieverOption) *HybridRetriever

NewHybridRetriever creates a Retriever that combines vector and keyword search using Reciprocal Rank Fusion, resolves parent-child chunks, and optionally re-ranks results. If the Store implements KeywordSearcher, keyword search is used automatically.

func (*HybridRetriever) Retrieve added in v0.3.0

func (h *HybridRetriever) Retrieve(ctx context.Context, query string, topK int) ([]RetrievalResult, error)

Retrieve searches the knowledge base using hybrid vector + keyword search, resolves parent-child chunks, optionally re-ranks, and returns the top results.

type Image added in v0.3.0

type Image struct {
	MimeType string `json:"mime_type"`
	Base64   string `json:"base64"`
	AltText  string `json:"alt_text,omitempty"`
	Page     int    `json:"page,omitempty"`
}

Image represents an extracted image from a document.

type InjectionGuard added in v0.6.0

type InjectionGuard struct {
	// contains filtered or unexported fields
}

InjectionGuard is a PreProcessor that detects prompt injection attempts in user messages using multi-layer heuristics:

  • Layer 1: Known injection phrases (~55 patterns, case-insensitive substring)
  • Layer 2: Role override detection (role prefixes, markdown headers, XML tags). Note: this layer may flag legitimate content containing patterns like "user:" at the start of a line. Use SkipLayers(2) if this causes false positives.
  • Layer 3: Delimiter injection (fake message boundaries, separator abuse)
  • Layer 4: Encoding/obfuscation (zero-width chars, NFKC normalization, base64-encoded payloads)
  • Layer 5: User-supplied custom patterns and regex

By default only the last user message is checked. Use ScanAllMessages() to scan all user messages in the conversation history.

Returns ErrHalt when injection is detected. Safe for concurrent use.

func NewInjectionGuard added in v0.6.0

func NewInjectionGuard(opts ...InjectionOption) *InjectionGuard

NewInjectionGuard creates a guard with built-in multi-layer injection detection. Options customize behavior: add patterns, add regex, change response, skip layers.

func (*InjectionGuard) PreLLM added in v0.6.0

func (g *InjectionGuard) PreLLM(_ context.Context, req *ChatRequest) error

PreLLM checks user messages for injection patterns. By default only the last user message is checked; enable ScanAllMessages() to check all user messages in the conversation history.

type InjectionOption added in v0.6.0

type InjectionOption func(*InjectionGuard)

InjectionOption configures an InjectionGuard.

func InjectionPatterns added in v0.6.0

func InjectionPatterns(patterns ...string) InjectionOption

InjectionPatterns adds custom string patterns (case-insensitive substring match). These are appended to the built-in Layer 1 phrases.

func InjectionRegex added in v0.6.0

func InjectionRegex(patterns ...*regexp.Regexp) InjectionOption

InjectionRegex adds custom regex patterns for Layer 5 detection.

func InjectionResponse added in v0.6.0

func InjectionResponse(msg string) InjectionOption

InjectionResponse sets the halt response message. Default: "I can't process that request."

func ScanAllMessages added in v0.8.0

func ScanAllMessages() InjectionOption

ScanAllMessages enables scanning all user messages in the conversation, not just the last one. Use this to detect injection placed in earlier messages (e.g., via multi-turn context poisoning). Default: only the last user message is scanned.

func SkipLayers added in v0.6.0

func SkipLayers(layers ...int) InjectionOption

SkipLayers disables specific detection layers (1-5). Use when a layer produces false positives for your use case.

type InputHandler added in v0.2.0

type InputHandler interface {
	RequestInput(ctx context.Context, req InputRequest) (InputResponse, error)
}

InputHandler delivers questions to a human and returns their response. Implementations bridge to the actual communication channel (Telegram, CLI, HTTP, etc). Must block until a response is received or ctx is cancelled.

func InputHandlerFromContext added in v0.2.0

func InputHandlerFromContext(ctx context.Context) (InputHandler, bool)

InputHandlerFromContext retrieves the InputHandler from ctx. Returns nil, false if no handler is set.

type InputRequest added in v0.2.0

type InputRequest struct {
	// Question is the natural language prompt shown to the human.
	Question string
	// Options provides suggested choices. Empty = free-form input.
	Options []string
	// Metadata carries context for the handler (agent name, tool being approved, etc).
	Metadata map[string]string
}

InputRequest describes what the agent needs from the human.

type InputResponse added in v0.2.0

type InputResponse struct {
	// Value is the human's text response.
	Value string
}

InputResponse is the human's reply.

type KeywordGuard added in v0.6.0

type KeywordGuard struct {
	// contains filtered or unexported fields
}

KeywordGuard is a PreProcessor that blocks messages containing specified keywords (case-insensitive substring) or matching regex patterns. Returns ErrHalt when a match is found. Safe for concurrent use.

func NewKeywordGuard added in v0.6.0

func NewKeywordGuard(keywords ...string) *KeywordGuard

NewKeywordGuard creates a guard that blocks messages containing any of the specified keywords. Keywords are matched case-insensitively as substrings.

func (*KeywordGuard) PreLLM added in v0.6.0

func (g *KeywordGuard) PreLLM(_ context.Context, req *ChatRequest) error

PreLLM checks the last user message for blocked keywords and regex matches.

func (*KeywordGuard) WithRegex added in v0.6.0

func (g *KeywordGuard) WithRegex(patterns ...*regexp.Regexp) *KeywordGuard

WithRegex adds regex patterns to the keyword guard. Returns the guard for builder-style chaining.

func (*KeywordGuard) WithResponse added in v0.6.0

func (g *KeywordGuard) WithResponse(msg string) *KeywordGuard

WithResponse sets the halt response message. Returns the guard for builder-style chaining.
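For example, chaining the builders above:

```go
guard := oasis.NewKeywordGuard("password", "credit card").
	WithRegex(regexp.MustCompile(`(?i)\bssn\b`)).
	WithResponse("I can't discuss sensitive data.")
```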

type KeywordSearcher added in v0.3.0

type KeywordSearcher interface {
	SearchChunksKeyword(ctx context.Context, query string, topK int, filters ...ChunkFilter) ([]ScoredChunk, error)
}

KeywordSearcher is an optional Store capability for full-text keyword search. Store implementations that support FTS can implement this interface; callers discover it via type assertion.

type LLMAgent

type LLMAgent struct {
	// contains filtered or unexported fields
}

LLMAgent is an Agent that uses an LLM with tools to complete tasks. Optionally supports conversation memory, user memory, and cross-thread search when configured via WithConversationMemory, CrossThreadSearch, and WithUserMemory.

func NewLLMAgent

func NewLLMAgent(name, description string, provider Provider, opts ...AgentOption) *LLMAgent

NewLLMAgent creates an LLMAgent with the given provider and options.

func (*LLMAgent) Description

func (c *LLMAgent) Description() string

func (*LLMAgent) Drain added in v0.8.0

func (c *LLMAgent) Drain()

Drain waits for all in-flight background persist goroutines to finish. Call during shutdown to ensure the last messages are written to the store.

func (*LLMAgent) Execute

func (a *LLMAgent) Execute(ctx context.Context, task AgentTask) (AgentResult, error)

Execute runs the tool-calling loop until the LLM produces a final text response.

func (*LLMAgent) ExecuteStream added in v0.2.1

func (a *LLMAgent) ExecuteStream(ctx context.Context, task AgentTask, ch chan<- StreamEvent) (AgentResult, error)

ExecuteStream runs the tool-calling loop like Execute, but emits StreamEvent values into ch throughout execution. Events include text deltas during the final LLM response and tool call start/result during tool iterations. The channel is closed when streaming completes.
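One common consumption pattern is to drain the channel in a goroutine while ExecuteStream runs. This is a sketch; StreamEvent's fields are not shown here, so events are just printed.

```go
ch := make(chan oasis.StreamEvent, 16)
go func() {
	for ev := range ch { // ExecuteStream closes ch when streaming completes
		fmt.Printf("%+v\n", ev) // text deltas, tool start/result events, etc.
	}
}()
result, err := agent.ExecuteStream(ctx, oasis.AgentTask{Input: "Summarize the report"}, ch)
```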

func (*LLMAgent) Name

func (c *LLMAgent) Name() string

type LLMReranker added in v0.3.0

type LLMReranker struct {
	// contains filtered or unexported fields
}

LLMReranker uses an LLM to score query-document relevance. It sends a prompt asking the model to rate each result 0-10, then normalizes and re-sorts. On LLM failure, results pass through unmodified (graceful degradation).

func NewLLMReranker added in v0.3.0

func NewLLMReranker(provider Provider) *LLMReranker

NewLLMReranker creates a Reranker that uses the given LLM provider to score relevance.

func (*LLMReranker) Rerank added in v0.3.0

func (r *LLMReranker) Rerank(ctx context.Context, query string, results []RetrievalResult, topK int) ([]RetrievalResult, error)

Rerank sends results to the LLM for relevance scoring, then re-sorts.

type MaxToolCallsGuard added in v0.6.0

type MaxToolCallsGuard struct {
	// contains filtered or unexported fields
}

MaxToolCallsGuard is a PostProcessor that limits the number of tool calls per LLM response. When the LLM returns more tool calls than the limit, the excess calls are silently dropped (first N are kept). This guard trims rather than halts — graceful degradation. Safe for concurrent use.

func NewMaxToolCallsGuard added in v0.6.0

func NewMaxToolCallsGuard(max int) *MaxToolCallsGuard

NewMaxToolCallsGuard creates a guard that limits tool calls per LLM response. Tool calls beyond max are silently trimmed.

func (*MaxToolCallsGuard) PostLLM added in v0.6.0

func (g *MaxToolCallsGuard) PostLLM(_ context.Context, resp *ChatResponse) error

PostLLM trims excess tool calls from the response.

type MemoryStore

type MemoryStore interface {
	UpsertFact(ctx context.Context, fact, category string, embedding []float32) error
	// SearchFacts returns facts semantically similar to the query embedding,
	// sorted by Score descending. Only facts with confidence >= 0.3 are returned.
	SearchFacts(ctx context.Context, embedding []float32, topK int) ([]ScoredFact, error)
	BuildContext(ctx context.Context, queryEmbedding []float32) (string, error)
	// DeleteFact removes a single fact by its ID.
	DeleteFact(ctx context.Context, factID string) error
	// DeleteMatchingFacts removes facts whose text contains the given substring.
	// Implementations must treat pattern as a plain substring match — never as
	// SQL LIKE, regex, or any other pattern language — to prevent injection.
	DeleteMatchingFacts(ctx context.Context, pattern string) error
	DecayOldFacts(ctx context.Context) error
	Init(ctx context.Context) error
}

MemoryStore provides long-term user memory with semantic deduplication. Optional — pass to WithUserMemory() to enable.

type Message

type Message struct {
	ID        string         `json:"id"`
	ThreadID  string         `json:"thread_id"`
	Role      string         `json:"role"` // "user" or "assistant"
	Content   string         `json:"content"`
	Metadata  map[string]any `json:"metadata,omitempty"`
	Embedding []float32      `json:"-"`
	CreatedAt int64          `json:"created_at"`
}

Message is a single message stored in a conversation thread.

type ModelFunc added in v0.6.0

type ModelFunc func(ctx context.Context, task AgentTask) Provider

ModelFunc resolves the LLM provider per-request. When set via WithDynamicModel, it is called at the start of every Execute/ExecuteStream call. The returned Provider replaces the construction-time provider for that execution.
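For example, routing long inputs to a stronger model (fast and strong are assumed to be Providers constructed elsewhere):

```go
pick := func(ctx context.Context, task oasis.AgentTask) oasis.Provider {
	if len(task.Input) > 4000 {
		return strong // long inputs get the larger model
	}
	return fast
}
agent := oasis.NewLLMAgent("assistant", "adaptive model choice", fast,
	oasis.WithDynamicModel(pick))
```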

type Network

type Network struct {
	// contains filtered or unexported fields
}

Network is an Agent that coordinates subagents and tools via an LLM router. The router sees subagents as callable tools ("agent_<name>") and decides which primitives to invoke, in what order, and with what data. Optionally supports conversation memory, user memory, and cross-thread search when configured via WithConversationMemory, CrossThreadSearch, and WithUserMemory.

func NewNetwork

func NewNetwork(name, description string, router Provider, opts ...AgentOption) *Network

NewNetwork creates a Network with the given router provider and options.

func (*Network) Description

func (c *Network) Description() string

func (*Network) Drain added in v0.8.0

func (c *Network) Drain()

Drain waits for all in-flight background persist goroutines to finish. Call during shutdown to ensure the last messages are written to the store.

func (*Network) Execute

func (n *Network) Execute(ctx context.Context, task AgentTask) (AgentResult, error)

Execute runs the network's routing loop.

func (*Network) ExecuteStream added in v0.2.1

func (n *Network) ExecuteStream(ctx context.Context, task AgentTask, ch chan<- StreamEvent) (AgentResult, error)

ExecuteStream runs the network's routing loop like Execute, but emits StreamEvent values into ch throughout execution. Events include text deltas, tool call start/result, and agent start/finish for subagent delegation. The channel is closed when streaming completes.

func (*Network) Name

func (c *Network) Name() string

type NodeDefinition added in v0.5.0

type NodeDefinition struct {
	// ID is the unique identifier for this node within the workflow.
	ID string `json:"id"`
	// Type determines how the node executes.
	Type NodeType `json:"type"`

	// LLM node: agent registry key and input template.
	Agent string `json:"agent,omitempty"`
	Input string `json:"input,omitempty"` // template: "Summarize: {{search.output}}"

	// Tool node: tool registry key, function name, and argument templates.
	Tool     string         `json:"tool,omitempty"`
	ToolName string         `json:"tool_name,omitempty"`
	Args     map[string]any `json:"args,omitempty"` // values may contain {{key}} templates

	// Condition node: expression or registered function name, and branch targets.
	Expression  string   `json:"expression,omitempty"`
	TrueBranch  []string `json:"true_branch,omitempty"`
	FalseBranch []string `json:"false_branch,omitempty"`

	// Template node: template string to resolve.
	Template string `json:"template,omitempty"`

	// Common: override default output key, retry count.
	OutputTo string `json:"output_to,omitempty"`
	Retry    int    `json:"retry,omitempty"`
}

NodeDefinition describes a single node in a runtime workflow.

type NodeType added in v0.5.0

type NodeType string

NodeType identifies the kind of node in a WorkflowDefinition.

const (
	// NodeLLM delegates to a registered Agent.
	NodeLLM NodeType = "llm"
	// NodeTool calls a registered Tool function.
	NodeTool NodeType = "tool"
	// NodeCondition evaluates an expression and routes to true/false branches.
	NodeCondition NodeType = "condition"
	// NodeTemplate performs string interpolation via WorkflowContext.Resolve.
	NodeTemplate NodeType = "template"
)

type PostProcessor

type PostProcessor interface {
	PostLLM(ctx context.Context, resp *ChatResponse) error
}

PostProcessor runs after the LLM responds, before tool execution. Implementations can modify the response (transform content, filter tool calls) or return an error to halt execution. Return ErrHalt to short-circuit with a canned response. Must be safe for concurrent use.

type PostToolProcessor

type PostToolProcessor interface {
	PostTool(ctx context.Context, call ToolCall, result *ToolResult) error
}

PostToolProcessor runs after each tool execution, before the result is appended to the message history. Implementations can modify the result (redact content, transform output) or return an error to halt execution. Return ErrHalt to short-circuit with a canned response. Must be safe for concurrent use.

type PreProcessor

type PreProcessor interface {
	PreLLM(ctx context.Context, req *ChatRequest) error
}

PreProcessor runs before messages are sent to the LLM. Implementations can modify the request (add/remove/transform messages) or return an error to halt execution. Return ErrHalt to short-circuit with a canned response. Must be safe for concurrent use.

type ProcessorChain

type ProcessorChain struct {
	// contains filtered or unexported fields
}

ProcessorChain holds an ordered list of processors and runs them at each hook point. Processors are pre-bucketed by interface at Add() time, eliminating per-call type assertions in the hot path.

func NewProcessorChain

func NewProcessorChain() *ProcessorChain

NewProcessorChain creates an empty chain.

func (*ProcessorChain) Add

func (c *ProcessorChain) Add(p any)

Add appends a processor to the chain. The processor must implement at least one of PreProcessor, PostProcessor, or PostToolProcessor. Panics if p implements none of the three interfaces.

func (*ProcessorChain) Len

func (c *ProcessorChain) Len() int

Len returns the number of registered processors.

func (*ProcessorChain) RunPostLLM

func (c *ProcessorChain) RunPostLLM(ctx context.Context, resp *ChatResponse) error

RunPostLLM runs all PostProcessor hooks in registration order. Stops and returns the first non-nil error.

func (*ProcessorChain) RunPostTool

func (c *ProcessorChain) RunPostTool(ctx context.Context, call ToolCall, result *ToolResult) error

RunPostTool runs all PostToolProcessor hooks in registration order. Stops and returns the first non-nil error.

func (*ProcessorChain) RunPreLLM

func (c *ProcessorChain) RunPreLLM(ctx context.Context, req *ChatRequest) error

RunPreLLM runs all PreProcessor hooks in registration order. Stops and returns the first non-nil error.
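A typical chain wires guards from this package and runs the hooks; agents do this internally, so the manual RunPreLLM call below is only a sketch of the mechanics.

```go
chain := oasis.NewProcessorChain()
chain.Add(oasis.NewKeywordGuard("secret"))   // PreProcessor
chain.Add(oasis.NewMaxToolCallsGuard(5))     // PostProcessor
if err := chain.RunPreLLM(ctx, &req); err != nil {
	if errors.Is(err, oasis.ErrHalt) {
		// a guard short-circuited with its canned response
	}
}
```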

type PromptFunc added in v0.6.0

type PromptFunc func(ctx context.Context, task AgentTask) string

PromptFunc resolves the system prompt per-request. When set via WithDynamicPrompt, it is called at the start of every Execute/ExecuteStream call. The returned string replaces the static WithPrompt value for that execution.
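For example, injecting the current date into every request:

```go
prompt := func(ctx context.Context, task oasis.AgentTask) string {
	return "Today is " + time.Now().Format("2006-01-02") + ". Be concise."
}
agent := oasis.NewLLMAgent("assistant", "date-aware", llm,
	oasis.WithDynamicPrompt(prompt))
```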

type Provider

type Provider interface {
	Chat(ctx context.Context, req ChatRequest) (ChatResponse, error)
	ChatStream(ctx context.Context, req ChatRequest, ch chan<- StreamEvent) (ChatResponse, error)
	Name() string
}

Provider abstracts the LLM backend.

Two methods handle all interaction patterns:

  • Chat: blocking request/response. When req.Tools is non-empty, the response may contain ToolCalls that the caller must dispatch.
  • ChatStream: like Chat but emits StreamEvent values into ch as content is generated. When req.Tools is non-empty, emits EventToolCallDelta events as tool call arguments are generated incrementally. The channel is NOT closed by the provider — the caller owns its lifecycle. Returns the final assembled ChatResponse with complete ToolCalls and Usage.

func WithRateLimit added in v0.5.0

func WithRateLimit(p Provider, opts ...RateLimitOption) Provider

WithRateLimit wraps p with proactive rate limiting. Compose with other wrappers:

chatLLM = oasis.WithRateLimit(provider, oasis.RPM(60))
chatLLM = oasis.WithRateLimit(provider, oasis.RPM(60), oasis.TPM(100000))
chatLLM = oasis.WithRateLimit(oasis.WithRetry(provider), oasis.RPM(60))

func WithRetry added in v0.2.1

func WithRetry(p Provider, opts ...RetryOption) Provider

WithRetry wraps p with automatic retry on transient HTTP errors (429, 503). Retries use exponential backoff with jitter. When the error includes a Retry-After duration (parsed from the HTTP header), the retry delay is at least that long. Compose with any Provider:

chatLLM = oasis.WithRetry(gemini.New(apiKey, model))
chatLLM = oasis.WithRetry(gemini.New(apiKey, model), oasis.RetryMaxAttempts(5))
chatLLM = oasis.WithRetry(gemini.New(apiKey, model), oasis.RetryTimeout(30*time.Second))

type RateLimitOption added in v0.5.0

type RateLimitOption func(*rateLimitProvider)

RateLimitOption configures a rateLimitProvider.

func RPM added in v0.5.0

func RPM(n int) RateLimitOption

RPM sets the maximum requests per minute.

func TPM added in v0.5.0

func TPM(n int) RateLimitOption

TPM sets the maximum tokens per minute (input + output combined). Token counts are recorded from ChatResponse.Usage after each request. This is a soft limit — the request that exceeds the budget completes, but subsequent requests block until the window slides.

type RelationType added in v0.6.0

type RelationType string

RelationType represents a named relationship between chunks in a knowledge graph.

const (
	RelReferences  RelationType = "references"
	RelElaborates  RelationType = "elaborates"
	RelDependsOn   RelationType = "depends_on"
	RelContradicts RelationType = "contradicts"
	RelPartOf      RelationType = "part_of"
	RelSimilarTo   RelationType = "similar_to"
	RelSequence    RelationType = "sequence"
	RelCausedBy    RelationType = "caused_by"
)

type Reranker added in v0.3.0

type Reranker interface {
	Rerank(ctx context.Context, query string, results []RetrievalResult, topK int) ([]RetrievalResult, error)
}

Reranker re-scores retrieval results for improved precision. Implementations may use cross-encoders, LLM-based scoring, or custom logic. The returned slice must be sorted by Score descending and trimmed to topK.

type ResponseSchema

type ResponseSchema struct {
	Name   string          `json:"name"`   // schema identifier (required by some providers)
	Schema json.RawMessage `json:"schema"` // JSON Schema object
}

ResponseSchema tells the provider to enforce structured JSON output. When set on a ChatRequest, the provider translates it to its native structured output mechanism (e.g. Gemini responseSchema, OpenAI response_format).

func NewResponseSchema added in v0.6.0

func NewResponseSchema(name string, s *SchemaObject) *ResponseSchema

NewResponseSchema creates a ResponseSchema by marshalling a SchemaObject. This provides a type-safe way to build JSON Schemas without raw JSON strings.

type RetrievalResult added in v0.3.0

type RetrievalResult struct {
	Content        string        `json:"content"`
	Score          float32       `json:"score"`
	ChunkID        string        `json:"chunk_id"`
	DocumentID     string        `json:"document_id"`
	DocumentTitle  string        `json:"document_title"`
	DocumentSource string        `json:"document_source"`
	GraphContext   []EdgeContext `json:"graph_context,omitempty"`
}

RetrievalResult is a scored piece of content from a knowledge base search. Score is in [0, 1]; higher means more relevant.

type Retriever added in v0.3.0

type Retriever interface {
	Retrieve(ctx context.Context, query string, topK int) ([]RetrievalResult, error)
}

Retriever searches a knowledge base and returns ranked results. Implementations may combine multiple search strategies (vector, keyword, hybrid) and optionally re-rank before returning.

type RetrieverOption added in v0.3.0

type RetrieverOption func(*retrieverConfig)

RetrieverOption configures a HybridRetriever.

func WithFilters added in v0.5.0

func WithFilters(filters ...ChunkFilter) RetrieverOption

WithFilters sets metadata filters passed to SearchChunks and SearchChunksKeyword.

func WithKeywordWeight added in v0.3.0

func WithKeywordWeight(w float32) RetrieverOption

WithKeywordWeight sets the relative weight for keyword search results in the RRF merge. Must be in [0, 1]. Default is 0.3 (vector gets 0.7).
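The weighting can be pictured with a standalone reciprocal-rank-fusion sketch. The formula weight/(k+rank) with k=60 is the common RRF definition and an assumption here; Oasis's internal merge may differ in detail.

```go
package main

import (
	"fmt"
	"sort"
)

// rrf fuses two ranked ID lists: each list contributes
// weight/(k+rank) to an item's score, then items are re-sorted.
func rrf(vector, keyword []string, vecW, kwW float64) []string {
	const k = 60.0 // common RRF constant (assumption)
	score := map[string]float64{}
	for r, id := range vector {
		score[id] += vecW / (k + float64(r+1))
	}
	for r, id := range keyword {
		score[id] += kwW / (k + float64(r+1))
	}
	ids := make([]string, 0, len(score))
	for id := range score {
		ids = append(ids, id)
	}
	sort.Slice(ids, func(i, j int) bool { return score[ids[i]] > score[ids[j]] })
	return ids
}

func main() {
	fmt.Println(rrf([]string{"a", "b", "c"}, []string{"c", "a"}, 0.7, 0.3)) // → [a c b]
}
```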

func WithMinRetrievalScore added in v0.3.0

func WithMinRetrievalScore(score float32) RetrieverOption

WithMinRetrievalScore sets the minimum score threshold. Results below this score are dropped before returning. Default is 0 (no filtering).

func WithOverfetchMultiplier added in v0.3.0

func WithOverfetchMultiplier(n int) RetrieverOption

WithOverfetchMultiplier sets the multiplier for over-fetching candidates before re-ranking. Retrieve fetches topK * multiplier candidates, then re-ranks and trims to topK. Default is 3.

func WithReranker added in v0.3.0

func WithReranker(r Reranker) RetrieverOption

WithReranker sets an optional re-ranking stage that runs after hybrid merge.

func WithRetrieverLogger added in v0.6.0

func WithRetrieverLogger(l *slog.Logger) RetrieverOption

WithRetrieverLogger sets the structured logger for a HybridRetriever.

func WithRetrieverTracer added in v0.6.0

func WithRetrieverTracer(t Tracer) RetrieverOption

WithRetrieverTracer sets the Tracer for a HybridRetriever.

type RetryOption added in v0.2.1

type RetryOption func(*retryProvider)

RetryOption configures a retryProvider.

func RetryBaseDelay added in v0.2.1

func RetryBaseDelay(d time.Duration) RetryOption

RetryBaseDelay sets the initial backoff delay before the second attempt (default: 1s). Each subsequent delay doubles: baseDelay, 2×baseDelay, 4×baseDelay, …

func RetryMaxAttempts added in v0.2.1

func RetryMaxAttempts(n int) RetryOption

RetryMaxAttempts sets the maximum number of attempts (default: 3).

func RetryTimeout added in v0.7.0

func RetryTimeout(d time.Duration) RetryOption

RetryTimeout sets the overall timeout for the entire retry sequence. If the total time across all attempts exceeds this duration, the retry loop gives up and returns the last error. The zero value (default) disables the timeout.

type RunHook added in v0.2.1

type RunHook func(action ScheduledAction, result AgentResult, err error)

RunHook is called after each scheduled action completes — success or failure. Use it to route output without coupling Scheduler to a specific destination.

type ScheduledAction

type ScheduledAction struct {
	ID              string `json:"id"`
	Description     string `json:"description"`
	Schedule        string `json:"schedule"`
	ToolCalls       string `json:"tool_calls"`
	SynthesisPrompt string `json:"synthesis_prompt"`
	NextRun         int64  `json:"next_run"`
	Enabled         bool   `json:"enabled"`
	SkillID         string `json:"skill_id,omitempty"`
	CreatedAt       int64  `json:"created_at"`
}

ScheduledAction is a scheduled action as persisted in the Store (DB record).

type ScheduledToolCall

type ScheduledToolCall struct {
	Tool   string          `json:"tool"`
	Params json.RawMessage `json:"params"`
}

ScheduledToolCall describes one tool invocation within a scheduled action.

type Scheduler added in v0.2.1

type Scheduler struct {
	// contains filtered or unexported fields
}

Scheduler polls the store for due actions and executes them via the configured Agent. It is a framework primitive: compose it with any Agent implementation.

Usage:

scheduler := oasis.NewScheduler(store, agent,
    oasis.WithSchedulerInterval(time.Minute),
    oasis.WithSchedulerTZOffset(7),
    oasis.WithOnRun(func(action oasis.ScheduledAction, result oasis.AgentResult, err error) {
        if err != nil { log.Printf("failed: %v", err); return }
        frontend.Send(ctx, chatID, result.Output, nil)
    }),
)
g.Go(func() error { return scheduler.Start(ctx) })

func NewScheduler added in v0.2.1

func NewScheduler(store Store, agent Agent, opts ...SchedulerOption) *Scheduler

NewScheduler creates a Scheduler. store is the source of ScheduledAction records. agent executes each due action (LLMAgent, Network, Workflow, or custom).

func (*Scheduler) Start added in v0.2.1

func (s *Scheduler) Start(ctx context.Context) error

Start begins the polling loop. Blocks until ctx is cancelled. Returns nil on clean shutdown.

type SchedulerOption added in v0.2.1

type SchedulerOption func(*schedulerConfig)

SchedulerOption configures a Scheduler.

func WithOnRun added in v0.2.1

func WithOnRun(hook RunHook) SchedulerOption

WithOnRun registers a hook called after each action execution. Receives the action, agent result, and error (nil on success).

func WithSchedulerInterval added in v0.2.1

func WithSchedulerInterval(d time.Duration) SchedulerOption

WithSchedulerInterval sets the polling interval. Default: 1 minute.

func WithSchedulerTZOffset added in v0.2.1

func WithSchedulerTZOffset(hours int) SchedulerOption

WithSchedulerTZOffset sets the UTC offset in hours for schedule computation. Default: 0 (UTC).

type SchemaObject added in v0.6.0

type SchemaObject struct {
	Type        string                   `json:"type"`
	Description string                   `json:"description,omitempty"`
	Properties  map[string]*SchemaObject `json:"properties,omitempty"`
	Items       *SchemaObject            `json:"items,omitempty"`
	Enum        []string                 `json:"enum,omitempty"`
	Required    []string                 `json:"required,omitempty"`
}

SchemaObject is a typed builder for common JSON Schema constructs. Use with NewResponseSchema for a type-safe alternative to raw JSON:

oasis.NewResponseSchema("plan", &oasis.SchemaObject{
    Type: "object",
    Properties: map[string]*oasis.SchemaObject{
        "steps": {Type: "array", Items: &oasis.SchemaObject{Type: "string"}},
    },
    Required: []string{"steps"},
})

For schemas that need keywords beyond this subset, use ResponseSchema directly with json.RawMessage.

type ScoreReranker added in v0.3.0

type ScoreReranker struct {
	// contains filtered or unexported fields
}

ScoreReranker filters results below a minimum score and re-sorts by score descending. It makes no external calls — useful as a baseline or when no API-based reranker is available.

func NewScoreReranker added in v0.3.0

func NewScoreReranker(minScore float32) *ScoreReranker

NewScoreReranker creates a ScoreReranker that drops results below minScore.

func (*ScoreReranker) Rerank added in v0.3.0

func (r *ScoreReranker) Rerank(_ context.Context, _ string, results []RetrievalResult, topK int) ([]RetrievalResult, error)

Rerank filters results below the minimum score, sorts by score descending, and trims to topK.

type ScoredChunk added in v0.2.1

type ScoredChunk struct {
	Chunk
	Score float32
}

ScoredChunk is a Chunk paired with its cosine similarity score.
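For reference, cosine similarity is the dot product divided by the product of the vector norms; a standalone sketch:

```go
package main

import (
	"fmt"
	"math"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float32) float32 {
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		na += float64(a[i]) * float64(a[i])
		nb += float64(b[i]) * float64(b[i])
	}
	return float32(dot / (math.Sqrt(na) * math.Sqrt(nb)))
}

func main() {
	fmt.Println(cosine([]float32{1, 0}, []float32{1, 0})) // identical vectors → 1
	fmt.Println(cosine([]float32{1, 0}, []float32{0, 1})) // orthogonal vectors → 0
}
```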

type ScoredFact added in v0.2.1

type ScoredFact struct {
	Fact
	Score float32
}

ScoredFact is a Fact paired with its cosine similarity score.

type ScoredMessage added in v0.2.1

type ScoredMessage struct {
	Message
	Score float32
}

ScoredMessage is a Message paired with its cosine similarity score from a semantic search. Score is in [0, 1]; higher means more relevant.

type ScoredSkill added in v0.2.1

type ScoredSkill struct {
	Skill
	Score float32
}

ScoredSkill is a Skill paired with its cosine similarity score.

type SemanticOption added in v0.2.1

type SemanticOption func(*agentConfig)

SemanticOption tunes semantic search parameters within CrossThreadSearch. Scoped type — only accepted by CrossThreadSearch, not by other option functions.

func MinScore added in v0.2.1

func MinScore(score float32) SemanticOption

MinScore sets the minimum cosine similarity score for cross-thread semantic recall. Messages with a score below this threshold are silently dropped before being injected into the LLM context. The zero value (or omitting this option) uses a built-in default of 0.60.

type SemanticTrimmingOption added in v0.9.0

type SemanticTrimmingOption func(*agentConfig)

SemanticTrimmingOption tunes semantic trimming behavior. Scoped type — only accepted by WithSemanticTrimming, not by other option functions.

func KeepRecent added in v0.9.0

func KeepRecent(n int) SemanticTrimmingOption

KeepRecent sets how many recent messages are always preserved during semantic trimming, regardless of their relevance score. Default: 3.

type Skill

type Skill struct {
	ID           string    `json:"id"`
	Name         string    `json:"name"`
	Description  string    `json:"description"`
	Instructions string    `json:"instructions"`
	Tools        []string  `json:"tools,omitempty"`
	Model        string    `json:"model,omitempty"`
	Tags         []string  `json:"tags,omitempty"`       // categorization labels
	CreatedBy    string    `json:"created_by,omitempty"` // origin: user ID or agent ID
	References   []string  `json:"references,omitempty"` // skill IDs this builds on
	Embedding    []float32 `json:"-"`
	CreatedAt    int64     `json:"created_at"`
	UpdatedAt    int64     `json:"updated_at"`
}

Skill is a stored instruction package for specializing the action agent.

type Span added in v0.6.0

type Span interface {
	// SetAttr adds attributes to the span after creation.
	SetAttr(attrs ...SpanAttr)
	// Event records a named event (annotation) on the span timeline.
	Event(name string, attrs ...SpanAttr)
	// Error records an error on the span and marks it as failed.
	Error(err error)
	// End completes the span. Must be called exactly once.
	End()
}

Span represents a traced operation. Callers must call End() when the operation completes to flush the span to the configured exporter.

type SpanAttr added in v0.6.0

type SpanAttr struct {
	Key   string
	Value any
}

SpanAttr is a key-value attribute attached to a span or event.

func BoolAttr added in v0.6.0

func BoolAttr(k string, v bool) SpanAttr

BoolAttr creates a bool-typed span attribute.

func Float64Attr added in v0.6.0

func Float64Attr(k string, v float64) SpanAttr

Float64Attr creates a float64-typed span attribute.

func IntAttr added in v0.6.0

func IntAttr(k string, v int) SpanAttr

IntAttr creates an int-typed span attribute.

func StringAttr added in v0.6.0

func StringAttr(k, v string) SpanAttr

StringAttr creates a string-typed span attribute.

type StepFunc added in v0.2.0

type StepFunc func(ctx context.Context, wCtx *WorkflowContext) error

StepFunc is the signature for custom function steps. The function receives the parent context (which is cancelled on workflow failure) and the shared WorkflowContext for reading/writing step data.

type StepOption added in v0.2.0

type StepOption func(*stepConfig)

StepOption configures an individual workflow step.

func After added in v0.2.0

func After(steps ...string) StepOption

After declares dependency edges: this step runs only after all named steps have completed successfully (or been skipped by condition).

func ArgsFrom added in v0.2.0

func ArgsFrom(key string) StepOption

ArgsFrom sets the context key whose value becomes the tool arguments for a ToolStep. The value should be json.RawMessage, a JSON string, or any value that can be marshalled to JSON.

func Concurrency added in v0.2.0

func Concurrency(n int) StepOption

Concurrency sets the maximum number of parallel iterations for a ForEach step. Defaults to 1 (sequential) if not specified.

func InputFrom added in v0.2.0

func InputFrom(key string) StepOption

InputFrom sets the context key whose value becomes the AgentTask.Input for an AgentStep. If not set, the original WorkflowContext.Input() is used.

func IterOver added in v0.2.0

func IterOver(key string) StepOption

IterOver sets the context key that contains a []any collection for a ForEach step. Each element is made available to the step function via the context key "{name}.item".

func MaxIter added in v0.2.0

func MaxIter(n int) StepOption

MaxIter sets the safety cap on loop iterations for DoUntil and DoWhile steps. Defaults to 10 if not specified.

func OutputTo added in v0.2.0

func OutputTo(key string) StepOption

OutputTo overrides the default output key for AgentStep ("{name}.output") or ToolStep ("{name}.result"). Has no effect on basic Step (which writes to context explicitly via wCtx.Set).

func Retry added in v0.2.0

func Retry(n int, delay time.Duration) StepOption

Retry configures the step to be retried up to n times on failure, with the given delay between attempts. The total attempts = 1 + n.

func Until added in v0.2.0

func Until(fn func(*WorkflowContext) bool) StepOption

Until sets the exit condition for a DoUntil step. The step repeats until the function returns true (evaluated after each iteration).

func When added in v0.2.0

func When(fn func(*WorkflowContext) bool) StepOption

When sets a condition function that is evaluated before the step runs. If the function returns false, the step is marked StepSkipped and its dependents treat it as satisfied.

func While added in v0.2.0

func While(fn func(*WorkflowContext) bool) StepOption

While sets the continuation condition for a DoWhile step. The step repeats as long as the function returns true (evaluated before each iteration after the first).

type StepResult added in v0.2.0

type StepResult struct {
	// Name is the step's unique identifier within the workflow.
	Name string
	// Status is the final execution state of the step.
	Status StepStatus
	// Output is the step's output string (from context), empty if the step failed or was skipped.
	Output string
	// Error is the error that caused the step to fail, nil on success or skip.
	Error error
	// Duration is the wall-clock time the step took to execute, including retries.
	Duration time.Duration
}

StepResult holds the outcome of a single step execution.

type StepStatus added in v0.2.0

type StepStatus string

StepStatus represents the execution state of a workflow step.

const (
	// StepPending indicates a step that has not started execution.
	StepPending StepStatus = "pending"
	// StepRunning indicates a step that is currently executing.
	StepRunning StepStatus = "running"
	// StepSuccess indicates a step that completed without error.
	StepSuccess StepStatus = "success"
	// StepSkipped indicates a step that was not executed because its
	// When() condition returned false or an upstream step failed.
	StepSkipped StepStatus = "skipped"
	// StepFailed indicates a step that returned an error after exhausting retries.
	StepFailed StepStatus = "failed"
)
const StepSuspended StepStatus = "suspended"

StepSuspended indicates a step that paused execution to await external input.

type StepTrace added in v0.5.0

type StepTrace struct {
	// Name is the tool or agent name (e.g. "web_search", "researcher").
	// For agent delegations, the "agent_" prefix is stripped.
	Name string `json:"name"`
	// Type is "tool" or "agent".
	Type string `json:"type"`
	// Input is the tool arguments or agent task, truncated to 200 characters.
	Input string `json:"input"`
	// Output is the result content, truncated to 500 characters.
	Output string `json:"output"`
	// Usage is the token usage for this individual step.
	Usage Usage `json:"usage"`
	// Duration is the wall-clock time for this step.
	Duration time.Duration `json:"duration"`
}

StepTrace records the execution of a single tool call or agent delegation. Collected automatically during the agent's tool-calling loop.

type Store

type Store interface {
	// --- Threads ---
	CreateThread(ctx context.Context, thread Thread) error
	GetThread(ctx context.Context, id string) (Thread, error)
	ListThreads(ctx context.Context, chatID string, limit int) ([]Thread, error)
	UpdateThread(ctx context.Context, thread Thread) error
	DeleteThread(ctx context.Context, id string) error

	// --- Messages ---
	StoreMessage(ctx context.Context, msg Message) error
	GetMessages(ctx context.Context, threadID string, limit int) ([]Message, error)
	// SearchMessages performs semantic similarity search across all messages.
	// Results are sorted by Score descending (cosine similarity in [0, 1]).
	SearchMessages(ctx context.Context, embedding []float32, topK int) ([]ScoredMessage, error)

	// --- Documents + Chunks ---
	StoreDocument(ctx context.Context, doc Document, chunks []Chunk) error
	ListDocuments(ctx context.Context, limit int) ([]Document, error)
	// DeleteDocument removes a document and all its chunks (cascade).
	DeleteDocument(ctx context.Context, id string) error
	// SearchChunks performs semantic similarity search over document chunks.
	// Results are sorted by Score descending.
	SearchChunks(ctx context.Context, embedding []float32, topK int, filters ...ChunkFilter) ([]ScoredChunk, error)
	GetChunksByIDs(ctx context.Context, ids []string) ([]Chunk, error)

	// --- Key-value config ---
	GetConfig(ctx context.Context, key string) (string, error)
	SetConfig(ctx context.Context, key, value string) error

	// --- Scheduled Actions ---
	CreateScheduledAction(ctx context.Context, action ScheduledAction) error
	ListScheduledActions(ctx context.Context) ([]ScheduledAction, error)
	GetDueScheduledActions(ctx context.Context, now int64) ([]ScheduledAction, error)
	UpdateScheduledAction(ctx context.Context, action ScheduledAction) error
	UpdateScheduledActionEnabled(ctx context.Context, id string, enabled bool) error
	DeleteScheduledAction(ctx context.Context, id string) error
	DeleteAllScheduledActions(ctx context.Context) (int, error)
	FindScheduledActionsByDescription(ctx context.Context, pattern string) ([]ScheduledAction, error)

	// --- Skills ---
	CreateSkill(ctx context.Context, skill Skill) error
	GetSkill(ctx context.Context, id string) (Skill, error)
	ListSkills(ctx context.Context) ([]Skill, error)
	UpdateSkill(ctx context.Context, skill Skill) error
	DeleteSkill(ctx context.Context, id string) error
	// SearchSkills performs semantic similarity search over stored skills.
	// Results are sorted by Score descending.
	SearchSkills(ctx context.Context, embedding []float32, topK int) ([]ScoredSkill, error)

	// --- Lifecycle ---
	Init(ctx context.Context) error
	Close() error
}

Store abstracts persistence with vector search capabilities.
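
The Search* methods share one contract: score every candidate against the query embedding and return the top K by descending score. The sketch below shows how an in-process store might implement that with brute-force cosine similarity; it is a hypothetical standalone helper (`cosine`, `topK` are made-up names), not the package's code — the shipped stores delegate to pgvector, DiskANN, or a pure-Go SQLite scan. Note that raw cosine lies in [-1, 1]; normalized embeddings keep it non-negative in practice.

```go
package main

import (
	"fmt"
	"math"
	"sort"
)

// cosine returns the cosine similarity of two equal-length vectors.
func cosine(a, b []float32) float64 {
	var dot, na, nb float64
	for i := range a {
		dot += float64(a[i]) * float64(b[i])
		na += float64(a[i]) * float64(a[i])
		nb += float64(b[i]) * float64(b[i])
	}
	if na == 0 || nb == 0 {
		return 0
	}
	return dot / (math.Sqrt(na) * math.Sqrt(nb))
}

type scored struct {
	ID    string
	Score float64
}

// topK is a brute-force search: score every candidate, sort descending, truncate.
func topK(query []float32, corpus map[string][]float32, k int) []scored {
	out := make([]scored, 0, len(corpus))
	for id, v := range corpus {
		out = append(out, scored{id, cosine(query, v)})
	}
	sort.Slice(out, func(i, j int) bool { return out[i].Score > out[j].Score })
	if len(out) > k {
		out = out[:k]
	}
	return out
}

func main() {
	corpus := map[string][]float32{
		"a": {1, 0}, "b": {0.9, 0.1}, "c": {0, 1},
	}
	for _, s := range topK([]float32{1, 0}, corpus, 2) {
		fmt.Printf("%s %.2f\n", s.ID, s.Score)
	}
	// a 1.00
	// b 0.99
}
```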

type StreamEvent added in v0.4.0

type StreamEvent struct {
	// Type identifies the event kind.
	Type StreamEventType `json:"type"`
	// ID is the tool call ID for correlation across tool-call-delta,
	// tool-call-start, and tool-call-result events. Empty for non-tool events.
	ID string `json:"id,omitempty"`
	// Name is the tool or agent name (set for tool/agent events, empty for text-delta).
	Name string `json:"name,omitempty"`
	// Content carries the text delta (text-delta), tool result (tool-call-result),
	// or agent task/output (agent-start/agent-finish).
	Content string `json:"content,omitempty"`
	// Args carries the tool call arguments (tool-call-start only).
	Args json.RawMessage `json:"args,omitempty"`
	// Usage carries token counts for the completed step.
	// Set on agent-finish and tool-call-result events. Zero value otherwise.
	Usage Usage `json:"usage,omitempty"`
	// Duration is the wall-clock time for the completed step.
	// Set on agent-finish and tool-call-result events. Zero value otherwise.
	Duration time.Duration `json:"duration,omitempty"`
}

StreamEvent is a typed event emitted during agent streaming. Consumers receive these on the channel passed to ExecuteStream.

type StreamEventType added in v0.4.0

type StreamEventType string

StreamEventType identifies the kind of streaming event.

const (
	// EventInputReceived signals that a task has been received by an agent.
	// Name carries the agent name; Content carries the task input text.
	EventInputReceived StreamEventType = "input-received"
	// EventProcessingStart signals that the agent loop has begun processing
	// (after memory/context loading, before the first LLM call).
	// Name carries the loop identifier (e.g. "agent:name" or "network:name").
	EventProcessingStart StreamEventType = "processing-start"
	// EventTextDelta carries an incremental text chunk from the LLM.
	EventTextDelta StreamEventType = "text-delta"
	// EventToolCallStart signals a tool is about to be invoked.
	EventToolCallStart StreamEventType = "tool-call-start"
	// EventToolCallResult carries the result of a completed tool call.
	EventToolCallResult StreamEventType = "tool-call-result"
	// EventThinking carries the LLM's reasoning/chain-of-thought content.
	// Emitted after each LLM call when the provider returns thinking content.
	EventThinking StreamEventType = "thinking"
	// EventAgentStart signals a subagent has been delegated to (Network only).
	EventAgentStart StreamEventType = "agent-start"
	// EventAgentFinish signals a subagent has completed (Network only).
	EventAgentFinish StreamEventType = "agent-finish"
	// EventToolCallDelta carries an incremental chunk of tool call arguments.
	// Emitted by ChatStream when req.Tools is non-empty. ID carries the tool
	// call ID for correlation with the eventual tool-call-start/result events.
	EventToolCallDelta StreamEventType = "tool-call-delta"
	// EventToolProgress carries intermediate progress from a long-running tool.
	// Emitted by tools that implement StreamingTool. Name carries the tool name;
	// Content carries free-form progress JSON.
	EventToolProgress StreamEventType = "tool-progress"
	// EventStepStart signals a workflow step has begun execution.
	// Name carries the step name.
	EventStepStart StreamEventType = "step-start"
	// EventStepFinish signals a workflow step has completed.
	// Name carries the step name; Content carries the output (success) or
	// error message (failure); Duration carries the step wall-clock time.
	EventStepFinish StreamEventType = "step-finish"
	// EventStepProgress carries intermediate progress from a ForEach workflow step.
	// Name carries the step name; Content carries progress JSON
	// (e.g. {"completed":3,"total":10}).
	EventStepProgress StreamEventType = "step-progress"
	// EventRoutingDecision signals the Network router has decided which agents/tools
	// to invoke. Name carries the network name; Content carries a JSON summary
	// (e.g. {"agents":["researcher"],"tools":["search"]}).
	EventRoutingDecision StreamEventType = "routing-decision"
)

type StreamingAgent added in v0.2.1

type StreamingAgent interface {
	Agent
	// ExecuteStream runs the agent like Execute, but emits StreamEvent values
	// into ch throughout execution. Events include text deltas, tool call
	// deltas/start/result/progress, agent start/finish (Networks), step
	// start/finish/progress (Workflows), and routing decisions (Networks).
	//
	// Contract: implementations MUST close ch before returning. Callers
	// (including ServeSSE) use `for ev := range ch` to consume events,
	// which blocks until ch is closed. Failing to close ch causes a deadlock.
	ExecuteStream(ctx context.Context, task AgentTask, ch chan<- StreamEvent) (AgentResult, error)
}

StreamingAgent is an optional capability for agents that support event streaming. Check via type assertion: if sa, ok := agent.(StreamingAgent); ok { ... }

Implemented by LLMAgent, Network, and Workflow.

type StreamingTool added in v0.9.0

type StreamingTool interface {
	Tool
	ExecuteStream(ctx context.Context, name string, args json.RawMessage, ch chan<- StreamEvent) (ToolResult, error)
}

StreamingTool is an optional capability for tools that support progress streaming during execution. Check via type assertion:

if st, ok := tool.(StreamingTool); ok {
    result, err := st.ExecuteStream(ctx, name, args, ch)
}

Tools emit EventToolProgress events on ch to report intermediate progress. The channel is shared with the parent agent's stream — events appear inline with other agent events. The channel is NOT closed by the tool — the caller owns its lifecycle.
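
The ownership split — the tool writes progress events, the caller creates and closes the channel — can be sketched with local stand-in types (`longTask` and the StreamEvent struct here are hypothetical, not the package's):

```go
package main

import "fmt"

// Local stand-in for the documented StreamEvent.
type StreamEvent struct{ Type, Name, Content string }

// longTask sketches a streaming tool body: it reports progress on ch
// but never closes it — the caller owns the channel lifecycle.
func longTask(ch chan<- StreamEvent, total int) string {
	for i := 1; i <= total; i++ {
		ch <- StreamEvent{
			Type:    "tool-progress",
			Name:    "long_task",
			Content: fmt.Sprintf(`{"completed":%d,"total":%d}`, i, total),
		}
	}
	return "done"
}

func main() {
	ch := make(chan StreamEvent, 8) // caller creates the channel...
	result := longTask(ch, 3)
	close(ch) // ...and the caller closes it
	for ev := range ch {
		fmt.Println(ev.Content)
	}
	fmt.Println(result) // done
}
```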

type Thread

type Thread struct {
	ID        string            `json:"id"`
	ChatID    string            `json:"chat_id"`
	Title     string            `json:"title,omitempty"`
	Metadata  map[string]string `json:"metadata,omitempty"`
	CreatedAt int64             `json:"created_at"`
	UpdatedAt int64             `json:"updated_at"`
}

type Tool

type Tool interface {
	Definitions() []ToolDefinition
	Execute(ctx context.Context, name string, args json.RawMessage) (ToolResult, error)
}

Tool defines an agent capability with one or more tool functions.

type ToolCall

type ToolCall struct {
	ID       string          `json:"id"`
	Name     string          `json:"name"`
	Args     json.RawMessage `json:"args"`
	Metadata json.RawMessage `json:"metadata,omitempty"`
}

type ToolDefinition

type ToolDefinition struct {
	Name        string          `json:"name"`
	Description string          `json:"description"`
	Parameters  json.RawMessage `json:"parameters"` // JSON Schema
}

type ToolRegistry

type ToolRegistry struct {
	// contains filtered or unexported fields
}

ToolRegistry holds all registered tools and dispatches execution.

func NewToolRegistry

func NewToolRegistry() *ToolRegistry

NewToolRegistry creates an empty registry.

func (*ToolRegistry) Add

func (r *ToolRegistry) Add(t Tool)

Add registers a tool and indexes its definitions for O(1) lookup.

func (*ToolRegistry) AllDefinitions

func (r *ToolRegistry) AllDefinitions() []ToolDefinition

AllDefinitions returns tool definitions from all registered tools.

func (*ToolRegistry) Execute

func (r *ToolRegistry) Execute(ctx context.Context, name string, args json.RawMessage) (ToolResult, error)

Execute dispatches a tool call by name using the pre-built index.

func (*ToolRegistry) ExecuteStream added in v0.9.0

func (r *ToolRegistry) ExecuteStream(ctx context.Context, name string, args json.RawMessage, ch chan<- StreamEvent) (ToolResult, error)

ExecuteStream dispatches a tool call with streaming support. If the resolved tool implements StreamingTool and ch is non-nil, it calls ExecuteStream. Otherwise falls back to Execute.

type ToolResult

type ToolResult struct {
	Content string `json:"content"`
	Error   string `json:"error,omitempty"`
}

ToolResult is the outcome of a tool execution.

type ToolsFunc added in v0.6.0

type ToolsFunc func(ctx context.Context, task AgentTask) []Tool

ToolsFunc resolves the tool set per-request. When set via WithDynamicTools, it is called at the start of every Execute/ExecuteStream call. The returned tools REPLACE (not append to) the construction-time tools for that execution.

type Tracer added in v0.6.0

type Tracer interface {
	// Start creates a new span with the given name and optional attributes.
	// Returns a child context carrying the span and the span itself.
	// Callers must call Span.End() when the operation completes.
	Start(ctx context.Context, name string, attrs ...SpanAttr) (context.Context, Span)
}

Tracer creates spans for tracing agent, workflow, retrieval, and ingest operations. The observer package provides an OTEL-backed implementation via NewTracer(). When no Tracer is configured, span creation is skipped (nil check).

type Usage

type Usage struct {
	InputTokens  int `json:"input_tokens"`
	OutputTokens int `json:"output_tokens"`
	CachedTokens int `json:"cached_tokens,omitempty"` // input tokens served from provider cache
}

type Workflow added in v0.2.0

type Workflow struct {
	// contains filtered or unexported fields
}

Workflow is a deterministic, DAG-based task orchestration primitive. Unlike Network (which uses an LLM to route between agents), Workflow follows explicit step sequences and dependency edges defined at construction time. Parallel execution emerges naturally when multiple steps share the same predecessor. Workflow implements both Agent and StreamingAgent, enabling recursive composition: Networks can contain Workflows, and Workflows can contain Agents (LLMAgent, Network, or other Workflows).

ExecuteStream emits EventStepStart/EventStepFinish for each step, and EventStepProgress during ForEach iterations.

func FromDefinition added in v0.5.0

func FromDefinition(def WorkflowDefinition, reg DefinitionRegistry) (*Workflow, error)

FromDefinition creates an executable *Workflow from a WorkflowDefinition and a registry of named agents, tools, and condition functions. The definition is validated at construction time: unknown agent/tool names, missing edge targets, condition nodes without branches, and cycles all produce errors.

The returned Workflow uses the same DAG execution engine as compile-time workflows built with NewWorkflow.

func NewWorkflow added in v0.2.0

func NewWorkflow(name, description string, opts ...WorkflowOption) (*Workflow, error)

NewWorkflow creates a Workflow with the given name, description, and options. Step definitions and workflow-level options are passed as WorkflowOption values. Returns an error if the step graph is invalid:

  • duplicate step names
  • After() references an unknown step
  • cycle detected in the dependency graph

Logs a warning for unreachable steps (steps that are not roots and have no incoming edges from reachable steps).
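
The cycle check can be understood via Kahn's algorithm: repeatedly remove steps with no remaining unmet dependencies; if any steps are left over, the graph has a cycle. The sketch below is a hypothetical standalone check (`detectCycle` is a made-up helper), not the package's actual validator:

```go
package main

import "fmt"

// detectCycle reports whether the dependency edges contain a cycle.
// after maps a step name to the steps it depends on (its After() list).
func detectCycle(steps []string, after map[string][]string) bool {
	indegree := make(map[string]int, len(steps))
	dependents := make(map[string][]string)
	for _, s := range steps {
		indegree[s] = 0
	}
	for step, deps := range after {
		for _, d := range deps {
			indegree[step]++
			dependents[d] = append(dependents[d], step)
		}
	}
	// Start from the roots (no dependencies) and peel the DAG layer by layer.
	queue := []string{}
	for s, n := range indegree {
		if n == 0 {
			queue = append(queue, s)
		}
	}
	visited := 0
	for len(queue) > 0 {
		s := queue[0]
		queue = queue[1:]
		visited++
		for _, dep := range dependents[s] {
			indegree[dep]--
			if indegree[dep] == 0 {
				queue = append(queue, dep)
			}
		}
	}
	return visited != len(steps) // leftovers imply a cycle
}

func main() {
	steps := []string{"fetch", "parse", "summarize"}
	ok := map[string][]string{"parse": {"fetch"}, "summarize": {"parse"}}
	bad := map[string][]string{"parse": {"fetch"}, "summarize": {"parse"}, "fetch": {"summarize"}}
	fmt.Println(detectCycle(steps, ok))  // false
	fmt.Println(detectCycle(steps, bad)) // true
}
```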

func (*Workflow) Description added in v0.2.0

func (w *Workflow) Description() string

Description returns a human-readable description of what the workflow does. Used by Network to generate tool definitions when a Workflow is used as a subagent.

func (*Workflow) Execute added in v0.2.0

func (w *Workflow) Execute(ctx context.Context, task AgentTask) (AgentResult, error)

Execute runs the workflow's step graph. Steps with satisfied dependencies are launched concurrently. The first step failure cancels all in-flight steps and marks downstream steps as StepSkipped. Returns an AgentResult with the last successful step's output.

func (*Workflow) ExecuteStream added in v0.9.0

func (w *Workflow) ExecuteStream(ctx context.Context, task AgentTask, ch chan<- StreamEvent) (AgentResult, error)

ExecuteStream runs the workflow like Execute, but emits StreamEvent values into ch throughout execution. Step start/finish events are emitted for each step. When an AgentStep delegates to a StreamingAgent, that agent's events are forwarded through ch. The channel is closed when streaming completes.

func (*Workflow) Name added in v0.2.0

func (w *Workflow) Name() string

Name returns the workflow's identifier.

type WorkflowContext added in v0.2.0

type WorkflowContext struct {
	// contains filtered or unexported fields
}

WorkflowContext is the shared state that flows between workflow steps. Steps read/write named keys to pass data between each other. All methods are safe for concurrent use.

func (*WorkflowContext) Get added in v0.2.0

func (c *WorkflowContext) Get(key string) (any, bool)

Get retrieves a named value from the context. Returns the value and true if found, or nil and false if the key does not exist.

func (*WorkflowContext) Input added in v0.2.0

func (c *WorkflowContext) Input() string

Input returns the original AgentTask.Input that started the workflow.

func (*WorkflowContext) Resolve added in v0.5.0

func (c *WorkflowContext) Resolve(template string) string

Resolve replaces {{key}} placeholders in template with values from the context's values map. Unknown keys resolve to empty strings. Values are converted to strings via stringifyValue. If the template contains no placeholders, it is returned as-is.

Security: Resolve performs a single pass — resolved values are NOT re-expanded even if they contain "{{...}}". However, callers must not build templates by concatenating untrusted input, as that could inject arbitrary placeholders before Resolve runs. Use Set/Get for untrusted data and keep templates as compile-time constants or definition-time strings.
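
The single-pass behavior can be sketched with a regexp replacement, since Go's ReplaceAllStringFunc substitutes the returned string literally without re-scanning it. This is a hypothetical standalone `resolve` helper illustrating the documented semantics, not the package's implementation:

```go
package main

import (
	"fmt"
	"regexp"
)

var placeholder = regexp.MustCompile(`\{\{([\w.]+)\}\}`)

// resolve replaces {{key}} placeholders in a single pass. Resolved
// values are not re-expanded, so a value containing "{{...}}" stays
// literal in the output. Unknown keys resolve to empty strings.
func resolve(template string, values map[string]any) string {
	return placeholder.ReplaceAllStringFunc(template, func(m string) string {
		key := placeholder.FindStringSubmatch(m)[1]
		v, ok := values[key]
		if !ok {
			return ""
		}
		return fmt.Sprintf("%v", v)
	})
}

func main() {
	values := map[string]any{"name": "Ada", "inject": "{{name}}"}
	fmt.Println(resolve("hello {{name}}", values))    // hello Ada
	fmt.Println(resolve("value: {{inject}}", values)) // value: {{name}} (not re-expanded)
	fmt.Println(resolve("missing: {{nope}}!", values)) // missing: !
}
```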

func (*WorkflowContext) ResolveJSON added in v0.5.0

func (c *WorkflowContext) ResolveJSON(template string) json.RawMessage

ResolveJSON is like Resolve but returns json.RawMessage. If the template is a single placeholder (e.g. "{{key}}") and the value is not a string, the value is marshalled to JSON directly (preserving structure). Otherwise it behaves like Resolve and wraps the result as a JSON string.

func (*WorkflowContext) Set added in v0.2.0

func (c *WorkflowContext) Set(key string, value any)

Set writes a named value to the context, overwriting any previous value for that key.

type WorkflowDefinition added in v0.5.0

type WorkflowDefinition struct {
	Name        string           `json:"name"`
	Description string           `json:"description"`
	Nodes       []NodeDefinition `json:"nodes"`
	Edges       [][2]string      `json:"edges"` // [from, to] pairs
}

WorkflowDefinition is a JSON-serializable description of a workflow DAG. Use FromDefinition to convert it into an executable *Workflow.

type WorkflowError added in v0.2.1

type WorkflowError struct {
	// StepName is the name of the first step that failed.
	StepName string
	// Err is the underlying error from the failed step.
	Err error
	// Result is the full workflow result with per-step outcomes.
	Result WorkflowResult
}

WorkflowError is returned by Workflow.Execute when one or more steps fail. Callers can inspect per-step results via errors.As:

result, err := wf.Execute(ctx, task)
var wfErr *WorkflowError
if errors.As(err, &wfErr) {
    for name, step := range wfErr.Result.Steps { ... }
}

func (*WorkflowError) Error added in v0.2.1

func (e *WorkflowError) Error() string

func (*WorkflowError) Unwrap added in v0.2.1

func (e *WorkflowError) Unwrap() error

type WorkflowOption added in v0.2.0

type WorkflowOption func(*workflowConfig)

WorkflowOption configures a Workflow. Step definitions (Step, AgentStep, ToolStep, ForEach, DoUntil, DoWhile) and workflow-level settings (WithOnFinish, WithOnError, WithDefaultRetry) are all expressed as WorkflowOption values.

func AgentStep added in v0.2.0

func AgentStep(name string, agent Agent, opts ...StepOption) WorkflowOption

AgentStep defines a workflow step that delegates to an Agent (LLMAgent, Network, or another Workflow). Input is read from the context key specified by InputFrom(), or from WorkflowContext.Input() if InputFrom is not set. Output is written to context as "{name}.output" (or the key specified by OutputTo()). Token usage from the agent is accumulated into the workflow's total Usage.

func DoUntil added in v0.2.0

func DoUntil(name string, fn StepFunc, opts ...StepOption) WorkflowOption

DoUntil defines a workflow step that repeats a StepFunc until the condition specified by Until() returns true. The condition is evaluated after each iteration. MaxIter() sets a safety cap (default 10).
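
The loop semantics — body first, condition checked after each iteration, hard cap on iterations — can be sketched standalone (`doUntil` here is a hypothetical helper, not the package's executor):

```go
package main

import "fmt"

// doUntil runs body, then evaluates the exit condition AFTER each
// iteration; it stops when the condition returns true, body errors,
// or maxIter is reached.
func doUntil(body func() error, until func() bool, maxIter int) (iterations int, err error) {
	for i := 0; i < maxIter; i++ {
		if err := body(); err != nil {
			return i + 1, err
		}
		iterations = i + 1
		if until() {
			return iterations, nil
		}
	}
	return iterations, nil // safety cap reached
}

func main() {
	count := 0
	n, _ := doUntil(
		func() error { count++; return nil }, // the step body
		func() bool { return count >= 3 },    // Until() condition
		10,                                   // MaxIter safety cap (default 10)
	)
	fmt.Println(n) // 3
}
```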

func DoWhile added in v0.2.0

func DoWhile(name string, fn StepFunc, opts ...StepOption) WorkflowOption

DoWhile defines a workflow step that repeats a StepFunc while the condition provided via the While() step option returns true. The condition is evaluated before each iteration after the first (the first iteration always runs). MaxIter() sets a safety cap (default 10).

func ForEach added in v0.2.0

func ForEach(name string, fn StepFunc, opts ...StepOption) WorkflowOption

ForEach defines a workflow step that runs a StepFunc once per element in a collection. The collection is read from the context key specified by IterOver(). Each iteration receives the current element via ForEachItem(ctx) and its index via ForEachIndex(ctx). These are carried on the Go context (not WorkflowContext) to ensure safety under concurrent iteration. Concurrency defaults to 1 (sequential); use Concurrency() to parallelize.

func Step added in v0.2.0

func Step(name string, fn StepFunc, opts ...StepOption) WorkflowOption

Step defines a workflow step that runs a StepFunc. The function receives the shared WorkflowContext and can read/write any keys. Unlike AgentStep and ToolStep, a basic Step does not automatically write output to context — the function is responsible for calling wCtx.Set() as needed.

func ToolStep added in v0.2.0

func ToolStep(name string, tool Tool, toolName string, opts ...StepOption) WorkflowOption

ToolStep defines a workflow step that calls a single tool function by name. Args are read from the context key specified by ArgsFrom(). If ArgsFrom is not set, an empty JSON object ({}) is used. The tool result is written to context as "{name}.result" (or the key specified by OutputTo()).

func WithDefaultRetry added in v0.2.0

func WithDefaultRetry(n int, delay time.Duration) WorkflowOption

WithDefaultRetry sets the default retry count and delay for all steps that do not specify their own Retry() option.

func WithOnError added in v0.2.0

func WithOnError(fn func(string, error)) WorkflowOption

WithOnError registers a callback that is invoked when a step fails. The callback receives the step name and the error. Callback panics are recovered and logged.

func WithOnFinish added in v0.2.0

func WithOnFinish(fn func(WorkflowResult)) WorkflowOption

WithOnFinish registers a callback that is invoked after the workflow completes, regardless of success or failure. Callback panics are recovered and logged.

func WithWorkflowLogger added in v0.6.0

func WithWorkflowLogger(l *slog.Logger) WorkflowOption

WithWorkflowLogger sets the structured logger for the workflow. If not set, a no-op logger is used (no output).

func WithWorkflowTracer added in v0.6.0

func WithWorkflowTracer(t Tracer) WorkflowOption

WithWorkflowTracer sets the tracer for the workflow. When set, the workflow emits spans for execution and step lifecycle events.

type WorkflowResult added in v0.2.0

type WorkflowResult struct {
	// Status is StepSuccess if all steps succeeded (or were skipped by condition),
	// StepFailed if any step failed.
	Status StepStatus
	// Steps maps step name to its individual result.
	Steps map[string]StepResult
	// Context is the shared WorkflowContext after all steps have run.
	// Callers can inspect final values set by steps.
	Context *WorkflowContext
	// Usage is the aggregate token usage from all AgentStep executions.
	Usage Usage
}

WorkflowResult is the aggregate outcome of a full workflow execution. Returned via the onFinish callback; also used to build the AgentResult.

Directories

Path Synopsis
cmd
bot_example command
mcp-docs command
Binary mcp-docs is an MCP server that exposes Oasis framework documentation to AI assistants (Claude Code, Cursor, Windsurf, etc.) via the Model Context Protocol over stdio.
sandbox command
Command sandbox is a reference code execution service for the Oasis framework.
Package code provides CodeRunner implementations for LLM code execution.
Package docs embeds the Oasis framework documentation for use by the MCP docs server and other tools that need access to documentation at runtime.
Package mcp implements a Model Context Protocol (MCP) server over stdio.
Package observer provides OTEL-based observability for Oasis LLM operations.
provider
gemini
Package gemini implements the Google Gemini LLM and embedding providers.
openaicompat
Package openaicompat provides a ready-to-use Provider for any OpenAI-compatible API, plus shared helpers (types, body building, response parsing, SSE streaming) for building custom providers.
store
libsql
Package libsql implements oasis.Store using libSQL (SQLite-compatible) with DiskANN vector extensions for Turso.
postgres
Package postgres implements oasis.Store and oasis.MemoryStore using PostgreSQL with pgvector for native vector similarity search and tsvector for full-text keyword search.
sqlite
Package sqlite implements oasis.Store using pure-Go SQLite with in-process brute-force vector search.
tools
data
Package data provides structured data transform tools for CSV/JSON processing.
skill
Package skill exposes skill management to agents through the standard Tool interface.
