
Integration Guide

How to integrate FaultLine into your AI agent system

Quick Start

1. Install SDK

Install from GitHub (Recommended)
npm install github:ashutosh887/FaultLine#packages/sdk
Or from npm (if published)
npm install @faultline/sdk

2. Initialize Tracer

import { Tracer } from "@faultline/sdk";
const tracer = new Tracer({
  ingestUrl: "https://your-faultline-app.vercel.app"
});

3. Emit Events

// User input
tracer.emit({
  type: "user_input",
  payload: { text: "Book a flight..." }
});

// Tool call
tracer.emit({
  type: "tool_call",
  payload: {
    tool_name: "flight_search",
    input: { origin: "NYC", destination: "LAX" }
  }
});

Event Types

user_input

Captures user requests/inputs

tracer.emit({ type: "user_input", payload: { text: string } })

tool_call

Captures tool/function calls

tracer.emit({ type: "tool_call", payload: { tool_name: string, input: object, output?: object, error?: string } })

model_output

Captures LLM responses

tracer.emit({ type: "model_output", payload: { text: string, token_count?: number } })

memory_op

Captures memory operations

tracer.emit({ type: "memory_op", payload: { operation: "store" | "retrieve" | "delete", key: string } })
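One way to guarantee every memory operation is traced is to wrap the store itself. A minimal sketch, assuming a simple Map-backed store — `TracedMemory` and `TracerLike` are illustrative, not SDK APIs:

```typescript
// Structural stand-in for the SDK Tracer; only emit() is assumed.
type TracerLike = { emit(event: { type: string; payload: object }): void };

// A Map-backed memory store that emits one memory_op event per operation,
// using the payload shape documented above.
class TracedMemory {
  private store = new Map<string, unknown>();
  constructor(private tracer: TracerLike) {}

  set(key: string, value: unknown): void {
    this.store.set(key, value);
    this.tracer.emit({ type: "memory_op", payload: { operation: "store", key } });
  }

  get(key: string): unknown {
    this.tracer.emit({ type: "memory_op", payload: { operation: "retrieve", key } });
    return this.store.get(key);
  }

  delete(key: string): void {
    this.store.delete(key);
    this.tracer.emit({ type: "memory_op", payload: { operation: "delete", key } });
  }
}
```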

system_state

Captures system state changes

tracer.emit({ type: "system_state", payload: { state: object } })

Integration Examples

LangChain Agent

const tracer = new Tracer({ ingestUrl: process.env.FAULTLINE_URL! });
tracer.emit({ type: "user_input", payload: { text: userInput } });
tracer.emit({ type: "tool_call", payload: { tool_name: "search_flights", input: { query: userInput } } });
tracer.emit({ type: "model_output", payload: { text: result.output } });

OpenAI Function Calling

tracer.emit({ type: "user_input", payload: { text: message } });
const response = await openai.chat.completions.create({...});
tracer.emit({ type: "model_output", payload: { text: response.choices[0].message.content } });
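When the model actually requests a function call, that round trip can be traced as a tool_call event too. A sketch of the pattern — the chat client is injected so the flow is visible without network access (with the real openai package you would pass its `chat.completions` object instead), and the model name is an illustrative assumption:

```typescript
// Structural stand-in for the SDK Tracer; only emit() is assumed.
type TracerLike = { emit(event: { type: string; payload: object }): void };

// Minimal structural slice of an OpenAI-style chat client.
type ChatClient = {
  create(req: object): Promise<{
    choices: {
      message: {
        content: string | null;
        tool_calls?: { function: { name: string; arguments: string } }[];
      };
    }[];
  }>;
};

async function tracedChat(tracer: TracerLike, client: ChatClient, message: string) {
  tracer.emit({ type: "user_input", payload: { text: message } });
  const response = await client.create({
    model: "gpt-4o-mini", // illustrative model name
    messages: [{ role: "user", content: message }],
  });
  const msg = response.choices[0].message;
  // If the model requested tools, record each request as a tool_call event.
  for (const call of msg.tool_calls ?? []) {
    tracer.emit({
      type: "tool_call",
      payload: { tool_name: call.function.name, input: JSON.parse(call.function.arguments) },
    });
  }
  // Plain text responses become model_output events.
  if (msg.content) {
    tracer.emit({ type: "model_output", payload: { text: msg.content } });
  }
  return msg;
}
```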

What Happens After Integration

1. Events are ingested → Stored in Redis
2. Job is enqueued → BullMQ queue processes the trace
3. Worker processes → Gemini analyzes the trace
4. Report is generated → Verdict + causal graph stored
5. View in UI → Visit /runs to see the analysis

Analysis typically takes 10-30 seconds depending on trace length.
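The pipeline above starts from the events your agent emits. A typical run sends this sequence — `emitRun`, the sample payloads, and the `TracerLike` type are illustrative; only the `emit()` method and event shapes from this guide are assumed:

```typescript
// Structural stand-in for the SDK Tracer; only emit() is assumed.
type TracerLike = { emit(event: { type: string; payload: object }): void };

// Emits the event sequence of one complete agent run, in order:
// user input → tool call (with output) → model output.
function emitRun(tracer: TracerLike): void {
  tracer.emit({
    type: "user_input",
    payload: { text: "Book a flight from NYC to LAX" },
  });
  tracer.emit({
    type: "tool_call",
    payload: {
      tool_name: "flight_search",
      input: { origin: "NYC", destination: "LAX" },
      output: { flights: [{ id: "AA100" }] },
    },
  });
  tracer.emit({
    type: "model_output",
    payload: { text: "I found flight AA100.", token_count: 8 },
  });
}
```

Once these events reach the ingest URL, the rest of the pipeline runs without further input from your agent.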

Resources

📖 Full Integration Guide (INTEGRATION.md)

Complete documentation with all event types, examples, and best practices

🚀 Deployment Guide (DEPLOYMENT.md)

Step-by-step instructions for deploying FaultLine to production

📊 View Demo Traces

See example traces and root-cause analysis reports