Documentation

Add secure shell execution to your AI app in minutes


What shellify handles for you: Sandboxed execution, security isolation, file artifact uploads, streaming output, session persistence, and timeout management. You just define the tool and call our API.

Vercel AI SDK Integration

The cleanest integration: use the tool() helper with automatic execution. Just define your tool and the SDK handles the rest.

How It Works

  1. Install dependencies: npm install ai @ai-sdk/openai zod @shellifyai/shell-tool
  2. Create a project and get your API key from the shellify console.
  3. Use shellifyTool from @shellifyai/shell-tool to define the tool.
  4. The SDK automatically calls shellifyTool when the model uses it; override adapterType to "local_shell" for bare sandbox runs and set structuredResponse: true to always return stdout/stderr/artifacts.

Quick Start

With Vercel AI SDK, tool execution is built-in. Use shellifyTool from @shellifyai/shell-tool and the SDK will invoke Shellify automatically.

typescript
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText, stepCountIs } from "ai";
import { shellifyTool } from "@shellifyai/shell-tool";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-5.1"),
    messages,
    tools: {
      // shellifyTool handles the execute call for you
      shell: shellifyTool({
        apiKey: process.env.SHELLIFYAI_API_KEY!,
      }),
    },
    stopWhen: stepCountIs(5), // Allow multiple tool calls
  });

  return result.toDataStreamResponse();
}

Use the ShellifyAI tool package

Skip the manual fetch and use the prebuilt shellifyTool helper from @shellifyai/shell-tool. The API key already encodes the project, so no projectId is needed.

typescript
import { generateText, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { shellifyTool } from "@shellifyai/shell-tool";

const { text } = await generateText({
  model: openai("gpt-5.1"),
  prompt: "Create hello.sh that echoes Hello World, run it, and show the output",
  tools: {
    shell: shellifyTool({
      apiKey: process.env.SHELLIFYAI_API_KEY!,
    }),
  },
  stopWhen: stepCountIs(4),
});

console.log(text);

Force direct sandbox + structured summary

Override adapterType to use the bare sandbox and enable structuredResponse so your UI can render stdout/stderr + artifacts reliably (including in streaming UIs).

typescript
import { openai } from "@ai-sdk/openai";
import { streamText, stepCountIs } from "ai";
import { shellifyTool } from "@shellifyai/shell-tool";

const result = streamText({
  model: openai("gpt-5.1"),
  messages, // chat messages from your request context
  tools: {
    shell: shellifyTool({
      apiKey: process.env.SHELLIFYAI_API_KEY!,
      adapterType: "local_shell", // Force bare sandbox
      structuredResponse: true, // Emit structured_log + summary with artifacts
    }),
  },
  stopWhen: stepCountIs(5),
});

// Listen for { type: "structured_log" } events to render stdout/stderr + artifacts in your code interpreter UI.

Non-Streaming (generateText)

For server-side scripts or one-off tasks, use generateText instead of streamText.

typescript
import { generateText, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { shellifyTool } from "@shellifyai/shell-tool";

const { text, toolResults } = await generateText({
  model: openai("gpt-5.1"),
  prompt: "Create a Python script that calculates fibonacci numbers and run it",
  tools: {
    shell: shellifyTool({
      apiKey: process.env.SHELLIFYAI_API_KEY!,
    }),
  },
  stopWhen: stepCountIs(5),
});

console.log(text);
// Access tool results: toolResults[0].result.stdout

Frontend Component

Build a chat UI that displays commands and results as they stream in.

typescript
// app/page.tsx
"use client";
import { useChat } from "ai/react";

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat();

  return (
    <div className="max-w-2xl mx-auto p-4">
      {messages.map((m) => (
        <div key={m.id} className="mb-4 p-4 rounded bg-gray-100">
          <div className="font-bold">{m.role === "user" ? "You" : "AI"}</div>
          <p>{m.content}</p>

          {/* Show tool calls */}
          {m.toolInvocations?.map((tool, i) => (
            <div key={i} className="mt-2 p-2 bg-gray-800 text-green-400 rounded font-mono text-sm">
              <div>$ {tool.args.command}</div>
              {tool.state === "result" && (
                <pre className="mt-1 text-gray-300 whitespace-pre-wrap">
                  {tool.result.stdout || tool.result.logs?.join("\n")}
                </pre>
              )}
            </div>
          ))}
        </div>
      ))}

      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask me to run a command..."
          className="flex-1 p-2 border rounded"
        />
        <button type="submit" disabled={isLoading} className="px-4 py-2 bg-blue-500 text-white rounded">
          Send
        </button>
      </form>
    </div>
  );
}

Environment Variables

Add these to your .env file:

bash
SHELLIFYAI_API_KEY=your_api_key

Get your credentials from the Projects page.

API Reference

Endpoint

POST https://shellifyai.com/v1/execute

Headers

x-api-key: Your project API key (required)
Accept: application/jsonl (streaming, recommended for production)

Query Parameters

stream: true (alternative to Accept header)

Response formats

Default JSON: single object with an events array
Streaming NDJSON: one JSON object per line — best for real-time logs and artifact detection

Request Body

adapterType: "local_shell" | "openai_codex" | "claude_agent" (optional) - defaults to project setting; use "local_shell" to bypass managed agents
tool: "local_shell"
payload.command: string (required)
payload.intent: string (optional) - context for what agent is trying to do
payload.sessionId: string (optional) - for file persistence across calls
payload.timeoutMs: number (optional) - default: 120000
payload.workingDirectory: string (optional) - working directory for command
payload.env: object (optional) - environment variables as key-value pairs
payload.sdkLanguage: "python" | "typescript" (optional)
payload.systemMessage: string (optional) - custom system prompt; security policy always appended
structuredResponse: boolean (optional) - include structured summary (stdout, stderr, exitCode, artifacts) and emit a final structured_log event when streaming
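Putting the endpoint, headers, and body fields together, a direct call (without the SDK helper) might look like the sketch below. buildExecuteRequest is a local helper for illustration, not part of @shellifyai/shell-tool:

```typescript
// Sketch: assemble a direct request to POST /v1/execute.
// Field names follow the Request Body reference above.
type ExecutePayload = {
  command: string;
  intent?: string;
  sessionId?: string;
  timeoutMs?: number;
  workingDirectory?: string;
  env?: Record<string, string>;
};

function buildExecuteRequest(apiKey: string, payload: ExecutePayload) {
  return {
    url: "https://shellifyai.com/v1/execute",
    init: {
      method: "POST",
      headers: {
        "x-api-key": apiKey, // required
        "Content-Type": "application/json",
        Accept: "application/jsonl", // opt in to streaming NDJSON
      },
      body: JSON.stringify({
        adapterType: "local_shell", // bypass managed agents
        tool: "local_shell",
        payload,
        structuredResponse: true,
      }),
    },
  };
}

const { url, init } = buildExecuteRequest("your_api_key", {
  command: "echo hello",
  timeoutMs: 30000,
});
// const res = await fetch(url, init); // then read res.body line by line
console.log(url); // https://shellifyai.com/v1/execute
```

With the default JSON format you can simply await res.json(); with application/jsonl, read the body incrementally and parse each line as it arrives.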

Response (JSON)

requestId: Unique request ID
adapter: Adapter type used
events[]: Execution events

Streaming Response (application/jsonl)

Each line is a JSON object with real-time events:

{"type": "meta", "requestId": "...", "adapter": "..."}
{"type": "status", "status": "running"}
{"type": "log", "data": "...", "stream": "stdout"}
{"type": "artifact", "filename": "...", "url": "..."}
{"type": "structured_log", "data": {"stdout": "...", "stderr": "..."}}
{"type": "error", "error": "message", "status": "failed"}
{"type": "status", "status": "completed"}
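One way to consume these lines is a switch over the event type. This is a sketch: the event shapes follow the list above, and how you render each one is up to your UI:

```typescript
// Sketch: dispatch on NDJSON event lines from the streaming response.
type ShellifyEvent =
  | { type: "meta"; requestId: string; adapter: string }
  | { type: "status"; status: string }
  | { type: "log"; data: string; stream: "stdout" | "stderr" }
  | { type: "artifact"; filename: string; url: string }
  | { type: "structured_log"; data: { stdout: string; stderr: string } }
  | { type: "error"; error: string; status: string };

function handleLine(line: string, out: string[]) {
  if (!line.trim()) return; // skip blank lines between chunks
  const event = JSON.parse(line) as ShellifyEvent;
  switch (event.type) {
    case "log":
      out.push(`[${event.stream}] ${event.data}`);
      break;
    case "artifact":
      out.push(`artifact: ${event.filename}`);
      break;
    case "error":
      out.push(`error: ${event.error}`);
      break;
    // meta, status, and structured_log can be handled as needed
  }
}

const rendered: string[] = [];
[
  '{"type":"status","status":"running"}',
  '{"type":"log","data":"hello","stream":"stdout"}',
  '{"type":"artifact","filename":"out.png","url":"https://..."}',
  '{"type":"status","status":"completed"}',
].forEach((l) => handleLine(l, rendered));
console.log(rendered); // ["[stdout] hello", "artifact: out.png"]
```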

SDK Language Support

You can override the SDK language using payload.sdkLanguage:

claude_agent: Supports both "python" (default) and "typescript"
openai_codex: Shell execution only - TypeScript not supported (returns 400 error)

Use the Claude adapter if you need TypeScript SDK support.
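For example, a request body that opts into the TypeScript SDK via the Claude adapter might look like this (field names per the Request Body reference above; the command is a placeholder):

```typescript
// Sketch: select the Claude adapter to get TypeScript SDK support.
const body = {
  adapterType: "claude_agent" as const, // the only adapter with TypeScript support
  tool: "local_shell",
  payload: {
    command: "node script.js", // placeholder command
    sdkLanguage: "typescript" as const, // would return a 400 on openai_codex
    timeoutMs: 120000,
  },
};
console.log(JSON.stringify(body));
```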