Documentation
Add secure shell execution to your AI app in minutes
What ShellifyAI handles for you: sandboxed execution, security isolation, file artifact uploads, streaming output, session persistence, and timeout management. You just define the tool and call our API.
Vercel AI SDK Integration
The cleanest integration. Use the tool() helper with automatic execution—just define your tool and the SDK handles the rest.
How It Works
- Install dependencies: npm install ai @ai-sdk/openai zod @shellifyai/shell-tool
- Create a project and get your API key from the ShellifyAI console.
- Use shellifyTool from @shellifyai/shell-tool to define the tool.
- The SDK executes the tool automatically whenever the model calls it. Optionally override adapterType to "local_shell" for bare sandbox runs, and set structuredResponse: true to always return stdout, stderr, and artifacts.
Quick Start
With the Vercel AI SDK, tool execution is built in. Use shellifyTool from @shellifyai/shell-tool and the SDK will invoke ShellifyAI automatically.
```typescript
// app/api/chat/route.ts
import { openai } from "@ai-sdk/openai";
import { streamText, stepCountIs } from "ai";
import { shellifyTool } from "@shellifyai/shell-tool";

export async function POST(req: Request) {
  const { messages } = await req.json();

  const result = streamText({
    model: openai("gpt-5.1"),
    messages,
    tools: {
      // shellifyTool handles the execute call for you
      shell: shellifyTool({
        apiKey: process.env.SHELLIFYAI_API_KEY!,
      }),
    },
    stopWhen: stepCountIs(5), // Allow multiple tool calls
  });

  return result.toDataStreamResponse();
}
```
Use the ShellifyAI tool package
Skip the manual fetch and use the prebuilt shellifyTool helper from @shellifyai/shell-tool. The API key already encodes the project, so no projectId is needed.
```typescript
import { generateText, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { shellifyTool } from "@shellifyai/shell-tool";

const { text } = await generateText({
  model: openai("gpt-5.1"),
  prompt: "Create hello.sh that echoes Hello World, run it, and show the output",
  tools: {
    shell: shellifyTool({
      apiKey: process.env.SHELLIFYAI_API_KEY!,
    }),
  },
  stopWhen: stepCountIs(4),
});

console.log(text);
```
Force direct sandbox + structured summary
Override adapterType to use the bare sandbox and enable structuredResponse so your UI can reliably render stdout, stderr, and artifacts (including in streaming UIs).
```typescript
import { openai } from "@ai-sdk/openai";
import { streamText, stepCountIs } from "ai";
import { shellifyTool } from "@shellifyai/shell-tool";

const result = streamText({
  model: openai("gpt-5.1"),
  messages,
  tools: {
    shell: shellifyTool({
      apiKey: process.env.SHELLIFYAI_API_KEY!,
      adapterType: "local_shell", // Force bare sandbox
      structuredResponse: true, // Emit structured_log + summary with artifacts
    }),
  },
  stopWhen: stepCountIs(5),
});

// Listen for { type: "structured_log" } events to render stdout/stderr + artifacts
// in your code interpreter UI.
```
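If you also want the structured summary server-side (for logging or persistence), one option is the AI SDK's onStepFinish callback. This is a minimal sketch, not the only pattern: the .result access follows the shape used elsewhere on this page, and the summary fields (stdout, stderr, exitCode, artifacts) are the ones listed in the API reference below; treat both as assumptions for your SDK version.

```typescript
import { openai } from "@ai-sdk/openai";
import { streamText, stepCountIs } from "ai";
import { shellifyTool } from "@shellifyai/shell-tool";

const result = streamText({
  model: openai("gpt-5.1"),
  prompt: "List the files in the working directory",
  tools: {
    shell: shellifyTool({
      apiKey: process.env.SHELLIFYAI_API_KEY!,
      adapterType: "local_shell",
      structuredResponse: true,
    }),
  },
  stopWhen: stepCountIs(5),
  // Log each shell invocation's structured summary as steps complete.
  onStepFinish: ({ toolResults }) => {
    for (const toolResult of toolResults) {
      if (toolResult.toolName !== "shell") continue;
      const summary = toolResult.result; // assumed shape: { stdout, stderr, exitCode, artifacts }
      console.log("exit code:", summary.exitCode);
      console.log(summary.stdout || summary.stderr);
    }
  },
});
```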
Non-Streaming (generateText)
For server-side scripts or one-off tasks, use generateText instead of streamText.
```typescript
import { generateText, stepCountIs } from "ai";
import { openai } from "@ai-sdk/openai";
import { shellifyTool } from "@shellifyai/shell-tool";

const { text, toolResults } = await generateText({
  model: openai("gpt-5.1"),
  prompt: "Create a Python script that calculates fibonacci numbers and run it",
  tools: {
    shell: shellifyTool({
      apiKey: process.env.SHELLIFYAI_API_KEY!,
    }),
  },
  stopWhen: stepCountIs(5),
});

console.log(text);
// Access tool results: toolResults[0].result.stdout
```
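To inspect individual shell calls rather than just the final text, you can iterate toolResults. A short sketch that follows the field names used elsewhere on this page (args.command, result.stdout); treat them as assumptions for your SDK version:

```typescript
// Print each shell command and its stdout
// (assumed fields: toolResult.args.command, toolResult.result.stdout).
for (const toolResult of toolResults) {
  if (toolResult.toolName === "shell") {
    console.log(`$ ${toolResult.args.command}`);
    console.log(toolResult.result.stdout);
  }
}
```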
Frontend Component
Build a chat UI that displays commands and results as they stream in.
```tsx
// app/page.tsx
"use client";
import { useChat } from "ai/react";

export default function Chat() {
  const { messages, input, handleInputChange, handleSubmit, isLoading } = useChat();

  return (
    <div className="max-w-2xl mx-auto p-4">
      {messages.map((m) => (
        <div key={m.id} className="mb-4 p-4 rounded bg-gray-100">
          <div className="font-bold">{m.role === "user" ? "You" : "AI"}</div>
          <p>{m.content}</p>

          {/* Show tool calls */}
          {m.toolInvocations?.map((tool, i) => (
            <div key={i} className="mt-2 p-2 bg-gray-800 text-green-400 rounded font-mono text-sm">
              <div>$ {tool.args.command}</div>
              {tool.state === "result" && (
                <pre className="mt-1 text-gray-300 whitespace-pre-wrap">
                  {tool.result.stdout || tool.result.logs?.join("\n")}
                </pre>
              )}
            </div>
          ))}
        </div>
      ))}

      <form onSubmit={handleSubmit} className="flex gap-2">
        <input
          value={input}
          onChange={handleInputChange}
          placeholder="Ask me to run a command..."
          className="flex-1 p-2 border rounded"
        />
        <button type="submit" disabled={isLoading} className="px-4 py-2 bg-blue-500 text-white rounded">
          Send
        </button>
      </form>
    </div>
  );
}
```
Environment Variables
Add this to your .env file:
```bash
SHELLIFYAI_API_KEY=your_api_key
```
Get your credentials from the Projects page.
API Reference
Endpoint
POST https://shellifyai.com/v1/execute

Headers
- x-api-key: Your project API key (required)
- Accept: application/jsonl (streaming, recommended for production)

Query Parameters
- stream: true (alternative to the Accept header)

Response formats
- JSON (default): events array
- application/jsonl: streaming (see below)

Request Body
- adapterType: "local_shell" | "openai_codex" | "claude_agent" (optional) - defaults to project setting; use "local_shell" to bypass managed agents
- tool: "local_shell"
- payload.command: string (required)
- payload.intent: string (optional) - context for what the agent is trying to do
- payload.sessionId: string (optional) - for file persistence across calls
- payload.timeoutMs: number (optional) - default: 120000
- payload.workingDirectory: string (optional) - working directory for the command
- payload.env: object (optional) - environment variables as key-value pairs
- payload.sdkLanguage: "python" | "typescript" (optional)
- payload.systemMessage: string (optional) - custom system prompt; the security policy is always appended
- structuredResponse: boolean (optional) - include a structured summary (stdout, stderr, exitCode, artifacts) and emit a final structured_log event when streaming
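If you are integrating without the SDK helper, a direct call looks roughly like the sketch below. It assumes a runtime with a global fetch and a JSON Content-Type header (not listed above); the command, intent, and timeout values are illustrative.

```typescript
// Minimal direct call to the execute endpoint (non-streaming JSON response).
const response = await fetch("https://shellifyai.com/v1/execute", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-api-key": process.env.SHELLIFYAI_API_KEY!,
  },
  body: JSON.stringify({
    adapterType: "local_shell", // bypass managed agents, run in the bare sandbox
    tool: "local_shell",
    structuredResponse: true, // include the stdout/stderr/exitCode/artifacts summary
    payload: {
      command: "echo 'Hello from ShellifyAI'",
      intent: "Smoke-test the sandbox",
      timeoutMs: 60000,
    },
  }),
});

const { requestId, adapter, events } = await response.json();
console.log(requestId, adapter, events.length);
```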
Response (JSON)

- requestId: Unique request ID
- adapter: Adapter type used
- events[]: Execution events

Streaming Response (application/jsonl)
Each line of the stream is a single JSON object describing a real-time execution event.
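Here is a sketch of consuming the stream with the Fetch API, assuming a runtime with ReadableStream support. Event fields other than type (and the final structured_log summary requested via structuredResponse) are not specified on this page, so the logging below is illustrative.

```typescript
// Stream execution events line by line (JSONL).
const response = await fetch("https://shellifyai.com/v1/execute", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "Accept": "application/jsonl",
    "x-api-key": process.env.SHELLIFYAI_API_KEY!,
  },
  body: JSON.stringify({
    tool: "local_shell",
    structuredResponse: true, // request a final structured_log event
    payload: { command: "ls -la", sessionId: "demo-session" },
  }),
});

const reader = response.body!.getReader();
const decoder = new TextDecoder();
let buffer = "";

while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });

  // Each complete line is one JSON event object.
  const lines = buffer.split("\n");
  buffer = lines.pop() ?? ""; // keep any partial trailing line
  for (const line of lines) {
    if (!line.trim()) continue;
    const event = JSON.parse(line);
    if (event.type === "structured_log") {
      // Final structured summary: stdout, stderr, exitCode, artifacts (per the request body docs above).
      console.log("summary:", event);
    } else {
      console.log("event:", event.type);
    }
  }
}
```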
SDK Language Support
You can override the SDK language using payload.sdkLanguage:
"python" (default) and "typescript"Use the Claude adapter if you need TypeScript SDK support.