
Using ShellifyAI as a Code Interpreter

Run code directly with ShellifyAI: use the direct POST /v1/execute API for immediate execution, or configure claude_agent for complex agent-driven workflows. Examples and SDK usage from the docs.

ShellifyAI Team · November 29, 2025 · 7 min read

Turn ShellifyAI into a code interpreter: run individual commands directly from your backend or CI, or connect a Claude agent for complex multi-step workflows. This post covers two common patterns:

  • Direct API usage: execute shell commands immediately via POST https://shellifyai.com/v1/execute (no AI model required). Great for automation, CI, and simple code interpretation.
  • Claude Agent integration: for complex agents that need programmatic control and multi-step logic, configure Shellify to use the claude_agent adapter and the Claude Code SDK. For a step-by-step integration, see our companion post "Integrating Claude Agent SDK with Shellify via Tool Calls".

All examples below come straight from the ShellifyAI docs (see /docs for full reference).

Why use ShellifyAI as a code interpreter?

  • Secure sandboxing: commands run in isolated environments with resource limits and security policies.
  • Session persistence: keep files across steps using session IDs for multi-step scripts.
  • Streaming output and artifacts: stream stdout/stderr for responsive UIs and access files via signed URLs.
  • Adapter flexibility: run direct commands or let an agent (OpenAI, Claude, etc.) orchestrate steps.

Direct API: execute a command now

The simplest way to use Shellify as a code interpreter is to call the execute endpoint directly. This doesn’t involve any AI model — you POST a command and receive stdout, stderr, exit code, and file artifacts.

Endpoint

POST https://shellifyai.com/v1/execute

Required header: x-api-key (your project API key from the Shellify console). Optionally set Accept: application/jsonl to enable streaming responses.
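
The TypeScript and Python examples below parse the JSON response by walking an events array. As a rough guide, the shape they rely on is sketched in the TypeScript types here; the types are inferred from this post's examples rather than copied from the schema, so treat /docs as the source of truth.

typescript
// Approximate response shape, inferred from the examples in this post.
// Treat it as a sketch and check /docs for the authoritative schema.
interface ExecuteEvent {
  type: string; // e.g. "log" for output chunks or "status" for lifecycle updates
  data?: any;   // log text, or an object such as { exitCode: number } on status events
}

interface ExecuteResponse {
  events?: ExecuteEvent[]; // emitted in order; sent as JSON lines when Accept: application/jsonl is set
}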

Simple examples

TypeScript (fetch)

typescript
const response = await fetch("https://shellifyai.com/v1/execute", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-api-key": process.env.SHELLIFYAI_API_KEY!,
  },
  body: JSON.stringify({
    adapterType: "local_shell",
    tool: "local_shell",
    payload: {
      command: "echo 'Hello World' && python3 --version",
    },
  }),
});

const result = await response.json();
console.log("stdout:", result.events?.find((e: any) => e.type === "log")?.data);

Python (requests)

python
import requests
import os

response = requests.post(
    "https://shellifyai.com/v1/execute",
    headers={
        "Content-Type": "application/json",
        "x-api-key": os.environ["SHELLIFYAI_API_KEY"],
    },
    json={
        "adapterType": "local_shell",
        "tool": "local_shell",
        "payload": {
            "command": "echo 'Hello World' && python3 --version",
        },
    },
)

result = response.json()
for event in result.get("events", []):
    if event["type"] == "log":
        print("Output:", event["data"])
    elif event["type"] == "status" and "exitCode" in event.get("data", {}):
        print("Exit code:", event["data"]["exitCode"])

Curl

bash
curl -X POST "https://shellifyai.com/v1/execute" \
  -H "Content-Type: application/json" \
  -H "x-api-key: $SHELLIFYAI_API_KEY" \
  -d '{
    "adapterType": "local_shell",
    "tool": "local_shell",
    "payload": {
      "command": "echo Hello World && ls -la"
    }
  }'

Use the ShellifyClient for a nicer developer experience

If you prefer a small SDK over manual fetch/requests calls, use ShellifyClient from @shellifyai/shell-tool. It returns parsed summaries (stdout, stderr, exitCode) and supports streaming helpers.

typescript
import { ShellifyClient } from "@shellifyai/shell-tool";

const client = new ShellifyClient({ apiKey: process.env.SHELLIFYAI_API_KEY! });

const result = await client.execute({ payload: { command: "python3 -c 'print(2+2)'" } });
console.log("stdout:", result.summary.stdout); // "4"
console.log("artifacts:", result.summary.artifacts);

Streaming output

For long-running commands, enable streaming either via Accept: application/jsonl or client.stream(). Streaming lets you process logs and artifacts as they appear instead of waiting for completion.

Example (fetch + jsonl streaming)

typescript
const response = await fetch("https://shellifyai.com/v1/execute", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-api-key": process.env.SHELLIFYAI_API_KEY!,
    "Accept": "application/jsonl",
  },
  body: JSON.stringify({
    adapterType: "local_shell",
    tool: "local_shell",
    payload: { command: "for i in 1 2 3; do echo $i; sleep 1; done" },
  }),
});

const reader = response.body!.getReader();
const decoder = new TextDecoder();
let buffer = "";
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split("\n");
  buffer = lines.pop() ?? ""; // keep any partial line until the next chunk arrives
  for (const line of lines) {
    if (!line.trim()) continue;
    const event = JSON.parse(line);
    console.log(event.type, event.data ?? event.status);
  }
}
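
The SDK's stream() helper mentioned above wraps this reader loop for you. The call below is a sketch only: the exact signature, including the onEvent callback, is an assumption for illustration, so check the @shellifyai/shell-tool reference for the real one.

typescript
// Hypothetical usage of the SDK streaming helper; the onEvent callback shape
// is an illustrative assumption, not the documented signature.
import { ShellifyClient } from "@shellifyai/shell-tool";

const client = new ShellifyClient({ apiKey: process.env.SHELLIFYAI_API_KEY! });

await client.stream(
  { payload: { command: "for i in 1 2 3; do echo $i; sleep 1; done" } },
  {
    onEvent: (event: any) => {
      // Each event mirrors one JSON line of the application/jsonl response.
      console.log(event.type, event.data ?? event.status);
    },
  }
);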

Session persistence for multi-step workflows

Direct API calls can use a sessionId to keep files across multiple commands. This is useful for multi-step code interpretation where one step writes a file and the next step runs it.

typescript
// create a file in the session
await client.execute({ payload: { command: "echo 'print(\"Hi\")' > script.py", sessionId: "session-1" } });

// run it in the same session
const run = await client.execute({ payload: { command: "python3 script.py", sessionId: "session-1" } });
console.log(run.summary.stdout);

File artifacts

Files created during execution are uploaded and exposed as signed URLs. The SDK returns artifact objects with filename, url, and contentType — download them or attach them to further processing steps.
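
As an example, this sketch downloads every returned artifact to disk. It assumes a Node.js 18+ runtime for the global fetch, reuses the ShellifyClient from earlier, and the command it runs is just a stand-in that happens to create a file.

typescript
// Download each artifact via its URL (Node.js 18+ for global fetch).
import { writeFile } from "node:fs/promises";
import { ShellifyClient } from "@shellifyai/shell-tool";

const client = new ShellifyClient({ apiKey: process.env.SHELLIFYAI_API_KEY! });

const result = await client.execute({
  payload: { command: "python3 -c \"open('report.txt', 'w').write('done')\"" },
});

for (const artifact of result.summary.artifacts ?? []) {
  const res = await fetch(artifact.url); // fetch via the signed URL
  await writeFile(artifact.filename, Buffer.from(await res.arrayBuffer()));
  console.log(`saved ${artifact.filename} (${artifact.contentType})`);
}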

Error handling

Always check exit codes and stderr. For network or API issues, catch exceptions and implement retries or fallbacks in your automation.

typescript
try {
  const res = await client.execute({ payload: { command: "nonexistent_cmd" }, timeoutMs: 30000 });
  if (res.summary.exitCode !== 0) {
    console.error("Failed:", res.summary.stderr);
  }
} catch (err) {
  console.error("Execution error:", err);
}
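
For transient network or API failures, a small wrapper like this sketch keeps retry logic in one place. The attempt count and backoff values are arbitrary examples, not ShellifyAI recommendations; command-level failures still surface through exitCode as shown above.

typescript
// Retry transient request failures with exponential backoff.
// Attempt count and delays are arbitrary example values.
import { ShellifyClient } from "@shellifyai/shell-tool";

const client = new ShellifyClient({ apiKey: process.env.SHELLIFYAI_API_KEY! });

async function executeWithRetry(payload: { command: string; sessionId?: string }, attempts = 3) {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await client.execute({ payload, timeoutMs: 30000 });
    } catch (err) {
      lastError = err; // network/API error; a nonzero exitCode does not throw
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** i));
    }
  }
  throw lastError;
}

const res = await executeWithRetry({ command: "python3 --version" });
console.log(res.summary.stdout);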

When to use the direct API

  • One-off command execution from a backend or CI job
  • Lightweight code interpretation where an AI model isn’t necessary
  • Batch jobs, testing, or automation scripts that need sandboxed environments

Claude Agent integration: for complex programmatic agents

For more complex workflows where you want programmatic control and decision-making, pair Shellify with a Claude agent (or another agent adapter). The API supports an adapterType of claude_agent, and the docs describe options for controlling which SDK language the agent generates and executes.

High-level pattern

  1. Configure your Shellify project (get API key) and set the project adapter to claude_agent or override with adapterType: "claude_agent" in the execute call.
  2. When your Claude agent decides to run shell commands, forward those tool calls to Shellify’s execute endpoint, optionally setting payload.sdkLanguage to "python" or "typescript" depending on the language the agent is generating; a minimal forwarding sketch follows this list.
  3. Use sessions for multi-step flows, stream logs for responsiveness, and collect artifacts for downstream tasks.
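
Step 2 of that pattern, forwarding an agent tool call to Shellify, can look roughly like the sketch below. The toolCall shape and handler name are illustrative assumptions, as is passing adapterType through the SDK call rather than setting it at the project level; the companion Claude post linked above walks through the full wiring.

typescript
// Minimal sketch of forwarding an agent tool call to Shellify.
// The toolCall shape, handler name, and SDK-level adapterType override are assumptions.
import { ShellifyClient } from "@shellifyai/shell-tool";

const client = new ShellifyClient({ apiKey: process.env.SHELLIFYAI_API_KEY! });

async function handleAgentToolCall(toolCall: { command: string }, sessionId: string) {
  const result = await client.execute({
    adapterType: "claude_agent", // or configure claude_agent at the project level
    payload: {
      command: toolCall.command,
      sessionId,                 // keeps files across the agent's steps
      sdkLanguage: "python",     // or "typescript", matching what the agent generates
    },
  });

  // Hand the agent what it needs to decide its next step.
  return {
    stdout: result.summary.stdout,
    stderr: result.summary.stderr,
    exitCode: result.summary.exitCode,
    artifacts: result.summary.artifacts,
  };
}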

API example (explicitly setting the adapter)

bash
curl -X POST "https://shellifyai.com/v1/execute" \
  -H "Content-Type: application/json" \
  -H "x-api-key: $SHELLIFYAI_API_KEY" \
  -d '{
    "adapterType": "claude_agent",
    "tool": "local_shell",
    "payload": {
      "command": "python3 -c \"print(\\\"hello from claude agent\\\")\"",
      "sdkLanguage": "python"
    }
  }'

Notes and pointers

  • The docs include a full reference for payload options (intent, timeoutMs, workingDirectory, env, systemMessage) and show how security policies are always appended to system prompts.
  • If you need a step-by-step walkthrough for Claude, see our existing post: "Integrating Claude Agent SDK with Shellify via Tool Calls" (published on the blog). It demonstrates forwarding tool calls and returning results to the agent.
  • Use SHELLIFYAI_CLAUDE_LANGUAGE or payload.sdkLanguage to indicate whether the agent is producing Python or TypeScript code if your adapter expects a specific language.

Putting it together: a real-world flow

  • Build a web IDE where users type a short request like "Create a script that computes prime numbers and run it." Your backend:
    1. Uses a Claude agent to generate a multi-file project (or interactive steps).
    2. When the agent elects to run commands, forward each tool call to Shellify with adapterType: "claude_agent" and sessionId to persist files.
    3. Stream logs back to the web UI for immediate feedback.
    4. After completion, return links to artifact files for download.

Security reminders

  • Your SHELLIFYAI_API_KEY identifies the project — keep it secret.
  • Commands run in ephemeral sandboxes with policies appended; network access and persistence are controlled by the platform and session usage.
  • Always validate user input at your application layer before forwarding commands to the execution API (see the sketch after this list).
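
As a concrete example of that last point, the sketch below gates user-supplied commands against a simple allowlist before calling the execute API. The allowlist and length limit are purely illustrative; what counts as acceptable input is entirely application-specific.

typescript
// Illustrative input gate: reject anything outside a small allowlist of
// interpreters before it reaches the execution API. Example policy only.
import { ShellifyClient } from "@shellifyai/shell-tool";

const client = new ShellifyClient({ apiKey: process.env.SHELLIFYAI_API_KEY! });

const ALLOWED_PREFIXES = ["python3 ", "node ", "echo "];

function validateCommand(command: string): string {
  const trimmed = command.trim();
  if (trimmed.length === 0 || trimmed.length > 2000) {
    throw new Error("Command is empty or too long");
  }
  if (!ALLOWED_PREFIXES.some((prefix) => trimmed.startsWith(prefix))) {
    throw new Error("Command not permitted by application policy");
  }
  return trimmed;
}

const userInput = "python3 --version"; // e.g. taken from a request body
await client.execute({ payload: { command: validateCommand(userInput) } });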

References and next steps

  • Full API and code examples: /docs (this post's examples come directly from the documentation)
  • Claude agent integration guide: see the blog post "Integrating Claude Agent SDK with Shellify via Tool Calls"


Tags: tutorial, direct-api, claude, shellify, code-interpreter
