Sample AI Agent Code: A Practical Guide for Builders
A comprehensive tutorial on building a sample AI agent codebase, covering architecture, a minimal runnable example, LLM integration, testing, deployment, and safe practices for developers and product teams.

Introduction to sample AI agent code
The phrase "sample AI agent code" refers to a compact, runnable blueprint that showcases the core agent loop: perception, reasoning, and action. For developers and product teams, starting from a clean, well-documented example accelerates learning and cross-team collaboration. According to Ai Agent Ops, pragmatic example code lowers the barrier to adopting agentic AI workflows. This section presents a minimal Python implementation as a baseline: an agent that perceives input, decides on a next action, and executes it. The code is intentionally small, yet structured enough to be extended with planning, memory, and external integrations.
```python
# Simple agent skeleton that accepts a goal and performs actions
class SimpleAgent:
    def __init__(self, goal):
        self.goal = goal
        self.log = []

    def perceive(self, input_data):
        self.log.append(("perceive", input_data))
        return input_data

    def decide(self, observations):
        # Very naive decision: if 'fetch' appears in the goal, fetch data;
        # otherwise do nothing
        if "fetch" in self.goal:
            return "fetch_data"
        return "idle"

    def act(self, decision):
        self.log.append(("act", decision))
        if decision == "fetch_data":
            return {"status": "ok", "data": [1, 2, 3]}
        return {"status": "idle"}

if __name__ == "__main__":
    agent = SimpleAgent(goal="fetch latest metrics")
    obs = agent.perceive("start")
    decision = agent.decide(obs)
    result = agent.act(decision)
    print(result)
```

Notes:
- This is a starting point. Extend perceive, decide, and act for real tasks.
- Add error handling and logging for production use.
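As a sketch of that second note, a wrapper like the hypothetical `run_step` helper below adds logging and uniform error handling around one perceive/decide/act cycle. The helper name and the `{"status": "error"}` result shape are assumptions for illustration, not part of the original skeleton:

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("simple_agent")

def run_step(agent, input_data):
    # Run one perceive/decide/act cycle; any agent exposing those three
    # methods (like SimpleAgent above) will work.
    try:
        obs = agent.perceive(input_data)
        decision = agent.decide(obs)
        logger.info("decision=%s", decision)
        return agent.act(decision)
    except Exception:
        # Log the full traceback and return a uniform error result
        # instead of crashing the loop
        logger.exception("agent step failed for input=%r", input_data)
        return {"status": "error"}
```

Keeping the error result in the same dict shape as normal results lets callers branch on `status` without special-casing exceptions.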
Architecture of an AI agent: components and data flow
A robust sample AI agent codebase is built from a few core components: a perception module to collect inputs, a planner or decision-maker to select actions, and an action module to execute outcomes. Optional enhancements include memory, a planner that reasons over past observations, and a simple scheduler for task orchestration. Below is a compact blueprint that wires perception, decision, and action together, plus a lightweight configuration that can drive experimentation.
```python
from typing import Any, Dict

class Perceiver:
    def perceive(self, raw: Any) -> Dict[str, Any]:
        # Normalize inputs into a consistent structure
        return {"raw": raw, "length": len(str(raw))}

class Planner:
    def decide(self, state: Dict[str, Any]) -> str:
        # Simple rule-based planning using state
        if state["length"] > 10:
            return "summarize"
        return "log"

class Actuator:
    def act(self, decision: str) -> Dict[str, Any]:
        if decision == "summarize":
            return {"status": "ok", "action": "summarize_input"}
        return {"status": "ok", "action": "log_input"}

class AgentCore:
    def __init__(self):
        self.perceiver = Perceiver()
        self.planner = Planner()
        self.actuator = Actuator()

    def run_once(self, input_data: str) -> Dict[str, Any]:
        state = self.perceiver.perceive(input_data)
        decision = self.planner.decide(state)
        return self.actuator.act(decision)

if __name__ == "__main__":
    core = AgentCore()
    print(core.run_once("hello world"))
```

```yaml
# Minimal configuration for the agent
agent:
  name: simple_core
  memory: false
  logging:
    level: info
    formats: [json, plain]
```

Why this design?
- Clear separation of concerns makes testing easier.
- Plain rules-based planning lets you iterate quickly before introducing probabilistic reasoning.
- The small surface area reduces cognitive load for new contributors.
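Because the planner sits behind a single `decide(state)` method, it can be swapped without touching perception or action. A minimal sketch of such a swap, where `KeywordPlanner` is an illustrative name, not part of the blueprint above:

```python
from typing import Any, Dict

class KeywordPlanner:
    """Alternative planner: route on keywords instead of only input length."""

    def decide(self, state: Dict[str, Any]) -> str:
        raw = str(state.get("raw", "")).lower()
        if "error" in raw:
            return "alert"          # escalate anything that looks like an error
        if len(raw) > 10:
            return "summarize"      # same length rule as the original Planner
        return "log"

# Any object with a decide(state) -> str method can replace the Planner:
planner = KeywordPlanner()
print(planner.decide({"raw": "ERROR: disk full"}))  # alert
```

This duck-typed seam is what makes the later step of "introducing probabilistic reasoning" a drop-in change rather than a rewrite.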
Minimal viable sample AI agent code in Python
The MVCA (Minimal Viable Composable Agent) demonstrates the essential modules working together: perception, decision, and action. This single-file example provides a runnable baseline you can extend with memory, tools, or external APIs. It serves as a foundation for experimentation, enabling you to prototype agentic workflows without heavy dependencies. To keep the example approachable, no external services are required initially, but the code is structured so you can plug in a language model or tool integrations later.
```python
# MVCA: minimal viable composable agent
from typing import Any, Dict

class MVCA:
    def __init__(self, goal: str):
        self.goal = goal
        self.state: Dict[str, Any] = {}

    def perceive(self, data: str) -> dict:
        # Store the latest observation so act() can reference it
        self.state = {"input": data, "length": len(data)}
        return self.state

    def decide(self, state: dict) -> str:
        if state["length"] > 5:
            return "process"
        return "idle"

    def act(self, decision: str) -> dict:
        if decision == "process":
            # Use the state captured during perceive
            return {"status": "done", "result": self.state.get("input", "")}
        return {"status": "idle"}

if __name__ == "__main__":
    mvca = MVCA(goal="process small payload")
    st = mvca.perceive("abcde12345")
    dec = mvca.decide(st)
    res = mvca.act(dec)
    print(res)
```

```shell
# Run the MVCA example
python mvca.py
# Expected output: {'status': 'idle'} or
# {'status': 'done', 'result': 'abcde12345'}, depending on input
```

Extensions:
- Swap in a real planner, add memory, or hook to an LLM for richer decision-making.
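The "add memory" extension can be sketched as a variant that remembers recent observations and uses them in its decisions. `MemoryMVCA` and its duplicate-skipping rule are illustrative assumptions, not part of the original MVCA file:

```python
from typing import Any, Dict, List

class MemoryMVCA:
    """Illustrative MVCA variant with a bounded memory of past observations."""

    def __init__(self, goal: str, capacity: int = 10):
        self.goal = goal
        self.capacity = capacity
        self.memory: List[Dict[str, Any]] = []

    def perceive(self, data: str) -> dict:
        state = {"input": data, "length": len(data)}
        self.memory.append(state)
        self.memory = self.memory[-self.capacity:]  # keep only recent items
        return state

    def decide(self, state: dict) -> str:
        # Consult memory: skip work if this exact input was already seen
        seen = sum(1 for s in self.memory[:-1] if s["input"] == state["input"])
        if seen:
            return "skip_duplicate"
        return "process" if state["length"] > 5 else "idle"

agent = MemoryMVCA(goal="process unique payloads")
print(agent.decide(agent.perceive("abcde12345")))  # process
print(agent.decide(agent.perceive("abcde12345")))  # skip_duplicate
```

Bounding the memory with `capacity` keeps the agent's state small and its decision cost constant as it runs.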
Integrating with a language model (LLM) for enhanced reasoning
A common step in sample AI agent code is integrating a language model to enhance reasoning. The example below queries a language model for a plan that can then drive the agent's actions. Remember to set your OpenAI API key securely via environment variables or a secret manager. This approach keeps perception and action decoupled from the model, which makes debugging easier.
```python
import os
from openai import OpenAI

# Uses the openai v1.x Python client (openai<1.0 used the legacy
# openai.ChatCompletion API). Set OPENAI_API_KEY in the environment.
client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))

def ask_llm(prompt: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip()

class LLMPlanner:
    def plan(self, context: dict) -> str:
        prompt = f"Given the context: {context}, propose a short set of steps to achieve the goal."
        return ask_llm(prompt)

if __name__ == "__main__":
    planner = LLMPlanner()
    plan = planner.plan({"goal": "summarize input", "input": "data payload"})
    print(plan)
```

```shell
# Quick test: ensure the API key is set
export OPENAI_API_KEY=sk-REDACTED
python llm_planner_example.py
```

Local testing without API calls:
- Use a mock function to return a deterministic plan during unit tests.
- Validate the plan structure before executing actions to prevent unexpected behavior.
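Both points above can be sketched together: a deterministic mock planner for tests, plus a structural check on the plan before anything acts on it. `MockLLMPlanner`, its canned plan text, and `validate_plan` are illustrative assumptions, not real model output or library APIs:

```python
class MockLLMPlanner:
    """Deterministic stand-in for LLMPlanner in unit tests: never hits the network."""

    def plan(self, context: dict) -> str:
        # Fixed, predictable plan text (an assumption for testing only)
        return f"1. read input\n2. achieve goal: {context.get('goal', 'unknown')}"

def validate_plan(plan: str) -> bool:
    # Minimal structural check: non-empty, and every line looks like a
    # numbered step, so malformed model output is rejected before acting
    lines = [ln for ln in plan.splitlines() if ln.strip()]
    return bool(lines) and all(ln.strip()[0].isdigit() for ln in lines)

mock = MockLLMPlanner()
plan = mock.plan({"goal": "summarize input"})
print(validate_plan(plan))  # True
```

In a test suite, injecting `MockLLMPlanner` wherever `LLMPlanner` is expected keeps runs fast and deterministic, while `validate_plan` guards the real path at runtime.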
Testing and debugging AI agents
Testing is essential when working with sample AI agent code: it catches regressions and clarifies behavior. Start with unit tests for the perception, decision, and action modules, then simulate end-to-end runs with mocked inputs. The goal is to verify edge cases, time constraints, and failure modes. This section includes a small test layout and example tests that demonstrate how to validate each component.
```python
import unittest
from mvca import MVCA

class TestMVCA(unittest.TestCase):
    def test_perceive(self):
        a = MVCA(goal="test")
        state = a.perceive("hello")
        self.assertIn("input", state)

    def test_decide_idle(self):
        a = MVCA(goal="short")
        st = a.perceive("hi")
        self.assertEqual(a.decide(st), "idle")

    def test_act_idle(self):
        a = MVCA(goal="anything")
        res = a.act("idle")
        self.assertEqual(res["status"], "idle")

if __name__ == "__main__":
    unittest.main()
```

```shell
# Run the tests
pytest -q
```

Debugging tips:
- Add verbose logging around perceive/decide/act to trace state changes.
- Use small, synthetic inputs to reproduce bugs quickly.
- Validate external integrations with mock objects during unit tests.
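The last tip can be sketched with the standard library's `unittest.mock`. The `fetch_metrics` helper and its `client.get(url)` contract are illustrative assumptions, not functions from the article's own files:

```python
import unittest
from unittest import mock

def fetch_metrics(client):
    # Function under test: depends on an external client exposing get(url).
    # In production this would be a real HTTP client; tests inject a fake.
    resp = client.get("https://example.com/metrics")
    return {"status": "ok", "count": len(resp)}

class TestExternalIntegration(unittest.TestCase):
    def test_fetch_metrics_with_mock(self):
        fake_client = mock.Mock()
        fake_client.get.return_value = [1, 2, 3]  # deterministic fake payload
        result = fetch_metrics(fake_client)
        self.assertEqual(result, {"status": "ok", "count": 3})
        # Verify the integration was called exactly as expected
        fake_client.get.assert_called_once_with("https://example.com/metrics")

if __name__ == "__main__":
    unittest.main(argv=["tests"], exit=False)
```

Passing the client in as a parameter (rather than constructing it inside the function) is what makes this substitution trivial.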
Packaging and deployment considerations
To move from a local prototype to a reproducible artifact, package the sample AI agent code with a minimal, portable container and a requirements file. This makes it easy to share the baseline with teammates and deploy to staging. The following Dockerfile, together with a minimal requirements.txt, illustrates a repeatable build that runs the basic agent script.
```dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY . /app
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "agent.py"]
```

The requirements.txt referenced above lists the agent's minimal dependencies; its exact contents depend on which integrations you enable (for example, the openai package if you use the LLM planner).

Deployment steps:
- Build the image and run locally with docker or docker compose.
- Use a bind mount for logs and configuration to ease iteration.
- Consider a lightweight orchestrator if you plan multiple agents.
Security, safety, and best practices for sample AI agent code
Even in a learning context, security and safety matter. Never hardcode API keys or secrets in code; prefer environment variables or a secret manager. Sanitize inputs to avoid injection attacks, and limit the agent's permissions to only what is necessary. For production-grade agents, implement circuit breakers, input validation, and rate limiting. The sample AI agent code shown here should be treated as a teaching artifact, not a production-ready system.
```python
import os

OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
if not OPENAI_API_KEY:
    raise RuntimeError("OPENAI_API_KEY is not set in the environment")
print("API key loaded securely from environment.")
```

```shell
# Secret handling best practice: inject the key at runtime, never hardcode it
export OPENAI_API_KEY="sk-REDACTED"
python agent.py
```

Takeaways:
- Centralize secrets; rotate credentials regularly.
- Validate all external inputs and implement timeouts to avoid hangs.
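The timeout advice can be sketched with the standard library: run a potentially slow external call on a worker thread and give up after a deadline instead of hanging the agent loop. `call_with_timeout` and its result shape are illustrative assumptions, not an API from this codebase:

```python
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FutureTimeout

def call_with_timeout(fn, timeout_s, *args, **kwargs):
    # Submit the call to a single worker thread and wait at most timeout_s
    # seconds for its result, so a stuck integration cannot hang the agent.
    pool = ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args, **kwargs)
    try:
        return {"status": "ok", "result": future.result(timeout=timeout_s)}
    except FutureTimeout:
        return {"status": "timeout"}
    finally:
        pool.shutdown(wait=False)  # don't block on a stuck call

print(call_with_timeout(lambda: "fast", 1.0))  # {'status': 'ok', 'result': 'fast'}
```

Note that the underlying thread may keep running after a timeout; for network calls, prefer a client-level timeout as well so the resource is actually released.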
Extending with variations: web-scraping agent and task orchestrator
A sample AI agent codebase can be extended with variations to tackle real-world tasks, such as a web-scraping agent or a task orchestrator that coordinates multiple agents. The code below illustrates a simple orchestrator that delegates tasks to worker agents and aggregates the results, demonstrating how to grow the baseline into a small multi-agent system.
```python
class WorkerAgent:
    def __init__(self, id):
        self.id = id

    def perform(self, task):
        return {"worker": self.id, "task": task, "result": f"done-{task}"}

class Orchestrator:
    def __init__(self, workers):
        self.workers = [WorkerAgent(i) for i in range(workers)]

    def dispatch(self, tasks):
        results = []
        for w, t in zip(self.workers, tasks):
            results.append(w.perform(t))
        return results

if __name__ == "__main__":
    o = Orchestrator(3)
    print(o.dispatch(["taskA", "taskB", "taskC"]))
```

```python
# Simple web-scraping agent (conceptual, without real requests)
import random

class WebScraperAgent:
    def fetch(self, url):
        # Placeholder for an actual HTTP fetch
        return {"url": url, "content": "mocked content", "len": random.randint(100, 1000)}

if __name__ == "__main__":
    a = WebScraperAgent()
    print(a.fetch("https://example.com"))
```

As you extend, keep modular interfaces, add tests for each component, and consider adding a memory store to improve decision quality over time.
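As one direction for scaling the orchestrator, workers can run in parallel threads. The sketch below is an illustrative variant (it redefines `WorkerAgent` so the example is self-contained; `ConcurrentOrchestrator` is not part of the original code):

```python
from concurrent.futures import ThreadPoolExecutor

class WorkerAgent:
    def __init__(self, id):
        self.id = id

    def perform(self, task):
        return {"worker": self.id, "task": task, "result": f"done-{task}"}

class ConcurrentOrchestrator:
    """Variant of the Orchestrator that dispatches tasks to workers in parallel."""

    def __init__(self, workers: int):
        self.workers = [WorkerAgent(i) for i in range(workers)]

    def dispatch(self, tasks):
        # Submit one task per worker; futures preserve submission order,
        # so results line up with the input task list.
        with ThreadPoolExecutor(max_workers=len(self.workers)) as pool:
            futures = [pool.submit(w.perform, t)
                       for w, t in zip(self.workers, tasks)]
            return [f.result() for f in futures]

o = ConcurrentOrchestrator(3)
print(o.dispatch(["taskA", "taskB", "taskC"]))
```

Threads pay off once `perform` does real I/O (HTTP fetches, LLM calls); for the toy in-memory workers above, the sequential loop is equally fine.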