AI Agent Example Code: Practical Patterns for Developers
Explore practical AI agent example code in Python and JavaScript, featuring observe-decide-act loops, environment interaction, and robust error handling for agentic workflows.

This guide walks through AI agent example code that demonstrates how an autonomous agent thinks, acts, and learns within a simple task. A minimal pattern shows the observe-decide-act loop, with a lightweight planner and a goal state. The examples emphasize environment interaction, state updates, and error handling to illustrate agentic behavior.
What is an AI agent and why example code matters

An AI agent is software that perceives its environment, reasons about goals, and takes actions to achieve them. The phrase "AI agent example code" often appears in tutorials because it shows how to structure perception, decision, and action into a compact loop. According to Ai Agent Ops, starting with a clean observe-decide-act pattern reduces brittle behavior and accelerates experimentation with agentic AI workflows. Below are practical skeletons you can adapt to your domain.

```python
class SimpleAgent:
    def __init__(self, goal):
        self.goal = goal
        self.state = {}

    def perceive(self, env):
        return env.status()

    def decide(self, perception):
        # Simple heuristic: if the goal is available, take it; otherwise explore.
        if self.goal in perception.get('options', []):
            return 'take'
        return 'explore'

    def act(self, decision, env):
        if decision == 'take':
            env.take(self.goal)
        elif decision == 'explore':
            env.explore()
        else:
            env.idle()
```

```python
class Env:
    def __init__(self):
        self.options = ['goal1', 'goal2']

    def status(self):
        return {'blocked': False, 'options': self.options}

    def take(self, goal):
        print(f'took {goal}')

    def explore(self):
        print('exploring')

    def idle(self):
        print('idling')
```

Why this helps:
- This minimal scaffold clarifies where the perception, decision, and action code lives.
- It highlights how a simple state machine can drive agent behavior.
- You can swap in a richer environment or a richer decision function without rewriting the loop.
Practical Python example: a tiny agent in a grid world

We extend the simple agent into a tiny grid world to illustrate perception, decision, and action in a deterministic setting. The environment exposes a small grid with a goal at the bottom-right. The agent uses a straightforward policy to move toward the goal, while logging each step for debugging. This demonstrates how to wire perception (status), decision (direction), and action (step) in a repeatable loop.

```python
class GridEnv:
    def __init__(self, width=3, height=3, goals=None):
        self.width = width
        self.height = height
        self.goals = goals or {(2, 2): 'goal'}
        self.pos = (0, 0)

    def status(self):
        return {'pos': self.pos, 'goals': list(self.goals.keys())}

    def step(self, action):
        x, y = self.pos
        if action == 'up' and y > 0:
            y -= 1
        if action == 'down' and y < self.height - 1:
            y += 1
        if action == 'left' and x > 0:
            x -= 1
        if action == 'right' and x < self.width - 1:
            x += 1
        self.pos = (x, y)

    def at_goal(self):
        return self.pos in self.goals


class GridAgent:
    def __init__(self, env):
        self.env = env

    def perceive(self):
        return self.env.status()

    def decide(self, perception):
        # Move right until the right edge, then down; idle at the goal.
        x, y = perception['pos']
        if (x, y) == (2, 2):
            return 'idle'
        if x < 2:
            return 'right'
        if y < 2:
            return 'down'
        return 'idle'

    def act(self, decision):
        if decision in ('right', 'down', 'left', 'up'):
            self.env.step(decision)


# Run a short demo
env = GridEnv()
agent = GridAgent(env)
for _ in range(6):
    perception = agent.perceive()
    decision = agent.decide(perception)
    agent.act(decision)
    if env.at_goal():
        print('Reached goal')
        break
```

- The GridEnv and GridAgent illustrate a practical implementation of the observe-decide-act loop in a simple, testable scenario.
- You can swap the grid size, goals, or policy without changing the overall loop architecture.
- Logging at each step helps diagnose failures and verify agent behavior.
Extending with async processing and logging

In real-world AI agents, asynchronous perception and action can improve throughput. This section shows how to adapt the previous example to an async pattern and embed structured logging for observability. We keep the core loop intact while delegating perception and action to async tasks.

```python
import asyncio
import logging

logging.basicConfig(level=logging.INFO)


class AsyncEnv:
    def __init__(self):
        self.state = 0

    async def status(self):
        await asyncio.sleep(0.01)  # simulate sensing latency
        return {'state': self.state}

    async def step(self, amount):
        self.state += amount
        return self.state


class AsyncAgent:
    def __init__(self, env):
        self.env = env

    async def perceive(self):
        return await self.env.status()

    async def decide(self, perception):
        return 'increase' if perception['state'] < 5 else 'idle'

    async def act(self, decision):
        if decision == 'increase':
            return await self.env.step(1)
        return None


async def main():
    env = AsyncEnv()
    agent = AsyncAgent(env)
    for _ in range(6):
        perception = await agent.perceive()
        decision = await agent.decide(perception)
        await agent.act(decision)
        logging.info('perception=%s, decision=%s', perception, decision)


asyncio.run(main())
```

- Async patterns help decouple sensing from acting, enabling higher throughput in environments with latency or external services.
- Add a structured logger (e.g., INFO, DEBUG) to trace decisions, actions, and errors for faster debugging.
Cross-language patterns: JS/TS agent skeleton

Agents exist in many languages; here is a compact JavaScript/TypeScript skeleton that mirrors the Python examples above. It demonstrates the observe-decide-act loop in Node.js with a tiny grid-like environment. You can adapt the same design to web workers or serverless functions.

```javascript
class SimpleAgent {
  constructor(goal) {
    this.goal = goal;
    this.state = {};
  }
  perceive(env) { return env.status(); }
  decide(perception) {
    if (perception.options.includes(this.goal)) return 'take';
    return 'explore';
  }
  act(decision, env) {
    if (decision === 'take') env.take(this.goal);
    else if (decision === 'explore') env.explore();
    else env.idle();
  }
}

class Env {
  constructor() { this.options = ['goal1', 'goal2']; }
  status() { return { pos: [0, 0], options: this.options }; }
  take(goal) { console.log(`taken ${goal}`); }
  explore() { console.log('exploring'); }
  idle() { console.log('idle'); }
}

const env = new Env();
const agent = new SimpleAgent('goal2');
for (let i = 0; i < 3; i++) {
  const perception = agent.perceive(env);
  const decision = agent.decide(perception);
  agent.act(decision, env);
}
```

- JavaScript patterns align with the Python examples, reinforcing how the same architecture translates across ecosystems.
- Consider using TypeScript types for clearer interfaces and better editor support.
Steps
Estimated time: 45-60 minutes
1. Define goals and environment
Clarify the agent's objective and design a minimal environment that can report its state. This first step ensures the observe-decide-act loop has concrete inputs and outputs.
Tip: Use a deterministic environment to verify loop correctness before adding stochastic behavior.
2. Prototype the Python skeleton
Create a simple agent class with perceive, decide, and act methods. Wire it to a tiny Env stub to simulate perception and action.
Tip: Add small, removable features (e.g., prints) to trace behavior during development.
3. Run a basic loop
Execute a short run that steps through perception, decision, and action. Validate that the agent progresses toward a goal or stable state.
Tip: Capture logs for each iteration to identify early stopping conditions.
4. Add error handling
Wrap calls to perception and action with guards and try/except blocks to demonstrate resilience to unexpected environment states.
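As a minimal sketch of these guards, the loop body can be wrapped so any failure degrades to a safe default. The `run_step` helper below is a name introduced here for illustration; it assumes an agent with `perceive`/`decide`/`act` methods shaped like the SimpleAgent example.

```python
# Hedged sketch: a defensive step for a SimpleAgent-style agent.
# 'idle' is used as the safe default decision when anything fails.

def run_step(agent, env):
    try:
        perception = agent.perceive(env)
    except Exception as exc:
        print(f'perception failed: {exc}; skipping this step')
        return 'idle'
    try:
        decision = agent.decide(perception)
    except Exception as exc:
        print(f'decision failed: {exc}; falling back to idle')
        decision = 'idle'
    try:
        agent.act(decision, env)
    except Exception as exc:
        # The environment may reject an action; log and carry on.
        print(f'action failed: {exc}')
    return decision
```

Catching broadly at the loop boundary keeps the agent alive; narrower exception types are preferable once you know which failures the environment can raise.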
Tip: Return to a safe default when an error occurs to prevent agent crashes.
5. Extend to JS/TS or async patterns
Translate the core loop into another language or introduce async perception to mimic real-time behavior.
Tip: Use type annotations (TypeScript) to improve maintainability and reduce runtime surprises.
6. Evaluate and iterate
Compare different decision policies and environments. Iterate on planners and heuristics to improve reliability.
Tip: Record metrics such as steps to goal, failure rate, and time per iteration.
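One way to record those metrics is a small accumulator per run. The `RunMetrics` helper below is hypothetical, introduced only to sketch the idea; the steps/goal values would come from runs of an agent like the grid example.

```python
import time

# Hypothetical helper tracking the suggested metrics:
# steps to goal, failure rate, and time per iteration.
class RunMetrics:
    def __init__(self):
        self.runs = []

    def record(self, steps, reached_goal, elapsed):
        # Store one run's outcome; guard against division by zero.
        self.runs.append({
            'steps': steps,
            'reached_goal': reached_goal,
            'seconds_per_step': elapsed / max(steps, 1),
        })

    def failure_rate(self):
        if not self.runs:
            return 0.0
        failures = sum(1 for r in self.runs if not r['reached_goal'])
        return failures / len(self.runs)


# Usage sketch: time a run and record the outcome.
metrics = RunMetrics()
start = time.monotonic()
steps, reached = 4, True   # e.g., results from one grid-agent run
metrics.record(steps, reached, time.monotonic() - start)
print(metrics.failure_rate())
```

Comparing `failure_rate` and `seconds_per_step` across decision policies gives a concrete basis for the iteration suggested in step 6.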
Prerequisites
Required
- Basic command line knowledge
Optional
- Familiarity with AI concepts (agents, plans, goals)
Keyboard Shortcuts
| Action | Description | Shortcut |
|---|---|---|
| Open command palette | Access editor commands and extensions | Ctrl+⇧+P |
| Duplicate line | Copy current line or selection | Ctrl+Alt+↓ |
| Copy | Copy selected text or code block | Ctrl+C |
| Paste | Paste into editor or terminal | Ctrl+V |
| Run current script | Compile/build actions in many editors | Ctrl+⇧+B |
Questions & Answers
What is meant by an AI agent in this guide?
An AI agent is software that perceives its environment, reasons about goals, and takes actions to achieve those goals. The guide uses a minimal observe-decide-act loop to illustrate the core components and their interactions.
Why include code examples in AI agent tutorials?
Code examples make abstract agent concepts concrete, letting developers experiment with perception, decision policies, and actions in a repeatable loop. They also help validate correctness and reveal edge cases.
Which languages are suitable for agent examples?
Python and JavaScript (or TypeScript) are common for agent examples due to readability and wide library support, but the same patterns apply in many languages with proper interfaces.
How do you test an AI agent effectively?
Use deterministic environments first, then introduce randomness. Write unit tests for perceive, decide, and act, and measure progress toward goals over multiple runs.
Test in simple, predictable environments first, then gradually add randomness and measure progress.
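The unit-testing advice can be sketched with plain asserts against the deterministic policy. The `GridPolicy` class below is a name introduced here for illustration; it re-states the `GridAgent.decide` rule from the Python example so the tests run on their own.

```python
# Minimal re-statement of the grid agent's decision rule,
# declared here so the tests are self-contained.
class GridPolicy:
    def decide(self, perception):
        x, y = perception['pos']
        if (x, y) == (2, 2):
            return 'idle'
        if x < 2:
            return 'right'
        if y < 2:
            return 'down'
        return 'idle'


def test_moves_right_first():
    assert GridPolicy().decide({'pos': (0, 0)}) == 'right'

def test_moves_down_at_right_edge():
    assert GridPolicy().decide({'pos': (2, 0)}) == 'down'

def test_idles_at_goal():
    assert GridPolicy().decide({'pos': (2, 2)}) == 'idle'


# Run the checks directly; in practice you might collect them with pytest.
test_moves_right_first()
test_moves_down_at_right_edge()
test_idles_at_goal()
print('all policy tests passed')
```

Because the policy is deterministic, each test pins one branch of the decision function; stochastic behavior would instead be tested over many runs with seeded randomness.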
What are common pitfalls when prototyping agents?
Overfitting a policy to a single environment, ignoring edge cases, and failing to handle errors gracefully can derail an agent early. Modular design helps.
Key Takeaways
- Define a clear observe-decide-act loop
- Use a simple environment to validate agent logic
- Port patterns across Python and JavaScript consistently
- Add logging and error handling early
- Experiment with incremental complexity