Python Code for AI Foundations of Computational Agents

A practical, code-first guide showing Python techniques for AI agent foundations—perception, beliefs, intentions, and lightweight decision loops—with runnable examples and best practices for building agentic workflows.

Ai Agent Ops Team · 5 min read
Quick Answer

This guide demonstrates Python code for the AI foundations of computational agents, framing agents as stateful decision makers, outlining a minimal perception-decision-action loop, and delivering runnable examples. You’ll explore core concepts like perception, belief bases, intentions, and plan execution, with actionable Python snippets you can adapt for your projects.

Introduction to Python-based AI agents

This section introduces how to translate AI agent theory into practical Python implementations. We cover a lightweight agent skeleton and a minimal loop that captures the core cycle: perception, decision, action. The goal is to provide a clear starting point you can extend for real projects. According to Ai Agent Ops, starting with a transparent, modular agent design reduces complexity and accelerates experimentation while remaining faithful to foundational concepts like perception, beliefs, intentions, and planning.

Python
# Minimal agent skeleton in Python
class Agent:
    def __init__(self, name: str):
        self.name = name
        self.state = {}

    def perceive(self, environment: dict) -> dict:
        # Gather observations from the environment
        return environment.get("observation", {})

    def decide(self, percept: dict) -> dict:
        # Simple rule-based decision
        if percept.get("goal"):
            return {"action": "pursue", "goal": percept["goal"]}
        return {"action": "idle"}

    def act(self, decision: dict) -> str:
        return f"{decision['action']} -> {decision.get('goal', 'none')}"
Python
# Simple agent loop runner
def run_agent(agent: Agent, environment: dict, steps: int = 5):
    for i in range(steps):
        percept = agent.perceive(environment)
        decision = agent.decide(percept)
        action = agent.act(decision)
        # In a real setup, the environment would update based on the action
        environment["last_action"] = action
        print(f"Step {i+1}: {action}")
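
To smoke-test the skeleton, you can run the loop against a hand-built environment dict; the observation payload below is a hypothetical example shaped to match perceive().

Python
# Quick usage sketch (the "charge_battery" goal is illustrative)
env = {"observation": {"goal": "charge_battery"}}
agent = Agent("Demo")
run_agent(agent, env, steps=2)
# Prints: Step 1: pursue -> charge_battery
#         Step 2: pursue -> charge_battery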


Core primitives: perception, belief base, intention, and plan

AI agents rely on a few enduring primitives that you can implement in Python. Perception gathers data from the environment, beliefs store knowledge about the world, intentions express goals, and plans outline sequences of actions. This section provides compact definitions and practical code sketches to organize these concerns.

Python
from dataclasses import dataclass
from typing import Any, Dict, List

@dataclass
class Perception:
    data: Dict[str, Any]

class BeliefBase:
    def __init__(self):
        self.facts: Dict[str, Any] = {}

    def update(self, facts: Dict[str, Any]):
        self.facts.update(facts)

class Intent:
    def __init__(self, goal: str, priority: int = 1):
        self.goal = goal
        self.priority = priority

class Plan:
    def __init__(self, steps: List[str]):
        self.steps = steps
Python
# Example usage of primitives
beliefs = BeliefBase()
beliefs.update({"location": "room A", "goal": "reach door"})
intent = Intent("reach_door", priority=1)
plan = Plan(["move_forward", "turn_left", "open_door"])


Lightweight decision loop and environment bridge

The decision loop connects perception, beliefs, intent, and planning to an environment. This section demonstrates a compact bridge that feeds perceptions into a decision engine and consumes actions produced by the agent. The loop is intentionally simple to keep the model extensible for more complex environments.

Python
class Environment:
    def observe(self, agent):
        return {"goal": "exit"}

    def apply(self, action: str):
        print(f"Environment received: {action}")

class SimpleAgent(Agent):
    def __init__(self, name: str):
        super().__init__(name)

def tick(agent: SimpleAgent, env: Environment):
    # Wrap the observation so it matches the format Agent.perceive expects
    percept = agent.perceive({"observation": env.observe(agent)})
    decision = agent.decide(percept)
    action = agent.act(decision)
    env.apply(action)
    return action
Python
# Run a few ticks
env = Environment()
ag = SimpleAgent("Demo")
for _ in range(3):
    tick(ag, env)


Perception, beliefs, and planning in detail

In practice, separating concerns makes the code more maintainable. Perception translates raw observations into structured data. A BeliefBase stores persistent facts across ticks, while Intent captures the current goal. A Plan translates an intended outcome into a sequence of concrete steps. The following block shows a compact coordination pattern that you can reuse across projects.

Python
beliefs = BeliefBase()
beliefs.update({"enemy_visible": False, "battery": 90})
intent = Intent("patrol", priority=2)
plan = Plan(["scan", "move_to_waypoint", "report"])  # simple plan
Python
# Decision function sketch
def select_next_action(beliefs: BeliefBase, intent: Intent, plan: Plan) -> str:
    if beliefs.facts.get("battery", 100) < 20:
        return "return_to_base"
    if beliefs.facts.get("enemy_visible"):
        return "engage_enemy"
    return plan.steps[0] if plan.steps else "idle"
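
To verify the rule ordering, here is a short sketch that exercises the function with the beliefs, intent, and plan defined above:

Python
# Exercising the decision rules; comments show the expected results
print(select_next_action(beliefs, intent, plan))  # "scan" (first plan step)

beliefs.update({"enemy_visible": True})
print(select_next_action(beliefs, intent, plan))  # "engage_enemy" (overrides the plan)

beliefs.update({"battery": 15})
print(select_next_action(beliefs, intent, plan))  # "return_to_base" (battery check fires first)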


Building a practical grid-world example

A tiny grid-world example helps put the primitives into a concrete setting. We implement a grid environment, a simple agent loop, and a tiny reward signal to illustrate how perception, belief, intention, and planning interact in a compact scenario. This section includes a full runnable snippet you can test locally.

Python
class GridEnv:
    def __init__(self, width=5, height=5, goal=(4, 4)):
        self.width = width
        self.height = height
        self.agent_pos = [0, 0]
        self.goal = goal

    def observe(self, agent):
        dx = self.goal[0] - self.agent_pos[0]
        dy = self.goal[1] - self.agent_pos[1]
        return {"agent": tuple(self.agent_pos), "goal_delta": (dx, dy)}

    def step(self, action: str):
        x, y = self.agent_pos
        if action == "up":
            y = max(0, y - 1)
        if action == "down":
            y = min(self.height - 1, y + 1)
        if action == "left":
            x = max(0, x - 1)
        if action == "right":
            x = min(self.width - 1, x + 1)
        self.agent_pos = [x, y]
        done = tuple(self.agent_pos) == self.goal
        reward = 1.0 if done else -0.01
        return self.observe(None), reward, done

def greedy_move(obs: dict) -> str:
    # The generic Agent has no rule for grid moves, so pick a move
    # greedily from the observed distance to the goal
    dx, dy = obs["goal_delta"]
    if dx > 0:
        return "right"
    if dx < 0:
        return "left"
    if dy > 0:
        return "down"
    return "up" if dy < 0 else "idle"

# Simple interaction loop
env = GridEnv()
for step in range(10):
    obs = env.observe(None)
    action = greedy_move(obs)
    _, reward, done = env.step(action)
    if done:
        print("Reached goal!", step + 1)
        break
Python
# Simple evaluation printout
print("Final position:", env.agent_pos)


Learning signals: rewards and lightweight policy updates

An essential aspect of AI agents is learning from experience. This section shows a simple reward function and a placeholder for a learning step. You don’t need a full RL stack to get started; a tiny reward signal can help you test how agents adjust behavior over time. The code snippets illustrate how to assign rewards and plan a minimal policy improvement hook.

Python
class Learner:
    def __init__(self):
        self.memory = []

    def remember(self, episode):
        self.memory.append(episode)

    def learn(self):
        # Placeholder for a learning pass
        if not self.memory:
            return
        # In a real setup, update model/policy here
        self.memory.clear()

def reward(state, action, outcome):
    if outcome == "success":
        return 1.0
    if outcome == "failure":
        return -0.5
    return 0.0
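
To connect the pieces, here is a minimal sketch of an episode loop feeding rewards into the learner; the outcome labels are stand-in data for illustration.

Python
# Hypothetical episode loop: collect (state, action, reward) tuples, then trigger a learning pass
learner = Learner()
for i, outcome in enumerate(["success", "failure", "neutral"]):  # illustrative outcomes
    r = reward({"step": i}, "patrol", outcome)
    learner.remember(({"step": i}, "patrol", r))
learner.learn()  # clears memory; a real implementation would update a policy here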


Step-by-step practical guide to a minimal agent

Follow these steps to build a runnable Python-based agent skeleton from scratch:

  1. Initialize a small environment and a basic Agent class.
  2. Implement perception, belief updates, intent, and a simple plan.
  3. Wire a loop that ticks the agent against the environment and prints outcomes.
  4. Add a rudimentary reward signal and a tiny learning hook to emulate improvement.
  5. Run the code, observe the results, and iterate on the design. Ai Agent Ops emphasizes starting simple and iterating toward modularity.
Bash
# Shell recipe to bootstrap a project
python -m venv venv
source venv/bin/activate   # macOS/Linux
venv\Scripts\activate      # Windows
pip install --upgrade pip
pip install numpy
Python
# End-to-end skeleton usage (pseudo-implementation)
from environment import GridEnv  # hypothetical module layout
from agent import Agent

env = GridEnv()
ag = Agent("EndToEnd")
for i in range(20):
    obs = env.observe(ag)
    decision = ag.decide(obs)
    action = ag.act(decision)  # assumes act() returns an action string the environment understands
    _, reward, done = env.step(action)
    if done:
        break


Debugging tips and portability notes

Debugging AI agents requires disciplined logging, deterministic tests, and clear interfaces. This section provides practical debugging tips and notes on portability across environments. You’ll find examples for printing state summaries, implementing a small test suite, and keeping environment-specific code isolated from agent logic.

Python
import logging

logging.basicConfig(level=logging.INFO)

def debug_tick(agent, env):
    # Mirror the tick() flow, logging every stage for inspection
    percept = agent.perceive({"observation": env.observe(agent)})
    decision = agent.decide(percept)
    action = agent.act(decision)
    logging.info(f"Percept={percept}, Decision={decision}, Action={action}")
Bash
# Simple unit test scaffold
pytest -q
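
As a starting point for that test suite, here is a minimal pytest sketch against the primitives above; the file name and module layout are assumptions.

Python
# test_agent.py -- illustrative unit tests
from agent import Agent, BeliefBase, Intent, Plan, select_next_action  # hypothetical module layout

def test_belief_update():
    beliefs = BeliefBase()
    beliefs.update({"battery": 50})
    assert beliefs.facts["battery"] == 50

def test_low_battery_overrides_plan():
    beliefs = BeliefBase()
    beliefs.update({"battery": 10, "enemy_visible": True})
    assert select_next_action(beliefs, Intent("patrol"), Plan(["scan"])) == "return_to_base"

def test_idle_without_goal():
    assert Agent("T").decide({}) == {"action": "idle"}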

Steps

Estimated time: 60-90 minutes

  1. Define agent primitives

     Outline perception, belief base, intention, and plan as separate components. Create minimal Python classes to represent each primitive and establish clear interfaces between them.

     Tip: Keep interfaces minimal and explicit to ease testing.

  2. Build a tiny environment

     Implement a simple environment with observe() and step() methods. Use a grid or abstract state to enable deterministic tests.

     Tip: Prefer deterministic observations for initial debugging.

  3. Wire perception to decision

     Connect Perception outputs to BeliefBase updates and instantiate an Intent and Plan based on goals.

     Tip: Decouple perception parsing from decision logic.

  4. Create the agent loop

     Implement a tick loop that calls perceive -> decide -> act and applies actions to the environment.

     Tip: Add logging to trace the loop step-by-step.

  5. Add a reward signal

     Introduce a simple reward structure to evaluate outcomes and seed learning hooks for future extension; a consolidated sketch follows this list.

     Tip: Start with a small, interpretable reward rule.
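
Putting the five steps together, here is a minimal sketch that reuses GridEnv and the greedy_move helper from the grid-world section; the reward accounting is an illustrative assumption.

Python
# Consolidated sketch of steps 1-5
env = GridEnv(width=3, height=3, goal=(2, 2))
beliefs = BeliefBase()
total_reward = 0.0
for step in range(10):
    obs = env.observe(None)        # step 3: perception
    beliefs.update(obs)            # step 3: belief update
    action = greedy_move(obs)      # steps 1-2: decide against the environment
    _, r, done = env.step(action)  # step 4: act and advance the environment
    total_reward += r              # step 5: accumulate the reward signal
    print(f"Step {step+1}: {action}, reward={r}")
    if done:
        break
print("Total reward:", total_reward)
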
Pro Tip: Modularize the perception, belief, intent, and plan components to simplify testing.
Warning: Avoid hard-coding behavior; prefer interfaces that can be swapped for experiments.
Note: Use type hints to improve readability and maintainability of agent primitives.
Pro Tip: Write small, testable code blocks and document interfaces for agents and environments.

Prerequisites

Commands

  • Create a Python virtual environment (all platforms)
  • Activate the virtual environment (Windows vs macOS/Linux)
  • Install dependencies (basic ML libs)
  • Run the agent script (from the project root)
  • Run tests (unit tests for agent components)

See the shell recipe in the step-by-step section for the corresponding commands.

Questions & Answers

What is a computational agent in AI?

A computational agent is a software construct that senses its environment, maintains internal state (beliefs), selects goals (intent), and executes actions to achieve those goals. It can be simple or part of a larger agent architecture. This article grounds those ideas in Python with runnable examples.


Why use Python for AI agents?

Python provides clear syntax, rich libraries, and rapid iteration for prototyping AI agents. It helps you model perception, beliefs, and planning without boilerplate, enabling you to focus on the agent's decision logic and environment interactions.


Beliefs vs goals in an agent?

Beliefs are the agent's knowledge about the world, updated from perception. Goals (intent) express desired outcomes. Plans convert goals into concrete steps. Distinguishing these roles clarifies how changes in perception affect decision making and action.


How do I test AI agents in this setup?

Start with unit tests for perception parsing, belief updates, and basic decision rules. Then run end-to-end tests against a deterministic environment to verify the perceive-decide-act loop works as intended. Use simple assertions and print traces during development.


Can these patterns scale to real AI frameworks?

Yes. These primitives map to components in larger frameworks where perception feeds neural or symbolic models, beliefs are stored in a knowledge base, and plans are composed by planners. Start with the lightweight pattern and progressively replace components with scalable implementations.


What about safety and reliability?

Safety requires explicit constraints, validation of inputs, and safe action boundaries. Start with deterministic tests and guardrails in the loop. As agents grow, incorporate monitoring, fail-safes, and human-in-the-loop review.

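
As a concrete starting point for such guardrails, here is a minimal sketch of an action validator, assuming an environment with a step() method like the grid world above; the allowlist is illustrative.

Python
# Hypothetical guardrail: validate actions against an allowlist before applying them
ALLOWED_ACTIONS = {"up", "down", "left", "right", "idle"}

def safe_step(env, action: str):
    if action not in ALLOWED_ACTIONS:
        raise ValueError(f"Blocked unsafe or unknown action: {action!r}")
    return env.step(action)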

Key Takeaways

  • Define agent primitives: perception, beliefs, intent, plan
  • Build a simple perception-decision-action loop in Python
  • Bridge the agent to a minimal environment with a reward signal
  • Structure code to enable future learning and extension
  • Test components in isolation before integration
