
When you watch an AI agent edit files, run commands, recover from errors, and solve tasks step by step, it feels like magic. But it isn't.
The truth is simple: an agent is just a large language model running in a loop, with tools it can choose to use.
If you can write a while loop in Python, you can build an agent.
This guide walks through the process step by step. We start with a basic Gemini 3 call, then progressively add memory, tools, and control flow until we end up with a fully working agent.
Traditional software follows fixed paths:
Step A → Step B → Step C
Agents are different. An agent uses a language model to decide the control flow dynamically based on a goal.
At a minimum, every agent has four parts: a model, memory (the conversation so far), tools, and a control loop.
Nearly every agent follows the same loop: the model reads the conversation, decides whether to call a tool, the tool runs, the result is fed back into the context, and the cycle repeats until the model produces a final answer.
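In code, the shape of that loop looks roughly like this. This is only a sketch, not the implementation we build below; decide_next_step and run_tool are placeholders standing in for the model call and the tool execution.

def agent_loop(goal):
    context = [goal]
    while True:
        step = decide_next_step(context)          # the language model chooses what to do next
        if step.is_final_answer:
            return step.answer                    # done: hand the result back to the user
        result = run_tool(step.tool, step.args)   # act on the environment
        context.append(result)                    # feed the result back to the model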
Before we build an agent, we start with a simple abstraction that maintains conversation state. This is not an agent yet. It has memory, but no ability to act.
from google import genai

class Agent:
    def __init__(self, model: str):
        self.model = model
        self.client = genai.Client()
        self.contents = []

    def run(self, message: str):
        self.contents.append({
            "role": "user",
            "parts": [{"text": message}]
        })
        response = self.client.models.generate_content(
            model=self.model,
            contents=self.contents
        )
        self.contents.append(response.candidates[0].content)
        return response.text
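A quick way to see the memory at work (the model ID is a placeholder, as elsewhere in this guide):

agent = Agent(model="GEMINI_3_MODEL_ID")  # placeholder: substitute a real Gemini model ID
print(agent.run("My favorite language is Python."))
print(agent.run("What is my favorite language?"))  # answered from the stored conversation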
At this stage, the model can remember prior messages, but it has no hands and no eyes. It cannot interact with the environment.
To turn this into an agent, we add tools.
A tool has two parts: a declaration (the schema the model sees) and a function (the code that actually runs).
read_file_definition = {
    "name": "read_file",
    "description": "Reads a file and returns its contents.",
    "parameters": {
        "type": "object",
        "properties": {
            "file_path": {
                "type": "string",
                "description": "Path to the file to read"
            }
        },
        "required": ["file_path"]
    }
}
Clear naming and precise descriptions are critical. The model relies on these descriptions to decide when to use a tool.
def read_file(file_path: str) -> str:
    with open(file_path, "r") as f:
        return f.read()
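The same pattern extends to any capability you want to expose. As an illustration only (this list_files tool is not part of the original example), a second tool might look like this:

import os

list_files_definition = {
    "name": "list_files",
    "description": "Lists the entries in a directory.",
    "parameters": {
        "type": "object",
        "properties": {
            "directory": {
                "type": "string",
                "description": "Path to the directory to list"
            }
        },
        "required": ["directory"]
    }
}

def list_files(directory: str) -> list:
    return os.listdir(directory)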
Now we register tools and pass them to the model.
from google.genai import types

tools = {
    "read_file": {
        "definition": read_file_definition,
        "function": read_file
    }
}
Inside the agent:
config = types.GenerateContentConfig(
    tools=[
        types.Tool(
            function_declarations=[tool["definition"] for tool in tools.values()]
        )
    ]
)
At this point, the model can request a tool call, but nothing happens yet. We still need to close the loop.
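To see what that looks like, here is a single-turn sketch that reuses the config and tools defined above (the model ID and the file name are placeholders):

client = genai.Client()

response = client.models.generate_content(
    model="GEMINI_3_MODEL_ID",  # placeholder: substitute a real Gemini model ID
    contents="What is in notes.txt?",
    config=config,
)

# The model does not execute anything itself; it only requests a call.
if response.function_calls:
    call = response.function_calls[0]
    print(call.name, call.args)  # e.g. read_file {'file_path': 'notes.txt'}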
An agent is not about generating one tool call. It’s about generating a sequence of tool calls and reacting to the results.
class Agent:
    def __init__(self, model, tools, system_instruction):
        self.model = model
        self.client = genai.Client()
        self.contents = []
        self.tools = tools
        self.system_instruction = system_instruction

    def run(self, message=""):
        # Only append a user turn when there is a new message; when the loop
        # continues after a tool call, the tool results are already in contents.
        if message:
            self.contents.append({
                "role": "user",
                "parts": [{"text": message}]
            })
        response = self.client.models.generate_content(
            model=self.model,
            contents=self.contents,
            config=types.GenerateContentConfig(
                system_instruction=self.system_instruction,
                tools=[types.Tool(
                    function_declarations=[tool["definition"] for tool in self.tools.values()]
                )]
            )
        )
        # Append the model's turn exactly as returned (this also preserves
        # thought signatures, discussed below).
        self.contents.append(response.candidates[0].content)
        if response.function_calls:
            for call in response.function_calls:
                tool = self.tools.get(call.name)
                result = tool["function"](**call.args)
                self.contents.append({
                    "role": "user",
                    "parts": [{
                        "functionResponse": {
                            "name": call.name,
                            # The API expects a JSON object here, so wrap the value.
                            "response": {"result": result}
                        }
                    }]
                })
            return self.run()  # continue the loop with the tool results in context
        return response
Gemini 3 uses thought signatures to preserve reasoning state across calls. These must be returned exactly as received when looping, or tool calls may fail.
If you use the SDK’s chat/session abstractions, this is handled for you. If you manually manage message parts, you must preserve them carefully.
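This is why the loop above appends response.candidates[0].content unchanged instead of rebuilding the model's turn from its text:

# Correct: keep the returned Content intact, including any thought signatures.
self.contents.append(response.candidates[0].content)

# Risky: reconstructing the turn from text alone drops function calls and
# thought signatures, which can break subsequent tool-calling turns.
# self.contents.append({"role": "model", "parts": [{"text": response.text}]})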
Once you have the loop, creating a CLI agent is trivial.
agent = Agent(
    model="GEMINI_3_MODEL_ID",
    tools=tools,
    system_instruction="You are a helpful coding assistant."
)

print("Agent ready. Type 'exit' to quit.")

while True:
    user_input = input("You: ")
    if user_input.lower() in ["exit", "quit"]:
        break
    response = agent.run(user_input)
    print("Agent:", response.text)
Here are a few things to consider when creating your agent.
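One common consideration is a guardrail on how long the loop can run. Here is a sketch of Agent.run rewritten iteratively with a cap on tool-call rounds; the max_iterations parameter is an assumption layered on the class above, not part of the original code.

def run(self, message="", max_iterations=10):
    if message:
        self.contents.append({"role": "user", "parts": [{"text": message}]})

    config = types.GenerateContentConfig(
        system_instruction=self.system_instruction,
        tools=[types.Tool(
            function_declarations=[t["definition"] for t in self.tools.values()]
        )]
    )

    response = None
    for _ in range(max_iterations):
        response = self.client.models.generate_content(
            model=self.model, contents=self.contents, config=config
        )
        self.contents.append(response.candidates[0].content)

        if not response.function_calls:
            return response  # final answer: stop looping

        for call in response.function_calls:
            tool = self.tools.get(call.name)
            result = tool["function"](**call.args)
            self.contents.append({
                "role": "user",
                "parts": [{"functionResponse": {
                    "name": call.name,
                    "response": {"result": result}
                }}]
            })

    return response  # hit the cap; return the last response as-is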
Building an agent with Gemini 3 shows how powerful modern AI has become. With the right loop, tools, and guardrails, you can create an agent that reasons, takes action, and solves multi-step problems.
But when it comes to customer support, most teams don’t want to engineer and maintain agent infrastructure from scratch.
They want an AI agent that works out of the box.
That’s where Helply comes in.
Helply is a self-learning AI customer support agent designed specifically for SaaS and e-commerce businesses.
It integrates directly with your help desk and can automatically resolve over 70% of Tier-1 support inquiries, 24/7, without human intervention.
Unlike basic chatbots that only answer questions, Helply is built on true agent architecture. It can understand intent, take real actions, and follow strict guardrails so responses stay accurate, safe, and on brand.
Helply continuously learns from your support tickets, knowledge base articles, and internal documentation to improve its responses and draft customer-ready replies over time.
It’s ideal for lean or scaling support teams that want to reduce ticket volume, lower support costs, and improve customer satisfaction without hiring more agents.
Helply also includes a built-in Knowledge Base Concierge that reviews support tickets to identify missing or outdated help content, helping you keep your documentation accurate and effective.
Key features of Helply include:
Helply is what that architecture looks like when it’s applied correctly to customer support.
Create your AI customer support agent with Helply today and transform how your team handles support!