
Quick Start

Build your first AgentML agent in just 5 minutes. This guide walks you through creating a simple chatbot that demonstrates the core concepts of AgentML and W3C SCXML.

Prerequisites

Make sure you have installed agentmlx before proceeding.

Your First Agent

Create a file called chatbot.aml:

<agentml xmlns="github.com/agentflare-ai/agentml"
       datamodel="ecmascript"
       xmlns:gemini="github.com/agentflare-ai/agentml-go/gemini">
 
  <datamodel>
    <data id="user_input" expr="''" />
    <data id="response" expr="''" />
  </datamodel>
 
  <state id="awaiting_input">
    <transition event="user.message" target="processing">
      <assign location="user_input" expr="_event.data.message" />
    </transition>
  </state>
 
  <state id="processing">
    <onentry>
      <gemini:generate
        model="gemini-2.0-flash-exp"
        location="_event"
        promptexpr="'You are a helpful assistant. User said: ' + user_input" />
    </onentry>
 
    <transition event="action.response" target="responding">
      <assign location="response" expr="_event.data.message" />
    </transition>
  </state>
 
  <state id="responding">
    <onentry>
      <log expr="'Bot: ' + response" />
    </onentry>
    <transition target="awaiting_input" />
  </state>
</agentml>

Understanding the Code

Let's break down each part of this agent:

Root Element

<agentml xmlns="github.com/agentflare-ai/agentml"
       datamodel="ecmascript"
       xmlns:gemini="github.com/agentflare-ai/agentml-go/gemini">
  • <agentml> is the root element for AgentML files (extends W3C SCXML's <scxml>)
  • xmlns declares the default namespace
  • datamodel="ecmascript" specifies ECMAScript (JavaScript) as the expression language
  • xmlns:gemini imports the Gemini LLM namespace

Data Model

<datamodel>
  <data id="user_input" expr="''" />
  <data id="response" expr="''" />
</datamodel>

The datamodel holds the agent's state variables. Think of this as the agent's memory.
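Conceptually, the datamodel is a mutable key/value store scoped to the agent. As an analogy (ours, not how agentmlx is implemented internally), it behaves like a plain Python dict:

```python
# Each <data id="..."> declaration is a key with an initial value,
# and each <assign location="..."> is a write to that key.
datamodel = {"user_input": "", "response": ""}  # <data> declarations
datamodel["user_input"] = "Hello!"              # <assign location="user_input" .../>
print(datamodel["user_input"])                  # Hello!
```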

State Machine

<state id="awaiting_input">
  <transition event="user.message" target="processing">
    <assign location="user_input" expr="_event.data.message" />
  </transition>
</state>

States define what the agent is doing. Transitions move between states when events occur.
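If state machines are new to you, the same three-state flow can be sketched outside AgentML. Here is a minimal Python illustration; the state and event names mirror the agent above, but the dispatch logic is our simplification, not agentmlx internals:

```python
# Transition table for the chatbot: (current_state, event) -> next_state.
# None stands in for the eventless transition in the "responding" state.
TRANSITIONS = {
    ("awaiting_input", "user.message"):  "processing",
    ("processing",     "action.response"): "responding",
    ("responding",     None):            "awaiting_input",
}

def step(state, event):
    """Return the next state for an event; stay put if nothing matches."""
    return TRANSITIONS.get((state, event), state)

state = "awaiting_input"
state = step(state, "user.message")     # -> "processing"
state = step(state, "action.response")  # -> "responding"
state = step(state, None)               # -> "awaiting_input"
print(state)                            # awaiting_input
```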

LLM Integration

<gemini:generate
  model="gemini-2.0-flash-exp"
  location="_event"
  promptexpr="'You are a helpful assistant. User said: ' + user_input" />

The <gemini:generate> action calls the Gemini LLM. Results are stored in _event.

Running Your Agent

Set your Gemini API key:

export GEMINI_API_KEY=your_api_key_here

Run the agent:

agentmlx run chatbot.aml --initial-event '{"type":"user.message","data":{"message":"Hello!"}}' --verbose

You should see output showing the agent's state transitions and the LLM response.

Adding Schema Validation

Let's enhance our agent with event schema validation to ensure type safety:

<agentml xmlns="github.com/agentflare-ai/agentml"
       datamodel="ecmascript"
       xmlns:gemini="github.com/agentflare-ai/agentml-go/gemini">
 
  <datamodel>
    <data id="user_input" expr="''" />
    <data id="response" expr="''" />
  </datamodel>
 
  <state id="awaiting_input">
    <transition event="user.message"
                event:schema='{
                  "type": "object",
                  "description": "User message event",
                  "properties": {
                    "message": {
                      "type": "string",
                      "description": "The user's message text"
                    }
                  },
                  "required": ["message"]
                }'
                target="processing">
      <assign location="user_input" expr="_event.data.message" />
    </transition>
  </state>
 
  <state id="processing">
    <onentry>
      <gemini:generate
        model="gemini-2.0-flash-exp"
        location="_event"
        promptexpr="'You are a helpful assistant. User said: ' + user_input" />
    </onentry>
 
    <transition event="action.response"
                event:schema='{
                  "type": "object",
                  "description": "LLM response event",
                  "properties": {
                    "message": {
                      "type": "string",
                      "description": "The assistant's response"
                    }
                  },
                  "required": ["message"]
                }'
                target="responding">
      <assign location="response" expr="_event.data.message" />
    </transition>
  </state>
 
  <state id="responding">
    <onentry>
      <log expr="'Bot: ' + response" />
    </onentry>
    <transition target="awaiting_input" />
  </state>
</agentml>

The event:schema attribute validates incoming events against JSON Schema, ensuring data integrity.
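Independent of agentmlx, you can sanity-check an event payload against the same shape. The sketch below hand-rolls the two constraints from the user.message schema above (a required "message" key of type string) rather than pulling in a full JSON Schema library:

```python
# Hand-rolled check mirroring the user.message event schema:
# "message" is required and must be a string.
def valid_user_message(data: dict) -> bool:
    return isinstance(data.get("message"), str)

print(valid_user_message({"message": "Hello!"}))  # True: matches the schema
print(valid_user_message({"msg": "Hello!"}))      # False: "message" is missing
print(valid_user_message({"message": 42}))        # False: wrong type
```

A full validator (e.g. any JSON Schema implementation) would also enforce the top-level "type": "object" and reject non-dict payloads.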

Building a Multi-Intent Agent

Let's create a more sophisticated agent that can handle different user intents:

<agentml xmlns="github.com/agentflare-ai/agentml"
       datamodel="ecmascript"
       xmlns:gemini="github.com/agentflare-ai/agentml-go/gemini">
 
  <datamodel>
    <data id="user_input" expr="''" />
    <data id="response" expr="''" />
  </datamodel>
 
  <state id="awaiting_input">
    <transition event="user.message" target="classify_intent">
      <assign location="user_input" expr="_event.data.message" />
    </transition>
  </state>
 
  <state id="classify_intent">
    <onentry>
      <gemini:generate
        model="gemini-2.0-flash-exp"
        location="_event"
        promptexpr="'Classify the intent as greeting, question, or farewell: ' + user_input" />
    </onentry>
 
    <transition event="intent.greeting" target="handle_greeting" />
    <transition event="intent.question" target="handle_question" />
    <transition event="intent.farewell" target="handle_farewell" />
    <transition target="handle_unknown" />
  </state>
 
  <state id="handle_greeting">
    <onentry>
      <assign location="response" expr="'Hello! How can I help you today?'" />
    </onentry>
    <transition target="responding" />
  </state>
 
  <state id="handle_question">
    <onentry>
      <gemini:generate
        model="gemini-2.0-flash-exp"
        location="_event"
        promptexpr="'Answer this question: ' + user_input" />
    </onentry>
    <transition event="action.response" target="responding">
      <assign location="response" expr="_event.data.message" />
    </transition>
  </state>
 
  <state id="handle_farewell">
    <onentry>
      <assign location="response" expr="'Goodbye! Have a great day!'" />
    </onentry>
    <transition target="final_state" />
  </state>
 
  <state id="handle_unknown">
    <onentry>
      <assign location="response" expr="'I\'m not sure how to help with that.'" />
    </onentry>
    <transition target="responding" />
  </state>
 
  <state id="responding">
    <onentry>
      <log expr="'Bot: ' + response" />
    </onentry>
    <transition target="awaiting_input" />
  </state>
 
  <final id="final_state" />
</agentml>

Validating Your Agent

Before running your agent, validate it for errors:

agentmlx validate chatbot.aml

The validator provides compiler-like error messages to help you fix issues:

chatbot.aml:15:5: WARNING[W340] State 'processing' has only conditional transitions and may deadlock
  hint: Add an unconditional fallback transition
  hint: Example: <transition target="error_state" />

Debugging with Snapshots

Save runtime snapshots to debug your agent's execution:

agentmlx run chatbot.aml \
  --initial-event '{"type":"user.message","data":{"message":"Hello!"}}' \
  --save-snapshots ./debug \
  --snapshot-interval 1

This creates XML snapshots showing the agent's state at each step.

Testing Your Agent

Run W3C conformance tests to ensure your agent follows SCXML standards:

agentmlx test conformance

Next Steps

Congratulations! You've built your first AgentML agent. Here's what to explore next:

Learn Core Concepts

  • Event-Driven LLM - Understand event-based architecture
  • State Machines - Build complex workflows
  • Events & Schemas - Structure your data
  • Token Efficiency - Optimize LLM costs

Explore Architecture

  • Document Structure - Deep dive into AgentML syntax
  • Namespace System - Extend functionality
  • Interpreter - How agentmlx works
  • I/O Processors - External communications

Add Extensions

  • Gemini Extension - Google's Gemini models
  • Ollama Extension - Local LLM integration
  • Memory Extension - Vector search and graph database
  • Custom Extensions - Build your own

Deploy Your Agent

  • Docker Deployment - Containerize your agent
  • Self-Hosted - Run on your infrastructure
  • Vercel - Deploy to the edge

Example Projects

Check out complete example agents:

  • Customer Support Bot: examples/customer_support/
  • Flight Booking Agent: examples/flight_booking/
  • Data Analysis Agent: Multi-step analysis workflows
  • RAG Application: Retrieval-augmented generation

Find them at github.com/agentflare-ai/agentml/tree/main/examples


Questions? Join the GitHub Discussions for help and community support.

© 2025 Agentflare, Inc.