Quick Start

This guide walks you through creating and running a simple AgentML agent. We'll define a basic agent that reads user input, uses an LLM to generate a response, and outputs the result.

Writing Your First Agent

Agent definitions are written in AgentML (.aml), an XML-based language. Below is a basic example agent document, which could be saved as agent.aml:

<?xml version="1.0" encoding="UTF-8"?>
<agentml xmlns="github.com/agentflare-ai/agentml"
       datamodel="ecmascript"
       xmlns:gemini="github.com/agentflare-ai/agentml-go/gemini">
  
  <datamodel>
    <!-- Define state variables -->
    <data id="user_input" expr="''"/>
    <data id="response" expr="''"/>
  </datamodel>

  <state id="main">
    <!-- State 1: Waiting for user input -->
    <state id="awaiting_input">
      <onentry>
        <!-- Synchronously get user input (e.g., from console or API) -->
        <assign location="user_input" expr="getUserInput()"/>
      </onentry>
      <transition target="processing"/>
    </state>

    <!-- State 2: Process input with LLM -->
    <state id="processing">
      <onentry>
        <!-- Call LLM (Gemini) to generate an event based on user input -->
        <gemini:generate model="gemini-2.0"
                        location="_event"
                        promptexpr="'Process this input: ' + user_input"/>
      </onentry>
      <!-- Expect an LLM-generated event "action.response" with schema -->
      <transition event="action.response"
                  event:schema='{"type":"object","properties":{"message":{"type":"string"}},"required":["message"]}'
                  target="responding"/>
    </state>

    <!-- State 3: Respond to user -->
    <state id="responding">
      <onentry>
        <!-- Use the LLM output (event data) to set the response -->
        <assign location="response" expr="_event.data.message"/>
        <log expr="'Response: ' + response"/>
      </onentry>
      <transition target="awaiting_input"/> <!-- Loop back for next input -->
    </state>
  </state>
</agentml>

Breaking Down the Agent

1. Agent Declaration

<agentml xmlns="github.com/agentflare-ai/agentml"
       datamodel="ecmascript"
       xmlns:gemini="github.com/agentflare-ai/agentml-go/gemini">

We declare the <agentml> root element with the AgentML namespace and an ECMAScript data model (enabling JavaScript expressions). We also use xmlns:gemini to include the Gemini LLM integration namespace.

2. Datamodel

<datamodel>
  <data id="user_input" expr="''"/>
  <data id="response" expr="''"/>
</datamodel>

We define two pieces of data: user_input to hold the latest user message and response to hold the LLM-generated response. Both start as empty strings.

3. States

awaiting_input

On entry, it calls a function getUserInput() to retrieve input and stores it in user_input. Then it immediately transitions to processing.

processing

On entry, it invokes <gemini:generate> to send a prompt to the LLM. The location="_event" means the LLM's output will be captured as the special _event. The transition listens for action.response events with a defined schema.
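
The event:schema attribute constrains the shape of the LLM-generated event: an object with a required string property message. As an illustration of the constraint itself (not the interpreter's actual validator), the same check can be sketched in plain Go:

```go
package main

import (
	"encoding/json"
	"errors"
	"fmt"
)

// validateResponseEvent checks a payload against the schema used in
// the transition above: an object whose required "message" property
// must be a string.
func validateResponseEvent(payload []byte) error {
	var data map[string]any
	if err := json.Unmarshal(payload, &data); err != nil {
		return err
	}
	msg, ok := data["message"]
	if !ok {
		return errors.New(`missing required property "message"`)
	}
	if _, ok := msg.(string); !ok {
		return errors.New(`"message" must be a string`)
	}
	return nil
}

func main() {
	fmt.Println(validateResponseEvent([]byte(`{"message":"hi"}`))) // prints <nil>
	fmt.Println(validateResponseEvent([]byte(`{"text":"hi"}`)))    // prints the missing-property error
}
```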

responding

On entry, it takes the event data (_event.data.message), assigns it to our response variable, and logs it. Finally, it transitions back to awaiting_input to await the next input, creating a loop.
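
Stripped of the AgentML markup, the three states form a simple read-process-respond cycle. The equivalent control flow can be sketched as a self-contained Go state machine, with the LLM call stubbed out:

```go
package main

import "fmt"

type state int

const (
	awaitingInput state = iota
	processing
	responding
)

// runAgentLoop drives the same awaiting_input -> processing ->
// responding cycle as the state chart, for a fixed list of inputs.
// generate stands in for the <gemini:generate> LLM call.
func runAgentLoop(inputs []string, generate func(string) string) []string {
	var responses []string
	st := awaitingInput
	var userInput, response string
	i := 0
	for len(responses) < len(inputs) {
		switch st {
		case awaitingInput:
			userInput = inputs[i] // <assign location="user_input" .../>
			i++
			st = processing
		case processing:
			response = generate(userInput) // <gemini:generate .../>
			st = responding
		case responding:
			responses = append(responses, response) // <log .../>
			st = awaitingInput                      // loop back for next input
		}
	}
	return responses
}

func main() {
	echo := func(in string) string { return "echo: " + in }
	fmt.Println(runAgentLoop([]string{"hi", "bye"}, echo))
}
```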

Running the Agent

To run an AgentML document, you use the AgentML Go API to load the .aml file and execute the state machine interpreter. Here's a Go program that runs the agent:

package main

import (
  "context"
  "github.com/agentflare-ai/agentml"
  "github.com/agentflare-ai/agentml/agent"
  _ "github.com/agentflare-ai/agentml-go/gemini" // import namespace implementations
)

func main() {
  ctx := context.Background()
  
  // Load the agent definition from file
  doc, err := agent.LoadFromFile("agent.aml") // parse XML into a Document
  if err != nil {
    panic(err)
  }
  
  // Create a new interpreter for the agent
  interp, err := agentml.NewInterpreter(ctx, doc)
  if err != nil {
    panic(err)
  }
  
  // Start the agent state machine
  if err := interp.Start(ctx); err != nil {
    panic(err)
  }
  
  // Wait for the agent to finish (if it has an end state or stops)
  <-interp.Done()
}

Code Breakdown:

Import Packages

We import the main agentml package and the agent subpackage. We also import the gemini package anonymously (_) to ensure the Gemini namespace is registered.
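
The blank import works because Go runs every imported package's init functions, even when no identifiers are used. Registration via init is a common Go pattern; a hypothetical, self-contained sketch of how a namespace package might register itself (the registry and names here are illustrative, not the library's actual API):

```go
package main

import "fmt"

// registry maps namespace URIs to handler factories. The real library
// is assumed to keep an equivalent table internally.
var registry = map[string]func() string{}

// register is what a namespace package would call from its init().
func register(uri string, factory func() string) {
	registry[uri] = factory
}

// init simulates the side effect of the blank import:
//
//	_ "github.com/agentflare-ai/agentml-go/gemini"
func init() {
	register("github.com/agentflare-ai/agentml-go/gemini", func() string {
		return "gemini handler"
	})
}

func main() {
	if f, ok := registry["github.com/agentflare-ai/agentml-go/gemini"]; ok {
		fmt.Println(f()) // prints gemini handler
	}
}
```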

Parse Agent File

We parse the agent.aml file into a DOM (document object model).

Create Interpreter

We create an interpreter with agentml.NewInterpreter(ctx, doc). The interpreter is the runtime instance of the state machine.

Start Execution

Calling interp.Start(ctx) starts the agent in its initial state. The interpreter runs concurrently; we block on <-interp.Done() to wait until the agent finishes.

Next Steps

After running this program, you should see it repeatedly prompt and log responses. The key takeaway is that integrating AgentML is as simple as loading the XML and letting the interpreter handle the event loop and LLM calls according to your defined workflow.

For more involved scenarios, consider exploring the customer_support example in the repository, which demonstrates a full multi-step conversational agent with bookings and confirmations.