Step 01 - Implementing AI Agents
Welcome to Section 2: Agentic Systems
Congratulations on completing Section 1! You’ve learned how to build AI-infused applications with chatbots, RAG patterns, and function calling.
In Section 2, we’re shifting gears to explore agentic systems — autonomous AI agents that can work together to solve complex, multi-step problems. Instead of a chatbot responding to user queries, you’ll build agents that can make decisions, use tools, and collaborate in workflows.
What You’ll Learn
In this section, you will:
- Understand the difference between AI Services (Section 1) and AI Agents (Section 2)
- Build your first autonomous agent using the quarkus-langchain4j-agentic module
- Learn how agents use tools (function calling) to take actions
- See agents make decisions based on contextual information
A New Scenario: Car Management System
The Miles of Smiles car rental company needs help managing their fleet. Here’s the business process flow:
- Rental Returns: When customers return cars, the rental team records feedback about the car’s condition.
- Cleaning Requests: Based on the feedback, the system should automatically decide if the car needs cleaning.
- Cleaning Returns: After cleaning, the team provides their own feedback and returns the car.
- Fleet Availability: Clean cars with no issues return to the available pool for rental.
Your job is to build an AI agent that can analyze feedback and intelligently decide whether to request a cleaning.
AI Services vs. AI Agents
Before diving in, let’s clarify some key differences:
| Feature | AI Services (Section 1) | AI Agents (Section 2) |
|---|---|---|
| Purpose | Answer user questions | Perform autonomous tasks |
| Interaction | Reactive (responds to prompts) | Reactive and Proactive (takes actions) |
| Tool Usage | Can call tools when needed | Can call tools to accomplish goals |
| Workflows | Single-agent interactions | Multi-agent collaboration (workflow or supervisor-based) |
| Annotation | Methods use @SystemMessage and @UserMessage | One method per interface (using @Agent) |
| Use Cases | Chatbots, Q&A, content generation | Automation, decision-making, orchestration |
In this section, you’ll see how agents extend the capabilities you created in Section 1 to build sophisticated, intelligent, and autonomous systems.
Prerequisites
Before starting, ensure you have:
- Completed Section 1 (or you are familiar with Quarkus LangChain4j basics)
- JDK 21+ installed
- OpenAI API key set as the OPENAI_API_KEY environment variable
- A container runtime (Docker/Podman) for running a PostgreSQL Dev Service
Running the Application
Navigate to the section-2/step-01 directory and start the application:
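Assuming the project follows the standard Quarkus layout used throughout this workshop, Dev Mode is typically started with the Maven wrapper:

```bash
cd section-2/step-01
./mvnw quarkus:dev
```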
Once started, open your browser to http://localhost:8080.
Understanding the UI
The application has two main sections:
- Fleet Status (top): Shows all cars in the Miles of Smiles fleet with their current status.
- Returns (bottom): Displays cars that are currently rented or being cleaned.

Try It Out
Let’s see the agent in action!
Test 1: Car Needs Cleaning
Act as a rental team member processing a car return. In the Returns > Rental Return section, select a car and enter this feedback:
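For example (the exact wording is up to you; any feedback mentioning a dirty interior will do):

```
There is dog hair all over the back seat.
```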
Click the Return button.
What happens?
- The agent analyzes the feedback
- Recognizes the car needs cleaning
- Calls the CleaningTool to request interior cleaning
- Updates the car's status to AT_CLEANING
Check your terminal logs (you may need to scroll up). You should see output like:
🚗 CleaningTool result: Cleaning requested for Mercedes-Benz C-Class (2020), Car #6:
- Interior cleaning
Additional notes: Interior cleaning required due to dog hair in back seat.
Test 2: Car Is Clean
Now try returning a car that’s already clean:
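For example (again, an illustrative feedback message):

```
Car looks good, everything is clean.
```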
What happens?
- The agent analyzes the feedback
- Determines no cleaning is needed
- Returns CLEANING_NOT_REQUIRED (no tool call made)
- Updates the car's status to AVAILABLE
In your logs, you’ll see the agent’s response contains:
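```
CLEANING_NOT_REQUIRED
```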
Notice how the agent made a decision without calling the cleaning tool. This demonstrates reasoning!
Building Agents with Quarkus LangChain4j
The langchain4j-agentic module introduces the ability to create AI Agents.
It is available in Quarkus through the quarkus-langchain4j-agentic extension.
If you open the pom.xml file from the project, you will see this dependency:
<dependency>
<groupId>io.quarkiverse.langchain4j</groupId>
<artifactId>quarkus-langchain4j-agentic</artifactId>
</dependency>
Key Concepts
Agents share similarities with AI Services from Section 1:
- Declared as interfaces (implementation generated automatically)
- Use @SystemMessage to define the agent's role and behavior
- Use @UserMessage to provide request-specific context
- Can be assigned tools to perform actions
- Support both programmatic and declarative (annotation-based) definitions, although in Quarkus we recommend the declarative approach
Key Differences
- Only one method per interface can be annotated with @Agent: this is the agent's entry point
- Designed to be composed into workflows or invoked by a supervisor — agents can be combined (more on this in Step 02)
- Focus on autonomous actions rather than conversational responses
Understanding the Application Architecture

The application consists of four main components:
- CarManagementResource: REST API endpoints
- CarManagementService: Business logic and agent orchestration
- CleaningAgent: AI agent that decides if cleaning is needed
- CleaningTool: Tool that requests cleaning services
Let’s explore each component.
Component 1: REST API Endpoints
The CarManagementResource provides REST APIs to handle car returns:
/**
* REST resource for car management operations.
*/
@Path("/car-management")
public class CarManagementResource {
@Inject
CarManagementService carManagementService;
/**
* Process a car return from rental.
*
* @param carNumber The car number
* @param rentalFeedback Optional rental feedback
* @return Result of the processing
*/
@POST
@Path("/rental-return/{carNumber}")
public Response processRentalReturn(Long carNumber, @RestQuery String rentalFeedback) {
try {
String result = carManagementService.processCarReturn(carNumber, rentalFeedback, "");
return Response.ok(result).build();
} catch (Exception e) {
e.printStackTrace();
return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
.entity("Error processing rental return: " + e.getMessage())
.build();
}
}
/**
* Process a car return from cleaning.
*
* @param carNumber The car number
* @param cleaningFeedback Optional cleaning feedback
* @return Result of the processing
*/
@POST
@Path("/cleaningReturn/{carNumber}")
public Response processCleaningReturn(Long carNumber, @RestQuery String cleaningFeedback) {
try {
String result = carManagementService.processCarReturn(carNumber, "", cleaningFeedback);
return Response.ok(result).build();
} catch (Exception e) {
Log.error(e.getMessage(), e);
return Response.status(Response.Status.INTERNAL_SERVER_ERROR)
.entity("Error processing cleaning return: " + e.getMessage())
.build();
}
}
}
Key Points:
- The processRentalReturn method (endpoint /car-management/rental-return/{carNumber}): accepts feedback from the rental team
- The processCleaningReturn method (endpoint /car-management/cleaningReturn/{carNumber}): accepts feedback from the cleaning team
- Both endpoints delegate to CarManagementService.processCarReturn
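For example, assuming the application runs on the default port, the rental-return endpoint can be called directly (the web UI issues an equivalent request for you; the car number and feedback text are illustrative):

```bash
curl -X POST "http://localhost:8080/car-management/rental-return/6?rentalFeedback=Dog%20hair%20in%20the%20back%20seat"
```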
Component 2: Business Logic & Agent Invocation
The CarManagementService orchestrates the car return process:
/**
* Process a car return from any operation.
*
* @param carNumber The car number
* @param rentalFeedback Optional rental feedback
* @param cleaningFeedback Optional cleaning feedback
* @return Result of the processing
*/
@Transactional
public String processCarReturn(Long carNumber, String rentalFeedback, String cleaningFeedback) {
CarInfo carInfo = CarInfo.findById(carNumber);
if (carInfo == null) {
return "Car not found with number: " + carNumber;
}
// Process the car result
String result = cleaningAgent.processCleaning(
carInfo.make,
carInfo.model,
carInfo.year,
carNumber,
rentalFeedback != null ? rentalFeedback : "",
cleaningFeedback != null ? cleaningFeedback : "");
if (result.toUpperCase().contains("CLEANING_NOT_REQUIRED")) {
carInfo.status = CarStatus.AVAILABLE;
carInfo.persist();
}
return result;
}
Key Points:
- The CleaningAgent field is injected as a CDI bean
- In the processCarReturn method, the agent is invoked with car details and feedback. The response is checked for CLEANING_NOT_REQUIRED:
  - If found → Car marked as AVAILABLE
  - If not found → Car stays AT_CLEANING (tool was called)
This simple pattern allows you to integrate autonomous decision-making into your business logic!
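For reference, the agent is injected like any other CDI bean. Here is a minimal sketch of the surrounding service class (the @ApplicationScoped scope is an assumption; only the CleaningAgent field is the point):

```java
import jakarta.enterprise.context.ApplicationScoped;
import jakarta.inject.Inject;

@ApplicationScoped
public class CarManagementService {

    // Quarkus LangChain4j generates the CleaningAgent implementation;
    // we simply inject the interface like any other bean.
    @Inject
    CleaningAgent cleaningAgent;

    // processCarReturn(...) shown above uses this field
}
```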
Component 3: The CleaningAgent
Here’s where the magic happens — the AI agent definition:
/**
* Agent that determines what cleaning services to request.
*/
public interface CleaningAgent {
@SystemMessage("""
You handle intake for the cleaning department of a car rental company.
It is your job to submit a request to the provided requestCleaning function to take action based on the provided feedback.
Be specific about what services are needed.
If no cleaning is needed based on the feedback, respond with "CLEANING_NOT_REQUIRED".
""")
@UserMessage("""
Car Information:
Make: {carMake}
Model: {carModel}
Year: {carYear}
Car Number: {carNumber}
Feedback:
Rental Feedback: {rentalFeedback}
Cleaning Feedback: {cleaningFeedback}
""")
@Agent("Cleaning specialist. Determines what cleaning services are needed.")
@ToolBox(CleaningTool.class)
String processCleaning(
String carMake,
String carModel,
Integer carYear,
Long carNumber,
String rentalFeedback,
String cleaningFeedback);
}
Let’s break it down:
@SystemMessage
Defines the agent’s role and decision-making logic:
- Acts as the intake specialist for the cleaning department
- Should call the requestCleaning function in the CleaningTool when cleaning is needed
- Should be specific about which services to request
- Should return CLEANING_NOT_REQUIRED if no cleaning is needed
Pro Tip: Clear Instructions Matter
The system message is critical! It tells the agent:
- WHO it is (cleaning intake specialist)
- WHAT to do (submit cleaning requests)
- WHEN to act (based on feedback)
- HOW to respond (specific services or CLEANING_NOT_REQUIRED)
@UserMessage
Provides context for each request using template variables:
- Car details: {carMake}, {carModel}, {carYear}, {carNumber}
- Feedback sources: {rentalFeedback}, {cleaningFeedback}
These variables are automatically populated from the method parameters.
@Agent
Marks this as an agent method — only one per interface.
- Provides a description: “Cleaning specialist. Determines what cleaning services are needed.”
- This description can be used by other agents or systems to understand this agent’s purpose
@ToolBox
Assigns the CleaningTool to this agent:
- The agent can call methods in this tool to perform actions
- The LLM decides when and how to use the tool based on the task (function calling was covered in Section 1 of the workshop)
Method Signature
Defines the inputs and output:
- Inputs: All the context the agent needs to make decisions
- Output: String — the agent's response (either the tool result or CLEANING_NOT_REQUIRED)
No Implementation Required
Notice there’s no method body! LangChain4j automatically generates the implementation:
- Receives the inputs
- Sends the system + user messages to the LLM
- If the LLM wants to call the tool, it does so
- Returns the final response
Component 4: The CleaningTool
If you went through Section 1, you'll remember that we already covered tools and function calling for single AI services.
They work in much the same way for agents: tools enable agents to call functions that take action.
These tools can be local, like the following CleaningTool example, or remote, using the MCP protocol covered in Section 1, Step 8.
/**
* Tool for requesting cleaning operations.
*/
@Dependent
public class CleaningTool {
/**
* Requests a cleaning based on the provided parameters.
*
* @param carNumber The car number
* @param carMake The car make
* @param carModel The car model
* @param carYear The car year
* @param exteriorWash Whether to request exterior wash
* @param interiorCleaning Whether to request interior cleaning
* @param detailing Whether to request detailing
* @param waxing Whether to request waxing
* @param requestText The cleaning request text
* @return A summary of the cleaning request
*/
@Tool("Requests a cleaning with the specified options")
@Transactional
public String requestCleaning(
Long carNumber,
String carMake,
String carModel,
Integer carYear,
boolean exteriorWash,
boolean interiorCleaning,
boolean detailing,
boolean waxing,
String requestText) {
// In a real implementation, this would make an API call to a cleaning service
// or update a database with the cleaning request
// Update car status to AT_CLEANING
CarInfo carInfo = CarInfo.findById(carNumber);
if (carInfo != null) {
carInfo.status = CarStatus.AT_CLEANING;
carInfo.persist();
}
var result = generateCleaningSummary(carNumber, carMake, carModel, carYear,
exteriorWash, interiorCleaning, detailing,
waxing, requestText);
System.out.println("\uD83D\uDE97 CleaningTool result: " + result);
return result;
}
Key Points:
- @Dependent scope is required (see explanation below)
- The @Tool annotation exposes this method to agents
- The description helps the LLM understand when to use this tool
- Parameters define what information the agent must provide
- The method updates the car status to AT_CLEANING if the carInfo is not null
- The method returns a summary of the request (and prints a log message)
Why do we use @Dependent scope for the Tool?
When a tool is added to an agent, LangChain4j introspects the tool object to find methods with @Tool annotations.
The problem with other scopes:
CDI creates proxies for beans with scopes like @ApplicationScoped or @SessionScoped. These proxy objects don’t preserve the @Tool annotations, so LangChain4j can’t detect them.
The solution:
Use @Dependent scope, which doesn’t create proxies, allowing LangChain4j to see the annotations directly.
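As a quick illustration of the difference (a sketch, not code from the project):

```java
import jakarta.enterprise.context.Dependent;

// Works: no CDI client proxy is created, so LangChain4j's introspection
// sees the @Tool annotations on the methods directly.
@Dependent
public class CleaningTool {
    // @Tool methods ...
}

// Problematic: @ApplicationScoped (and other normal scopes) wrap the bean
// in a client proxy, and the proxy class does not carry the @Tool
// annotations, so the tool methods are not detected.
```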
Alternative:
If you need other CDI scopes, you can use a ToolProvider to manually register tools (not covered in this workshop).
Understanding Tool Execution Flow
Here is the sequence of actions happening when the agent is invoked:
- Agent receives car return feedback (entered by the user)
- LLM analyzes the feedback
- LLM decides to call requestCleaning (or not, depending on the feedback)
- If called, LLM determines which parameters to use:
  - Should interiorCleaning be true?
  - Should exteriorWash be true?
  - What requestText should be included?
- Tool executes and returns a result
- Agent receives the result and can respond
How It All Works Together
Let’s trace through a complete example:
Scenario: Dog Hair in Back Seat
sequenceDiagram
participant User
participant REST as CarManagementResource
participant Service as CarManagementService
participant Agent as CleaningAgent
participant LLM as OpenAI LLM
participant Tool as CleaningTool
User->>REST: POST /rental-return/6<br/>feedback: "Dog hair in back seat"
REST->>Service: processCarReturn(6, "Dog hair...", "")
Service->>Agent: processCleaning(...)
Agent->>LLM: System: You handle cleaning intake...<br/>User: Car #6, feedback: "Dog hair..."
LLM->>LLM: Analyze feedback<br/>Decision: Needs interior cleaning
LLM->>Tool: requestCleaning(<br/> carNumber: 6,<br/> interiorCleaning: true,<br/> requestText: "Dog hair removal"<br/>)
Tool->>Tool: Update car status to AT_CLEANING
Tool-->>LLM: "Cleaning requested: Interior cleaning..."
LLM-->>Agent: "Cleaning requested: Interior cleaning..."
Agent-->>Service: "Cleaning requested: Interior cleaning..."
Service->>Service: Check if contains "CLEANING_NOT_REQUIRED"<br/>No → Keep status AT_CLEANING
Service-->>REST: Result message
REST-->>User: 200 OK
Scenario: Car Looks Good
sequenceDiagram
participant User
participant REST as CarManagementResource
participant Service as CarManagementService
participant Agent as CleaningAgent
participant LLM as OpenAI LLM
User->>REST: POST /rental-return/3<br/>feedback: "Car looks good"
REST->>Service: processCarReturn(3, "Car looks good", "")
Service->>Agent: processCleaning(...)
Agent->>LLM: System: You handle cleaning intake...<br/>User: Car #3, feedback: "Car looks good"
LLM->>LLM: Analyze feedback<br/>Decision: No cleaning needed
LLM-->>Agent: "CLEANING_NOT_REQUIRED"
Agent-->>Service: "CLEANING_NOT_REQUIRED"
Service->>Service: Check if contains "CLEANING_NOT_REQUIRED"<br/>Yes → Set status to AVAILABLE
Service-->>REST: Result message
REST-->>User: 200 OK
Key Takeaways
- Agents are autonomous: They make decisions and take actions based on context.
- Tools enable actions: Agents use tools to interact with systems (databases, APIs, etc.)
- Clear prompts matter: The @SystemMessage guides the agent's decision-making
- Type-safe interfaces: No manual API calls — just define interfaces and let Quarkus LangChain4j handle the rest
- CDI integration: Agents and tools are managed beans that integrate seamlessly with Quarkus
Experiment Further
Try these experiments to deepen your understanding:
1. Test Edge Cases
Try different feedback scenarios:
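A few illustrative scenarios to try (suggestions only, not an exhaustive list):

```
Coffee spilled all over the passenger seat.
Small scratch on the rear bumper, interior is spotless.
Car smells strongly of cigarette smoke.
Everything looks fine, just low on fuel.
```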
What does the agent decide for each? Does it call the cleaning tool?
2. Modify the System Message
Edit CleaningAgent.java and change the system message. For example:
@SystemMessage("""
You are a very picky cleaning intake specialist.
Request a full detail (exterior, interior, waxing, detailing)
unless the car is absolutely perfect.
If perfect, respond with "CLEANING_NOT_REQUIRED".
""")
How does this change the agent’s behavior?
3. Add More Tool Parameters
Edit CleaningTool.java to add a tireCleaning parameter.
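A minimal sketch of what that change could look like (only the modified signature is shown; the rest of the method stays as in the existing code):

```java
@Tool("Requests a cleaning with the specified options")
@Transactional
public String requestCleaning(
        Long carNumber,
        String carMake,
        String carModel,
        Integer carYear,
        boolean exteriorWash,
        boolean interiorCleaning,
        boolean detailing,
        boolean waxing,
        boolean tireCleaning,   // new option the LLM can decide to set
        String requestText) {
    // ... same logic as before, now also taking tireCleaning into account
}
```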
Does the agent automatically learn to use it?
Troubleshooting
Error: OPENAI_API_KEY not set
Make sure you’ve exported the environment variable:
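For example, on Linux/macOS:

```bash
export OPENAI_API_KEY=<your-api-key>
```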
Then restart the application.
Tool methods not being called
- Verify the tool uses @Dependent scope
- Check that the @Tool annotation is present
- Ensure the tool is properly referenced in @ToolBox
Agent always/never calls the tool
- Review your @SystemMessage — is it clear about when to use the tool?
- Try adding more explicit instructions
- Consider providing examples in the system message
What’s Next?
In this step, you built a single autonomous agent that makes decisions and uses tools.
In Step 02, you’ll learn how to compose multiple agents into workflows — where agents collaborate to solve complex problems together!