Building a Java API that connects to LLMs with Spring AI and Ollama local models
Introduction
In the rapidly evolving world of AI, developers often need to integrate multiple AI providers into their applications. Whether you're running local models with Ollama, calling cloud services like OpenAI, or planning to add Anthropic or Google's Gemini later, a unified interface for managing these providers is crucial.
In this tutorial, we'll build a flexible, extensible AI backend using Spring Boot and Spring AI that can seamlessly switch between different AI providers. We'll implement a clean architecture that makes it easy to add new providers without changing existing code.
What We'll Build
We're going to create a REST API that:
Supports multiple AI providers through a unified interface
Allows dynamic provider and model selection per request
Implements a registry pattern for provider management
Provides proper error handling and validation
Uses Spring AI for simplified AI integration
Here's what our architecture will look like:
graph LR
    Client[Client App] --> API[REST API]
    API --> Registry[Provider Registry]
    Registry --> Ollama[Ollama Provider]
    Registry --> Future[Future Providers]
    Ollama --> LLM[Local LLMs]
Prerequisites
Before we begin, make sure you have:
Java 21 or higher installed
Maven installed
Ollama installed and running (for local AI models)
Your favorite IDE (IntelliJ IDEA, VS Code, etc.)
Step 1: Project Setup
Let's start by creating a new Spring Boot project. You can use Spring Initializr or create it manually.
1.1 Create the Project Structure
mkdir ai-backends-java
cd ai-backends-java
1.2 Create the pom.xml
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>3.5.6</version>
        <relativePath/>
    </parent>
    <groupId>com.aibackends</groupId>
    <artifactId>ai</artifactId>
    <version>0.0.1-SNAPSHOT</version>
    <name>ai</name>
    <description>AI Backends Java</description>

    <properties>
        <java.version>21</java.version>
        <spring-ai.version>1.0.2</spring-ai.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.ai</groupId>
            <artifactId>spring-ai-starter-model-ollama</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.ai</groupId>
                <artifactId>spring-ai-bom</artifactId>
                <version>${spring-ai.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>

    <build>
        <plugins>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
1.3 Create the Main Application Class
Create the directory structure and main class:
mkdir -p src/main/java/com/aibackends/ai
// src/main/java/com/aibackends/ai/AiApplication.java
package com.aibackends.ai;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class AiApplication {

    public static void main(String[] args) {
        SpringApplication.run(AiApplication.class, args);
    }
}
Step 2: Define the Provider Architecture
Now let's create the core architecture that will allow us to support multiple AI providers.
2.1 Create the Provider Interface
First, we'll define an interface that all AI providers must implement:
// src/main/java/com/aibackends/ai/provider/ChatProvider.java
package com.aibackends.ai.provider;

/**
 * Interface for AI chat providers
 */
public interface ChatProvider {

    /**
     * Get a chat response from the AI provider
     *
     * @param message The user's message
     * @param model   The model to use (provider-specific)
     * @return The AI's response
     */
    String getChatResponse(String message, String model);

    /**
     * Get the provider type
     *
     * @return The provider type enum
     */
    ProviderType getProviderType();

    /**
     * Check if the provider supports a specific model
     *
     * @param model The model name to check
     * @return true if the model is supported
     */
    boolean supportsModel(String model);

    /**
     * Get the default model for this provider
     *
     * @return The default model name
     */
    String getDefaultModel();
}
2.2 Create the Provider Type Enum
This enum will represent all supported providers:
// src/main/java/com/aibackends/ai/provider/ProviderType.java
package com.aibackends.ai.provider;

/**
 * Enum representing supported AI providers
 */
public enum ProviderType {

    OLLAMA("ollama"),
    ANTHROPIC("anthropic"),
    GEMINI("gemini");

    private final String value;

    ProviderType(String value) {
        this.value = value;
    }

    public String getValue() {
        return value;
    }

    public static ProviderType fromValue(String value) {
        for (ProviderType type : ProviderType.values()) {
            if (type.value.equalsIgnoreCase(value)) {
                return type;
            }
        }
        throw new IllegalArgumentException("Unknown provider type: " + value);
    }
}
2.3 Create the Provider Registry
The registry will manage all available providers and allow us to retrieve them dynamically:
// src/main/java/com/aibackends/ai/provider/ChatProviderRegistry.java
package com.aibackends.ai.provider;

import org.springframework.stereotype.Component;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Optional;

@Component
public class ChatProviderRegistry {

    private final Map<ProviderType, ChatProvider> providers = new HashMap<>();

    public ChatProviderRegistry(List<ChatProvider> chatProviders) {
        // Register all available providers
        for (ChatProvider provider : chatProviders) {
            providers.put(provider.getProviderType(), provider);
        }
    }

    /**
     * Get a chat provider by type
     *
     * @param providerType The provider type
     * @return The chat provider
     * @throws IllegalArgumentException if provider not found
     */
    public ChatProvider getProvider(ProviderType providerType) {
        return Optional.ofNullable(providers.get(providerType))
                .orElseThrow(() -> new IllegalArgumentException(
                        "Provider not available: " + providerType));
    }

    /**
     * Get a chat provider by string value
     *
     * @param provider The provider name
     * @return The chat provider
     */
    public ChatProvider getProvider(String provider) {
        ProviderType providerType = ProviderType.fromValue(provider);
        return getProvider(providerType);
    }

    /**
     * Check if a provider is available
     *
     * @param providerType The provider type
     * @return true if available
     */
    public boolean isProviderAvailable(ProviderType providerType) {
        return providers.containsKey(providerType);
    }

    /**
     * Get all available provider types
     *
     * @return List of available provider types
     */
    public List<ProviderType> getAvailableProviders() {
        return providers.keySet().stream().toList();
    }
}
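Before wiring up a real provider, you can sanity-check the registry with a plain unit test. The stub provider and test class below are illustrative only (spring-boot-starter-test already pulls in JUnit 5):

// src/test/java/com/aibackends/ai/provider/ChatProviderRegistryTest.java (illustrative)
package com.aibackends.ai.provider;

import org.junit.jupiter.api.Test;

import java.util.List;

import static org.junit.jupiter.api.Assertions.*;

class ChatProviderRegistryTest {

    // Minimal stub provider used only for this test
    private final ChatProvider stub = new ChatProvider() {
        @Override public String getChatResponse(String message, String model) { return "stubbed"; }
        @Override public ProviderType getProviderType() { return ProviderType.OLLAMA; }
        @Override public boolean supportsModel(String model) { return true; }
        @Override public String getDefaultModel() { return "llama3.2"; }
    };

    @Test
    void resolvesRegisteredProviderByName() {
        ChatProviderRegistry registry = new ChatProviderRegistry(List.of(stub));

        assertTrue(registry.isProviderAvailable(ProviderType.OLLAMA));
        assertEquals("llama3.2", registry.getProvider("ollama").getDefaultModel());
        // A valid provider name with no registered implementation is rejected
        assertThrows(IllegalArgumentException.class, () -> registry.getProvider("gemini"));
    }
}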
Step 3: Implement the Ollama Provider
Now let's implement our first AI provider - Ollama, which runs AI models locally.
3.1 Create the Ollama Service
// src/main/java/com/aibackends/ai/service/OllamaChatService.java
package com.aibackends.ai.service;

import com.aibackends.ai.provider.ChatProvider;
import com.aibackends.ai.provider.ProviderType;
import org.springframework.ai.chat.client.ChatClient;
import org.springframework.ai.ollama.OllamaChatModel;
import org.springframework.ai.ollama.api.OllamaOptions;
import org.springframework.stereotype.Service;

import java.util.List;

@Service
public class OllamaChatService implements ChatProvider {

    private final OllamaChatModel ollamaChatModel;

    private final List<String> supportedModels = List.of(
            "llama3.2",
            "llama3.1",
            "llama3",
            "llama2",
            "mistral",
            "mixtral",
            "codellama",
            "gemma",
            "phi3",
            "qwen2.5",
            "deepseek-coder-v2"
    );

    public OllamaChatService(OllamaChatModel ollamaChatModel) {
        this.ollamaChatModel = ollamaChatModel;
    }

    @Override
    public String getChatResponse(String message, String model) {
        // Set the model in options
        String modelToUse = model != null ? model : getDefaultModel();
        OllamaOptions options = OllamaOptions.builder()
                .model(modelToUse)
                .build();

        // Create a new ChatClient with the specified model
        var chatClient = ChatClient.builder(ollamaChatModel)
                .defaultOptions(options)
                .build();

        return chatClient.prompt()
                .user(message)
                .call()
                .content();
    }

    @Override
    public ProviderType getProviderType() {
        return ProviderType.OLLAMA;
    }

    @Override
    public boolean supportsModel(String model) {
        return supportedModels.stream()
                .anyMatch(m -> m.equalsIgnoreCase(model));
    }

    @Override
    public String getDefaultModel() {
        return "llama3.2";
    }
}
3.2 Create Configuration
// src/main/java/com/aibackends/ai/config/OllamaConfig.java
package com.aibackends.ai.config;

import org.springframework.context.annotation.Configuration;

@Configuration
public class OllamaConfig {
    // Spring AI auto-configuration handles the Ollama beans.
    // No manual configuration is needed when using spring-ai-starter-model-ollama.
}
Step 4: Create the REST API
Now let's create the REST controller that will expose our AI services.
4.1 Create the Controller
// src/main/java/com/aibackends/ai/controller/AIController.java
package com.aibackends.ai.controller;

import com.aibackends.ai.provider.ChatProvider;
import com.aibackends.ai.provider.ChatProviderRegistry;
import com.aibackends.ai.provider.ProviderType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

import java.util.List;

@RestController
@RequestMapping("/api")
public class AIController {

    private final ChatProviderRegistry providerRegistry;

    public AIController(ChatProviderRegistry providerRegistry) {
        this.providerRegistry = providerRegistry;
    }

    @PostMapping("/chat")
    public ResponseEntity<?> chat(@RequestBody ChatRequest request) {
        try {
            // Validate request
            if (request.message() == null || request.message().isBlank()) {
                return ResponseEntity.badRequest()
                        .body(new ErrorResponse("Message cannot be empty"));
            }
            if (request.provider() == null || request.provider().isBlank()) {
                return ResponseEntity.badRequest()
                        .body(new ErrorResponse("Provider must be specified"));
            }

            // Get the provider
            ChatProvider chatProvider = providerRegistry.getProvider(request.provider());

            // Validate model if specified
            if (request.model() != null && !request.model().isBlank()
                    && !chatProvider.supportsModel(request.model())) {
                return ResponseEntity.badRequest()
                        .body(new ErrorResponse("Model '" + request.model()
                                + "' is not supported by provider '" + request.provider() + "'"));
            }

            // Get the response
            String response = chatProvider.getChatResponse(
                    request.message(),
                    request.model()
            );

            return ResponseEntity.ok(new ChatResponse(
                    response,
                    request.provider(),
                    request.model() != null ? request.model() : chatProvider.getDefaultModel()
            ));
        } catch (IllegalArgumentException e) {
            return ResponseEntity.badRequest()
                    .body(new ErrorResponse(e.getMessage()));
        } catch (Exception e) {
            return ResponseEntity.internalServerError()
                    .body(new ErrorResponse("Internal server error: " + e.getMessage()));
        }
    }

    @GetMapping("/providers")
    public ResponseEntity<ProvidersResponse> getProviders() {
        List<ProviderInfo> providers = providerRegistry.getAvailableProviders().stream()
                .map(providerType -> {
                    ChatProvider provider = providerRegistry.getProvider(providerType);
                    return new ProviderInfo(
                            providerType.getValue(),
                            provider.getDefaultModel()
                    );
                })
                .toList();

        return ResponseEntity.ok(new ProvidersResponse(providers));
    }

    @GetMapping("/providers/{provider}/models")
    public ResponseEntity<?> getProviderModels(@PathVariable String provider) {
        try {
            ChatProvider chatProvider = providerRegistry.getProvider(provider);

            // For now, return a basic response. In a real implementation,
            // each provider would have a method to list available models.
            return ResponseEntity.ok(new ModelsResponse(
                    provider,
                    List.of(chatProvider.getDefaultModel()),
                    chatProvider.getDefaultModel()
            ));
        } catch (IllegalArgumentException e) {
            return ResponseEntity.badRequest()
                    .body(new ErrorResponse(e.getMessage()));
        }
    }

    // Request/Response DTOs
    public record ChatRequest(String message, String provider, String model) {}
    public record ChatResponse(String response, String provider, String model) {}
    public record ErrorResponse(String error) {}
    public record ProvidersResponse(List<ProviderInfo> providers) {}
    public record ProviderInfo(String name, String defaultModel) {}
    public record ModelsResponse(String provider, List<String> models, String defaultModel) {}
}
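One note on the /models handler above: it currently returns only the provider's default model. If you want real model listing later, one option (shown here as a sketch, not part of the tutorial code) is a default method on ChatProvider that providers override when they already know their models, as OllamaChatService does with its supportedModels list:

// Hypothetical addition to the ChatProvider interface (sketch only)
default List<String> getSupportedModels() {
    // Fall back to just the default model so existing providers keep working
    return List.of(getDefaultModel());
}

// Corresponding override in OllamaChatService
@Override
public List<String> getSupportedModels() {
    return supportedModels;
}

The getProviderModels endpoint could then return chatProvider.getSupportedModels() instead of wrapping the default model in a single-element list.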
Step 5: Configure the Application
Create the application configuration file:
# src/main/resources/application.properties
server.port=8085
spring.application.name=ai-backends

# Ollama configuration
spring.ai.ollama.base-url=http://localhost:11434
spring.ai.ollama.chat.model=llama3.2

# Logging
logging.level.com.aibackends=DEBUG
Step 6: Test the Application
6.1 Start Ollama
First, make sure Ollama is running and has a model installed:
# Install Ollama (if not already installed)
# Visit https://ollama.ai for installation instructions

# Pull a model
ollama pull llama3.2

# Start Ollama (usually starts automatically)
ollama serve
6.2 Run the Application
mvn spring-boot:run

(If you generated the project with Spring Initializr, it includes the Maven wrapper, so ./mvnw spring-boot:run works too.)
6.3 Test the Endpoints
List Available Providers
curl http://localhost:8085/api/providers | jq .
Response:
{
  "providers": [
    {
      "name": "ollama",
      "defaultModel": "llama3.2"
    }
  ]
}
Send a Chat Request
curl -X POST http://localhost:8085/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Hello! What is 2 + 2?",
    "provider": "ollama",
    "model": "llama3.2"
  }' | jq .
Response:
{
  "response": "The answer to 2 + 2 is 4.",
  "provider": "ollama",
  "model": "llama3.2"
}
Step 7: Adding New Providers
The beauty of this architecture is how easy it is to add new providers. Let's see how you would add OpenAI support:
7.1 Add the Dependency
Add to your pom.xml:
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-starter-model-openai</artifactId>
</dependency>
7.2 Create the Provider Implementation
@Service
@ConditionalOnProperty(name = "spring.ai.openai.api-key")
public class OpenAIChatService implements ChatProvider {

    private final OpenAiChatModel openAiChatModel;

    public OpenAIChatService(OpenAiChatModel openAiChatModel) {
        this.openAiChatModel = openAiChatModel;
    }

    @Override
    public String getChatResponse(String message, String model) {
        // Implementation similar to Ollama
    }

    @Override
    public ProviderType getProviderType() {
        return ProviderType.OPENAI;
    }

    // ... other methods
}
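The elided getChatResponse body mirrors the Ollama service. A minimal sketch, assuming OpenAiChatOptions exposes the same builder-style model() option that OllamaOptions does, with an illustrative default model:

// Sketch only: mirrors OllamaChatService; verify OpenAiChatOptions against your Spring AI version
@Override
public String getChatResponse(String message, String model) {
    String modelToUse = model != null ? model : getDefaultModel();

    var options = OpenAiChatOptions.builder()
            .model(modelToUse)
            .build();

    var chatClient = ChatClient.builder(openAiChatModel)
            .defaultOptions(options)
            .build();

    return chatClient.prompt()
            .user(message)
            .call()
            .content();
}

@Override
public String getDefaultModel() {
    return "gpt-3.5-turbo"; // illustrative; keep in line with your configuration
}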
7.3 Add to Provider Type Enum
public enum ProviderType {
    OLLAMA("ollama"),
    OPENAI("openai"),   // Add this
    ANTHROPIC("anthropic"),
    GEMINI("gemini");

    // ... rest of the enum
}
7.4 Configure in application.properties
# OpenAI configuration
spring.ai.openai.api-key=your-api-key-here
spring.ai.openai.chat.options.model=gpt-3.5-turbo
That's it! The provider will automatically be registered and available through the API.
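With the key in place, the same /api/chat endpoint can target OpenAI just by changing the provider field. Leaving out model lets the provider's default apply (a quick smoke test; the reply text will vary):

curl -X POST http://localhost:8085/api/chat \
  -H "Content-Type: application/json" \
  -d '{
    "message": "Hello! What is 2 + 2?",
    "provider": "openai"
  }' | jq .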
Advanced Features
Error Handling
Our implementation includes comprehensive error handling (see the example after this list):
Validation for empty messages
Provider validation
Model validation
Graceful handling of provider errors
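For example, posting an empty message returns a 400 with the ErrorResponse body defined in the controller:

curl -X POST http://localhost:8085/api/chat \
  -H "Content-Type: application/json" \
  -d '{ "message": "", "provider": "ollama" }' | jq .

{
  "error": "Message cannot be empty"
}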
Model Selection
Each request can specify a different model:
{
  "message": "Write a poem",
  "provider": "ollama",
  "model": "mistral"
}
Provider Discovery
The /api/providers endpoint allows clients to discover available providers dynamically.
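Its companion endpoint, /api/providers/{provider}/models, currently reports just the provider's default model, matching the controller implementation above:

curl http://localhost:8085/api/providers/ollama/models | jq .

{
  "provider": "ollama",
  "models": ["llama3.2"],
  "defaultModel": "llama3.2"
}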
Best Practices
Interface Segregation: The ChatProvider interface is focused and specific
Dependency Injection: Spring manages all dependencies automatically
Error Handling: All errors are handled gracefully with appropriate HTTP status codes
Extensibility: New providers can be added without modifying existing code
Configuration: Each provider can be configured independently
Conclusion
We've built a flexible, extensible AI backend that can work with multiple AI providers. The architecture we've implemented makes it easy to:
Add new AI providers without changing existing code
Switch between providers dynamically
Handle errors gracefully
Validate requests properly
Discover available providers and models
This approach gives you the flexibility to use local models for development and privacy-sensitive applications while being able to switch to cloud providers for production or when more powerful models are needed.
Happy coding! 🚀