
Building an AI-Powered Semantic Search with .NET Aspire, Qdrant, and OpenAI

6 min read · Sep 22, 2025

A step-by-step guide to integrating a high-performance vector database and modern language models into a .NET microservices app to create a search that understands meaning, not just text.


As developers, we know that search is the backbone of almost every application we build. But for decades, search has been a frustrating game of keyword matching. If your user searches for “something to keep my ears warm in winter” but your product is named “fleece beanie,” they’re likely to get zero results.


This is where Generative AI changes everything.

By leveraging vector embeddings — numerical representations of meaning — we can build a semantic search engine that understands the intent behind a user’s query. In this article, I’ll take you on a step-by-step journey to build exactly that. We will take a standard .NET e-commerce microservices application and infuse it with a powerful, AI-driven semantic search feature.
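To make “numerical representations of meaning” concrete, here is a minimal sketch (plain C#, no libraries) of the cosine-similarity scoring that vector search is built on. Qdrant computes this server-side at scale; this is only to show the idea:

```csharp
// Cosine similarity between two embedding vectors:
// ~1.0 = pointing the same direction (similar meaning),
// ~0.0 = unrelated. Embedding models map text to vectors so that
// semantically related texts land close together in this space.
static float CosineSimilarity(ReadOnlySpan<float> a, ReadOnlySpan<float> b)
{
    float dot = 0, magA = 0, magB = 0;
    for (int i = 0; i < a.Length; i++)
    {
        dot += a[i] * b[i];
        magA += a[i] * a[i];
        magB += b[i] * b[i];
    }
    return dot / (MathF.Sqrt(magA) * MathF.Sqrt(magB));
}
```

This is why “ears warm in winter” can match “fleece beanie”: the two phrases produce nearby vectors even though they share no keywords.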

We’ll use a cutting-edge stack:

  • .NET Aspire for orchestrating our distributed system.
  • Qdrant as our high-performance, open-source vector database.
  • OpenAI’s models (via GitHub Models) for generating the embeddings that power our search.

By the end, you’ll have a complete blueprint for building modern, intelligent search experiences in your own .NET applications.

The Starting Point: A Distributed eShop with .NET Aspire

We’re not starting from scratch. Our foundation is a well-architected eShop application built with a microservices approach. It has separate services for the Catalog, Basket, and Ordering, along with a WebApp frontend built in Blazor.

The entire system is orchestrated by .NET Aspire. Think of Aspire as the conductor for our orchestra. Its .AppHost project is responsible for launching all our services, including the databases, and managing their configurations and connections. This gives us a robust, cloud-native development experience right on our local machine.

Our mission is to upgrade the Catalog service's basic keyword search with our new AI capabilities.

Step 1: Integrating Qdrant into the Aspire Environment

Before we can store vectors, we need a vector database. We’ll use Qdrant and add it directly to our Aspire application.

Hosting Integration in the .AppHost

First, we install the Aspire.Hosting.Qdrant package into our .AppHost project. Then, in the AppHost's Program.cs, we tell Aspire to run Qdrant for us:

// In eShop.AppHost/Program.cs

var builder = DistributedApplication.CreateBuilder(args);

// Add Qdrant as a containerized backing service
var vectordb = builder.AddQdrant("vectordb")
    .WithDataVolume();

// Get a reference to the Catalog API project
var catalogApi = builder.AddProject<Projects.Catalog_Api>("catalog-api")
    .WithReference(vectordb); // Give the Catalog a reference to Qdrant

builder.Build().Run();

With these few lines, Aspire will now:

  1. Download and run the official Qdrant Docker container.
  2. Create a persistent volume so our data isn’t lost on restart.
  3. Automatically inject the connection string for vectordb into our catalog-api service.
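On the receiving end, that injected value is just ordinary .NET configuration. A quick sketch of what the Catalog service sees (the exact endpoint and key format can vary by Aspire version, so treat the example value as illustrative):

```csharp
// Inside Catalog.Api/Program.cs, before the Aspire client integration
// takes over, you could inspect the injected connection string directly:
var builder = WebApplication.CreateBuilder(args);

// The name matches the resource name used in the AppHost ("vectordb").
var qdrantConnection = builder.Configuration.GetConnectionString("vectordb");
// Illustrative shape: "Endpoint=http://localhost:6334;Key=<generated-api-key>"
```

In practice you never parse this yourself; the `AddQdrantClient` call shown in the next step reads it for you.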

Step 2: Client-Side Setup in the Catalog Service

Now we need to teach our Catalog service how to talk to Qdrant and how to create embeddings. We'll install two key packages into the Catalog.Api project:

  • Aspire.Qdrant.Client
  • Microsoft.SemanticKernel.Connectors.Qdrant

With these in place, we open Catalog.Api/Program.cs and register all the necessary services for our AI pipeline.

// In Catalog.Api/Program.cs

// 1. Register the client for the Qdrant database
builder.AddQdrantClient("vectordb");

// 2. Register a strongly-typed collection for our product vectors
//    (the key type must match ProductVector.Id, which is ulong)
builder.Services.AddQdrantCollection<ulong, ProductVector>("product-vectors");

// 3. Register the AI clients for Chat and Embeddings
builder.AddOpenAIClient("openai"); // Reads connection string from AppHost

This dependency injection setup provides our application with all the tools it needs: a connection to Qdrant, a typed “collection” to work with our product data, and the clients for our OpenAI models.
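The embedding generator itself also needs to be registered so that `ProductAIService` can inject an `IEmbeddingGenerator<string, Embedding<float>>`. A hedged sketch using Microsoft.Extensions.AI's OpenAI adapter follows; the exact extension-method names have shifted across preview versions of these packages, so check the version you have installed:

```csharp
// A sketch, not the only wiring option: expose OpenAI's
// text-embedding-3-small as an IEmbeddingGenerator for DI.
// "OpenAI:ApiKey" is an assumed configuration key for this example.
builder.Services.AddSingleton<IEmbeddingGenerator<string, Embedding<float>>>(sp =>
    new OpenAIClient(builder.Configuration["OpenAI:ApiKey"]!)
        .GetEmbeddingClient("text-embedding-3-small")
        .AsIEmbeddingGenerator());
```

With this in place, the `_embeddingGenerator.GenerateVectorAsync(...)` calls shown later resolve against the same model whose 1536-dimension output our Qdrant collection is sized for.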

Step 3: Modeling Our Vector Data

We need a C# class to represent the data we’ll store in Qdrant. A key architectural choice is to create a new ProductVector class instead of modifying the existing Product entity. This keeps our AI concerns neatly isolated.

// In Catalog.Api/Models/ProductVector.cs
public class ProductVector
{
    [VectorStoreKey]
    public ulong Id { get; set; } // Qdrant requires ulong or Guid for keys

    [VectorStoreData]
    public string Name { get; set; } = string.Empty;

    // ... other metadata properties

    [VectorStoreVector(1536)] // Must match our embedding model's output size
    public ReadOnlyMemory<float> Vector { get; set; }
}

We pay close attention to the details: Id is a ulong to match Qdrant's requirements, and Dimensions is set to 1536 to match the output of OpenAI's text-embedding-3-small model.

Step 4: Building the AI Service Layer

All our core logic will live in a new ProductAIService. This class will be responsible for two key jobs: populating the database and performing searches.

Populating the Database

The InitEmbeddingsAsync method reads products from our main PostgreSQL database, generates an embedding for each one, and "upserts" it into Qdrant.

// In Catalog.Api/Services/ProductAIService.cs
private async Task InitEmbeddingsAsync()
{
    await _productVectorCollection.EnsureCollectionExistsAsync();

    var products = await _dbContext.Products.ToListAsync();
    foreach (var product in products)
    {
        var productInfo = $"Product Name: {product.Name}, Description: {product.Description}";

        var productVector = new ProductVector
        {
            Id = (ulong)product.Id,
            Name = product.Name,
            // ... map other properties
            Vector = await _embeddingGenerator.GenerateVectorAsync(productInfo)
        };

        await _productVectorCollection.UpsertAsync(productVector);
    }
}

Performing the Search

The SearchProductsAsync method implements the "Retrieval" part of our RAG workflow.

// In Catalog.Api/Services/ProductAIService.cs
public async Task<IEnumerable<Product>> SearchProductsAsync(string query)
{
    // Ensure the database is populated (a pattern for development)
    if (!await _productVectorCollection.CollectionExistsAsync())
    {
        await InitEmbeddingsAsync();
    }

    // 1. Generate an embedding for the user's query
    var queryEmbedding = await _embeddingGenerator.GenerateVectorAsync(query);

    // 2. Search Qdrant for the top 5 most similar product vectors
    var results = _productVectorCollection.SearchAsync(queryEmbedding, 5);

    // 3. Retrieve the full product details from PostgreSQL for the matching IDs
    var products = new List<Product>();
    await foreach (var resultItem in results)
    {
        var product = await _dbContext.Products.FindAsync((int)resultItem.Record.Id);
        if (product is not null)
        {
            products.Add(product);
        }
    }
    return products;
}

Step 5: Building the API and Frontend

With the service logic in place, the final steps are to expose it via a Minimal API endpoint in ProductEndpoints.cs and then build the UI in our Blazor WebApp.

The API Endpoint

We create a new /aisearch/{query} endpoint that simply injects our ProductAIService and calls its search method.

// In Catalog.Api/Endpoints/ProductEndpoints.cs
group.MapGet("/aisearch/{query}", async (string query, ProductAIService service) =>
{
    var products = await service.SearchProductsAsync(query);
    return Results.Ok(products);
});

The Blazor UI

In our Products.razor page, we add a search box and a toggle switch. The UI calls a unified method in our CatalogApiClient which, based on the toggle's state, decides whether to hit our new /aisearch endpoint or the old /search endpoint.

<div class="input-group">
    <input type="text" class="form-control" @bind="searchTerm" />
    <button class="btn btn-primary" @onclick="DoSearch">Search</button>
</div>
<div class="form-check form-switch">
    <InputCheckbox class="form-check-input" @bind-Value="aiSearch" />
    <label class="form-check-label">Use Semantic Search</label>
</div>
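The unified `CatalogApiClient` method the article describes is not shown, so here is a hedged sketch of what it could look like; the route prefix `/api/catalog` and the field name `_httpClient` are assumptions for illustration, not the article's actual code:

```csharp
// In WebApp/Services/CatalogApiClient.cs (sketch).
// Requires System.Net.Http.Json for GetFromJsonAsync.
public async Task<List<Product>> SearchProductsAsync(string query, bool aiSearch)
{
    // Route choice is the only thing the toggle changes;
    // both endpoints return the same Product shape.
    var route = aiSearch
        ? $"/api/catalog/aisearch/{Uri.EscapeDataString(query)}"
        : $"/api/catalog/search/{Uri.EscapeDataString(query)}";

    return await _httpClient.GetFromJsonAsync<List<Product>>(route) ?? [];
}
```

Keeping the branching in one client method means the Blazor page's `DoSearch` handler stays identical regardless of which search mode is active.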

The Final Result: Search That Understands

After running the full Aspire solution, we can finally test our work.

A traditional keyword search for “something for rainy days” fails, returning no results.

But when we toggle on “Use Semantic Search” and run the exact same query…


…it works! The system understood the intent behind our query and correctly identified the Rain Jacket as the most semantically relevant product. This is the power of building with a modern .NET AI stack.

Conclusion

Integrating Generative AI into .NET applications is no longer a futuristic concept; it’s a practical reality. By combining the power of .NET Aspire for orchestration, Qdrant for high-performance vector search, and OpenAI’s incredible language models, we’ve transformed a simple e-commerce site into an intelligent platform that provides a truly modern user experience.

The patterns and tools you’ve seen here are the building blocks for the next generation of software. Now it’s your turn to build.

Step-by-step Development w/ Udemy Course

A comprehensive, hands-on journey for .NET developers into the world of LLMs, RAG, and Vector Search with OpenAI, Ollama, and the modern .NET AI stack.


[Click Here to Enroll Now and Get Your Special Discount!]

By the end of this journey, you’ll have the tools, code, and confidence to build the next generation of intelligent, GenAI-powered applications in .NET.

Written by Mehmet Ozkaya

Software Architect | Udemy Instructor | AWS Community Builder | Cloud-Native and Serverless Event-driven Microservices https://github.com/mehmetozkaya
