<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Rag on Devops Monk</title><link>https://devops-monk.com/tags/rag/</link><description>Recent content in Rag on Devops Monk</description><generator>Hugo</generator><language>en-us</language><lastBuildDate>Sun, 03 May 2026 00:00:00 +0000</lastBuildDate><atom:link href="https://devops-monk.com/tags/rag/index.xml" rel="self" type="application/rss+xml"/><item><title>Spring AI 2.0: Build a RAG Application with Spring Boot</title><link>https://devops-monk.com/2026/05/spring-ai-rag-application/</link><pubDate>Sun, 03 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/2026/05/spring-ai-rag-application/</guid><description>Spring AI 1.0 GA shipped in May 2025. It brings the Spring programming model to AI development: a unified ChatClient API that works across Claude, OpenAI, Gemini, Ollama, and Azure OpenAI — switching AI providers means changing one dependency.
This guide builds a complete RAG (Retrieval-Augmented Generation) application that answers questions about your documentation using any AI provider.
What Is RAG? A large language model (LLM) knows everything in its training data but nothing about your specific documents, code, or business data.</description></item><item><title>Spring AI: Build a RAG Application</title><link>https://devops-monk.com/tutorials/spring-boot/spring-boot-spring-ai-rag/</link><pubDate>Sun, 03 May 2026 00:00:00 +0000</pubDate><guid>https://devops-monk.com/tutorials/spring-boot/spring-boot-spring-ai-rag/</guid><description>Large language models know a lot — but not about your data. RAG (Retrieval-Augmented Generation) solves this: find the relevant context from your documents, inject it into the prompt, and let the model answer grounded in your data. This article builds a complete RAG API with Spring AI 2.0.
What You&amp;rsquo;ll Build A Q&amp;amp;A API over your product documentation:
User: &amp;#34;What&amp;#39;s the return policy for electronics?&amp;#34; → Search vector store for relevant docs → Inject matching paragraphs into prompt → Claude/GPT answers based on your actual docs Without RAG: the LLM guesses or hallucinates your policy.</description></item></channel></rss>