Advanced retrieval for your AI Apps and Agents on Azure

Advanced retrieval on Azure lets AI agents move beyond “good-enough RAG” into precise, context-rich answers by combining hybrid search, graph reasoning, and agentic query planning. This blog post walks through what that means in practice, using a concrete retail example you can adapt to your own apps.


Why your agents need better retrieval

Most useful agents are really “finders”:

  • Shopper agents find products and inventory.
  • HR agents find policies and benefits rules.
  • Support agents find troubleshooting steps and logs.

If retrieval is weak, even the best model hallucinates or returns incomplete answers, which is why Retrieval-Augmented Generation (RAG) became the default pattern for enterprise AI apps.


Hybrid search: keywords + vectors + reranking

Different user queries benefit from different retrieval strategies: a precise SKU matches well with keyword search, while fuzzy “garden watering supplies” works better with vectors. Hybrid search runs both in parallel, then fuses them.

On Azure, a strong retrieval stack typically includes the following, sketched in code after the list:

  • Keyword search using BM25 over an inverted index (great for exact terms and filters).
  • Vector search using embeddings with HNSW or DiskANN (great for semantic similarity).
  • Reciprocal Rank Fusion (RRF) to merge the two ranked lists into a single result set.
  • A semantic or cross-encoder reranker on top to reorder the final set by true relevance.
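
In Azure AI Search, all four pieces can run in a single request. Here is a minimal sketch using the azure-search-documents Python SDK; the endpoint, index name, vector field, semantic configuration name, and the embed() helper are placeholders to swap for your own.

```python
# pip install azure-search-documents azure-identity
from azure.identity import DefaultAzureCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizedQuery

# Placeholder endpoint, index, and field names; adjust to your own index schema.
client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="products",
    credential=DefaultAzureCredential(),
)

query = "garden watering supplies"
query_vector = embed(query)  # placeholder: your embedding call (e.g. an Azure OpenAI embeddings deployment)

results = client.search(
    search_text=query,                    # keyword (BM25) leg
    vector_queries=[VectorizedQuery(      # vector leg over the same index
        vector=query_vector,
        k_nearest_neighbors=50,
        fields="description_vector",
    )],
    query_type="semantic",                # apply the semantic reranker on top of the fused results
    semantic_configuration_name="products-semantic",
    top=10,
)

for doc in results:
    # reranker score is None if semantic ranking is not configured on the index
    print(doc["name"], doc.get("@search.reranker_score"))
```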

Example: “garden watering supplies”

Imagine a shopper agent backing a hardware store:

  1. User asks: “garden watering supplies”.
  2. Keyword search hits items whose name or description mentions the literal terms “garden” or “watering”.
  3. Vector search finds semantically related items like soaker hoses, planters, and sprinklers, even if the wording differs.
  4. RRF merges both lists so items strong in either keyword or semantic match rise together (see the fusion sketch after these steps).
  5. A reranker model (e.g., the Azure AI Search semantic ranker) re-scores the top candidates using full text and query context.

This hybrid + reranking stack reliably outperforms pure vector or pure keyword search across many query types, especially concept-seeking and long queries.


Going beyond hybrid: graph RAG with PostgreSQL

Some questions are not just “find documents” but “reason over relationships,” such as comparing reviews, features, or compliance constraints. A classic example:

“I want a cheap pair of headphones with noise cancellation and great reviews for battery life.”

Answering this well requires understanding relationships between:

  • Products
  • Features (noise cancellation, battery life)
  • Review sentiment about those specific features

Building a graph with Apache AGE

Azure Database for PostgreSQL plus Apache AGE turns relational and unstructured data into a queryable property graph, with nodes like Product, Feature, and Review, and edges such as HAS_FEATURE or positive_sentiment.
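
As a rough sketch of what that graph might look like, the snippet below creates a tiny slice of it with AGE’s cypher() function, driven from Python via psycopg. The connection string, graph name, labels, and the REVIEWS edge are illustrative, not a prescribed schema:

```python
# pip install "psycopg[binary]"
import psycopg

# Placeholder connection string; assumes the AGE extension is allow-listed on the server.
conn = psycopg.connect("postgresql://user:password@<your-server>.postgres.database.azure.com/retail")
with conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS age;")
    cur.execute("LOAD 'age';")
    cur.execute('SET search_path = ag_catalog, "$user", public;')
    cur.execute("SELECT create_graph('retail_graph');")  # one-time setup; fails if the graph already exists

    # One product, one feature, and a review that praises that feature.
    cur.execute("""
        SELECT * FROM cypher('retail_graph', $$
            CREATE (p:Product {id: 'hp-001', name: 'Contoso ANC Headphones', price: 79.99})
            CREATE (f:Feature {name: 'noise cancellation'})
            CREATE (r:Review {id: 'rev-42', stars: 5})
            CREATE (p)-[:HAS_FEATURE]->(f)
            CREATE (r)-[:REVIEWS]->(p)
            CREATE (r)-[:positive_sentiment]->(f)
        $$) AS (result agtype);
    """)
conn.commit()
```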

A typical flow in a retail scenario:

  1. Use azure_ai.extract() in PostgreSQL to pull product features and sentiments from free-text reviews into structured JSON (e.g., “battery life: positive”).
  2. Load these into an Apache AGE graph so each product connects to features and sentiment-weighted reviews.
  3. Use Cypher-style queries to answer questions like “headphones where noise cancellation and battery life reviews are mostly positive, sorted by review count” (a query sketch follows these steps).
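
Continuing the same illustrative schema (and the open connection from the sketch above), the Cypher query behind step 3 might look roughly like this: keep products under a price cap where both target features carry positive review sentiment, then order by how many positive reviews back them up.

```python
# Continues the open connection from the previous sketch; labels and properties are illustrative.
with conn.cursor() as cur:
    cur.execute("""
        SELECT * FROM cypher('retail_graph', $$
            MATCH (p:Product)-[:HAS_FEATURE]->(f:Feature),
                  (r:Review)-[:REVIEWS]->(p),
                  (r)-[:positive_sentiment]->(f)
            WHERE f.name IN ['noise cancellation', 'battery life'] AND p.price < 100
            WITH p, count(DISTINCT r) AS positive_reviews, count(DISTINCT f.name) AS features_covered
            WHERE features_covered = 2
            RETURN p.name, p.price, positive_reviews
            ORDER BY positive_reviews DESC
        $$) AS (name agtype, price agtype, positive_reviews agtype);
    """)
    for name, price, positive_reviews in cur.fetchall():
        print(name, price, positive_reviews)
```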

Your agent can then:

  • Use vector/hybrid search to shortlist candidate products.
  • Run a graph query to rank those products by positive feature sentiment.
  • Feed only the top graph results into the LLM for grounded, explainable answers (the combined flow is sketched below).
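
Stitched together, the agent flow is a few lines of orchestration. In the sketch below, hybrid_search stands in for the Azure AI Search query shown earlier, graph_rank_by_feature_sentiment for the AGE query above, and llm_complete for your chat model call; all three names are placeholders, not real APIs.

```python
def answer_product_question(question: str) -> str:
    """Illustrative agent flow: shortlist with hybrid search, rank with the graph, answer with the LLM."""
    # 1. Hybrid search over the product index to get a broad candidate set.
    candidates = [doc["id"] for doc in hybrid_search(question, top=25)]      # placeholder helper

    # 2. Graph query scoped to those candidates, ranked by positive feature sentiment.
    ranked = graph_rank_by_feature_sentiment(                                # placeholder helper
        candidates, features=["noise cancellation", "battery life"])

    # 3. Ground the LLM on only the top graph results so the answer stays explainable.
    context = "\n".join(
        f"{p['name']} (${p['price']}): {p['positive_reviews']} positive feature reviews"
        for p in ranked[:5])
    return llm_complete(                                                     # placeholder helper
        f"Answer using only these products:\n{context}\n\nQuestion: {question}")
```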


Agentic retrieval: query planning in Azure AI Search

Hybrid search and graph RAG still assume a single, well-formed query, but real users often ask multi-part or follow-up questions. Agentic retrieval in Azure AI Search addresses this by letting an LLM plan and execute multiple subqueries over your index.

Example: HR agent multi-part question

Consider an internal HR agent:

“I’m having a baby soon. What’s our parental leave policy, how do I add a baby to benefits, and what’s the open enrollment deadline?”

Agentic retrieval pipeline (a conceptual sketch follows the numbered steps):

  1. Query planning
    • Decompose into subqueries: parental leave policy, dependent enrollment steps, open enrollment dates.
    • Correct spelling and incorporate chat history (“we talked about my role and region earlier”).
  2. Fan-out search
    • Run parallel searches over policy PDFs, benefits docs, and plan summary pages with hybrid search.
  3. Results merging and reranking
    • Merge results across subqueries, apply rankers, and surface the top snippets from each area.
  4. LLM synthesis
    • LLM draws from all retrieved slices to produce a single, coherent answer, citing relevant docs or links.
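
Azure AI Search runs this planning, fan-out, and merging inside the service, so you call it rather than rebuild it; the sketch below only illustrates the shape of the pipeline. plan_subqueries, hybrid_search, and synthesize are placeholder helpers (an LLM planning call, the hybrid query from earlier, and a grounded completion), and rrf_fuse is the fusion sketch from the hybrid search section.

```python
from concurrent.futures import ThreadPoolExecutor

def agentic_answer(question: str, chat_history: list[str]) -> str:
    """Conceptual shape of agentic retrieval: plan, fan out, merge, synthesize."""
    # 1. Query planning: an LLM decomposes the question (plus chat history) into focused
    #    subqueries, e.g. ["parental leave policy", "add dependent to benefits",
    #    "open enrollment deadline"].
    subqueries = plan_subqueries(question, chat_history)                     # placeholder helper

    # 2. Fan-out: a hybrid (keyword + vector + reranked) search per subquery, in parallel.
    with ThreadPoolExecutor() as pool:
        result_sets = list(pool.map(lambda q: hybrid_search(q, top=5), subqueries))  # placeholder helper

    # 3. Merge and rerank across subqueries so the best snippets from each area surface.
    merged_ids = rrf_fuse([[doc["id"] for doc in results] for results in result_sets])

    # 4. Synthesis: one grounded answer drawing on all retrieved slices, with citations.
    return synthesize(question, merged_ids, result_sets)                     # placeholder helper
```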

Microsoft’s evaluation shows agentic retrieval can materially increase answer quality and coverage for complex, multi-document questions compared to plain RAG.


Designing your own advanced retrieval flow

When turning this into a real solution on Azure, a pragmatic pattern looks like this:

  • Start with hybrid search + reranking as the default retrieval layer for most agents.
  • Introduce graph RAG with Apache AGE when:
    • You must reason over relationships (e.g., product–feature–review, user–role–policy).
    • You repeatedly join and aggregate across structured entities and unstructured text.
  • Add agentic retrieval in Azure AI Search for:
    • Multi-part questions.
    • Long-running conversations where context and follow-ups matter.

You can mix these strategies: use Azure AI Search’s agentic retrieval to plan and fan out queries, a PostgreSQL + AGE graph to compute relational insights, and then fuse everything back into a single grounded answer stream for your AI app or agent.
