<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:content="http://purl.org/rss/1.0/modules/content/">
  <channel>
    <title>I Love Agents</title>
    <description>AI Agents, Modern Workflows, and Everything in Between</description>
    <link>https://iloveagents.com</link>
    <atom:link href="https://iloveagents.com/feed.xml" rel="self" type="application/rss+xml" />
    <language>en-us</language>
    <lastBuildDate>Mon, 13 Apr 2026 18:03:34 GMT</lastBuildDate>
    
    <item>
      <title>Foundry Voice Live React SDK: Building Multi-Modal AI Agents</title>
      <description>Open-source React SDK for Microsoft Foundry Voice Live API. Build real-time voice agents with avatars, secure WebSocket proxy, and production-ready patterns.</description>
      <content:encoded><![CDATA[
I keep saying this: the next stage of agents isn’t “type better prompts”. It’s **multi-modal**—voice, video, avatars—where the agent feels less like a chat box and more like a teammate.

So here’s the thing I wanted but couldn’t find: **React hooks + components for Microsoft Foundry Voice Live API**.

This SDK helps you build real-time voice AI apps with:

- Azure video avatars
- Live2D / 3D avatars
- audio visualizers
- function calling
- TypeScript-first ergonomics

This post is the **quick start**. The GitHub repo has the more verbose examples, wiring, and “okay but how do I ship this?” details.

![Foundry Voice Live Banner](https://github.com/iLoveAgents/foundry-voice-live/raw/main/.github/images/iloveagents-foundry-voice-banner.png)

## Why Voice Changes Everything

Text agents are great. They’re also… text. Voice unlocks:

- **Natural conversations** — No typing, no waiting. Talk to your agent like a colleague.
- **Accessibility** — Voice interfaces reach users who can't or won't type.
- **Hands-free workflows** — Field workers, drivers, surgeons—anyone whose hands are busy.
- **Emotional context** — Tone, pace, and pauses carry meaning that text loses.
- **Avatar presence** — Visual feedback builds trust and engagement.

Voice Live handles the hard parts: streaming speech in/out with a single session, plus optional avatars. When it works, it feels surprisingly “present”.

## What We're Building

You get two packages:

| Package | Purpose |
|---------|---------|
| [`@iloveagents/foundry-voice-live-react`](https://www.npmjs.com/package/@iloveagents/foundry-voice-live-react) | React hooks and avatar components |
| [`@iloveagents/foundry-voice-live-proxy-node`](https://www.npmjs.com/package/@iloveagents/foundry-voice-live-proxy-node) | Secure WebSocket proxy for production |

Here's the architecture:

```
┌─────────────────────────────────────────┐
│            Your React App               │
└─────────────────────────────────────────┘
                    │
                    ▼
┌─────────────────────────────────────────┐
│   @iloveagents/foundry-voice-live-react │
│   • useVoiceLive hook                   │
│   • VoiceLiveAvatar component           │
└─────────────────────────────────────────┘
                    │
        ┌───────────┴───────────┐
        ▼                       ▼
┌───────────────┐      ┌───────────────────┐
│  Direct API   │  OR  │  Proxy Server     │
│  (Dev only)   │      │  (Production)     │
└───────────────┘      └───────────────────┘
                    │
                    ▼
┌─────────────────────────────────────────┐
│     Microsoft Foundry Voice Live API    │
└─────────────────────────────────────────┘
```

The proxy is critical. Browser-based apps can't safely hold API keys—anyone can inspect network traffic. The proxy authenticates server-side, so credentials never touch the client.

## Quick Start

```bash copy
npm create vite@latest voice-agent -- --template react-ts
cd voice-agent
npm install @iloveagents/foundry-voice-live-react
```

Replace `src/App.tsx` with the code below, then `npm run dev`:

### Voice-Only Agent

```tsx copy filename="App.tsx"
import { useRef, useEffect } from "react";
import { useVoiceLive } from "@iloveagents/foundry-voice-live-react";

function App() {
  const audioRef = useRef<HTMLAudioElement>(null);
  
  const { connect, disconnect, connectionState, audioStream } = useVoiceLive({
    connection: {
      resourceName: 'your-foundry-resource',
      apiKey: 'your-api-key',  // Dev only. Use proxy in production.
    },
    session: {
      instructions: 'You are a helpful assistant for iLoveAgents.',
    },
  });

  useEffect(() => {
    if (audioRef.current && audioStream) {
      audioRef.current.srcObject = audioStream;
    }
  }, [audioStream]);

  return (
    <div>
      <p>Status: {connectionState}</p>
      <button onClick={connect} disabled={connectionState === 'connected'}>
        Start Conversation
      </button>
      <button onClick={disconnect} disabled={connectionState !== 'connected'}>
        End
      </button>
      <audio ref={audioRef} autoPlay />
    </div>
  );
}

export default App;
```

On connect, the hook requests mic access and starts streaming.

### With Avatar

Want a face? Swap `App.tsx` for this:

```tsx copy filename="App.tsx"
import { useVoiceLive, VoiceLiveAvatar } from "@iloveagents/foundry-voice-live-react";

function App() {
  const { videoStream, audioStream, connect, disconnect } = useVoiceLive({
    connection: {
      resourceName: "your-foundry-resource",
      apiKey: "your-api-key",
    },
    session: {
      instructions: "You are a friendly assistant with a visual presence.",
      voice: { name: "en-US-AvaMultilingualNeural", type: "azure-standard" },
      avatar: { character: "lisa", style: "casual-sitting" },
    },
  });

  return (
    <div>
      <VoiceLiveAvatar videoStream={videoStream} audioStream={audioStream} />
      <button onClick={connect}>Start</button>
      <button onClick={disconnect}>Stop</button>
    </div>
  );
}

export default App;
```

The avatar syncs lip movements with speech.

## Production: Secure Proxy

**Never ship API keys to the browser.** Run the proxy server-side:

```bash copy
# Docker (recommended)
# ALLOWED_ORIGINS="*" is fine for testing; restrict it to your app's origin in production
docker run -p 8080:8080 \
  -e FOUNDRY_RESOURCE_NAME=your-foundry-resource \
  -e FOUNDRY_API_KEY="your-api-key" \
  -e ALLOWED_ORIGINS="*" \
  ghcr.io/iloveagents/foundry-voice-live-proxy:latest
```

Or with npx for quick testing:

```bash copy
FOUNDRY_RESOURCE_NAME=your-foundry-resource \
FOUNDRY_API_KEY="your-api-key" \
ALLOWED_ORIGINS="*" \
npx @iloveagents/foundry-voice-live-proxy-node
```

Then change your connection config to use the proxy (no API key needed):

```tsx copy
connection: {
  proxyUrl: 'ws://localhost:8080/ws',  // use wss:// in production
}
```

Pro tip: put the proxy behind auth and rate-limit it.
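
For a concrete starting point, here's a minimal sketch (not part of the SDK) of a Node gateway that rejects unauthenticated WebSocket upgrades before they reach the proxy. `isValidSession` is a placeholder for your real JWT or session check:

```ts
import http from "node:http";
import httpProxy from "http-proxy";

// Forward authenticated upgrades to the foundry-voice-live proxy
const proxy = httpProxy.createProxyServer({ target: "ws://localhost:8080", ws: true });

// Placeholder — swap in real JWT/session validation
function isValidSession(token: string | null): boolean {
  return token !== null && token.length > 0;
}

const server = http.createServer();

server.on("upgrade", (req, socket, head) => {
  const token = new URL(req.url ?? "/", "http://localhost").searchParams.get("token");
  if (!isValidSession(token)) {
    socket.destroy(); // drop unauthenticated connections before they hit the proxy
    return;
  }
  proxy.ws(req, socket, head);
});

server.listen(8443); // clients connect to ws://localhost:8443/ws?token=...
```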

## Function Calling

Voice agents can call tools. Swap `App.tsx` for this:

```tsx copy filename="App.tsx"
import { useRef, useEffect } from "react";
import { useVoiceLive } from "@iloveagents/foundry-voice-live-react";

function App() {
  const audioRef = useRef<HTMLAudioElement>(null);

  const { connect, disconnect, connectionState, audioStream, sendEvent } = useVoiceLive({
    connection: { 
      proxyUrl: "ws://localhost:8080/ws" 
    },
    session: {
      instructions: "Help users check order status. Use the get_order_status tool when asked.",
      tools: [
        {
          type: "function",
          name: "get_order_status",
          description: "Look up the status of a customer order",
          parameters: {
            type: "object",
            properties: {
              order_id: { type: "string", description: "The order ID" },
            },
            required: ["order_id"],
          },
        },
      ],
      toolChoice: "auto",
    },
    // Handle tool calls from the AI
    toolExecutor: (name, args, callId) => {
      let result = {};

      if (name === "get_order_status") {
        const { order_id } = JSON.parse(args);
        // Mock order status lookup
        result = { order_id, status: "processing" };
      }

      // Send result back to continue the conversation
      sendEvent({
        type: "conversation.item.create",
        item: {
          type: "function_call_output",
          call_id: callId,
          output: JSON.stringify(result),
        },
      });
      sendEvent({ type: "response.create" });
    },
  });

  // Connect audio stream to audio element
  useEffect(() => {
    if (audioRef.current && audioStream) {
      audioRef.current.srcObject = audioStream;
    }
  }, [audioStream]);

  return (
    <div>
      <p>Status: {connectionState}</p>
      <button onClick={connect} disabled={connectionState === "connected"}>
        Start Conversation
      </button>
      <button onClick={disconnect} disabled={connectionState !== "connected"}>
        End
      </button>
      <audio ref={audioRef} autoPlay />
    </div>
  );
}

export default App;

```

Try asking: *"What's the status of order 1345?"* — the agent will call the tool and speak the result.

When the model calls the tool, you run it, send a `function_call_output`, then trigger `response.create` so it can speak.

## Run the Examples

Clone the repo to see it in action:

```bash copy
git clone https://github.com/iloveagents/foundry-voice-live.git
cd foundry-voice-live
just install

# Configure credentials
cp packages/proxy-node/.env.example packages/proxy-node/.env
cp examples/.env.example examples/.env
# Edit both .env files with your Foundry credentials

# Start proxy + examples
just dev
```

Open http://localhost:3001 to explore voice-only, avatar, and function-calling examples.

## Foundry Agent Service

Since v0.3.0, the SDK supports [Foundry Agent Service](https://learn.microsoft.com/azure/ai-services/speech-service/voice-live-agents-quickstart)—Microsoft's recommended way to connect voice to agents built in Azure AI Foundry.

The difference from standard Voice Live: instead of sending instructions in the session config, you point to an agent you've already configured in the Foundry portal. The agent handles its own system prompt, tools, and grounding.

This is where it gets interesting. Your Foundry agent can use [Foundry IQ](https://learn.microsoft.com/azure/foundry/agents/concepts/what-is-foundry-iq) for knowledge grounding—permission-aware RAG across SharePoint, Azure Blob, OneLake, and the web—plus [agent memory](https://learn.microsoft.com/azure/foundry/what-is-foundry) for retaining context across conversations. All that richness, and the voice client stays dead simple:

```tsx copy filename="App.tsx"
import { useRef, useEffect } from "react";
import { useVoiceLive, sessionConfig } from "@iloveagents/foundry-voice-live-react";

function App() {
  const audioRef = useRef<HTMLAudioElement>(null);

  const { connect, disconnect, connectionState, audioStream } = useVoiceLive({
    connection: {
      proxyUrl: "ws://localhost:8080/ws?agentName=MyAgent&projectName=my-project",
    },
    session: sessionConfig()
      .voice("en-US-AvaMultilingualNeural")
      .semanticVAD({ interruptResponse: true })
      .transcription()
      .build(),
  });

  useEffect(() => {
    if (audioRef.current && audioStream) {
      audioRef.current.srcObject = audioStream;
    }
  }, [audioStream]);

  return (
    <div>
      <p>Status: {connectionState}</p>
      <button onClick={connect}>Start</button>
      <button onClick={disconnect}>Stop</button>
      <audio ref={audioRef} autoPlay />
    </div>
  );
}
```

This is actually pretty cool: the agent’s knowledge, memory, and tools all live server-side in Foundry. The React app just opens the voice channel. Complexity stays on the platform, the client stays simple.

Two auth paths, both handled by the proxy:

- **Server-side (simplest)** — `DefaultAzureCredential` acquires tokens automatically. Just `az login` for dev, managed identity in production. No app registration needed.
- **Browser-side (MSAL)** — Each user signs in with their own Entra ID identity. Pass the token as a URL param: `?token=${msalToken}`

Both work with avatars—add `.avatar('lisa', 'casual-sitting', { codec: 'h264' })` to your session config. The [examples](https://github.com/iLoveAgents/foundry-voice-live/tree/main/examples/src/pages) include all four combinations: voice, voice+MSAL, avatar, avatar+MSAL.

## Tradeoffs & Current Limitations

The honest bit (short version):

- **Preview-ish surface** — expect breaking changes as Voice Live evolves.
- **Cost** — voice + avatars can get pricey; watch usage.

## Let's Build Multi-Modal Agents Together

I built this because I think voice is where agents become genuinely useful—not just “wow demos”, but tools people actually want around. It’s MIT licensed and contributions are welcome.

**What voice agent scenarios are you exploring?** Customer support? Accessibility? Hands-free interfaces? I'd love to hear what you're building.

Star the repo, try the examples, and let me know what's missing.

## Resources

- [GitHub: foundry-voice-live](https://github.com/iLoveAgents/foundry-voice-live)
- [npm: @iloveagents/foundry-voice-live-react](https://www.npmjs.com/package/@iloveagents/foundry-voice-live-react)
- [Microsoft Voice Live Documentation](https://learn.microsoft.com/en-us/azure/ai-services/speech-service/voice-live)
- [Foundry Agent Service Quickstart](https://learn.microsoft.com/azure/ai-services/speech-service/voice-live-agents-quickstart)
- [Foundry IQ: Knowledge Grounding](https://learn.microsoft.com/azure/foundry/agents/concepts/what-is-foundry-iq)
- [Voice Live API Reference](https://learn.microsoft.com/en-us/azure/ai-services/speech-service/voice-live-api-reference)
- [Microsoft Foundry](https://azure.microsoft.com/products/ai-foundry/)
]]></content:encoded>
      <link>https://iloveagents.com/foundry-voice-live-react-sdk</link>
      <guid>https://iloveagents.com/foundry-voice-live-react-sdk</guid>
      <pubDate>Mon, 02 Feb 2026 00:00:00 GMT</pubDate>
      <author>Christian Glessner</author>
      <category>microsoft foundry</category><category>voice ai</category><category>react</category><category>ai agents</category>
    </item>
    <item>
      <title>Contract Analysis with Microsoft Foundry Content Understanding</title>
      <description>Content Understanding fixes the input layer. I benchmarked it on legal contracts and beat GPT-4.1 mini by 19 points — once I learned that schema descriptions are prompts.</description>
      <content:encoded><![CDATA[

<figure className="my-8 flex justify-center">
  <Image
    src="/images/content/microsoft-foundry-content-understanding-cuad-benchmark/shit-in-shit-out.png"
    alt="Shit in, shit out — the universal rule of AI systems"
    width={1200}
    height={800}
    className="rounded-lg"
  />
</figure>

**Shit in, shit out.**

AI agents, copilots & RAG — no matter how advanced the model, they all follow the same old rule. If your input documents are messy and your extraction schema is vague, the output will be unreliable. Faster retrieval doesn't fix bad inputs — it just surfaces them more efficiently.

Content Understanding is about fixing the input layer: turning raw documents into structured, well-defined signals that models can actually reason over. It's a Foundry tool that builds on Document Intelligence — combining its proven extraction capabilities with LLM-powered field understanding.

<figure className="my-8">
  <Image
    src="/images/content/microsoft-foundry-content-understanding-cuad-benchmark/content-understanding-overview.png"
    alt="Azure Content Understanding Framework — Inputs flow through Analyzers to Structured Output"
    width={1200}
    height={600}
    className="rounded-lg border"
  />
  <figcaption className="text-center text-sm text-muted-foreground mt-2">
    Content Understanding: Documents, images, video, audio → Analyzers → Structured output for search, agents, databases, copilots — <a href="https://learn.microsoft.com/en-us/azure/ai-services/content-understanding/overview" className="underline hover:text-primary">Source</a>
  </figcaption>
</figure>

I wanted to see how much that really matters in practice. So I ran a benchmark.

Using Microsoft Foundry's Content Understanding service, I evaluated clause detection on the **[CUAD legal contract dataset](https://www.atticusprojectai.org/cuad)** — and compared results against the [ContractEval benchmark](https://arxiv.org/abs/2508.03080). The first run was okay. The real gains only came once I treated schema descriptions as prompts and tuned them with the same care as model inputs.

<div className="my-10 p-8 bg-gradient-to-r from-primary/10 to-primary/5 rounded-xl border border-primary/20 text-center">
  <div className="inline-block px-4 py-1 mb-4 text-xs font-bold uppercase tracking-widest bg-primary/20 text-primary rounded-full">CUAD Clause Detection Benchmark</div>
  <div className="text-5xl font-bold text-primary mb-2">83.5% F1</div>
  <div className="text-lg text-muted-foreground">vs 64.4% GPT-4.1 mini baseline</div>
  <div className="text-sm text-muted-foreground mt-1">+19 points improvement</div>
</div>

## Results

| System | Micro F1 | Precision | Recall |
| ------ | -------: | --------: | -----: |
| Azure Content Understanding | **83.5%** | **84.5%** | **82.4%** |
| GPT-4.1 mini (ContractEval, Aug 2024) | 64.4% | — | — |

50 contracts. 41 clause types. Same test set as the published benchmark.

## The Key Insight: Schema Descriptions Are Prompts

Here's what made the difference: **field descriptions are prompts**.

A simple field name like `MostFavoredNation` tells the model *what* to find. A well-crafted description tells it *how* — where to look, what phrasings to match, what format to expect.

**Before:**
```
MostFavoredNation: "Most favored nation clause"
```

**After:**
```
MostFavoredNation: "Most Favored Nation clause guaranteeing party receives
terms at least as favorable as those offered to any third party. Typically
found in pricing or terms sections. Look for: 'most favored nation', 'MFN',
'no less favorable than', 'pricing parity', 'equivalent to the best terms'."
```

That's the difference between "okay" and "great" — and it applied across all 41 clause types.
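
Concretely, those descriptions live on the field definitions in your analyzer schema. A rough sketch of what one field might look like (illustrative only — the exact payload shape is in the Content Understanding docs and the benchmark repo):

```python
# Illustrative field definition — exact schema keys may differ; see the repo.
field_schema = {
    "fields": {
        "MostFavoredNation": {
            "type": "string",
            "description": (
                "Most Favored Nation clause guaranteeing party receives terms "
                "at least as favorable as those offered to any third party. "
                "Typically found in pricing or terms sections. Look for: "
                "'most favored nation', 'MFN', 'no less favorable than', "
                "'pricing parity', 'equivalent to the best terms'."
            ),
        },
    },
}
```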

### Schema Best Practices

Following [Azure Content Understanding best practices](https://learn.microsoft.com/azure/ai-services/content-understanding/concepts/best-practices):

| Practice | Example | Why it works |
|----------|---------|--------------|
| **Affirmative language** | "The date when..." not "Find the date" | Clearer target |
| **Location hints** | "Look in the preamble" | Reduces search space |
| **Concrete examples** | "e.g., 'Master Services Agreement'" | Anchors understanding |
| **Common phrasings** | "'governed by', 'construed under'" | Improves recall |

The notebook includes optimized descriptions for all 41 clause types — use them as a starting point.

## Try It Yourself

```bash copy
git clone https://github.com/iLoveAgents/microsoft-foundry-content-understanding-cuad-benchmark.git
cd microsoft-foundry-content-understanding-cuad-benchmark
```

You'll need a **Microsoft Foundry** project with Content Understanding enabled. Full setup in the README.

## Why This Matters

If you're building document workflows — contract review, compliance, intake automation — this benchmark tells you:

1. **Content Understanding is production-grade**: 83.5% F1 on a complex legal task.
2. **Schema design is the lever**: Not model size. Not retrieval speed. The input layer.
3. **It integrates with Foundry**: Same project, same credentials, same stack.

The model didn't change between my first run and my best run. The schema did.

Fix the input layer. The rest follows.

## Resources

- [Benchmark repo](https://github.com/iLoveAgents/microsoft-foundry-content-understanding-cuad-benchmark)
- [Azure Content Understanding docs](https://learn.microsoft.com/en-us/azure/ai-services/content-understanding/)
- [CUAD dataset (NeurIPS 2021)](https://arxiv.org/abs/2103.06268)
- [ContractEval benchmark](https://arxiv.org/abs/2508.03080)

---

Questions or results to share? Reach out on [LinkedIn](https://www.linkedin.com/in/christian-glessner/).
]]></content:encoded>
      <link>https://iloveagents.com/microsoft-foundry-content-understanding-cuad-benchmark</link>
      <guid>https://iloveagents.com/microsoft-foundry-content-understanding-cuad-benchmark</guid>
      <pubDate>Tue, 20 Jan 2026 00:00:00 GMT</pubDate>
      <author>Christian Glessner</author>
      <category>microsoft foundry</category><category>content understanding</category><category>python</category>
    </item>
    <item>
      <title>Microsoft Foundry Agents &amp; Foundry IQ: Per-User Security Trimming Isn’t Supported Yet (ACL Retrieval + RBAC Reality)</title>
      <description>Per-user ACL trimming needs request-scoped auth. Here’s why Microsoft Foundry agents 2.0 fall short today, which docs are misleading, and what to use instead.</description>
      <content:encoded><![CDATA[
I went into this week expecting a clean story:

“Define an agent once in Microsoft Foundry, connect it to a knowledge base, publish a stable endpoint… and still get **per-user security trimming** for enterprise data.”

After a lot of hands-on testing, I’m not there yet.

If your knowledge source requires **document-level ACL trimming** (Azure AI Search ACL trimming, SharePoint-backed sources, Foundry IQ / Knowledge Bases that depend on user context), there’s a practical gap today:

**Microsoft Foundry agents (preview) don’t give you a supported way to inject request-scoped user auth context per run when using knowledge retrieval through the agent platform.**

This post is the guidance I’m giving customers right now.

> Scope note: this reflects behavior I observed in **January 2026** while the relevant agent capabilities are still in **preview**. Expect this area to change quickly.

## Terminology

Microsoft’s official docs talk about **Foundry agents / Foundry Agent Service (preview)** and **Agent Applications**.

In this post, I’ll use a shorthand to make the comparison unambiguous:

- **Foundry Agents 2.0** = the **new Microsoft Foundry agents (preview)** experience (this is *my* shorthand, not an official product name)
- **Foundry Agents 1.0 (classic)** = the **non-preview / older agent experience** (this is *my* shorthand)
- **Responses API (model + tools per request)** = the request-scoped call surface you can use in both cases: to **invoke agents** (including Agent Applications) and to call **Foundry models directly**; the difference is whether tools/auth are defined on the agent version or injected per request

This matters because the security behavior is different depending on whether you’re configuring an **agent version** (agent-defined tools) or composing tools **per request** (Responses API tool injection).

## At a glance: when to use what

- If you need **per-user ACL trimming**: use the **Responses API (Foundry model + tools per request)** so you can attach user context per call.
- If shared access is fine: use **agents** (shared identity) for reuse and a managed surface.
- If you want a stable endpoint: publish as an **Agent Application**, but assume RBAC-based access and client-managed conversation history.

## Why per-user security trimming is non-negotiable

In enterprise, “knowledge retrieval” isn’t just about *finding the right chunk* — it’s about **not leaking the wrong chunk**.

If Alice can see `DocA` and Bob can’t, your agent must pass Alice’s identity to retrieval on every run so the source can trim results.

In Foundry + MCP + Search, that often means passing a request-scoped header like:

- `x-ms-query-source-authorization: <user-token>`

## The 3 different surfaces people mix up (I did too)

This is where confusion starts. There are (at least) three distinct interaction models:

1. **Direct OpenAI-compatible Responses API** (model + tools per request)
2. **Agents inside a Foundry project** (Foundry Agents 1.0 and Foundry Agent Service (preview))
3. **Agent Applications** (published, stable endpoint with isolation and its own identity)

Each has different capabilities and security assumptions.

## Agents vs “model + tools per request”: what’s actually different?

This is the practical difference I keep seeing teams miss.

### When you use a Foundry agent

You’re opting into an **agent definition + versioning** surface:

- Tools are attached to the agent version (reusable configuration).
- You get a named asset you can iterate on (agent versions) and (when you publish) a stable endpoint via an Agent Application.
- It’s the right shape if you want **reuse** (agents, and workflows where applicable) and a platform-managed serving surface.

But you also inherit platform constraints: if there’s no request-scoped tool auth surface, you can’t “just pass a header” per run.

### When you use a Foundry model directly

You’re using a **request-scoped composition** surface:

- Tools (including MCP tools) are injected per `responses.create()` call.
- Headers can be request-scoped, which is exactly what per-user trimming needs.

But: this bypasses agent reuse. The tool isn’t part of an agent definition, so it can’t be reused via agents/workflows or exposed via an Agent Application without reintroducing the same gap.

<details>
  <summary>More context (what I observed): portal vs SDK + how docs map to behavior</summary>

One reason this problem is easy to miss is that the portal experience can make secure retrieval feel like it “just works”.

In my testing:

- In the Foundry portal, querying a knowledge base that enforces ACL trimming can succeed as an interactive user.
- When I try to reproduce the same pattern from code via agents/knowledge retrieval, retrieval results can come back empty unless the knowledge source receives request-scoped user context.

These are the conclusions from my hands-on testing, mapped to what Microsoft documents today:

1) **Responses API supports request-scoped MCP headers** (this is the only model where request-scoped headers naturally fit)

2) **Agent-defined tools are agent-version scoped** (headers behave like configuration, not per-run inputs)

3) **Publishing creates an Agent Application, but inbound auth is RBAC by default**

- Microsoft documents this in [Publish and share agents / Agent Applications](https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/publish-agent?view=foundry): default inbound authentication is Azure RBAC (`/applications/invoke/action`), and `Azure AI User` is listed as the minimum role to chat with a published agent.

If you read only one thing in this post: the friction comes from trying to apply **“per-request user context”** patterns to an **“agent version configuration”** surface.

</details>

## What works today: per-request MCP headers with the Responses API

If you bypass agents and call the Responses API directly, you can attach headers per request.

```python copy
# UPPER_CASE names come from your app's config; user_token is the
# delegated (OBO) token for the signed-in user.
mcp_tool = {
    "type": "mcp",
    "server_label": "kb_acl_test",
    "server_url": KB_MCP_URL,
    "project_connection_id": PROJECT_CONNECTION_ID,
    "require_approval": "never",
    "allowed_tools": ["knowledge_base_retrieve"],
    "headers": {
        # Request-scoped: retrieval is trimmed to this user's permissions
        "x-ms-query-source-authorization": user_token,
    },
}

response = openai_client.responses.create(
    model=MODEL_DEPLOYMENT,
    input=USER_QUERY,
    tools=[mcp_tool],
)
```

Expected outcome:

- The knowledge source trims results to the caller’s permissions.
- Different users can ask the same question and receive different (correct) citations.

If you need **true per-user ACL trimming today**, this is the most straightforward path: do Entra auth + OBO in your app, then call Responses with request-scoped tool headers.

### Why this matches your mental model

This is the same idea you’d use for any enterprise integration:

- Your app authenticates the user with Microsoft Entra ID.
- Your app obtains a delegated token (OBO) for the downstream resource.
- Your app calls the model/tool surface with that token attached **per request**.
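
A minimal sketch of the OBO step (the app registration values are placeholders; adapt to your tenant):

```python copy
# Sketch: exchange the user's incoming token for a delegated Search token (OBO)
from azure.identity import OnBehalfOfCredential

credential = OnBehalfOfCredential(
    tenant_id=TENANT_ID,                 # your confidential client app registration
    client_id=CLIENT_ID,
    client_secret=CLIENT_SECRET,
    user_assertion=incoming_user_token,  # the token your API received from the user
)

# Delegated token for the downstream knowledge source (Azure AI Search)
user_token = credential.get_token("https://search.azure.com/.default").token
# Pass user_token in the x-ms-query-source-authorization header on each request
```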

## The Knowledge Retrieval docs example is a security footgun (and should say so)

This is not stated clearly in the docs today, and it should be.

Reference: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/tools/knowledge-retrieval

The current example is misleading from a security standpoint because it effectively:

- Evaluates a token provider at **agent creation time**
- Bakes a **user-bound, expiring token** into the agent definition

Here's the risky pattern (note the final `()` on the token provider):

```python copy
from azure.identity import get_bearer_token_provider

# Create MCP tool with SharePoint authorization header
mcp_kb_tool = MCPTool(
    server_label="knowledge-base",
    server_url=mcp_endpoint,
    require_approval="never",
    allowed_tools=["knowledge_base_retrieve"],
    project_connection_id=project_connection_name,
    headers={
        "x-ms-query-source-authorization": get_bearer_token_provider(
            credential, 
            "https://search.azure.com/.default"
        )()  # ← This () evaluates the token NOW
    }
)

# Create agent with MCP tool
agent = project_client.agents.create_version(
    agent_name=agent_name,
    definition=PromptAgentDefinition(
        model=agent_model,
        instructions=instructions,
        tools=[mcp_kb_tool]  # ← Tool + token are now baked into this agent version
    )
)
```

Why this is a problem:

- **The token will eventually expire** (you ship an agent version that later fails retrieval).
- **The identity used is not the invoking user**, but whoever created the agent (or whatever service identity was used during creation).
- Developers may incorrectly assume this provides **per-user ACL trimming**.
- It conflates **service credentials** with **user security context**.

What the docs should say explicitly:

- This is a **service credential** pattern.
- It’s **not request-scoped**.
- It’s **not suitable for per-user security trimming**.

If you need per-user security trimming, you need a surface where user context can be attached **per request / per run** (for example via request-scoped headers when calling the Responses API).

## Publishing doesn’t solve it (and introduces a second enterprise concern)

Publishing creates an **Agent Application** with its own identity and a stable endpoint.

Reference: [Publish and share agents / Agent Applications](https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/publish-agent?view=foundry)

Two details from the docs matter a lot for architecture:

1) **Inbound auth (default) is Azure RBAC**

- Callers must have the Azure RBAC permission `/applications/invoke/action` on the application resource.
- The docs also state users need at least **Azure AI User** role on the Agent Application scope to chat with a published agent.

2) **The application endpoint is intentionally limited**

- Only `POST /responses` is available.
- Other APIs like `/conversations`, `/files`, `/vector_stores` are not available, and the call forces `store=false`.
- For multi-turn, **the client must store conversation history**.

That’s not “bad” — it’s just a very different production surface than many people expect.

My enterprise concern here is the combination:

- “End users need Azure RBAC to invoke” + “Azure AI User is broader than invoke-only”

In a lot of real customer environments, security teams treat Azure RBAC for end users as *infrastructure access* — and won’t allow it.

### Docs say “application-scope invoke”. My tests required more.

This is the most important nuance from my tests:

- The docs clearly describe the **intended** model: callers need `/applications/invoke/action` on the **Agent Application** resource, and at least **Azure AI User** on the **Agent Application scope** to chat with a published agent.
- In my testing, that *was not sufficient* for end-user invocation in the way the docs suggest. Even after granting the application-scope permission, I still hit access denied until the caller had broader access tied back to the Foundry project (what many orgs would treat as “Foundry project RBAC / Azure AI User”).

So the post’s guidance is deliberately conservative:

- Treat “publish = enterprise-ready invoke-only sharing for end users” as **not reliable yet** for strict enterprise RBAC expectations.
- If your environment cannot grant Azure roles to end users, plan for an **app-layer** front door (Entra app roles, OBO, your own authZ) and call the Responses API.

## “Can’t we just use OAuth identity passthrough?”

Microsoft Foundry does support OAuth identity passthrough for MCP authentication, but not for Foundry IQ / knowledge retrieval tools yet.

Reference: https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/mcp-authentication?view=foundry

Even where passthrough applies, the docs include a very explicit constraint:

- To use OAuth identity passthrough, **users interacting with your agent need at least the Azure AI User role**.

So yes, passthrough is a supported model — but **it's still a blocker** for organizations that cannot grant Azure roles to end users.

This is the key enterprise mismatch:

- **Security teams want**: "Application auth + least privilege + no infra-level role assignments to end users."
- **The current platform guidance leans on**: "End users have Azure roles so the platform can do passthrough."

These two models are incompatible.

## What I recommend to customers (right now)

Here’s the decision tree I’m using in real projects:

### If you need per-user ACL trimming

- Prefer: **App-layer Entra auth + OBO + Responses API** (request-scoped MCP headers per call).
- Avoid: Agent definitions that embed a token at creation time.
- Treat “agent-based knowledge retrieval with per-user context” as **preview-risk** until there’s a request-scoped mechanism.

### If you want agents/workflows reuse

- Use agents when your knowledge retrieval can operate with a **shared identity** (no per-user trimming) or when your org is willing to grant end users the required Azure roles for identity passthrough.
- Don’t use agents if your security model requires: “end users have no Azure RBAC assignments” + “per-request user token must be injected by your app”.

### If shared access is acceptable (no per-user trimming)

- Use a **shared identity** (Agent Identity / Foundry project managed identity / key-based auth), and design your knowledge source accordingly.

### If you want a stable endpoint for broader distribution

- Use **Agent Applications**, but assume:
  - inbound access is RBAC-based by default
  - conversation state is client-managed

## The feature I’m waiting for (to make this enterprise-clean)

This is what would unblock a lot of production scenarios:

- A supported way to pass **request-scoped tool auth context** per agent run (for example, via a `tool_resources`-style mechanism) so that user context is never persisted in the agent definition.
- A truly least-privilege “invoke-only” model that fits common enterprise patterns (ideally Entra app roles + app-layer OBO).

## Resources (official docs)

- [Knowledge retrieval with agents](https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/tools/knowledge-retrieval?view=foundry)
- [MCP authentication (preview)](https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/mcp-authentication?view=foundry)
- [Publish and share agents / Agent Applications](https://learn.microsoft.com/en-us/azure/ai-foundry/agents/how-to/publish-agent?view=foundry)

---
I’m also actively sharing these findings with folks on the Microsoft Foundry team so we can close the gaps faster — if you’re running into the same constraints in production, I’d love to compare notes.

If you’re building with Microsoft Foundry agents (or searching for Azure AI Foundry) and you’re hitting this exact security trimming wall: what’s your target architecture?

Are you forced into app-layer OBO too, or did you find a clean agent-native pattern that I missed?


If you want to discuss real-world enterprise patterns (and what you’re seeing in your tenant), reach out on [LinkedIn](https://www.linkedin.com/in/christian-glessner/).]]></content:encoded>
      <link>https://iloveagents.com/foundry-agents-foundry-iq-per-user-security-trimming</link>
      <guid>https://iloveagents.com/foundry-agents-foundry-iq-per-user-security-trimming</guid>
      <pubDate>Sat, 17 Jan 2026 00:00:00 GMT</pubDate>
      <author>Christian Glessner</author>
      <category>microsoft foundry</category><category>ai agents</category><category>foundry iq</category><category>mcp</category><category>security</category>
    </item>
    <item>
      <title>Connect Copilot Studio Agents to Microsoft Agent Framework</title>
      <description>Learn how to integrate Microsoft Copilot Studio agents with Agent Framework. Combine low-code agents with custom code, automate testing, and build hybrid multi-agent systems.</description>
      <content:encoded><![CDATA[
What I love about Microsoft Agent Framework is how it integrates with the Microsoft ecosystem. You can seamlessly combine Copilot Studio agents with Microsoft Foundry agents, custom code, and third-party systems — all through a unified API.

This opens up powerful scenarios: orchestrate low-code Copilot Studio agents alongside custom Python agents, automate testing for your bots, and build hybrid workflows that leverage the best of both platforms.

## Why This Matters

**Integration benefits:**
- **Automated testing** — Run regression tests against Copilot Studio agents programmatically
- **Multi-agent orchestration** — Combine conversational AI with specialized agents for data analysis, document processing, etc.
- **Hybrid architectures** — Low-code for business users, code-first for developers
- **Batch processing** — Process customer inquiries from queues or CSV files

**Technical advantages:**
- Native Azure AD authentication with device code flow
- Direct Power Platform API integration
- Streaming and non-streaming support
- Same SDK experience across all agent types

## Demo Repository

**Repository:** [github.com/iLoveAgents/agent-framework-copilot-studio](https://github.com/iLoveAgents/agent-framework-copilot-studio)

Includes automated Azure setup, basic and advanced examples, streaming support, and production patterns.

## Quick Start

### Prerequisites

- Python 3.10 or higher
- [uv package manager](https://docs.astral.sh/uv/) (or use pip)
- Azure subscription with Copilot Studio access
- Azure CLI installed and authenticated

### One-Time Setup

**1. Run the automated setup script:**

```bash copy
./setup_azure_app.sh
```

This script creates an Azure AD app registration, enables public client flows, adds Power Platform API permissions, and grants admin consent. It saves the configuration to your `.env` file automatically.

**2. Find your environment ID:**

Go to [Power Platform Admin Center](https://admin.powerplatform.microsoft.com/) and copy your environment ID (format: `Default-<guid>`).

**3. Configure your `.env` file:**

```bash copy filename=".env"
COPILOTSTUDIOAGENT__ENVIRONMENTID=Default-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
COPILOTSTUDIOAGENT__SCHEMANAME=<your-agent-schema-name>
COPILOTSTUDIOAGENT__AGENTAPPID=<set-by-setup-script>
COPILOTSTUDIOAGENT__TENANTID=<set-by-setup-script>
```

### Running the Basic Example

```bash copy
uv sync
uv run main.py
```

The basic example ([main.py](https://github.com/iLoveAgents/agent-framework-copilot-studio/blob/main/main.py)) shows:
- Creating a `CopilotStudioAgent()` with environment variables
- Both streaming and non-streaming responses
- Automatic authentication with device code flow (opens browser once, caches tokens)
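
For orientation, the core of it looks roughly like this (a sketch — the import path is an assumption; `main.py` in the repo is canonical):

```python copy
import asyncio

# Assumed import path — check the repo's main.py for the exact module
from agent_framework.microsoft import CopilotStudioAgent

async def main() -> None:
    # Picks up the COPILOTSTUDIOAGENT__* settings from your environment / .env
    agent = CopilotStudioAgent()
    result = await agent.run("What can you help me with?")
    print(result.text)

asyncio.run(main())
```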

## Advanced Configuration

For production scenarios, the advanced example ([copilotstudio_explicit.py](https://github.com/iLoveAgents/agent-framework-copilot-studio/blob/main/copilotstudio_explicit.py)) shows explicit configuration:
- Custom token acquisition (service principals, managed identities)
- Explicit `ConnectionSettings` for multi-environment setups
- Direct `CopilotClient` instantiation for connection pooling
- Granular error handling

## Use Cases

### Automated Testing
Run pytest regression tests against your agents — test greetings, FAQs, intents. Catch regressions before production.
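
A hypothetical regression test, assuming `pytest-asyncio` and the same agent setup as above:

```python copy
import pytest

from agent_framework.microsoft import CopilotStudioAgent  # assumed import path

@pytest.mark.asyncio
async def test_agent_replies_to_greeting():
    agent = CopilotStudioAgent()
    response = await agent.run("Hello!")
    # Loose assertion — tighten to your bot's actual greeting
    assert response.text
```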

### Multi-Agent Orchestration
Route customer queries between Copilot Studio (conversational UI) and Microsoft Foundry agents (specialized data analysis). Extend with planner-executor patterns or intent-based routing.

### Batch Processing
Process CSV files or queue messages programmatically — customer inquiries, support tickets, data migrations.

## Troubleshooting

### Authentication Fails

```bash
rm -f .msal_token_cache.json
uv run main.py
```

### Can't Connect to Environment

- Verify your environment ID format: `Default-<guid>` (capital D, with hyphen)
- Make sure your agent is published in Copilot Studio
- Check you have access in [Power Platform Admin Center](https://admin.powerplatform.microsoft.com/)

### Permission Errors

```bash
# Create the Power Platform service principal if missing
az ad sp create --id 8578e004-a5c6-46e7-913e-12f58912df43

# Re-grant consent
az ad app permission admin-consent --id <YOUR_APP_ID>
```

## What I Love

Zero custom auth logic, identical API surface across agent types, production-ready patterns out of the box, and fast iteration with automated setup. The Microsoft ecosystem bridges are built, tested, and ready.

## Let's Build Together

Using Copilot Studio in production? Building hybrid agent systems? Share your experience on [iLoveAgents](https://iloveagents.com) or connect on [LinkedIn](https://www.linkedin.com/in/christian-glessner/).

## Resources

**Demo Repository:**
- [agent-framework-copilot-studio on GitHub](https://github.com/iLoveAgents/agent-framework-copilot-studio)

**Microsoft Documentation:**
- [Microsoft Agent Framework Overview](https://learn.microsoft.com/en-us/agent-framework/overview/agent-framework-overview)
- [Copilot Studio Documentation](https://learn.microsoft.com/microsoft-copilot-studio/)
- [Power Platform Admin Center](https://admin.powerplatform.microsoft.com/)
- [Agent Framework Python SDK](https://pypi.org/project/agent-framework/)

**Related Posts:**
- [Getting Started with Microsoft Agent Framework](/posts/getting-started-microsoft-agent-framework-python-dotnet)
]]></content:encoded>
      <link>https://iloveagents.com/agent-framework-copilot-studio-integration</link>
      <guid>https://iloveagents.com/agent-framework-copilot-studio-integration</guid>
      <pubDate>Thu, 27 Nov 2025 00:00:00 GMT</pubDate>
      <author>Christian Glessner</author>
      <category>agent framework</category><category>copilot studio</category><category>microsoft foundry</category><category>python</category>
    </item>
    <item>
      <title>Building GraphRAG Contract Agents: Combining Knowledge Graphs with Microsoft Agent Framework</title>
      <description>Hands-on journey building contract analysis agents using GraphRAG, Neo4j, and Microsoft Agent Framework on Microsoft Foundry.</description>
      <content:encoded><![CDATA[
If you're analyzing real documents at scale—contracts, government notices, quality manuals, or industry standards—you need precision beyond fuzzy search. Here's a practical approach with GraphRAG powered by my Dream Stack: **Microsoft Agent Framework + Microsoft Foundry + Neo4j**. In my testing, this combination yields noticeably higher-quality answers because the graph preserves relationships that vector search alone misses.

## Why GraphRAG for Contract Analysis?

Here's the problem: traditional RAG hits semantic search on a vector database, retrieves some chunks, and hands them to an LLM. It works, but it's flat. You lose relationships, structure, and context.

**GraphRAG changes this.** By storing contract data in a Neo4j graph database, we preserve the relationships:

- Which organizations are parties to which contracts?
- What clauses exist across different agreements?
- How are governing laws connected to incorporation countries?

Then we combine three retrieval strategies:

1. **Structured queries** - Direct Cypher queries for precise data
2. **Vector search** - Semantic similarity on embedded text excerpts
3. **Text-to-Cypher** - Natural language → graph queries

The agent decides which tool to use. That's the orchestration layer doing real work.

## The Dream Stack: Microsoft Agent Framework + Microsoft Foundry + Neo4j

I wanted to keep this demo focused on the agent pattern, so I used:

- **Microsoft Agent Framework** - My go-to for building production-ready agents with Azure OpenAI Responses API
- **Microsoft Foundry** - GPT-5 for reasoning and embeddings (Responses API + Embeddings) powering orchestration
- **Neo4j** - Graph database with native vector search (I used Aura Free, works great)

The architecture is simple:

```mermaid
flowchart LR
  A[PDFs] --> B[LLM Extract JSON] --> C[Build Knowledge Graph] --> E[Neo4j]
  F[Agent] --> G[GraphRAG Tool] --> E
```

<div className="h-2"></div>

## Building the Agent: Hands-On Walkthrough

### 1. Define Your Graph Schema

First, design your knowledge graph. For contracts, I modeled:

**Nodes:**

- `Agreement` - Contract documents
- `Organization` - Companies involved
- `Country` - Jurisdictions
- `ContractClause` - Individual clauses
- `ClauseType` - Categories (Price Restrictions, Insurance, etc.)
- `Excerpt` - Text chunks with embeddings

**Relationships:**

- `IS_PARTY_TO` - Organizations → Agreements
- `HAS_CLAUSE` - Agreements → Clauses
- `HAS_EXCERPT` - Clauses → Text excerpts
- `GOVERNED_BY_LAW`, `INCORPORATION_IN` - Jurisdictional links

![Neo4j Graph Schema](/images/content/agent-framework-graphrag-neo4j/graph-schema.png)

### 2. Create Agent Tools with GraphRAG Queries

Here's where it gets interesting. I wrapped Neo4j queries as agent function tools:

```python copy filename="contract_tools.py"
from contract_graphrag.contract_service import ContractSearchService

class ContractTools:
    """Agent function tools for contract review."""

    def __init__(self):
        self.service = ContractSearchService()

    def get_contract(self, contract_id: int) -> dict:
        """Retrieve full details for a specific contract."""
        return self.service.get_contract_by_id(contract_id)

    def get_contracts_by_organization(self, org_name: str) -> list[dict]:
        """Find all contracts where an organization is a party."""
        return self.service.get_contracts_by_organization(org_name)

    def search_contracts_by_similarity(self, query: str, limit: int = 5) -> list[dict]:
        """Semantic search across contract excerpts using vector embeddings."""
        return self.service.semantic_search(query, limit)

    def query_with_natural_language(self, question: str) -> dict:
        """Convert natural language to Cypher query and execute."""
        return self.service.text_to_cypher(question)
```

Each tool maps to a different retrieval strategy. The agent decides which one fits the user's question.

### 3. Wire It to Microsoft Agent Framework

Creating the agent is straightforward with Azure OpenAI Responses API:

```python copy filename="agent_config.py"
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import DefaultAzureCredential

credential = DefaultAzureCredential()

agent = AzureOpenAIResponsesClient(credential=credential).create_agent(
    instructions="""You are a contract review assistant with access to a knowledge
    graph of agreements, organizations, clauses, and jurisdictions. Use your tools
    to answer questions accurately. Prefer structured queries when IDs or names are
    known, semantic search for conceptual questions, and natural language queries
    for complex relationships.""",
    name="ContractReviewAgent",
    tools=[
        tools.get_contract,
        tools.get_contracts_by_organization,
        tools.search_contracts_by_similarity,
        tools.query_with_natural_language,
    ],
)
```

The agent instructions guide tool selection. I kept them concise—no need for prompt engineering gymnastics here.

### 4. Query the Agent

Here's what happens when you ask: **"Find contracts for AT&T with Price Restrictions but no Insurance clauses"**

1. Agent calls `get_contracts_by_organization("AT&T")`
2. For each contract, checks clause types
3. Filters results
4. Returns structured answer

Or: **"Show me contracts mentioning product delivery requirements"**

1. Agent calls `search_contracts_by_similarity("product delivery requirements")`
2. Vector search finds semantically similar excerpts
3. Returns matching contracts with highlighted excerpts

The orchestration is automatic. The agent picks the right tool.
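
Driving that from code is a single call. A sketch, reusing `agent` from `agent_config.py` above (the response shape is Agent Framework's; check the repo for exact usage):

```python copy
import asyncio

from agent_config import agent  # the ContractReviewAgent built above

async def main() -> None:
    result = await agent.run(
        "Find contracts for AT&T with Price Restrictions but no Insurance clauses"
    )
    print(result.text)

asyncio.run(main())
```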

## What I Learned: Tradeoffs & Honest Reflections

**What works really well:**

- 🚀 **Graph relationships beat flat vectors** - Questions like "which organizations share governing law?" are trivial with Cypher
- 🤖 **Tool orchestration is powerful** - The agent switching between structured/semantic/natural language queries feels like real intelligence
- 💡 **DefaultAzureCredential is clean** - No credential juggling; `az login` just works
- 🛠️ **Neo4j Aura Free is perfect for demos** - Free tier, cloud-hosted, vector search included

**What's still challenging:**

- Text-to-Cypher needs guardrails - LLMs can generate invalid queries; I added retry logic and validation (one cheap check is sketched after this list)
- Graph design is upfront work - You need to model relationships carefully or queries fall apart
- Context window limits - I chunk excerpts to ~500 tokens; longer contracts need pagination
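
One cheap guardrail, sketched under the assumption that connection details live in config: ask Neo4j to `EXPLAIN` the generated Cypher first, so syntax errors fail fast without touching data.

```python copy
from neo4j import GraphDatabase
from neo4j.exceptions import Neo4jError

driver = GraphDatabase.driver(NEO4J_URI, auth=(NEO4J_USER, NEO4J_PASSWORD))

def is_valid_cypher(query: str) -> bool:
    """Ask Neo4j to plan the query without executing it."""
    try:
        with driver.session() as session:
            session.run("EXPLAIN " + query).consume()
        return True
    except Neo4jError:
        return False
```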

**Not production-ready yet:**

- No multi-turn conversation memory (agent resets each query)
- No guardrails for PII/sensitive clauses
- Limited error handling for malformed PDFs
- Neo4j connection pooling could be better

This is a demo to explore the pattern, not a product. If you're building something similar, expect to add auth, monitoring, and proper error handling.

## Try It Yourself

I've open-sourced the full project with MIT license:

<div className="not-prose my-4">
    <a
        href="https://github.com/iLoveAgents/agent-framework-graphrag-neo4j"
        target="_blank"
        rel="noopener noreferrer"
        aria-label="Open repository on GitHub"
        className="inline-flex items-center gap-2 rounded-full border border-base-300 bg-base-100/70 px-4 py-2 text-sm font-medium hover:bg-base-200 transition-colors"
    >
        <svg aria-hidden="true" viewBox="0 0 24 24" fill="currentColor" className="h-4 w-4 opacity-80"><path fill-rule="evenodd" d="M12 2C6.477 2 2 6.522 2 12.053c0 4.438 2.865 8.203 6.839 9.528.5.095.682-.218.682-.483 0-.237-.01-1.022-.014-1.852-2.782.605-3.369-1.186-3.369-1.186-.455-1.164-1.11-1.474-1.11-1.474-.908-.623.069-.61.069-.61 1.004.072 1.532 1.034 1.532 1.034.892 1.538 2.341 1.094 2.91.836.091-.655.35-1.094.636-1.346-2.22-.255-4.555-1.115-4.555-4.959 0-1.095.39-1.99 1.029-2.691-.103-.253-.446-1.272.098-2.65 0 0 .84-.27 2.75 1.027A9.564 9.564 0 0 1 12 6.844c.85.004 1.705.115 2.504.337 1.909-1.296 2.748-1.027 2.748-1.027.546 1.379.203 2.398.1 2.651.64.701 1.027 1.596 1.027 2.69 0 3.853-2.339 4.701-4.566 4.95.359.31.678.92.678 1.855 0 1.338-.012 2.419-.012 2.747 0 .268.18.582.688.482A10.06 10.06 0 0 0 22 12.053C22 6.522 17.523 2 12 2Z" clip-rule="evenodd"/></svg>
        <span>Open GitHub Repo</span>
    </a>
</div>



Quick start (requires Python 3.12+, Azure OpenAI, and Neo4j):

```bash copy
# Clone and setup
git clone https://github.com/iLoveAgents/agent-framework-graphrag-neo4j.git
cd agent-framework-graphrag-neo4j
uv sync

# Configure environment
cp .env.example .env
# Edit .env with your Azure OpenAI and Neo4j credentials

# Authenticate with Azure
az login

# Build the graph (sample contracts included)
uv run 02_build_graph.py

# Run the agent
uv run 03_agent.py --demo
```

The demo runs six queries to showcase different retrieval patterns. Or launch the browser UI:

```bash copy
uv run devui.py
```

Opens at `http://127.0.0.1:8080` with a visual trace viewer—super helpful for debugging tool calls.

## Where GraphRAG Fits in Enterprise Workflows

This isn't just a demo pattern—I see real enterprise use cases:

**Legal/Compliance:**

- Contract risk analysis across thousands of agreements
- Clause comparison and anomaly detection
- Regulatory compliance checks with graph traversals

**Knowledge Management:**

- Technical documentation with linked concepts
- Customer support with product relationship graphs
- Internal policy networks

**Financial Services:**

- Investment portfolio relationships
- Risk correlation analysis
- Regulatory reporting with structured queries

The key is: **when relationships matter as much as content, GraphRAG wins.**

 

## Let's Build Together 🤝

What are you building with GraphRAG? Have you combined knowledge graphs with agent orchestration?

I'd love to hear your stories:

- What retrieval strategies work best for your domain?
- How do you handle graph schema evolution?
- Any gotchas with text-to-Cypher generation?

Drop a comment, open an issue on GitHub, or reach out on LinkedIn. We're building in public and sharing what we learn.

**Let's make agent orchestration practical for everyone.** 🚀


]]></content:encoded>
      <link>https://iloveagents.com/agent-framework-graphrag-neo4j</link>
      <guid>https://iloveagents.com/agent-framework-graphrag-neo4j</guid>
      <pubDate>Fri, 07 Nov 2025 00:00:00 GMT</pubDate>
      <author>Christian Glessner</author>
      <category>microsoft foundry</category><category>agent framework</category><category>graphrag</category><category>neo4j</category><category>python</category>
    </item>
    <item>
      <title>Publish Azure Logic Apps as an MCP Server — turn 1,400+ connectors into AI tools</title>
      <description>Turn Azure Logic Apps workflows into MCP tools in minutes. Let an AI agent do the setup for you.</description>
      <content:encoded><![CDATA[
I've been experimenting with a simple idea that unlocks huge potential for enterprise automation: turn Azure Logic Apps into MCP servers so any AI agent can call them like native tools. In one move, your agents get access to 1,400+ prebuilt connectors across Microsoft and third-party services — with enterprise-grade auth, monitoring, and governance built in.

Here's the twist I like most: let an agent do the setup for you. No scripts. No manual portal clicking. You give the agent a resource URL and it provisions everything — enables MCP endpoints, creates workflows, tests them, and hands you back a working server.

Why this matters now: agents don't need to "learn" every API. They can orchestrate business-ready workflows that already exist in Logic Apps — approvals, CRM updates, IT automations, finance ops — all behind a clean, typed tool surface.

> **Official Documentation:** [Set up a Model Context Protocol server in Azure Logic Apps](https://learn.microsoft.com/en-us/azure/logic-apps/set-up-model-context-protocol-server-standard)

## What you need

- Azure subscription with an active account
- Azure Logic App (Standard) with Workflow Service Plan or App Service Environment v3
- Azure CLI installed
- An AI agent (Claude, VS Code with Copilot, Agent Framework, etc.)

## Repository

This guide uses the [azure-logic-apps-mcp](https://github.com/iLoveAgents/azure-logic-apps-mcp) repository. It contains:
- [AGENTS.md](https://github.com/iLoveAgents/azure-logic-apps-mcp/blob/main/AGENTS.md) — Agent instructions for setup
- [README.md](https://github.com/iLoveAgents/azure-logic-apps-mcp/blob/main/README.md) — Full documentation

## Quick start: Let an agent set it up

Copy this prompt and send it to your AI agent (Claude Desktop, VS Code with Copilot, etc.):

```text copy filename="setup-prompt.txt"
I want to set up my Azure Logic App as an MCP server. Please read the guide at:
https://github.com/iLoveAgents/azure-logic-apps-mcp/blob/main/AGENTS.md

My Azure Logic App resource URL:
/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Web/sites/{logic-app-name}

Please:
1. Enable MCP endpoints on my Logic App
2. Create a "hello" tool that accepts an optional "name" parameter and returns JSON: { "greeting": "Hello, <name>!" }
3. Use anonymous authentication (for testing)
4. Verify the MCP server works
5. Give me the MCP server URL and show me how to connect from VS Code
```

**Tip:** Get your Logic App resource URL from the Azure Portal → Open your Logic App → Click "JSON View" (top-right) → Copy the "Resource ID" field

The agent will set everything up and give you a URL like `https://my-logic-app.azurewebsites.net/api/mcp` to connect with.

**Connect from VS Code:**
- Command Palette (Cmd/Ctrl+Shift+P) → "MCP: Add Server" → Paste the URL

**Connect from Claude Desktop:**
Add to your `claude_desktop_config.json`:
```json copy filename="claude_desktop_config.json"
{
  "mcpServers": {
    "logic-apps": {
      "url": "https://my-logic-app.azurewebsites.net/api/mcp",
      "transport": "streamable-http"
    }
  }
}
```

## Test your "hello" tool

Once connected, try this prompt:

**VS Code + Copilot:**
```text copy
Call the MCP tool "hello" with { "name": "Christian" } and show me the response.
```

**Claude Desktop:**
```text copy
Use the logic-apps MCP server and call the "hello" tool with { "name": "World" }. Return the JSON response.
```

Expected response (for the VS Code prompt):
```json
{
  "greeting": "Hello, Christian!"
}
```

## Use OAuth for production

The real power comes when you enable OAuth 2.0 with Azure's "Easy Auth" feature.

**Prompt to enable OAuth:**
```text copy filename="enable-oauth-prompt.txt"
I want to enable OAuth 2.0 authentication for my Azure Logic App MCP server.

Please read the guide at:
https://github.com/iLoveAgents/azure-logic-apps-mcp/blob/main/AGENTS.md

My Azure Logic App resource URL:
/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Web/sites/{logic-app-name}

Please:
1. Create a Microsoft Entra ID app registration for the MCP server
2. Configure Easy Auth (OAuth 2.0) on my Logic App
3. Update the MCP server settings in host.json
4. Create a "hello" tool that returns the authenticated user's name from the token
5. Give me the MCP server URL and show me how to connect from VS Code

The "hello" tool should return JSON: { "greeting": "Hello, <authenticated-user>!" }
```

When you connect from VS Code, it automatically handles the OAuth flow. The user authenticates once, and that identity is used for all MCP tool calls.

**Important note about Easy Auth and connectors:** Unfortunately, Logic Apps MCP with Easy Auth **cannot use the On-Behalf-Of (OBO) flow**. Easy Auth only provides `X-MS-CLIENT-PRINCIPAL-*` headers with user identity information but **does not supply the OAuth access token** to your workflows. This means:

- You can identify the authenticated user (name, ID, email)
- You cannot automatically pass the user's token to connectors (Teams, SharePoint, etc.)
- Connectors require their own authentication connections (service principals or user connections)

For workflows that need to call Microsoft Graph or other APIs on behalf of the authenticated user, you'll need to configure connector authentication separately.
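
To make the limitation concrete, here's a minimal sketch of what those headers actually carry. This is plain Python rather than Logic Apps code, and the header values are hypothetical; it only illustrates that you can recover the user's identity claims, never an access token:

```python
import base64
import json

def parse_client_principal(headers: dict) -> dict:
    """Decode Easy Auth identity headers (sketch).

    X-MS-CLIENT-PRINCIPAL is base64-encoded JSON containing the claims;
    the *-NAME, *-ID, and *-IDP headers carry the display name, object id,
    and identity provider. None of them contain the OAuth access token.
    """
    encoded = headers.get("X-MS-CLIENT-PRINCIPAL")
    principal = json.loads(base64.b64decode(encoded)) if encoded else {}
    return {
        "name": headers.get("X-MS-CLIENT-PRINCIPAL-NAME"),
        "id": headers.get("X-MS-CLIENT-PRINCIPAL-ID"),
        "idp": headers.get("X-MS-CLIENT-PRINCIPAL-IDP"),
        "claims": principal.get("claims", []),
    }

# Hypothetical values, roughly what Easy Auth injects:
sample = {
    "X-MS-CLIENT-PRINCIPAL-NAME": "christian@contoso.com",
    "X-MS-CLIENT-PRINCIPAL-ID": "00000000-0000-0000-0000-000000000000",
    "X-MS-CLIENT-PRINCIPAL-IDP": "aad",
    "X-MS-CLIENT-PRINCIPAL": base64.b64encode(
        json.dumps({"claims": [{"typ": "name", "val": "Christian"}]}).encode()
    ).decode(),
}
print(parse_client_principal(sample))
```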

## What you can build

With 1,400+ connectors, you can create tools for:

**Automation:**
- Approvals (Teams, Outlook, ServiceNow)
- Document processing (SharePoint, OneDrive)
- Notifications (Teams, Slack, SMS)

**Integration:**
- CRM updates (Salesforce, Dynamics 365)
- Database operations (SQL, Cosmos DB, Dataverse)
- File operations (SharePoint, Azure Storage, SFTP)

**Business processes:**
- Invoice processing
- IT ticket creation
- Customer onboarding
- Report generation

Browse the full catalog: [Microsoft Connectors Reference](https://learn.microsoft.com/connectors/connector-reference/connector-reference-logicapps-connectors)

## How it works

When you enable MCP on a Logic App:

1. **MCP endpoints** are exposed at `/api/mcp` using streamable HTTP or Server-Sent Events (SSE)
2. **Each stateless workflow** becomes an MCP "tool" that agents can discover and call
3. **HTTP trigger schema** defines the tool's input parameters (automatically discovered by MCP clients)
4. **HTTP response** becomes the tool's output (must be JSON)
5. **Easy Auth** (when enabled) validates OAuth tokens and injects user identity headers

**Configuration:**
- `host.json` enables the MCP server with extension bundle 4.33.0+
- Stateless workflows with HTTP triggers are automatically exposed as tools
- Tool names come from the workflow folder names (use lowercase with hyphens)

**No dependencies. No custom scripts. Just Azure Logic Apps + MCP.**
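
If you're curious what's actually on the wire, here's a rough sketch of that tool surface as plain JSON-RPC 2.0 over HTTP. It deliberately skips the `initialize` handshake and session headers a real MCP client performs (use a proper MCP client library in production), and the server URL is the hypothetical one from the quick start:

```python
import requests

# Hypothetical URL from the agent-driven setup above.
MCP_URL = "https://my-logic-app.azurewebsites.net/api/mcp"
HEADERS = {"Accept": "application/json, text/event-stream"}

def rpc(method: str, params: dict | None = None, req_id: int = 1) -> dict:
    """POST one JSON-RPC 2.0 request to the MCP endpoint and return the reply."""
    body = {"jsonrpc": "2.0", "id": req_id, "method": method, "params": params or {}}
    resp = requests.post(MCP_URL, json=body, headers=HEADERS, timeout=30)
    resp.raise_for_status()
    return resp.json()

# Discover the tools -- one per stateless workflow with an HTTP trigger.
tools = rpc("tools/list")
print([tool["name"] for tool in tools["result"]["tools"]])

# Call the "hello" workflow as a tool.
result = rpc("tools/call", {"name": "hello", "arguments": {"name": "World"}}, req_id=2)
print(result["result"])
```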

## What works (and what doesn't)

After working with this setup, here's what I've learned:

**What I love:**
- Agents can access enterprise workflows without learning every API
- VS Code auto-discovers OAuth settings — zero manual config needed
- 1,400+ connectors available immediately
- Agent-driven setup means less time in the Azure Portal

**Current limitations:**
- Easy Auth doesn't pass OAuth tokens to connectors (no On-Behalf-Of flow)
- Each connector still needs its own authentication configuration
- You get user identity headers (`X-MS-CLIENT-PRINCIPAL-*`) but not the access token
- Best suited for workflows with pre-configured service principals

**Production-ready for:**
- Workflows with service principal authentication
- User identity verification (knowing who's calling your tools)
- Anonymous dev/test environments
- Orchestrating existing enterprise integrations

## Resources

**Official Microsoft Documentation:**
- [Set up a Model Context Protocol server in Azure Logic Apps](https://learn.microsoft.com/en-us/azure/logic-apps/set-up-model-context-protocol-server-standard)
- [Logic Apps overview](https://learn.microsoft.com/azure/logic-apps/logic-apps-overview)
- [Microsoft Connectors Reference](https://learn.microsoft.com/connectors/connector-reference/connector-reference-logicapps-connectors)

**Agent-first setup (this approach):**
- Repository: [github.com/iLoveAgents/azure-logic-apps-mcp](https://github.com/iLoveAgents/azure-logic-apps-mcp)
- Agent guide: [AGENTS.md](https://github.com/iLoveAgents/azure-logic-apps-mcp/blob/main/AGENTS.md)
- Full docs: [README.md](https://github.com/iLoveAgents/azure-logic-apps-mcp/blob/main/README.md)

---

What are you building with Logic Apps and MCP? I'd love to hear about your agent workflows — share your experience on [iLoveAgents](https://iloveagents.com) or connect with me on LinkedIn.

]]></content:encoded>
      <link>https://iloveagents.com/azure-logic-apps-mcp</link>
      <guid>https://iloveagents.com/azure-logic-apps-mcp</guid>
      <pubDate>Sat, 18 Oct 2025 00:00:00 GMT</pubDate>
      <author>Christian Glessner</author>
      <category>mcp</category><category>logic apps</category>
    </item>
    <item>
      <title>iLoveAgents is Live: Launch Week Recap + VS Code Theme</title>
      <description>Launch-week recap: my Microsoft Agent Framework quickstart (Python &amp; .NET) and a new VS Code theme for Microsoft Foundry devs.</description>
      <content:encoded><![CDATA[
import Image from "next/image";

Last week I quietly launched iLoveAgents — a developer-first blog about building real, production-ready AI agents with Microsoft Agent Framework and Azure AI Foundry.

If you missed it, start here:

- [Getting Started with Microsoft Agent Framework: Python & .NET Quickstart](/getting-started-microsoft-agent-framework-python-dotnet)

That post walks through a Hello World in Python and .NET using Azure OpenAI Responses, so you can copy-paste, run, and ship.


## New: Azure AI Foundry inspired VS Code Theme

I also published a tiny extra that makes day-to-day work nicer:

- [iLoveAgents — Azure AI Foundry Theme (VS Code)](https://marketplace.visualstudio.com/items?itemName=iLoveAgents.iloveagents-ai-foundry-theme) 

> With this theme, Agent Framework dev is **twice as fun**. (Highly scientific measurement. 😉)

### Install

```bash
code --install-extension iLoveAgents.iloveagents-ai-foundry-theme
# or in VS Code: Extensions → search "iLoveAgents AI Foundry theme" → Install
```

### Theme Preview

<div className="grid md:grid-cols-2 gap-6 my-10">
  <figure className="space-y-3">
    <Image
      src="/images/content/iloveagents-launch-week/theme-foundry-light.png"
      alt="iLoveAgents Azure AI Foundry VS Code theme in light mode"
      width={1400}
      height={900}
      className="rounded-xl border border-base-300 shadow-sm"
      priority
    />
    <figcaption className="text-sm text-base-content/60 text-center">
      Azure AI Foundry Light mode
    </figcaption>
  </figure>
  <figure className="space-y-3">
    <Image
      src="/images/content/iloveagents-launch-week/theme-foundry-dark.png"
      alt="iLoveAgents Azure AI Foundry VS Code theme in dark mode"
      width={1400}
      height={900}
      className="rounded-xl border border-base-300 shadow-sm"
      priority
    />
    <figcaption className="text-sm text-base-content/60 text-center">
      Azure AI Foundry Dark mode
    </figcaption>
  </figure>
</div>

<div className="grid md:grid-cols-2 gap-6 my-10">
  <figure className="space-y-3">
    <Image
      src="/images/content/iloveagents-launch-week/theme-light.png"
      alt="iLoveAgents VS Code theme in light mode"
      width={1400}
      height={900}
      className="rounded-xl border border-base-300 shadow-sm"
    />
    <figcaption className="text-sm text-base-content/60 text-center">
      Light mode preview
    </figcaption>
  </figure>
  <figure className="space-y-3">
    <Image
      src="/images/content/iloveagents-launch-week/theme-dark.png"
      alt="iLoveAgents VS Code theme in dark mode"
      width={1400}
      height={900}
      className="rounded-xl border border-base-300 shadow-sm"
    />
    <figcaption className="text-sm text-base-content/60 text-center">
      Dark mode preview
    </figcaption>
  </figure>
</div>
]]></content:encoded>
      <link>https://iloveagents.com/iloveagents-launch-week</link>
      <guid>https://iloveagents.com/iloveagents-launch-week</guid>
      <pubDate>Mon, 06 Oct 2025 00:00:00 GMT</pubDate>
      <author>Christian Glessner</author>
      <category>agent framework</category><category>microsoft foundry</category><category>tools</category><category>vscode</category>
    </item>
    <item>
      <title>Getting Started with Microsoft Agent Framework: Python &amp; .NET Quickstart</title>
      <description>Learn Microsoft Agent Framework with working Python and .NET examples. A private preview contributor&#039;s guide to building your first AI agent on Microsoft Foundry.</description>
      <content:encoded><![CDATA[
Today feels like a double launch: Microsoft Agent Framework is now in public preview, and iLoveAgents goes live with it.

A few months ago, I joined the private preview, shipped code, and kept thinking: do we really need another framework? A clean rewrite instead of evolving Semantic Kernel felt risky. But once I got hands-on, that skepticism turned into excitement. I filed bugs, gave feedback, and realized — this isn’t just another framework. It’s the one I’d been waiting for: clean abstractions, practical integrations, and the right balance between research and production.

What makes this launch exciting is how new and focused the codebase is. It hasn’t inherited years of legacy or over-engineered complexity.

Until now, Semantic Kernel and AutoGen moved in parallel: two amazing projects from different teams, each with its own mission. Semantic Kernel built the enterprise foundations: connectors, memory, reliability. AutoGen explored coordination, planning, and multi-agent collaboration. Now those paths have merged, and the teams behind both projects are building together on the new heart of Microsoft's agent ecosystem.

It’s one framework to rule them all — finally, built for both research and reality.

And what I love most is that it starts intentionally minimal — what’s there is purposeful, and what’s missing is opportunity we can shape together. It brings together everything I loved about AutoGen’s flexibility and Semantic Kernel’s production readiness into one coherent experience. Yes, it’s fresh. Yes, there are rough edges. But I’m excited to keep building with it—and iLoveAgents is where I’ll share what I learn, starting with a Python and .NET hello world.

## Why Microsoft Agent Framework Matters

Here’s what makes the framework stand out — and why it’s worth your time:

- **Unified SDK and runtime** for agents, workflows, and orchestration  
- **Deep Microsoft Foundry integration** for deployment, observability, and governance  
- **Seamless multi-model support** — connect agents to **Azure OpenAI**, the **OpenAI API**, **Anthropic Claude**, and other models hosted in **Microsoft Foundry** or external providers  
- **Built on open standards** like **OpenAPI**, **Model Context Protocol (MCP)**, and **Agent-to-Agent (A2A)** communication  
- **Production-ready features** such as telemetry, checkpoints, and human-in-the-loop flows built in from day one  

Together, these make Microsoft Agent Framework the path from intelligent-automation experiments to **enterprise-grade multi-agent systems** — the kind of agents customers actually want in production, not just in demos.

## Agent Framework Hello World: Your First Agent

Let's get concrete. I'll walk you through building a simple HaikuBot agent in both Python and .NET.

### Prerequisites

Before you start, make sure you have:

- Azure subscription with Azure OpenAI access
- Azure CLI installed and configured (`az --version` to check)
- Python 3.10+ **or** .NET 8.0+
- Basic familiarity with async programming

### One-Time Setup (Both Languages)

1. **Provision an Azure OpenAI deployment** in [Microsoft Foundry](https://ai.azure.com) and note your deployment name (e.g., `gpt-5`).

2. **Export environment variables** (run once in your shell of choice):

    ```bash
    export AZURE_OPENAI_ENDPOINT="https://<your-resource>.openai.azure.com/"
    export AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME="<your-deployment-name>"
    ```

    ```powershell
    setx AZURE_OPENAI_ENDPOINT "https://<your-resource>.openai.azure.com/"
    setx AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME "<your-deployment-name>"
    ```

3. **Authenticate with Azure CLI** so the SDKs can use your credentials:

    ```bash
    az login
    ```

### Python Version

Create a folder for your project (e.g., `haikubot-python`), add a file called `haikubot.py`, then install the preview SDK and run it:

```bash copy
pip install --pre agent-framework azure-identity azure-ai-agents
python haikubot.py
```

Here's the complete code:

```python copy filename="haikubot.py"
import asyncio
import os
from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential


async def main() -> None:
    endpoint = os.environ["AZURE_OPENAI_ENDPOINT"]
    deployment = os.environ["AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME"]

    agent = AzureOpenAIResponsesClient(
        credential=AzureCliCredential(),
        endpoint=endpoint,
        deployment_name=deployment,
    ).create_agent(
        name="HaikuBot",
        instructions="You are the upbeat launch poet for iLoveAgents.",
    )

    response = await agent.run("Write a haiku about Microsoft Agent Framework.")
    print(response)


if __name__ == "__main__":
    asyncio.run(main())
```

**Expected output:**
```
Agents unite, code flows free,
Framework guides the way,
Azure's bright new day.
```

### .NET Version

Create a console app, install the preview packages, and replace `Program.cs` with the code below:

```bash copy
dotnet new console -n HaikubotDotnet
cd HaikubotDotnet
dotnet add package Azure.AI.OpenAI --prerelease
dotnet add package Azure.Identity
dotnet add package Microsoft.Agents.AI.OpenAI --prerelease
dotnet run
```

Here's the code:

```csharp copy filename="Program.cs"
#pragma warning disable OPENAI001
using System;
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Agents.AI;

var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");
var deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME")
    ?? "gpt-5";

AIAgent agent = new AzureOpenAIClient(new Uri(endpoint), new AzureCliCredential())
    .GetOpenAIResponseClient(deploymentName)
    .CreateAIAgent(
        instructions: "You are the upbeat launch poet for iLoveAgents.",
        name: "HaikuBot");

Console.WriteLine(await agent.RunAsync("Write a haiku about Microsoft Agent Framework."));
```

> **Important:** Ensure you're on the latest Azure.AI.OpenAI beta (≥ 2.3.0) to use the Responses API. Earlier versions don't support it.

From here, you can extend the instructions, register tools, compose planner-executor patterns, or wire agents into larger workflows.
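
As a first taste of that, here's a sketch that extends the HaikuBot setup with a function tool. The `get_weather` function and its canned reply are invented for illustration, and the preview APIs may still shift, but the shape is the point: an annotated Python function passed via `tools=` becomes callable by the agent.

```python copy filename="weatherbot.py"
import asyncio
import os
from typing import Annotated

from agent_framework.azure import AzureOpenAIResponsesClient
from azure.identity import AzureCliCredential
from pydantic import Field


def get_weather(
    location: Annotated[str, Field(description="The city to look up.")],
) -> str:
    """Toy tool: returns a canned weather report."""
    return f"It's always sunny in {location}."


async def main() -> None:
    agent = AzureOpenAIResponsesClient(
        credential=AzureCliCredential(),
        endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        deployment_name=os.environ["AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME"],
    ).create_agent(
        name="WeatherBot",
        instructions="Answer weather questions using your tools.",
        tools=[get_weather],
    )
    print(await agent.run("What's the weather in Hamburg?"))


if __name__ == "__main__":
    asyncio.run(main())
```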

When I say the code is cleaner, I'm not just being vague. Here's what stands out:

- Less boilerplate — You don't need to wire up multiple services and factories
- Clearer abstractions — `AIAgent` + `.RunAsync()` hides complexity without sacrificing power
- Modular extension points — Attach tools, workflows, and middleware without modifying core code
- Built-in interoperability — Message passing and agent protocols are standardized, not reinvented

It feels like they took years of learnings from both Semantic Kernel and AutoGen and distilled them into something that just works.

## Tradeoffs & When to Wait

I'm excited about Microsoft Agent Framework, but I want to be honest about where it is right now.

**This is still preview software.** That means:

- Some enterprise scenarios and connectors aren't fully hardened yet
- APIs might change between preview releases
- Documentation is catching up—expect to read source code sometimes
- Breaking changes are possible (though the team is minimizing them)

**When to use Microsoft Agent Framework:**
- You're starting a new multi-agent project
- You want to be part of the ecosystem as it evolves
- You can handle preview-stage API changes
- You need tight Microsoft Foundry integration

**When to stick with Semantic Kernel (for now):**
- You have existing production applications
- You need absolute stability today
- You only need single-agent patterns
- Your project can't tolerate breaking changes

My take? If you're building something new and you're comfortable with preview software, jump in now. The sooner you start, the more you can influence the direction.

## What's Coming on iLoveAgents

This is why I started iLoveAgents: to track, test, and share what this new agentic era makes possible. Here's what I'm lining up next:

- Microsoft Agent Framework multi-agent orchestration patterns (planner + executor + reviewer) with real Microsoft Foundry telemetry
- Deployment runbooks for moving agents from local development to managed hosting on Microsoft Foundry
- Tooling deep dives: OpenAPI, Model Context Protocol servers, and responsible AI guardrails
- Playbooks for production readiness—observability, logging, and human-in-the-loop handoffs

## Let's Build Together

This is just the beginning—for Microsoft Agent Framework and for iLoveAgents. I'm building in public, and I want to hear from you. Share the enterprise AI agents or prototypes you're building so we can figure out this agentic era together.

**What agent scenarios are you exploring?** Are you building customer support bots? Research assistants? DevOps automation? I'd love to feature interesting use cases from the community in future posts.

**Got questions or feedback?** Drop a comment below or reach out on [LinkedIn](https://www.linkedin.com/in/christian-glessner/).

Let's figure out this agentic era together.

## Further Reading & Resources

- [Microsoft Agent Framework Overview (Microsoft Learn)](https://learn.microsoft.com/en-us/agent-framework/overview/agent-framework-overview)
- [Microsoft Agent Framework GitHub Repository](https://github.com/microsoft/agent-framework/)
- [Introducing Microsoft Agent Framework — Microsoft Foundry Blog](https://devblogs.microsoft.com/foundry/introducing-microsoft-agent-framework-the-open-source-engine-for-agentic-ai-apps/)
- [Learning Path: Develop AI Agents on Azure](https://learn.microsoft.com/en-us/training/paths/develop-ai-agents-on-azure/)
- [GitHub Issues — Microsoft Agent Framework](https://github.com/microsoft/agent-framework/issues)
]]></content:encoded>
      <link>https://iloveagents.com/getting-started-microsoft-agent-framework-python-dotnet</link>
      <guid>https://iloveagents.com/getting-started-microsoft-agent-framework-python-dotnet</guid>
      <pubDate>Wed, 01 Oct 2025 00:00:00 GMT</pubDate>
      <author>Christian Glessner</author>
      <category>agent framework</category><category>microsoft foundry</category><category>dotnet</category><category>python</category>
    </item>
  </channel>
</rss>