
Supervisor Agent for Multi-Agent Apps

PLUS: No-code AI agents and RAG, GitHub Copilot's Next Edit Suggestions

Today’s top AI Highlights:

  1. Build AI agents and RAG pipelines with as few as 4 lines of code

  2. Open-source AI memory engine that continuously learns and evolves

  3. GitHub Copilot's Next Edit Suggestions predicts where you'll edit next

  4. First general-purpose MoE embeddings model open-sourced

  5. The only PR bot that actually tests your code

& so much more!

Read time: 3 mins

AI Tutorials

When solving coding problems, developers often encounter them in different formats - whether as text descriptions, screenshots from documentation, or images from whiteboards. Having a tool that can understand these different formats and help generate optimal solutions can significantly speed up the development process.

In this tutorial, we'll build a powerful multimodal coding assistant that combines three specialized AI agents working together:

  1. Vision Agent (using Gemini 2.0 Pro): Handles image processing, extracting coding problems and requirements from uploaded screenshots or pictures

  2. Coding Agent (using o3-mini): Generates optimized code solutions with proper documentation and type hints

  3. Execution Agent (using o3-mini + E2B): Runs the generated code in a secure sandbox environment and provides execution results and error analysis

Users can submit problems either as text descriptions or images, and the appropriate agent takes charge based on the input type.

We share hands-on tutorials like this 2-3 times a week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills, subscribe now and be the first to access our latest tutorials.

Don’t forget to share this newsletter on your social channels and tag Unwind AI (X, LinkedIn, Threads, Facebook) to support us!

Latest Developments

Lyzr offers both low- and no-code solutions for building AI agents and RAG pipelines. With the Lyzr framework, you can create agents and multi-agent systems with minimal Python code, while the Agent Studio offers a no-code visual interface with pre-built templates. Both options come packed with ready-to-use agents for tasks like chat, knowledge search, and data analysis, along with customizable RAG pipelines that support various LLMs and vector stores.

Whether you need to deploy locally for data privacy or prefer cloud deployment for quick scaling, Lyzr handles the complexity of agent orchestration while giving you the flexibility to choose your infrastructure.

Key Highlights:

  1. Easy Agent Development - Build production-ready agents with minimal Python code. A complete chatbot with RAG capabilities can be deployed in just 4 lines of code: install Lyzr, set API key, import ChatBot module, and provision the bot. The framework supports extensive customization with your choice of LLMs, vector stores, and embedding models.

  2. Purpose-Built Multi-Agent System - Lyzr Automata, included in the framework, lets you create sophisticated automation pipelines where multiple agents work together. Each agent can be assigned specific roles and tasks, with built-in tools for external integrations like API calls. The system provides granular control over agent interactions while keeping the implementation straightforward.

  3. Local Deployment Options - Run everything in your own environment using Lyzr's deployable SDKs and private APIs. This ensures sensitive data never leaves your infrastructure while giving you full control over security protocols. The framework handles the heavy lifting of agent orchestration while keeping your data within your boundaries.

  4. No-Code Agent Studio - Build agents visually without writing code using Lyzr Agent Studio. Choose from 100+ pre-built templates, customize workflows through a drag-and-drop interface, and integrate with enterprise systems like Salesforce, SAP, and ServiceNow. The platform includes features for team collaboration, agent management, and automated testing.

  5. Developer Tooling - Access comprehensive debugging tools and monitoring through Lyzr's AI Management System (AIMS). Track agent performance, monitor event logs, and identify bottlenecks easily. The framework comes with pre-built agents for common use cases like chat, knowledge search, and data analysis, significantly reducing development time.
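The role-and-task pattern behind Lyzr Automata (Highlight 2) can be illustrated with a minimal stdlib sketch — the class names below are hypothetical, not Lyzr's actual API:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    role: str
    run: Callable[[str], str]  # the agent's behavior for one task

@dataclass
class Pipeline:
    agents: list[Agent] = field(default_factory=list)

    def execute(self, prompt: str) -> str:
        """Each agent transforms the previous agent's output in turn."""
        out = prompt
        for agent in self.agents:
            out = agent.run(out)
        return out

# Example: a two-step pipeline (summarize -> translate), with stub behaviors
# standing in for LLM calls.
summarizer = Agent("summarizer", lambda s: f"summary({s})")
translator = Agent("translator", lambda s: f"fr({s})")
result = Pipeline([summarizer, translator]).execute("report")
# result == "fr(summary(report))"
```

In a real Lyzr pipeline each `run` would be an LLM-backed agent with its own role prompt and tools; the orchestration shape is the same.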

Cognee is an open-source Python library that brings together knowledge graphs and RAG to build evolving semantic memory for AI agents and apps. While traditional RAG systems often struggle with complex dependencies and context chains, Cognee uses dynamic knowledge graphs to maintain relationships between different pieces of information, helping your AI systems develop a more complete understanding of the data they work with.

The library implements ECL (Extract, Cognify, Load) pipelines that let you interconnect and retrieve conversations, documents, and audio transcriptions while reducing hallucinations and development overhead. What sets Cognee apart is its ability to automatically evolve and update these knowledge graphs as new information comes in.

Key Highlights:

  1. Smart Context Management - Rather than relying solely on similarity-based retrieval, Cognee traces relationship chains across your data through its ECL pipelines. For example, when analyzing code repositories, it automatically maps function calls across files and provides complete visibility into how different components interact, helping you build more context-aware AI systems.

  2. Flexible Data Processing - The library works with multiple vector stores (LanceDB, Qdrant, PGVector) and graph databases (NetworkX, Neo4j), letting you choose the backend that fits your scale. Its pipeline architecture makes it easy to customize how data is processed and enriched, whether you're working with documents, code, audio transcripts or other content types.

  3. Developer-Friendly Implementation - Getting started requires minimal setup - install with pip, configure your preferred LLM provider (OpenAI, Anyscale, or local models via Ollama), and start building memory pipelines with a clean Python API. The modular design lets you swap components and extend functionality without touching core logic.

  4. Features & Integrations - Built-in support for data versioning, concurrent operations, and error handling makes Cognee suitable for production deployments. The evaluation framework helps measure and improve retrieval accuracy, while integrations with tools like Continue and Cline let you incorporate semantic memory directly into development workflows.

Quick Bites

GitHub Copilot introduces Next Edit Suggestions (NES), a preview feature that intelligently suggests edits to existing code based on your current changes and coding patterns. Going beyond traditional code completions, NES proactively identifies where subsequent edits might be needed—from simple typo fixes to complex refactoring—and allows you to navigate between suggestions using the Tab key. Now available in VS Code.

LangChain has unveiled LangGraph Supervisor, a new Python library for building hierarchical multi-agent systems where a central supervisor agent orchestrates specialized AI agents. This lightweight library, built on top of LangGraph, streamlines agent-to-agent communication through tool-based handoffs, all in just a few lines of code. It also comes with built-in support for streaming, memory management, and human-in-the-loop capabilities.
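The supervisor pattern itself can be sketched without LangGraph: a central router picks a specialist and hands the task off to it. The names below are hypothetical stand-ins, not the library's actual API, and the keyword heuristic stands in for the LLM call a real supervisor would make to choose a tool.

```python
from typing import Callable

# Specialist agents, registered as "tools" the supervisor can hand off to.
specialists: dict[str, Callable[[str], str]] = {
    "math": lambda q: f"math_agent({q})",
    "search": lambda q: f"search_agent({q})",
}

def supervisor(query: str) -> str:
    """Pick a specialist and hand off the full query.

    A real supervisor would ask an LLM which tool to call; here a
    simple heuristic (digits -> math) stands in for that decision."""
    name = "math" if any(ch.isdigit() for ch in query) else "search"
    return specialists[name](query)

# supervisor("2+2") hands off to the math agent;
# supervisor("latest AI news") hands off to the search agent.
```

LangGraph Supervisor adds the production pieces on top of this shape: streaming, persistent memory, and human-in-the-loop interrupts.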

Exa has launched "Exa Answer," a new web-grounded API service that combines custom embeddings and keyword search to generate contextual responses from web content. The service offers both direct answers for specific queries and detailed summaries with citations for open-ended questions using GPT-4o-mini as its base model. Priced at $5 per 1000 answers and OpenAI API compatible.

Nomic has released Embed Text V2, the first general-purpose Mixture-of-Experts (MoE) embedding model, supporting over 100 languages and achieving state-of-the-art performance on multilingual benchmarks. The model has 475M total parameters with only 305M active during inference. It is fully open-sourced under the Apache 2.0 license - including training data, weights, and code - and is available to download on Hugging Face.
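The 475M-total vs. 305M-active split is the defining MoE property: each input activates only its top-scoring experts, so inference touches a subset of the weights. A toy gating sketch in pure Python (not Nomic's implementation):

```python
import math

def softmax(xs: list[float]) -> list[float]:
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_forward(x, experts, gate_scores, k=2):
    """Run only the top-k experts for this input and mix their outputs
    by renormalized gate weight; the remaining experts stay idle."""
    weights = softmax(gate_scores)
    top = sorted(range(len(experts)), key=lambda i: weights[i], reverse=True)[:k]
    norm = sum(weights[i] for i in top)
    out = [0.0] * len(x)
    for i in top:
        y = experts[i](x)  # only these experts' parameters are used
        for j in range(len(x)):
            out[j] += (weights[i] / norm) * y[j]
    return out, top

# 4 experts exist, but only 2 run per input -> fewer active parameters.
experts = [lambda v, s=s: [s * t for t in v] for s in (1.0, 2.0, 3.0, 4.0)]
out, active = moe_forward([1.0, 1.0], experts, gate_scores=[0.1, 0.9, 0.3, 0.05])
```

Here a learned gating network would produce `gate_scores`; only the experts in `active` do any work, which is why active parameters (305M) can be well below the total (475M).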

  • OpenAI o1 and o3-mini now support both file & image uploads in ChatGPT

  • Rate limits for o3-mini-high have been raised 7x for Plus users, to up to 50 messages per day

Tools of the Trade

  1. RAG Engine: A managed service to connect external data sources (like websites, documents, and files) to your LLM applications through a simple API, handling all the RAG pipeline including ingestion, processing, and vector storage. Charges $4.99/month plus direct costs for vector database hosting and embeddings.

  2. SelfKit: Open source SaaS boilerplate that provides a complete stack for building self-hosted web applications, featuring built-in authentication, payments, analytics, and internationalization capabilities. Designed for those who want to self-host their projects using open-source tools, minimizing recurring costs and external dependencies.

  3. CodeCapy: The only PR bot that actually tests your code. It automatically detects new PRs, generates natural language end-to-end UI tests based on code changes, executes tests in isolated Scrapybara instances, posts test results to PR comments, and more.

  4. Awesome LLM Apps: Build awesome LLM apps with RAG, AI agents, and more to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos, and automate complex work.

Hot Takes

  1. It's borderline SHOCKING that Meta hasn't dropped a model yet.
    Thousands of open-source labs are actively working on and are dropping R1 variants.
    Wonder if the safety teams are holding them back. 🤔 ~
    Bindu Reddy

  2. The progress in AI is down to three basic resources; (1) people (experts), (2) data, and (3) infrastructure. Arguably, at this point the US is only ahead in (3). Also, at this point the Chinese open source models are ahead. Not only DeepSeek but also Qwen. This is a fact. ~
    Ion Stoica

That’s all for today! See you tomorrow with more such AI-filled content.

Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!

Unwind AI - X | LinkedIn | Threads | Facebook

PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉 
