
Auto-RAG for Intelligent RAG Pipelines

PLUS: Built-in code runner in ChatGPT Canvas, Build stateful AI agents


Today’s top AI Highlights:

  1. Auto-RAG allows LLMs to handle their own RAG retrieval decisions

  2. Open-source framework for building stateful LLM applications

  3. Build full-stack AI apps in minutes with AWS’s new Amplify AI Kit

  4. OpenAI upgrades ChatGPT Canvas with new features and a code runner

  5. Extract 500+ technologies from any repository - detect languages, SaaS, infrastructure, dependencies, etc.

& so much more!

Read time: 3 mins

AI Tutorials

In this tutorial, we have created a multi-agent AI legal team where each AI agent represents a different legal specialist role, from research and contract analysis to strategic planning, working together to provide thorough legal analysis and recommendations. We have used OpenAI's GPT-4o, Phidata, and Qdrant vector database.

This Streamlit application mirrors a full-service legal team where these specialized AI agents collaborate just like a human legal team - researching legal documents, analyzing contracts, and developing legal strategies - all working in concert to provide comprehensive legal insights.

The AI Agent Team:

  1. Legal Researcher - Equipped with DuckDuckGo search tool to find and cite relevant legal cases and precedents. Provides detailed research summaries with sources and references specific sections from uploaded documents.

  2. Contract Analyst - Specializes in thorough contract review, identifying key terms, obligations, and potential issues. References specific clauses from documents for detailed analysis.

  3. Legal Strategist - Focuses on developing comprehensive legal strategies, providing actionable recommendations while considering both risks and opportunities.

  4. Team Lead - Coordinates analysis between team members, ensures comprehensive responses, properly sourced recommendations, and references to specific document parts. Acts as an Agent Team coordinator for all three agents.

Our application provides five distinct types of analysis, each activating different combinations of our specialized agents.
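If you want a rough picture of how such a team is wired together, here is a minimal sketch using phidata's Agent API. The instructions are illustrative, and the Qdrant-backed knowledge base of uploaded documents used in the full tutorial is omitted for brevity; treat it as a starting point rather than the tutorial's exact code.

# Minimal sketch of the legal agent team with phidata (2.x-style API).
# The Qdrant knowledge base from the full tutorial is omitted here.
from phi.agent import Agent
from phi.model.openai import OpenAIChat
from phi.tools.duckduckgo import DuckDuckGo

legal_researcher = Agent(
    name="Legal Researcher",
    model=OpenAIChat(id="gpt-4o"),
    tools=[DuckDuckGo()],
    instructions=["Find and cite relevant legal cases and precedents, with sources."],
)

contract_analyst = Agent(
    name="Contract Analyst",
    model=OpenAIChat(id="gpt-4o"),
    instructions=["Review contracts and flag key terms, obligations, and potential issues."],
)

legal_strategist = Agent(
    name="Legal Strategist",
    model=OpenAIChat(id="gpt-4o"),
    instructions=["Develop actionable legal strategy, weighing both risks and opportunities."],
)

# The Team Lead coordinates the three specialists and assembles the final answer.
legal_team = Agent(
    name="Team Lead",
    model=OpenAIChat(id="gpt-4o"),
    team=[legal_researcher, contract_analyst, legal_strategist],
    instructions=["Coordinate the specialists and produce a comprehensive, properly sourced response."],
)

legal_team.print_response("Review the uploaded NDA for unusual termination clauses.")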

We share hands-on tutorials like this 2-3 times a week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.

Don’t forget to share this newsletter on your social channels and tag Unwind AI (X, LinkedIn, Threads, Facebook) to support us!

Latest Developments

Most RAG systems need multiple rounds of retrieval to gather sufficient knowledge when answering complex questions, and they rely on manually crafted rules or few-shot prompts to handle this iterative process. This not only adds computational overhead but also fails to use the inherent reasoning capabilities of LLMs.

Auto-RAG is a new iterative retrieval model that gives LLMs autonomous decision-making capabilities for retrieving and using external knowledge. LLMs independently determine when and what information to retrieve through multi-turn dialogues with the retriever. The framework fine-tunes open-source LLMs on synthesized reasoning-based instructions, teaching them to systematically plan retrievals and queries to gather sufficient knowledge to answer questions.
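Since the code isn't public yet (see Availability below), here is only a schematic sketch of the decision loop the paper describes. The model and retriever objects and their methods are hypothetical stand-ins for the fine-tuned LLM and the external retriever, not the authors' actual interface.

# Schematic sketch of Auto-RAG's iterative retrieval loop.
# `model` and `retriever` are hypothetical stand-ins; the real code is not yet released.
def auto_rag(question, model, retriever, max_iterations=5):
    context = []
    for _ in range(max_iterations):
        # Retrieval planning: the model reasons about what is still missing
        # and either emits a new query or decides it can answer.
        plan = model.plan(question, context)
        if plan.ready_to_answer:
            break
        # The model formulates its own query for this iteration.
        documents = retriever.search(plan.query)
        # Information extraction: keep only relevant evidence and reject the rest
        # instead of speculating (the paper's hallucination safeguard).
        context.extend(model.extract_relevant(documents, question))
    # Answer inference over the accumulated evidence.
    return model.answer(question, context)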

Key Highlights:

  1. Autonomous Reasoning Pipeline - The system incorporates 3 distinct reasoning types during retrieval: retrieval planning to identify needed information, information extraction to process retrieved documents, and answer inference for generating responses.

  2. Self-Guided Strategy - Auto-RAG dynamically adjusts the number of retrieval iterations based on question complexity and knowledge relevance. For simpler questions, it may terminate after one iteration, while complex multi-hop queries trigger additional retrievals, without any manually-defined rules or thresholds.

  3. Robust Architecture - The framework includes built-in safeguards against hallucination by enabling the model to reject irrelevant information and continue searching rather than generating speculative answers.

  4. Production-Ready Performance - Auto-RAG significantly outperforms existing iterative retrieval methods while requiring fewer retrievals per query. The system maintains high accuracy even with limited training data - as few as 500 examples are sufficient for acquiring basic retrieval capabilities.

  5. Availability - The GitHub repository has been created, and the code is expected to be open-sourced soon. Do keep an eye on it!

Discover 100 Game-Changing Side Hustles for 2024

In today's economy, relying on a single income stream isn't enough. Our expertly curated database gives you everything you need to launch your perfect side hustle.

  • Explore vetted opportunities requiring minimal startup costs

  • Get detailed breakdowns of required skills and time investment

  • Compare potential earnings across different industries

  • Access step-by-step launch guides for each opportunity

  • Find side hustles that match your current skills

Ready to transform your income?

Letta is an open-source LLM framework for building AI agents with persistent memory and complex reasoning capabilities. The framework enables AI agents to genuinely remember past interactions, learn from conversations, and make intelligent decisions based on accumulated context. You can build everything from personalized AI assistants that evolve with each user interaction to enterprise knowledge workers that interface with company data and tools. Letta is model-agnostic, and its white-box design gives you full visibility and control over how your agents think and remember.

Key Highlights:

  1. Full Control and Flexibility - Letta is model-agnostic; you are free to integrate any preferred LLM. The platform's white-box nature ensures full visibility into the internal mechanisms of LLMs and agents, allowing for deep customization and optimization.

  2. Intelligent Memory Management - Agents automatically extract and organize important information from conversations, building a rich knowledge graph over time. The framework handles memory prioritization, fact extraction, and context retrieval, letting you focus on building great user experiences.

  3. Multi-Step Reasoning - Create agents that break down complex tasks, maintain context across multiple interactions, and execute multi-stage workflows autonomously. The built-in "heartbeat" system enables agents to think through problems step-by-step and maintain coherent reasoning chains.

  4. Seamless Data Integration - You can connect agents to your existing data sources and tools through a flexible API system. Agents can securely access databases, documents, and internal systems while maintaining clear audit trails of all interactions and decisions.

  5. Building with Letta - Start building in minutes with the CLI and Agent Development Environment that allows you to create, edit, and monitor agents in your Letta server. The Python SDK also offers a seamless development experience, letting you interact with agents without manually calling REST APIs.
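As a quick taste of the Python SDK, here is a minimal sketch of creating and messaging a stateful agent. Exact method names, defaults, and response fields may differ across Letta versions, so treat it as illustrative rather than a verbatim quickstart.

# Minimal sketch using the letta Python SDK; details may vary by version.
from letta import create_client

client = create_client()  # connects to a local Letta server by default

# Create an agent with persistent memory; its state survives across sessions.
agent = client.create_agent(name="support_assistant")

# Messages go through the server, which manages memory and multi-step reasoning.
response = client.send_message(
    agent_id=agent.id,
    role="user",
    message="Remember that our production database is Postgres 16.",
)
print(response.messages)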

Quick Bites

OpenAI wrapped up the fourth day of its 12-day announcement series with new features for Canvas in ChatGPT. Firstly, Canvas is no longer exclusive to paid users; it's now accessible to everyone. A new button in ChatGPT's composer (the prompt area) provides access to tools like DALL·E, web search, and now Canvas. Here are the new features:

  • Canvas appears on the right side of the screen, giving a collaborative space to work with GPT-4o. You can make edits yourself within the Canvas, ask GPT-4o for in-line suggestions, and use suggested edits like adding emojis or adjusting the length of the draft.

  • When you input code, ChatGPT detects it and opens a code editor in Canvas. This editor features Python syntax highlighting and basic autocomplete.

  • Another very cool feature is the built-in code runner in Canvas. Canvas includes a WebAssembly Python emulator, which lets it load almost any Python library and run your code almost instantly.

  • You can now use Canvas with Custom GPTs. For existing GPTs, simply edit your custom instructions and add an instruction to use the Canvas.

AWS has released the Amplify AI Kit, now generally available, to build full-stack web apps with generative AI features like chat and summarization. Defined entirely in TypeScript, this serverless toolkit simplifies creating secure, real-time AI interactions and generative UIs without needing deep cloud or ML expertise.

Google Quantum AI has unveiled Willow, a new quantum chip that achieves exponential error reduction as it scales up (a breakthrough in quantum error correction) and completed a benchmark computation in under 5 minutes that would take today's fastest supercomputer 10 septillion (10²⁵) years. The team is now focused on the first "useful, beyond-classical" computation for real-world applications like discovering new medicines and designing more efficient batteries for electric cars.

Amazon has formed a new AGI SF Lab to build foundational AI agent capabilities, led by David Luan, co-founder of Adept. The lab will develop agents that can handle complex workflows using computers, browsers, and code interpreters, and focus on enabling AI to perform real-world actions and learn from human feedback.

Tools of the Trade

  1. Stack Analyser: Scans repositories to identify 500+ technologies used, including dependencies, languages, infrastructure, SaaS, and databases. It supports various languages and platforms, giving a comprehensive list of services and their relationships.

  2. Gentrace: A collaborative testing platform for teams to create and run evaluations for LLM-powered applications through both code and UI interfaces. It provides tools for automated testing, experimentation, and performance monitoring, all without much coding.

  3. MLE-Agent: A command-line tool for ML engineers to automate common ML development tasks and generate project reports. It integrates with services like arXiv and Papers with Code to assist with research, and provides automated ML baseline creation, debugging, Kaggle competition workflows, etc.

  4. Awesome LLM Apps: Build awesome LLM apps with RAG, AI agents, and more to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos, and automate complex work.

Hot Takes

  1. In my opinion we have already achieved AGI and it’s even more clear with O1. We have not achieved “better than any human at any task” but what we have is “better than most humans at most tasks”…. ~
    Vahid Kazemi


  2. honestly.. 99% of people outside of the twitter/x are extremely uncreative with ai
    "write me a paper"
    "summarize my notes"
    "write an email"
    show them even 10% of what an llm can do and their minds are blown
    also why "prompt engineering" will exist for a while ~
    Sully

That’s all for today! See you tomorrow with more such AI-filled content.

Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!

Unwind AI - X | LinkedIn | Threads | Facebook

PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉 
