Connect AI Agents to any Data Source
PLUS: MAX Mode for Claude 3.7 Sonnet in Cursor, NotebookLM generates Mind Maps
Today’s top AI Highlights:
NVIDIA’s open-source library to connect any AI Agent to any Data Source
Ingestion, memory, and retrieval for AI apps and agents
Google’s NotebookLM can now generate interactive mindmaps
Cursor launches MAX Mode for Claude 3.7 Sonnet
Awesome MCP servers to supercharge your workflows on Claude, Cursor, Windsurf
& so much more!
Read time: 3 mins
AI Tutorials
In this tutorial, we'll show you how to create your own powerful Deep Research Agent that performs in minutes what might take human researchers hours or even days—all without the hefty subscription fees. Using OpenAI's Agents SDK and Firecrawl, you'll build a multi-agent system that searches the web, extracts content, and synthesizes comprehensive reports through a clean Streamlit interface.
OpenAI's Agents SDK is a lightweight framework for building AI applications with specialized agents that work together. It provides primitives like agents, handoffs, and guardrails that make it easy to coordinate tasks between multiple AI assistants.
Firecrawl’s new deep-research endpoint enables our agent to autonomously explore the web, gather relevant information, and synthesize findings into comprehensive insights.
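The pipeline above can be sketched in plain Python. This is a conceptual stand-in, not the real Agents SDK or Firecrawl API: each function below is a hypothetical placeholder for one specialized agent, and the chain of calls mirrors the handoff pattern the tutorial builds.

```python
# Conceptual sketch of the deep-research multi-agent pipeline.
# All function names are illustrative placeholders, NOT real
# OpenAI Agents SDK or Firecrawl APIs.

def search_agent(query: str) -> list[dict]:
    """Stand-in for a web-search agent; returns stub results."""
    return [{"url": f"https://example.com/{i}", "title": f"Result {i} for {query}"}
            for i in range(3)]

def extract_agent(results: list[dict]) -> list[str]:
    """Stand-in for a content-extraction agent (Firecrawl's role)."""
    return [f"Content of {r['url']}" for r in results]

def synthesis_agent(passages: list[str], query: str) -> str:
    """Stand-in for the report-writing agent."""
    body = "\n".join(f"- {p}" for p in passages)
    return f"# Report: {query}\n{body}"

def deep_research(query: str) -> str:
    # Each agent hands its output to the next, mirroring the
    # handoff primitive in the Agents SDK.
    return synthesis_agent(extract_agent(search_agent(query)), query)

print(deep_research("AI agents"))
```

In the actual tutorial, each stand-in is replaced by an LLM-backed agent, and the Streamlit UI wraps the final `deep_research`-style entry point.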
We share hands-on tutorials like this 2-3 times a week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.
Latest Developments
NVIDIA has released AgentIQ, an open-source library that lets you connect AI agents to any data source or tool without being locked into a specific framework. It essentially acts as a universal adapter for your agents.
This open-source toolkit converts every agent, tool, and workflow into simple function calls that work together seamlessly, allowing you to build components once and reuse them across projects. The library provides comprehensive profiling and evaluation tools that help you identify bottlenecks and maintain accuracy while keeping your existing tech stack intact.
Key Highlights:
Framework-Agnostic - Works with any agentic framework including LangChain, LlamaIndex, CrewAI, and Microsoft Semantic Kernel without requiring migration to a new platform. This preserves your investment in existing code while adding new capabilities through a unified interface.
Function-Based - Treats every agent, tool, and workflow as a function call with type validation and schema-based input/output validation. This enables true composability where components can be combined and repurposed across different scenarios, significantly reducing duplication of effort.
Built-In Profiling - Tracks input/output tokens and execution timing at a granular level throughout your workflow, helping identify performance bottlenecks even in deeply nested operations. This data-driven approach pinpoints exactly where optimization efforts should focus.
MCP Integration - Fully compatible with Anthropic's MCP standard, allowing tools served by MCP Servers to be used as AgentIQ functions. This extends your toolkit to include standardized resources from the broader AI ecosystem.
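To make the function-based idea concrete, here is a minimal pure-Python sketch of a typed function registry. The names (`register`, `call`, `REGISTRY`) are hypothetical and do not reflect AgentIQ's actual API; the point is only to show how a single calling convention with input validation makes components composable.

```python
# Illustrative sketch of "every agent, tool, and workflow is a
# function call" with type validation. Hypothetical names only;
# consult NVIDIA's AgentIQ docs for the real interface.
from typing import Callable, get_type_hints

REGISTRY: dict[str, Callable] = {}

def register(fn: Callable) -> Callable:
    """Register an agent/tool under its name so anything can invoke it."""
    REGISTRY[fn.__name__] = fn
    return fn

def call(name: str, **kwargs):
    """Invoke a registered component, checking inputs against its type hints."""
    fn = REGISTRY[name]
    hints = get_type_hints(fn)
    for arg, value in kwargs.items():
        expected = hints.get(arg)
        if expected is not None and not isinstance(value, expected):
            raise TypeError(f"{arg} must be {expected.__name__}")
    return fn(**kwargs)

@register
def summarize(text: str) -> str:
    return text[:20] + "..."

@register
def pipeline(text: str) -> str:
    # Components compose freely because they share one calling convention.
    return call("summarize", text=text)

print(call("pipeline", text="A long document about AI agents and tools."))
```

Because every component goes through the same `call` interface, a profiler could wrap that one function to time and count tokens for the whole workflow, which is roughly the leverage AgentIQ's built-in profiling exploits.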
Try Artisan’s All-in-one Outbound Sales Platform & AI BDR
Ava automates your entire outbound demand generation so you can get leads delivered to your inbox on autopilot. She operates within the Artisan platform, which consolidates every tool you need for outbound:
300M+ High-Quality B2B Prospects, including E-Commerce and Local Business Leads
Automated Lead Enrichment With 10+ Data Sources
Full Email Deliverability Management
Multi-Channel Outreach Across Email & LinkedIn
Human-Level Personalization
GraphLit is a knowledge API that extracts, processes, and retrieves information from unstructured data for AI applications and agents. The platform automates the complex ETL (Extract, Transform, Load) pipeline for LLMs by handling content ingestion from multiple sources, including PDFs, audio, video, and web pages.
Rather than stitching multiple components together to build a pipeline, GraphLit provides complete RAG-as-a-Service with built-in vector embeddings, conversation history management, and entity extraction through a simple API interface available for Python, Node.js, and .NET.
Key Highlights:
Data Ingestion - Process virtually any unstructured data format from diverse sources including cloud storage, SharePoint, Slack, and email. It automatically extracts text and tables using OCR and LLMs, transcribes audio with Deepgram, and handles web scraping without requiring custom pipeline development.
Ready-made RAG - GraphLit handles all RAG infrastructure with text chunking, vector embeddings, and entity extraction to create a knowledge graph. There is no need to integrate and optimize projects like LangChain or LlamaIndex, or to manage a separate vector database.
Knowledge Graph Insights - GraphLit creates and automatically maintains a knowledge graph that links entities (people, organizations, places, etc.) found within the content. GraphRAG leverages the knowledge graph for enhanced context during the RAG process and connects information across diverse data sets.
Multimodal Support - Fully integrated with large multimodal models including OpenAI GPT-4o and Claude 3.5 Sonnet. The platform generates image descriptions with visual object detection and enables similarity search via image embeddings.
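For readers new to RAG, here is a toy sketch of the steps GraphLit manages for you: chunking, embedding, and similarity search. It uses a bag-of-words counter as a stand-in for real vector embeddings and is not GraphLit's API; it only illustrates what "RAG-as-a-Service" is abstracting away.

```python
# Toy RAG pipeline: chunk -> embed -> retrieve by cosine similarity.
# Bag-of-words stands in for real embeddings; NOT GraphLit's API.
import math
from collections import Counter

def chunk(text: str, size: int = 5) -> list[str]:
    """Split text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Crude 'embedding': word-frequency counts, punctuation stripped."""
    return Counter(text.lower().replace(".", "").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Return the k chunks most similar to the query."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

doc = ("GraphLit ingests PDFs audio and video. It builds a knowledge "
       "graph of entities. Retrieval uses vector embeddings.")
chunks = chunk(doc)
print(retrieve("knowledge graph entities", chunks))
```

A hosted service replaces each of these toy steps with production machinery (OCR and transcription for ingestion, real embedding models, a managed vector index), which is the integration work the paragraph above says you skip.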
Quick Bites
Google is rolling out two major new features in the Gemini app: Canvas and Audio Overview. Canvas provides an interactive workspace within Gemini for real-time collaborative document and code editing, including instant previews for web app prototypes and export to Google Docs.
Audio Overview, the feature first introduced in NotebookLM, is now available in the Gemini app as well. It transforms uploaded files (documents, slides, research reports) into podcast-style discussions between two AI hosts for enhanced comprehension.
That’s not all from Google: NotebookLM now generates interactive Mind Maps from uploaded documents to aid comprehension. You can even ask questions about individual components in a mind map to drill down further.
Codeium has released a new feature in its Windsurf IDE: "Windsurf Tab," a major improvement to its AI code prediction. Windsurf Tab now understands the entire development context, including your terminal commands, clipboard contents, and even your conversations with the AI assistant, to provide far more accurate code suggestions as you type. It is free and unlimited for all, with a faster version for paid subscribers.
Cursor has released MAX mode for Claude 3.7 Sonnet, optimized with maximum thinking capabilities, extensive tool usage, and full 200k token context window utilization, with compute-intensive context selection. The enhancement improves performance on complex intellectual tasks compared to the standard implementation. MAX mode is priced at $0.05 per request, with each tool call in Agent mode charged as a separate request.
Tools of the Trade
5 Awesome MCP servers to supercharge your workflows on Claude, Cursor, Windsurf, Cline, and other MCP clients.
E2B MCP Server: Enables MCP clients to execute code in a secure sandbox environment. It supports both JavaScript and Python. You can also install E2B for Claude Desktop automatically via Smithery.
Firecrawl MCP Server: Lets clients scrape, crawl, search, extract, and deep research websites. It can also convert websites into clean, LLM-ready text right in your editor.
Tavily MCP Server: Enables LLMs to perform web searches, retrieve direct answers to queries, and access recent news articles with AI-extracted relevant content, supporting features like customizable search depth, domain filtering, and result summarization.
Qdrant MCP Server: Allows MCP clients to store and retrieve information from a Qdrant vector database, effectively providing a semantic memory layer. It exposes qdrant-store and qdrant-find tools, configured via environment variables, for managing and querying vector embeddings.
Browser-Use MCP Server: Allows MCP clients to autonomously surf the web and control actions such as clicking and reading pages to complete your tasks. Instead of relying on redundant LLM API calls packaged with Browser Use, this server lets the client directly utilize Browser-Use without other APIs.
Awesome LLM Apps: Build awesome LLM apps with RAG, AI agents, and more to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos, and automate complex work.

Hot Takes
Blessed are the GPU poor, for they shall inherit the AGI. ~ Bojan Tunguz
I hear AI doomers are building bomb shelters in Silicon Valley to prepare for the intelligence explosion. ~ Pedro Domingos
That’s all for today! See you tomorrow with more such AI-filled content.
Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!
PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉