Build MCP AI Agents with NoCode
PLUS: Native graph-vector database, Use MCP servers with Claude on web
Today’s top AI Highlights:
Build AI agents with MCP without writing a single line of code
RAG and context-aware agents using one native graph and vector database
Google will integrate Veo video generation model with Gemini
Browser extension to use MCP servers in Claude.ai web
& so much more!
Read time: 3 mins
AI Tutorial
While tech bros are busy building autonomous AI agents to optimize your shopping or debug your code, let's focus on something that actually matters: healing your broken heart. Breakups are universally painful, and sometimes what you need isn't another self-help book or a friend who's tired of hearing about your ex—it's an emotionally intelligent AI system that won't judge you for stalking your ex's Instagram at 3 AM.
In this tutorial, we'll build a multi-agent AI breakup recovery application using Streamlit and Agno, powered by Gemini 2.0 Flash. This team of specialized AI agents will provide emotional support, help you craft cathartic unsent messages, plan recovery routines, and even give you some brutal honesty when needed.
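To make the "team of specialized agents" idea concrete, here is a toy router that dispatches a user message to one specialist. This is a conceptual sketch only; the agent names and keyword routing are illustrative, and it does not use Agno's or Gemini's actual APIs, which the full tutorial covers.

```python
# Conceptual sketch of a multi-agent team: each "agent" is a simple
# callable, and a router picks the right specialist for a message.
# All names here are hypothetical, not Agno's real interface.

AGENTS = {
    "therapist": lambda msg: f"Therapist: it's okay to feel this way about '{msg}'.",
    "closure": lambda msg: f"Closure: here's an unsent-message draft about '{msg}'.",
    "routine": lambda msg: f"Routine planner: a day-by-day recovery plan for '{msg}'.",
    "honesty": lambda msg: f"Brutal honesty: some hard truths about '{msg}'.",
}

KEYWORDS = {
    "sad": "therapist",
    "message": "closure",
    "plan": "routine",
    "truth": "honesty",
}

def route(message: str) -> str:
    """Pick a specialist agent by keyword; default to the therapist."""
    for word, agent in KEYWORDS.items():
        if word in message.lower():
            return AGENTS[agent](message)
    return AGENTS["therapist"](message)
```

In the real tutorial, each lambda would be a full Agno agent backed by Gemini 2.0 Flash, and routing would be handled by the framework rather than keyword matching.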
We share hands-on tutorials like this every week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.
Latest Developments
Automation platform n8n has just integrated native Model Context Protocol capabilities directly into its core. You can now use two new nodes: an MCP Server Trigger and an MCP Client Tool.
These nodes let n8n act as an MCP server, so MCP clients like the Claude desktop app or Cursor can access your workflows as tools, or connect your n8n agents to external MCP servers. You get fine-grained control, choosing exactly which tools are exposed by your n8n server or made available to your n8n agent from an external server. Both nodes also support standard Bearer and generic header authentication methods.
Key Highlights:
MCP Server Trigger — Exposes your n8n workflows as tools that Claude, Cursor, Windsurf, and other MCP clients can access. You control exactly which tools each AI can see - you can expose your calculator tool to all AIs but keep your email sender tool private, for example. The node generates a unique URL endpoint for each server you create.
MCP Client Tool — Lets your n8n AI agents connect to external MCP servers. You can precisely select which external tools your agent can access - give it access to all tools, only specific ones you select, or all except certain ones you want to block.
Built-in Security — Both nodes require proper authentication setup. The Server Trigger supports bearer tokens and custom headers to verify incoming connections, while the Client Tool lets you configure authentication credentials needed to connect to external MCP servers.
Multiple Server Support — Run several different MCP servers from a single n8n instance, each with its own authentication requirements and tool selections, giving you granular control over which workflows are exposed to different AI systems.
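Since both nodes gate access behind authentication, here is a minimal sketch of the kind of bearer-token check an MCP endpoint like the Server Trigger performs before exposing any tools. The function and token names are illustrative, not part of n8n's actual implementation.

```python
import hmac

# Hypothetical bearer-token check for an incoming MCP connection.
# EXPECTED_TOKEN and validate_auth_header are illustrative names only.
EXPECTED_TOKEN = "s3cret-token"

def validate_auth_header(headers: dict) -> bool:
    """Return True only if the Authorization header carries the expected bearer token."""
    auth = headers.get("Authorization", "")
    if not auth.startswith("Bearer "):
        return False
    token = auth[len("Bearer "):]
    # Constant-time comparison avoids leaking the token via timing
    return hmac.compare_digest(token, EXPECTED_TOKEN)
```

A request with the header `Authorization: Bearer s3cret-token` passes; anything else, including a missing header, is rejected before any tool is exposed.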
You’ve heard the hype. It’s time for results.
For all the buzz around agentic AI, most companies still aren't seeing results. But that's about to change. See real agentic workflows in action, hear success stories from our beta testers, and learn how to align your IT and business teams.
Developers building sophisticated RAG setups or context-aware agentic systems often need both a graph database for explicit relationships and a vector database for semantic similarity, which means integrating and managing two separate systems. That comes with significant complexity and infrastructure overhead.
HelixDB combines graph and vector databases into one unified system. It lets you run similarity searches and graph traversals in a single query. You can store graph nodes, edges, vector embeddings, and relationships between them all in one place, with queries that compile directly into API endpoints for blazing-fast performance without the overhead of sending query strings.
Key Highlights:
Graph + Vector Native Design - Stop maintaining separate databases for relationships and vectors. HelixDB handles both graph nodes/edges and vector embeddings in a single system, letting you create explicit connections between vectors and graph elements for more powerful RAG applications.
Compiled Query Endpoints - Write a query once and HelixDB builds it directly into the database as a dedicated API endpoint. This cuts down network latency, prevents injection attacks, and turns complex operations into instant microservices without the parsing overhead.
Developer-Focused Experience - Reduce complexity with type safety that catches errors at compile time, intuitive syntax that requires 70% less code than traditional solutions, and a simple setup process that gets you running quickly with just a few CLI commands.
High-Performance Architecture - Built in Rust with LMDB as the storage engine, HelixDB delivers millisecond query latency, fast startup times, and ACID compliance for data integrity, all optimized specifically for AI applications that need both relationship modeling and semantic search.
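To see why a unified store matters, here is a toy sketch of the core idea: nodes carry embeddings, edges carry relationships, and a single query mixes similarity search with graph traversal. Every name below is illustrative; this is not HelixDB's actual API or query language.

```python
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

class TinyHybridStore:
    """Toy combined graph + vector store, for illustration only."""

    def __init__(self):
        self.nodes = {}   # node_id -> {"label": str, "embedding": [float]}
        self.edges = {}   # node_id -> list of neighbor node_ids

    def add_node(self, node_id, label, embedding):
        self.nodes[node_id] = {"label": label, "embedding": embedding}
        self.edges.setdefault(node_id, [])

    def add_edge(self, src, dst):
        self.edges.setdefault(src, []).append(dst)

    def similar_then_traverse(self, query_vec, k=1):
        """Find the k most similar nodes, then pull in their graph neighbors too."""
        ranked = sorted(
            self.nodes,
            key=lambda nid: cosine(self.nodes[nid]["embedding"], query_vec),
            reverse=True,
        )
        hits = ranked[:k]
        context = set(hits)
        for nid in hits:                 # one hop of graph traversal
            context.update(self.edges.get(nid, []))
        return sorted(context)

store = TinyHybridStore()
store.add_node("doc1", "intro", [1.0, 0.0])
store.add_node("doc2", "details", [0.9, 0.1])
store.add_node("doc3", "unrelated", [0.0, 1.0])
store.add_edge("doc1", "doc2")
print(store.similar_then_traverse([1.0, 0.0], k=1))  # ['doc1', 'doc2']
```

The payoff of the combined query: the vector search finds `doc1`, and the graph hop pulls in the explicitly linked `doc2`, which pure similarity alone would have ranked lower. In a system like HelixDB this happens in one compiled query rather than two round trips to two databases.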
Quick Bites
AI models are painfully slow at inference, generating just one token at a time. AMD's new Parallel Draft (PARD) tackles this by transforming traditional draft models to predict multiple tokens in a single forward pass. PARD lets models predict multiple tokens simultaneously while maintaining output quality, and works seamlessly with frameworks like Transformers and vLLM. The results speak for themselves: Llama3 models run up to 3.3× faster, DeepSeek models 2.3× faster, and Qwen models achieve a whopping 4.87× speedup on AMD hardware.
Google DeepMind CEO Demis Hassabis has confirmed that the team will eventually combine Gemini AI models with Veo, their video-generating model, to enhance understanding of the physical world. Gemini currently natively generates text, images, and audio. Speaking on the Possible podcast, Hassabis explained that Gemini was designed to be multimodal any-to-any from the start, supporting the vision of a "universal digital assistant" that can genuinely help users in real-world contexts.
Tools of the Trade
MCP for Claude.ai: A browser extension that brings MCP capabilities to Claude.ai, letting you connect Claude to external tools and services directly from the browser, just like the Claude desktop app.
Canva Code: AI code generator that allows users to create interactive digital experiences using simple prompts without any code. You can build functional widgets, games, and other interactive elements that integrate directly with Canva's platform for use in websites, presentations, documents, etc.
OctAPI: A VS Code extension that visually organizes your API routes so you can explore, navigate, and manage endpoints without leaving the editor. It automatically detects routes from multiple frameworks and provides one-click navigation to the implementation code.
Awesome LLM Apps: Build awesome LLM apps with RAG, AI agents, and more to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos, and automate complex work.

Hot Takes
i simply cannot put myself in the head of your average CS major who’s used claude code and deep research, but doesn’t see the writing on the wall or feel the AGI even slightly ~
James Campbell

There is no consumer desire to “make apps”
“Vibe coding” is an exponential starting point for people who were already motivated to do things the old way if they had to
Those people want the same control as serious developers, not a “vibe coding app platform” ~
Joe Kennedy
That’s all for today! See you tomorrow with more such AI-filled content.
Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!
PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉