Turn Any LLM Into A Reasoning Agent
PLUS: Build AI Agents that use MCP, First open-source Vision Speech Model
Today’s top AI Highlights:
Python framework to build AI Agents that use MCP
Turn any LLM into a Reasoning Agent with just one line of code
AMD’s open-source app to run LLMs locally on Windows PCs with Ryzen AI processors
First open-source Vision Speech Model by French startup Kyutai
AI models in n8n workflows can now use MCP servers
& so much more!
Read time: 3 mins
AI Tutorials
In this tutorial, we'll show you how to create your own powerful Deep Research Agent that performs in minutes what might take human researchers hours or even days—all without the hefty subscription fees. Using OpenAI's Agents SDK and Firecrawl, you'll build a multi-agent system that searches the web, extracts content, and synthesizes comprehensive reports through a clean Streamlit interface.
OpenAI's Agents SDK is a lightweight framework for building AI applications with specialized agents that work together. It provides primitives like agents, handoffs, and guardrails that make it easy to coordinate tasks between multiple AI assistants.
Firecrawl’s new deep-research endpoint enables our agent to autonomously explore the web, gather relevant information, and synthesize findings into comprehensive insights.
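Here’s a rough sketch of how those pieces fit together: a Firecrawl-backed research tool exposed to an Agents SDK agent, which hands its findings off to an editor agent. The deep_research parameters and response shape shown here are assumptions; check the firecrawl-py docs for the exact API.

```python
import os
from agents import Agent, Runner, function_tool
from firecrawl import FirecrawlApp

firecrawl = FirecrawlApp(api_key=os.environ["FIRECRAWL_API_KEY"])

@function_tool
def deep_research(query: str) -> str:
    """Run Firecrawl's deep-research endpoint and return its final analysis."""
    # Parameters and response shape are assumptions; see the firecrawl-py docs.
    result = firecrawl.deep_research(query=query, max_depth=3, max_urls=10)
    return result["data"]["finalAnalysis"]

editor = Agent(
    name="Editor",
    instructions="Turn raw research findings into a clear, structured report.",
)

researcher = Agent(
    name="Researcher",
    instructions="Gather findings with the deep_research tool, then hand off to the Editor.",
    tools=[deep_research],
    handoffs=[editor],  # the SDK's handoff primitive passes control to the editor
)

result = Runner.run_sync(researcher, "State of open-source voice AI models")
print(result.final_output)
```

The full tutorial wraps this flow in a Streamlit interface, but the agent-tool-handoff core is all there is to it.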
We share hands-on tutorials like this 2-3 times a week, designed to help you stay ahead in the world of AI. If you're serious about leveling up your AI skills and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.
Latest Developments

You can now build reasoning agents with Agno's newly released Reasoning features. The framework lets you create AI agents that think before responding, with three simple approaches to choose from: using specialized reasoning models like OpenAI's o3-mini, adding a "think" tool to your existing models, or using Agno’s multi-agent reasoning system with just one line of code.
What's cool is you can even mix and match models - use DeepSeek-R1 for the heavy thinking and Claude for natural-sounding responses.
Key Highlights:
Pick Your Reasoning Style - You can directly use powerful reasoning models like OpenAI's o3-mini, give agents a "think" tool that provides a structured scratchpad for planning complex tool calls, or just set reasoning=True and let Agno's behind-the-scenes multi-agent reasoning system handle chain-of-thought. Choose the method that makes sense for your app.
Fix the "Smart But Inarticulate" Agent Problem - DeepSeek-R1 might be a reasoning beast, but its output isn't the prettiest. Agno lets you use DeepSeek-R1 (or similar) for the thinking, then pass the result to Claude Sonnet (or GPT-4.5) for a human-friendly response. You get the brains and the conversational skills.
Add Anthropic's "Think" Tool - Use structured "scratchpads" to make agents more predictable and reliable in their responses. If your agent isn't consistent in the logic it follows, the "think" tool helps: the agent reasons through the rules and logic behind a decision before acting, which produces more predictable behavior.
Chain-of-Thought Reasoning with One Line of Code - Implement built-in chain-of-thought just by setting the reasoning=True argument when initializing the Agent class (see the sketch below). Agno abstracts away the complexity of setting up multi-agent workflows, letting you focus on defining agent behavior and the outcomes you want.
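For a feel of the API, here's a minimal sketch. Import paths and model ids follow Agno's docs as we understand them; verify against the current release before copying.

```python
from agno.agent import Agent
from agno.models.anthropic import Claude
from agno.models.deepseek import DeepSeek

# One line of reasoning: reasoning=True turns on Agno's built-in
# chain-of-thought system, so the agent thinks before it responds.
agent = Agent(model=Claude(id="claude-3-7-sonnet-latest"), reasoning=True)
agent.print_response("Which is larger, 9.11 or 9.9?", stream=True)

# Mix and match: DeepSeek-R1 does the heavy thinking, Claude writes
# the human-friendly final response.
hybrid = Agent(
    model=Claude(id="claude-3-7-sonnet-latest"),
    reasoning_model=DeepSeek(id="deepseek-reasoner"),
)
hybrid.print_response("Prove that the square root of 2 is irrational.", stream=True)
```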

mcp-agent is a Python library that makes it easy to build AI agents that connect to your data and tools through the Model Context Protocol (MCP). It handles all the server connection details for you and provides ready-to-use patterns for common agent designs.
mcp-agent takes care of the messy connection stuff, and it's built with smart design patterns so your agents can actually do complex things, not just chat. It even helps you coordinate multiple agents working together.
Key Highlights:
Simple MCP Integration - MCP Agent abstracts away the complexity of managing server connections. Connect to file systems, databases, APIs, or any other MCP server with just a few lines of code through a consistent interface that handles all the behind-the-scenes work.
Workflow Patterns - The framework includes implementations of all the essential agent patterns like Parallel, Router, Evaluator-Optimizer, and Orchestrator-Workers. Each pattern is exposed as an AugmentedLLM, making them fully composable - you can use a Router inside an Orchestrator or an Evaluator-Optimizer as a component in more complex workflows.
Model-Agnostic - Switch between different LLM providers (OpenAI, Anthropic, etc.) with minimal code changes. It provides consistent interfaces across providers while handling the provider-specific implementation details.
Developer-First - You write normal Python code with familiar control structures - use if statements for branching, while loops for cycles, and async/await for concurrency. This makes debugging easier and lets you leverage standard development tools you already know.
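A minimal sketch of the basic flow, modeled on the shape of mcp-agent's README (server names are placeholders, and servers themselves are declared in the project's mcp_agent.config.yaml; double-check import paths against the repo):

```python
import asyncio
from mcp_agent.app import MCPApp
from mcp_agent.agents.agent import Agent
from mcp_agent.workflows.llm.augmented_llm_openai import OpenAIAugmentedLLM

app = MCPApp(name="file_finder")

async def main():
    async with app.run():
        # The agent names the MCP servers it needs; mcp-agent launches and
        # manages those connections behind the scenes.
        finder = Agent(
            name="finder",
            instruction="Read local files and answer questions about them.",
            server_names=["filesystem", "fetch"],
        )
        async with finder:
            llm = await finder.attach_llm(OpenAIAugmentedLLM)
            answer = await llm.generate_str("Summarize README.md in two sentences.")
            print(answer)

asyncio.run(main())
```

Swapping OpenAIAugmentedLLM for another provider's AugmentedLLM is the model-agnostic switch mentioned above; the rest of the code stays the same.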
Quick Bites
AMD has launched GAIA, an open-source application for running LLMs locally on Windows PCs with Ryzen AI processors. It uses the Neural Processing Unit (NPU) and integrated GPU in Ryzen AI hardware to run LLMs faster. GAIA offers dual-mode functionality—a Hybrid Mode optimized for Ryzen AI PCs and a Generic Mode that uses Ollama as the backend for compatibility with any Windows PC.
Here are two very interesting open-source projects built on LangChain that you’d want to check out:
LangManus is a multi-agent system that replicates the functionality of the Manus AI agent. The application combines seven specialized agents powered by Qwen models, equipped with tools for web search, web crawling, and browser use for completing complex tasks.
Oliva is a multi-agent voice RAG assistant that lets users search vector databases through spoken commands to find products. It runs on LangChain for agent workflows, Qdrant for vector storage, Superlinked for semantic search, Livekit for voice communication, and Deepgram’s speech-to-text model.
French AI company Kyutai has released MoshiVis, the first open-source vision speech model that can discuss images in real-time speech conversations. It is built on their existing Moshi speech platform with just 206M added parameters for visual processing. MoshiVis maintains Moshi's natural conversational abilities with minimal latency increase. It is released under Apache 2.0 with complete code and weights.
Tools of the Trade
n8n MCP node: Allows n8n workflows to interact with MCP servers, enabling models within n8n to access external tools and data sources using the MCP standard. The node supports both STDIO and SSE transport methods for connecting to MCP servers.
Refact.ai: Open-source self-hostable autonomous AI coding agent that works directly with IDEs to provide code completion, chat functionality, and customizable tooling across 25+ programming languages. Works with multiple LLMs.
SpongeCake: Open-source SDK that sets up Docker-based virtual desktops for running OpenAI's computer-use agents. It provides a Python interface to control these virtual desktops through mouse/keyboard actions and screenshots.
Awesome LLM Apps: Build awesome LLM apps with RAG, AI agents, and more to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos, and automate complex work.

Hot Takes
Just switched from Sonnet 3.7 back to Sonnet 3.5 and can't believe it does exactly what I want without adding 10 random features that eventually stop my code from running ~
Tom Dörr
AI will move into a window (later this year) that I would call "second mover's advantage." That is, the first obvious moves that could be big are played out given the technology/funding cycle. The rest of us get to watch how it worked out, take stock of the pace, understand how users use it, and better consider where it will be vs where it was--without baggage.
Much of mobile and web had second movers that became dominant. ~
Suhail
That’s all for today! See you tomorrow with more such AI-filled content.
Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!
PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉