
Build AI Agents in any Language

PLUS: Google Deep Research available for free, Reduce Claude API costs by up to 90%

Today’s top AI Highlights:

  1. Build AI agents and automations with a code-first, multi-language framework

  2. Dapr’s open-source AI agent framework built on their microservices runtime

  3. Reduce Claude API costs by up to 90% with new API updates

  4. OpenAI Operator + Deep Research at a 10x lower price

  5. Connect any LLM to MCP tools with one line of code

& so much more!

Read time: 3 mins

AI Tutorials

OpenAI just released its Agents SDK, a rebranded, production-ready, and more capable version of its experimental Swarm framework for building multi-agent applications. We couldn't wait to get our hands on it and build something useful. Keep reading for the details 👇

In this tutorial, we'll walk you through building a multi-agent research assistant using OpenAI's Agents SDK. You'll create a system where multiple specialized agents work together to research any topic, collect facts, and generate comprehensive reports — all within a user-friendly application that's easy to use and extend.
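
Before the full walkthrough, here's a minimal sketch of the SDK primitives the tutorial leans on: agents, function tools, handoffs, and the runner. The collect_facts tool and the agent instructions below are illustrative stand-ins rather than the tutorial's actual code.

  # pip install openai-agents  (expects OPENAI_API_KEY in your environment)
  from agents import Agent, Runner, function_tool

  # Hypothetical stand-in for a real search/scraping tool.
  @function_tool
  def collect_facts(topic: str) -> str:
      """Return raw research notes on a topic (stubbed here)."""
      return f"Stubbed notes about {topic}."

  researcher = Agent(
      name="Researcher",
      instructions="Gather key facts on the user's topic using the collect_facts tool.",
      tools=[collect_facts],
  )

  writer = Agent(
      name="Report Writer",
      instructions="Turn the researcher's notes into a concise, well-structured report.",
  )

  # A coordinator agent can hand off to specialists; handoffs are a core Agents SDK primitive.
  coordinator = Agent(
      name="Research Coordinator",
      instructions="Delegate research to the Researcher, then hand off to the Report Writer.",
      handoffs=[researcher, writer],
  )

  result = Runner.run_sync(coordinator, "Research the current state of open-source AI agent frameworks.")
  print(result.final_output)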

We share hands-on tutorials like this 2-3 times a week, designed to help you build real-world AI skills. If you're serious about leveling up and staying ahead of the curve, subscribe now and be the first to access our latest tutorials.

Don’t forget to share this newsletter on your social channels and tag Unwind AI (X, LinkedIn, Threads, Facebook) to support us!

Latest Developments

Motia is a lightweight code-first open-source framework that makes it incredibly easy to build AI agents and event-driven automations. Think of it as a way to connect pieces of code that react to events, all without having to manage any of the messy infrastructure usually involved.

You can write these code pieces, called "steps," in JavaScript, TypeScript, Python, or Ruby, and Motia handles the rest. Plus, it comes with Motia Workbench, a visual tool for building, testing, and seeing everything happen in real-time.

Key Highlights:

  1. Build Anything, Automate Everything - Motia is designed to create anything from simple API integrations to complex AI agents. If you can code it, you can automate it with Motia, whether it's data processing, real-time notifications, or full-blown AI-powered workflows.

  2. Code in Your Favorite Language (and Mix Them) - Write individual steps in JavaScript, TypeScript, Python, or Ruby. Need to use a Python library for data science and a JavaScript library for frontend interactions? No problem – Motia lets you combine them seamlessly within the same workflow. Import any package you need, just like you normally would.

  3. Zero Infrastructure - Motia takes care of all the backend complexity. You don't need to set up message queues, event buses, or anything else. Just write your code, define your steps, and Motia handles the event routing and execution. This is a massive time-saver.

  4. Motia Workbench: Your Visual Command Center - This built-in, browser-based tool is where you design, test, and debug your workflows. See your steps connected visually, trigger events manually, watch real-time logs, and even customize how steps appear in the interface.

  5. Built-in State Management - Motia comes with built-in state management features. You can easily track and access data between steps and across flow executions, with out-of-the-box support for memory, file, and Redis-backed storage, including optional TTL for automatic state cleanup.
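
To make the "steps" idea concrete, here's a rough sketch of what an event step could look like in Python. Treat the config field names (type, subscribes, emits, flows) and the handler/context signature as assumptions based on Motia's event-step model, and check the Motia docs for the exact current API.

  # steps/summarize_article.step.py -- illustrative sketch; field names are assumptions, see Motia's docs
  config = {
      "type": "event",                        # this step reacts to an emitted event
      "name": "SummarizeArticle",
      "subscribes": ["article.fetched"],      # topics this step listens on
      "emits": ["article.summarized"],        # topics this step may emit
      "flows": ["research-pipeline"],         # the flow it appears under in the Workbench
  }

  async def handler(input, ctx):
      # Use any Python package here (an LLM client, pandas, etc.), just as you normally would.
      summary = (input.get("text") or "")[:500]
      # Pass the result to the next step by emitting an event.
      await ctx.emit({"topic": "article.summarized", "data": {"summary": summary}})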

Microsoft's Dapr team has released Dapr Agents, a Python framework that lets developers build production-grade AI agent systems on top of their battle-tested microservices runtime. Instead of creating yet another agent framework from scratch, Dapr Agents leverages Dapr's established capabilities for distributed systems to provide resilient, scalable agent workflows.

This approach turns the complexity of agent development into a more familiar challenge—building microservices with built-in state management, messaging, and workflow orchestration.

Key Highlights:

  1. Lightweight Agent Architecture - Run thousands of agents efficiently on a single core with virtual actors that scale to zero when not in use and boot back up in milliseconds. Each agent maintains isolated state while the system transparently distributes workloads across machines.

  2. Built-in Workflow Resilience - Forget about crashes mid-task. Dapr Agents uses their proven workflow engine to automatically retry agentic operations and guarantee task completion even through network interruptions, node failures, or process crashes—critical for long-running AI processes.

  3. Agent Patterns - Dapr Agents lets you build single agents, orchestrated workflows, or collaborative multi-agent systems. Switch between deterministic workflows (where you define the exact sequence) and dynamic event-driven patterns where agents respond to messages and collaborate in real-time based on context.

  4. Infrastructure Management - Dapr abstracts away the underlying infrastructure. Instead of writing code specific to AWS DynamoDB, Azure Cosmos DB, or Redis, you interact with Dapr's State Management API. The same applies to message queues, secrets, LLMs, and other components. This unified approach drastically simplifies development.
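
For a feel of the developer experience, here's a hedged sketch of a single agent with one tool in Python. The Agent constructor arguments and the @tool decorator are assumptions modeled on the project's quickstart style, so verify names against the Dapr Agents docs; running it also assumes an LLM API key in your environment, and the durability features described above additionally need the Dapr runtime.

  # pip install dapr-agents -- illustrative sketch; argument names are assumptions, see the project's docs
  import asyncio
  from dapr_agents import Agent, tool

  @tool
  def get_weather(city: str) -> str:
      """Hypothetical stub tool: return the weather for a city."""
      return f"It's 72°F and sunny in {city}."

  async def main():
      travel_agent = Agent(
          name="TravelBuddy",
          role="Travel assistant",
          instructions=["Help users plan trips and answer weather questions."],
          tools=[get_weather],
      )
      result = await travel_agent.run("What's the weather like in Paris right now?")
      print(result)

  asyncio.run(main())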

Quick Bites

Cohere has released Command A, their latest enterprise AI model that delivers maximum performance across agentic tasks with minimal compute requirements. With a 256k token context window, multilingual support across 23 languages, and token generation speeds up to 2.4x faster than some rivals, Command A matches or outperforms GPT-4o and DeepSeek-V3 in human evaluations across business tasks, STEM, and code. Available via API and on Hugging Face for research.

After going head-to-head with OpenAI's Operator agent, Convergence AI has released Deep Work, a multi-agent generalist system that combines OpenAI's Operator and Deep Research into one (much like Manus AI), available today for $20 a month. You just need to give it a task and the agents will complete it for you. Deep Work also lets you build and save custom workflows and run them at a set time and frequency.

Google DeepMind just dropped a new TypeScript/Node SDK for Gemini, unifying access to both the Gemini Developer API and Vertex AI. This experimental release packs in features like simpler setup, built-in caching, easy chat management, and improved file handling, plus support for Realtime API and function calling. It's still under active development and not yet production-ready.

Google has also rolled out a batch of updates to the Gemini app:

  • An upgraded version of Gemini 2.0 Flash Thinking Experimental with better efficiency and speed is now available in the Gemini app. An expanded context length of 1M tokens is available for Advanced users.

  • Deep Research is now available for all free users to try (limited requests). It is also being upgraded with Gemini 2.0 Flash Thinking Experimental to enhance its quality across all research stages.

  • A new Personalization feature is being rolled out where Gemini connects with your Google apps, beginning with Search, and references your interaction with the apps in its outputs. For example, you can ask Gemini for restaurant recommendations and it will reference your recent food-related searches.

  • Gemini 2.0 Flash Thinking now connects to Google apps like Calendar, Maps, Tasks, YouTube, etc. to help you with multi-stage multi-app workflows in a single prompt. For example, you can ask Gemini: “Look up an easy cookie recipe on YouTube, add the ingredients to my shopping list and find me grocery stores that are still open nearby.”

  • Google Gems (like Custom GPTs) are now available to all free users.

Anthropic has rolled out major updates to their API to increase throughput and reduce token usage with Claude 3.7 Sonnet, cutting costs by up to 90% and latency by up to 85% for long prompts. These are available immediately to all Anthropic API customers and require minimal code changes to implement.

  • Cache-Aware Rate Limits - Prompt cache read tokens no longer count against Input Tokens Per Minute limits, optimizing throughput while maintaining extensive context in memory.

  • Simpler Cache Management - Claude now automatically identifies and uses the most relevant cached content without requiring manual tracking, reducing developer workload and freeing up more tokens.

  • Token-Efficient Tool Use - A new beta feature reduces output token consumption by up to 70% when Claude calls external tools, plus a specialized text_editor tool enables targeted edits to specific portions of documents with improved accuracy.
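
For a sense of how little code the caching piece needs, here's a minimal sketch using the Anthropic Python SDK. LONG_REFERENCE_DOCUMENT is a placeholder for your own large, reusable context, and the beta header named in the closing comment is the value Anthropic documented at launch, so double-check it before relying on it.

  # pip install anthropic -- prompt-caching sketch (reads ANTHROPIC_API_KEY from the environment)
  import anthropic

  client = anthropic.Anthropic()

  # Placeholder: the large, reusable part of your prompt (roughly 1,024+ tokens to be cacheable).
  LONG_REFERENCE_DOCUMENT = "...your big reusable context: docs, a codebase, a contract..."

  response = client.messages.create(
      model="claude-3-7-sonnet-20250219",
      max_tokens=1024,
      system=[
          {
              "type": "text",
              "text": LONG_REFERENCE_DOCUMENT,
              # Marks this block for prompt caching; per the update above, cache reads no longer
              # count against input-tokens-per-minute limits.
              "cache_control": {"type": "ephemeral"},
          }
      ],
      messages=[{"role": "user", "content": "Summarize the key points of the document."}],
  )
  print(response.content[0].text)

  # For the token-efficient tool use beta, pass your tools as usual and add the header
  # "anthropic-beta: token-efficient-tools-2025-02-19" (value as documented at launch; verify in Anthropic's docs).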

Tools of the Trade

  1. Airweave: Open-source tool that makes any app searchable for your agent by syncing your users' app data, APIs, databases, and websites into your graph and vector databases with minimal configuration.

  2. OpenTools: API to connect any LLM to MCP tools with one line of code, handling authentication and implementation details behind the scenes. It provides access to hosted tools like web search and location data through an OpenAI-compatible interface.

  3. Same.dev: Clones websites with picture-perfect accuracy without a single line of code. It’s like V0, Bolt, Lovable - all in a single platform. You can edit the code further and deploy it with a single click. Currently free.

  4. Surf: A Next.js application that lets you drive OpenAI’s computer-use agent inside E2B's virtual desktop environment. Just give it simple tasks and watch the agent perform them on the virtual desktop.

  5. Awesome LLM Apps: Build awesome LLM apps with RAG, AI agents, and more to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos, and automate complex work.

Hot Takes

  1. Sex is great, but have you ever vibe coded a B2B SaaS sales app that makes $1M MRR? ~
    Bojan Tunguz

  2. I’m starting a new business.
    After you fire your developers and start vibe coding everything, we’ll come in and fix all the bugs and security issues with your AI-generated code.
    We’ll take what you have and make it work.
    The service will start at $1,000/hour. ~
    Santiago

That’s all for today! See you tomorrow with more such AI-filled content.

Don’t forget to share this newsletter on your social channels and tag Unwind AI to support us!

Unwind AI - X | LinkedIn | Threads | Facebook

PS: We curate this AI newsletter every day for FREE; your support is what keeps us going. If you find value in what you read, share it with at least one, two (or 20) of your friends 😉
