
GPT-4o Finetuning Available for All

PLUS: Gemini can now process 1K PDF pages

In partnership with

Learn AI-led Business & startup strategies, tools, & hacks worth a Million Dollars (free AI Masterclass) 🚀

This incredible 3-hour Crash Course on AI & ChatGPT (worth $399) designed for founders & entrepreneurs will help you 10x your business, revenue, team management & more.

It has been taken by 1 Million+ founders & entrepreneurs across the globe, who have been able to:

  • Automate 50% of their workflow & scale their business

  • Make quick & smarter decisions for their company using AI-led data insights

  • Write emails, content & more in seconds using AI

  • Solve complex problems, research 10x faster & save 16 hours every week

Today’s top AI Highlights:

  1. GPT-4o fine-tuning - get 1 million free training tokens daily

  2. Open-source AI agent that autonomously debugs your code

  3. Fine-tune Google Gemini 1.5 Flash completely for free

  4. Authors sue Anthropic for using pirated books to train Claude AI

  5. Chat with Claude 3.5 Sonnet directly in code editor to generate, transform, and analyze code

& so much more!

Read time: 3 mins

Latest Developments

Fine-tuning GPT-4o Available to All 🧑‍🔧

Fine-tuning for GPT-4o is now live, allowing you to tailor the model with custom datasets to get higher performance at a lower cost for your specific use cases. This feature comes with a generous offer: 1 million free training tokens per day for every organization until September 23. Imagine achieving state-of-the-art results with a model that understands your domain-specific language and nuances!

Key Highlights:

  1. Boost Performance; Reduce Costs - You can customize the structure, tone, and even train GPT-4o to follow intricate, domain-specific instructions. Training costs $25 per million tokens, while inference is priced at $3.75 per million input tokens and $15 per million output tokens.

  2. Real-World Success - Early access partners have demonstrated the power of fine-tuning. Cosine's Genie, powered by a fine-tuned GPT-4o model, achieved 30.08% on SWE-bench Full, beating its previous SOTA score of 19.27%, the largest-ever improvement in this benchmark!

  3. Get Started Today - Fine-tuning for both GPT-4o (gpt-4o-2024-08-06) and GPT-4o mini (gpt-4o-mini-2024-07-18) is available now on all paid tiers. Access the fine-tuning dashboard, select your desired base model, and start experimenting! Remember, you have 2 million free training tokens per day for GPT-4o mini until September 23rd.
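For a concrete sense of the workflow, here is a minimal sketch of preparing a fine-tuning dataset in the chat JSONL format the OpenAI fine-tuning endpoint expects. The example questions/answers and the `train.jsonl` filename are illustrative, not from the announcement:

```python
import json

# Each training example is one JSON line in the chat format the
# fine-tuning endpoint expects: a list of role/content messages.
def make_example(question: str, answer: str) -> dict:
    return {
        "messages": [
            {"role": "system", "content": "You are a domain-specific assistant."},
            {"role": "user", "content": question},
            {"role": "assistant", "content": answer},
        ]
    }

examples = [
    make_example("What does HTTP 429 mean?", "Too Many Requests: slow down and retry."),
    make_example("What does HTTP 503 mean?", "Service Unavailable: the server is overloaded."),
]

# Write the dataset as JSON Lines, one example per line.
with open("train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")

# With the file prepared, a job is created via the official client
# (requires an OPENAI_API_KEY; shown here as comments for illustration):
#   from openai import OpenAI
#   client = OpenAI()
#   file = client.files.create(file=open("train.jsonl", "rb"), purpose="fine-tune")
#   job = client.fine_tuning.jobs.create(
#       training_file=file.id, model="gpt-4o-2024-08-06")
```

Once the job finishes, the resulting model ID can be used anywhere the base model is, at the inference prices listed above.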

Open-Source AI Agent that Autonomously Debugs Your Code

SuperCoder 2.0, a new multi-agent system using GPT-4 and Sonnet-3.5, successfully solves 34% of the problems in the SWE-Bench Lite dataset. This performance puts it at #4 globally and makes it the top-performing open-source coding system. SuperCoder 2.0 excels at identifying and resolving issues within complex real-world code, showcasing its potential for autonomous software development. The system passed 101 out of 300 tests, demonstrating its ability to generate patches that resolve issues and pass test cases.

Key Highlights:

  1. Two-Tiered Approach - SuperCoder 2.0 uses a two-tiered approach of Code Search and Code Generation to effectively navigate codebases and generate functional bug fixes. It first identifies relevant code sections and then generates patches to address the identified issues.

  2. Efficient Code Search - A RAG system and an agent-based system are used to efficiently navigate large codebases and pinpoint potential bug locations.

  3. Robust Code Generation - The system regenerates entire method bodies to avoid indentation issues, ensuring clean and functional patches. It also includes a feedback loop that refines the code generation process based on test case results.

  4. Open-source and Top Ranked - SuperCoder 2.0 is open-source and currently ranks #1 among all open-source coding systems.

Quick Bites

  1. A few updates to the Google Gemini API and Google AI Studio

    • Maximum PDF page upload size increased to 1,000 pages or 2GB (up from 300 pages) in Google AI Studio and the Gemini API.

    • Free Gemini 1.5 Flash users get 15 requests per minute, 1 million tokens per minute, 1,500 requests per day, free context caching with up to 1 million tokens of storage per hour, and free fine-tuning.

    • Free Gemini 1.5 Pro users get two requests per minute, 32,000 tokens per minute, and 50 requests per day.

    • Requests per day limit for Gemini 1.5 Pro has been removed for paid users.
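To make the quotas above concrete, here is a small sketch. The `fits_free_tier` helper is our own construction encoding the limits listed above; the commented-out upload flow uses the `google-generativeai` client and a hypothetical local `report.pdf`, and only runs if an API key is present:

```python
import os

# Free-tier quotas quoted above: requests/min, tokens/min, requests/day.
FREE_TIER = {
    "gemini-1.5-flash": {"rpm": 15, "tpm": 1_000_000, "rpd": 1_500},
    "gemini-1.5-pro": {"rpm": 2, "tpm": 32_000, "rpd": 50},
}

def fits_free_tier(model: str, req_per_min: int, tokens_per_min: int,
                   req_per_day: int) -> bool:
    """Check whether a planned workload stays within the free-tier quota."""
    q = FREE_TIER[model]
    return (req_per_min <= q["rpm"] and tokens_per_min <= q["tpm"]
            and req_per_day <= q["rpd"])

# Uploading a large PDF (now up to 1,000 pages / 2 GB) looks roughly like
# this; it requires a GOOGLE_API_KEY, so it is guarded here:
if os.environ.get("GOOGLE_API_KEY"):
    import google.generativeai as genai
    genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
    pdf = genai.upload_file("report.pdf")  # hypothetical local file
    model = genai.GenerativeModel("gemini-1.5-flash")
    print(model.generate_content([pdf, "Summarize this document."]).text)
```

A workload of, say, 10 requests and 500K tokens per minute fits comfortably in the Flash free tier but exceeds the Pro tier's 2 requests per minute.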

  2. China’s Unitree Robotics has revealed its humanoid robot G1, with stronger performance and cleaner aesthetics, billed as ready for mass production. Priced at $16,000, it isn’t yet ready for household chores, and Unitree hasn’t confirmed that mass production is actually underway.

  3. A group of authors has sued Anthropic for allegedly using pirated books to train its AI models. The authors say Anthropic used an open-source dataset known as “The Pile” to train its Claude chatbots. Within this dataset is Books3, a massive library of pirated ebooks that includes works by thousands of authors.

  4. Google has developed a bioacoustic AI model called HeAR that analyzes sounds like coughs and breath to help detect diseases such as tuberculosis and chronic obstructive pulmonary disease. It was trained on 300 million pieces of curated audio data, and is already being used by healthcare companies in India.

Tools of the Trade

  1. Zed AI: A fast, AI-powered text editor for developers, built on top of Claude 3.5 Sonnet. It provides full transparency and control over AI interactions, and enables real-time, low-latency code refactoring. It also features advanced context-building tools and a Fast Edit Mode for incredibly fast text transformations.

  2. Phoenix: An open-source AI observability platform for monitoring and evaluating AI models across various frameworks and environments. It works in Jupyter notebooks, local setups, or cloud deployments.

  3. Formatron: A lightweight tool that lets you control language model output formats with minimal overhead. It supports various libraries through plugins and offers flexible, efficient formatting options.

  4. Awesome LLM Apps: Build LLM apps that use RAG to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos through simple text. These apps let you retrieve information, engage in chat, and extract insights directly from content on these platforms.

Hot Takes

  1. The first predictable bad effect of AI, easy deepfakes, is already here thanks to open Flux, and there has been remarkably little wide outcry, either in the press or among policy makers.
    I wonder if it hasn't sunk in yet, or if it ends up being less disruptive than anticipated. ~
    Ethan Mollick

  2. SF has the highest concentration of intelligence.

    both human and artificial ~
    Craig Weiss

Meme of the Day

That’s all for today! See you tomorrow with more such AI-filled content.

Real-time AI Updates 🚨

⚡️ Follow me on Twitter @Saboo_Shubham for lightning-fast AI updates and never miss what’s trending!

Unwind AI - Twitter | LinkedIn | Instagram | Facebook

PS: We curate this AI newsletter every day for FREE, your support is what keeps us going. If you find value in what you read, share it with at least one (or 20) of your friends!
