
Is GPT-4.5 around the Corner? 😱

PLUS: LLM for India, Super Intelligence > General Intelligence in 10 years

Today’s top AI Highlights:

  1. GPT-4.5 Details Leaked (Mostly)

  2. Bilingual LLM for Indic Language Outperforms GPT Models

  3. Using GPT-2 to Fine-tune GPT-4: Weak-to-Strong Generalization

  4. Create Spotify Playlists from Text Prompts

  5. Notion + Sunrise + Todoist: All-in-one AI App

& so much more!

Read time: 3 mins

Latest Developments 🌍

Leaked Details of GPT-4.5 🫣

An image supposedly leaked from OpenAI appears to detail GPT-4.5. If genuine, GPT-4.5 would be OpenAI's most advanced model, offering multi-modal capabilities across language, audio, vision, video, and 3D, alongside complex reasoning and cross-modal understanding.

The image also shows GPT-4.5 pricing, with GPT-4 Turbo prices expected to remain the same. The image's authenticity has not been verified.


Bilingual LLMs for Indian Languages 👬

Open models like Llama and Mistral offer wider access but lack support for Indic languages, due to the scarcity of high-quality Indic language content and the inefficient tokenization of Indic scripts in models like GPT-3.5. Sarvam AI, in collaboration with AI4Bharat, has developed the "OpenHathi Series" and released the first Hindi LLM in the series, showing GPT-3.5-like performance on Indic languages on a frugal budget.

Key Highlights:

  1. Sarvam AI developed a new tokeniser, combining a custom-trained sentence-piece model with the existing Llama2 tokeniser, to efficiently process Hindi text. This significantly reduces the token count, speeding up both training and inference, and addresses inefficiencies in existing models like GPT-3.5.

  2. The model underwent a unique three-phase training regimen, starting with translation between English and Hindi, followed by bilingual next-token prediction, and then supervised fine-tuning (SFT) on various practical tasks, ensuring effective bilingual understanding and application.

  3. The model excels in practical applications including translation, content moderation, and agricultural communication. It demonstrated superior translation capabilities between Devanagari and Romanised Hindi and English, outperforming many established GPT models.
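The token-count win from merging Hindi pieces into the vocabulary is easy to see with a toy example. The sketch below is illustrative only, not Sarvam AI's actual tokeniser: it uses a hypothetical two-piece Hindi vocabulary and a greedy longest-match tokenizer with per-byte fallback (the way Llama2 handles characters outside its vocabulary).

```python
def greedy_tokenize(text, vocab):
    """Greedy longest-match tokenization; unknown characters fall back to byte tokens."""
    tokens = []
    i = 0
    while i < len(text):
        # Try the longest substring starting at i that is in the vocabulary.
        for j in range(len(text), i, -1):
            if text[i:j] in vocab:
                tokens.append(text[i:j])
                i = j
                break
        else:
            # Fallback: emit each UTF-8 byte of the character as its own token.
            tokens.extend(f"<0x{b:02X}>" for b in text[i].encode("utf-8"))
            i += 1
    return tokens

text = "नमस्ते"  # "namaste" in Devanagari: 6 code points, 3 UTF-8 bytes each

# Base vocabulary with no Hindi pieces: everything falls back to bytes.
print(len(greedy_tokenize(text, set())))          # 18 tokens
# Hypothetical merged vocabulary containing two Hindi subword pieces.
print(greedy_tokenize(text, {"नम", "स्ते"}))        # ['नम', 'स्ते'] — 2 tokens
```

Without Hindi pieces, every Devanagari character costs 3 byte-level tokens, so a 6-character word costs 18 tokens; with just two merged pieces it costs 2. The same effect, applied across a real vocabulary, is what cuts training and inference cost.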

Superintelligence May Arrive in 10 Years 🦾

OpenAI believes superintelligence could arrive within 10 years, bringing AI models able to outsmart humans, at which point current techniques like RLHF might become insufficient to ensure AI safety. The problem is viewed as solvable, though, as there are many promising approaches with plenty of "low-hanging fruit."

Against this backdrop, OpenAI is launching a $10M grant program to support technical research towards ensuring superhuman AI systems are aligned and safe. The team is interested in funding research on weak-to-strong generalization, interpretability, scalable oversight, honesty, chain-of-thought faithfulness, adversarial robustness, evaluations, testbeds, and more.

Weak-to-strong generalization: An analogy in which a smaller, less capable model supervises a larger, more capable one, used to study the dynamics of weak-to-strong supervision. The key question is whether the strong model generalizes according to the weak supervisor's intent, even when trained on incomplete or flawed labels.

  1. Empirical Results: Using GPT-2 as a weak supervisor to fine-tune GPT-4, the team found that they could significantly improve generalization, achieving performance between GPT-3 and GPT-3.5 levels.

  2. Limitations: While the method has limitations (e.g., not working on ChatGPT preference data), other approaches like optimal early stopping and bootstrapping show promise.

  3. Feasibility of Improving: The results suggest that while naive human supervision might not scale well to superhuman models, it's feasible to substantially improve weak-to-strong generalization.
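The weak-to-strong setup can be illustrated with a deliberately tiny stand-in — nothing like the GPT-2/GPT-4 experiments themselves. In this toy sketch, a noisy labeling function plays the weak supervisor and a one-parameter threshold model plays the strong student; because the supervisor's errors are symmetric around the true decision boundary, fitting the student to the weak labels recovers most of the true rule:

```python
import random

random.seed(0)

# True concept: the sign of x. The "weak supervisor" sees x through heavy
# noise, so its labels are only partially reliable.
def weak_label(x):
    return (x + random.gauss(0, 2.0)) > 0

xs = [random.uniform(-3, 3) for _ in range(5000)]
weak_labels = [weak_label(x) for x in xs]
true_labels = [x > 0 for x in xs]

weak_acc = sum(w == t for w, t in zip(weak_labels, true_labels)) / len(xs)

# "Strong student": a threshold model fit only to the weak labels. The
# symmetric noise cancels out, so the best-fitting threshold lands near the
# true boundary and the student ends up beating its own supervisor.
best_thr = min(
    (i / 10 - 3 for i in range(61)),
    key=lambda thr: sum((x > thr) != w for x, w in zip(xs, weak_labels)),
)
student_acc = sum((x > best_thr) == t for x, t in zip(xs, true_labels)) / len(xs)

print(f"weak supervisor accuracy: {weak_acc:.2f}")
print(f"strong student accuracy:  {student_acc:.2f}")
```

The student ends up more accurate than the supervisor that trained it — the qualitative phenomenon OpenAI's experiments measure at the scale of real language models.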

Quick Updates from Spotify and Meta 🤌

  • Spotify is reportedly testing an "AI playlists" feature that lets users create personalized playlists through prompts. Demonstrated by a TikTok user, it can be accessed from the "Your Library" tab in Spotify's app and uses a chatbot-style interface for typing prompts.

  • First announced at its Connect event in September, Meta has started rolling out AI-generated backgrounds to Instagram users in the US. When users tap the background editor icon on an image, they can either enter their own prompt or use ready-made prompts like "Surrounded by puppies."

Tools of the Trade ⚒️

  • Routine: An all-in-one productivity app that integrates calendars, tasks, notes, and contacts. It features a natural-language-based console to quickly collect thoughts and capture everything, integration with email, chat, and project management tools, along with work planning and scheduling.

  • Sketch2App: Generates code for your exact UI requirements from hand-drawn sketches in under 30 seconds using GPT-4V. Capture your sketch with the webcam, tweak the generated code with exact text edits, and voila!

  • TubeOnAI: Summarize and listen to any YouTube video or podcast in 30 seconds. Sign in and subscribe to a YouTube or podcast channel, and summaries will be automatically downloaded to your phone to read or listen to anytime.

  • Scade: AI-based platform offering over 1,500 tools for effortless business process automation and product development without coding. It caters to B2B companies, startups, developers, and more, streamlining a wide range of functions.

😍 Enjoying so far, TWEET NOW to share with your friends!

Hot Takes 🔥

  1. If I was Sundar Pichai I would try to buy Perplexity AI, urgently. Best time was a year ago, second best time is now. It's not good to be the second best product on the market in an area that's 90% (?) of your profit... ~ Jakob Foerster

  2. We're at a point now where autonomous driving in closed systems is easy to do and to deploy. This should make us question any new infrastructure projects involving old-school high speed rail. ~ Vinod Khosla

  3. Trained AI models are not products. All the best models will ultimately be commoditized to open weights / source. Cost of inferencing is also asymptotically approaching quickly to near zero. So where does value get captured in AI? Domain specific products with deep industry context and polish. ~ Joseph Jacks

Meme of the Day 🤡

r/ProgrammerHumor - datingAppsWhyYouNoWork

That’s all for today!

See you tomorrow with more such AI-filled content. Don’t forget to subscribe and give your feedback below 👇

Real-time AI Updates 🚨

⚡️ Follow me on Twitter @Saboo_Shubham for lightning-fast AI updates and never miss what’s trending!!

PS: I curate this AI newsletter every day for FREE, your support is what keeps me going. If you find value in what you read, share it with your friends by clicking the share button below!
