
Will Siri team up with ChatGPT?

PLUS: Eyes peeled for today's OpenAI event, Alibaba's compute-for-equity model

Today's top AI Highlights:

  1. Apple is closing a deal with OpenAI to use ChatGPT to power AI features in Apple devices

  2. What to expect at OpenAI's Spring Event today

  3. Alibaba is growing cloud compute infrastructure revenue with a unique compute-for-equity model

  4. Meta, Microsoft, and UC Berkeley's new research for enhancing LLMs' domain-specific performance

  5. Use Claude to create production-ready prompts optimized for LLMs

& so much more!

Read time: 3 mins

Latest Developments 🌍

Apple has realized it needs to say "generative AI" much louder than before. The company has not trained any LLM in-house, but it has built powerful chips to run AI on Apple devices. Building on this, it is nearing a deal with OpenAI to bring ChatGPT to iPhones. The deal would see ChatGPT features integrated into the upcoming iOS 18 operating system, powering AI in more apps.

Key Highlights:

  1. Deal with Google: Apple has also engaged in discussions with Google about potentially licensing its Gemini chatbot, although no agreement has been reached yet.

  2. Siri by GPT-4: Rumors are flying that Siri might start using ChatGPT's brain to answer your questions. Imagine asking Siri something and getting smarter replies.

  3. Why it matters: If Siri gets this upgrade, your iPhone will give you not just faster but also smarter responses. You could ask for cooking tips, solve math problems, or even get relationship advice.

  4. Stay tuned: Nothing's official yet, but keep your eyes peeled at WWDC 2024 on June 10.

What to Expect at OpenAI's Event Today 🌟

OpenAI's Spring event is scheduled for today, and social media platforms are brimming with speculation. Until last week, it was even rumored that OpenAI was planning to launch a search engine to compete with Google; however, Sam Altman has clearly denied this rumor. It'll be all about new features and updates to ChatGPT. Let's see what the speculations are and what we can expect today:

  1. Connected Apps: ChatGPT will soon have a Connected Apps feature, integrating with Google Drive and Microsoft OneDrive. This will let you attach documents from the cloud directly in ChatGPT, saving you the hassle of downloading a document first and uploading it again.

  2. New Voice Assistant: ChatGPT already has a Voice Assistant that chains a transcription model (Whisper), an LLM (GPT-4), and a text-to-speech model (TTS-1); a rough sketch of that pipeline follows this list. But new reports say that OpenAI has developed an advanced Voice Assistant that is much more capable than the existing one, especially at understanding the mood of the speaker. This would make for a really good AI customer rep.

  3. Phone Calls by ChatGPT: OpenAI might release a new feature allowing ChatGPT to make phone calls using WebRTC technology. This means users could opt in to real-time voice calls with ChatGPT. It could, for example, call us with reminders for a meeting or a project deadline, or call us back once it is done with its research.

  4. New Models: OpenAI might introduce three new models:

    1. gpt4-lite: a lighter version of the current GPT-4 model to replace GPT-3.5. It might be smaller and possibly faster while still retaining some of GPT-4's capabilities.

    2. gpt4-auto: This could be a new model endpoint that can automatically gather data from the web and other sources. This would be different from the current capability where you have to explicitly tell GPT-4 to search the web to get the latest information.

    3. gpt4-lite-auto: A combination of the above two features.
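
For context on what the current Voice Assistant stitches together, here is a minimal sketch of the transcribe-reason-speak loop using the openai Python SDK. The file names, voice, and system prompt are illustrative assumptions; OpenAI's actual assistant internals are not public.

```python
# Minimal sketch of the existing three-stage voice pipeline:
# Whisper (speech-to-text) -> GPT-4 (reasoning) -> TTS-1 (text-to-speech).
# Assumes the openai Python SDK (v1+) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# 1) Transcribe the user's spoken question with Whisper.
with open("user_question.mp3", "rb") as audio_file:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=audio_file,
    )

# 2) Generate a reply with GPT-4.
reply = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a helpful voice assistant."},
        {"role": "user", "content": transcript.text},
    ],
)

# 3) Convert the reply to audio with TTS-1.
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input=reply.choices[0].message.content,
)
with open("assistant_reply.mp3", "wb") as out:
    out.write(speech.content)
```

Each hop in this chain adds latency and drops paralinguistic cues like tone, which is exactly what a more capable, mood-aware assistant would need to fix.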

Alibaba's Compute-for-Equity Model

Alibaba, the Chinese e-commerce giant, is taking a novel approach to increase its revenue while also promoting China's generative AI scene. Instead of traditional cash investments, Alibaba is leveraging its vast cloud computing infrastructure to offer computing credits to promising AI startups. This strategy allows Alibaba to secure stakes in these companies while also boosting its cloud business.

Key Highlights:

  1. Compute-for-Equity: Alibaba is providing startups with credits to use its powerful cloud computing resources, specifically for training AI models, in exchange for equity in the companies.

  2. Advantage: This approach addresses the scarcity of advanced computing resources in China caused by US chip export restrictions. It also mirrors Microsoft's successful investment in OpenAI, with Alibaba backing local equivalents of ChatGPT and other AI applications.

  3. Example: Alibaba led a $1 billion funding round in Moonshot AI, which is building an LLM that can handle very long inputs. Nearly half of Alibaba's $800 million contribution was in the form of cloud computing credits.

  4. The Dual Incentive: This not only allows Alibaba to secure a foothold in the rapidly growing AI sector but also drives revenue for its cloud computing division, which has experienced slowing growth in recent quarters.

Technical Research 🔬

RAFTing LLMs towards More Domain-Specific Accuracy 👩‍⚕️

LLMs are trained on vast and diverse datasets to make them capable of answering questions on a wide range of topics. However, many use cases require an LLM to specialize in a specific domain. Here, supervised fine-tuning becomes very laborious, and the quality of RAG depends heavily on the quality of the document retrieval process.

Researchers at UC Berkeley, Meta AI, and Microsoft have proposed a method called Retrieval Augmented Fine-Tuning (RAFT) that trains LLMs to focus only on the important parts of retrieved documents in a specific domain and ignore the rest, so answers no longer hinge entirely on the document retriever.

Key Highlights:

  1. Selective Training: RAFT trains language models to differentiate between "oracle" documents that contain answers and "distractor" documents that do not, enhancing focus on relevant information (a sketch of how such training examples are built follows this list).

  2. Model and Tools: RAFT was applied to the Llama 2-7B model and trained on Microsoft AI Studio. The model was chosen for its language understanding, math skills, and ability to parse long documents.

  3. Benchmarks and Performance: After using RAFT, Llama 2-7B was evaluated on datasets like PubMed, HotpotQA, and Gorilla, where it showed major improvements in accuracy for domain-specific Q&A.

  4. Deployment Flexibility: The trained models can be deployed on various platforms, including GPUs or CPUs, via Microsoft AI Studio and llama.cpp, making it adaptable for different enterprise needs.
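
As a rough illustration of the "Selective Training" idea, here is a minimal sketch of how a RAFT-style fine-tuning example could be assembled: a question, its oracle document, a few sampled distractors, and a chain-of-thought target answer. The function and field names are my own assumptions rather than the authors' released code, and the fraction of examples that keep the oracle in context is a tunable choice described in the paper.

```python
import random

def build_raft_example(question, oracle_doc, cot_answer, corpus,
                       num_distractors=4, p_keep_oracle=0.8):
    """Assemble one RAFT-style training record (illustrative, not the official code).

    question   : domain-specific question
    oracle_doc : document that actually contains the answer
    cot_answer : chain-of-thought answer grounded in oracle_doc (in the paper,
                 generated with a strong LLM)
    corpus     : pool of domain documents to sample distractors from
    """
    # Sample distractor documents that do not contain the answer.
    distractors = random.sample([d for d in corpus if d != oracle_doc], num_distractors)

    # In a fraction of examples, drop the oracle so the model cannot simply
    # copy from context and must also internalize the domain knowledge.
    context_docs = distractors + ([oracle_doc] if random.random() < p_keep_oracle else [])
    random.shuffle(context_docs)

    context = "\n\n".join(f"Document {i + 1}:\n{doc}" for i, doc in enumerate(context_docs))
    prompt = (f"{context}\n\nQuestion: {question}\n"
              "Answer with step-by-step reasoning, citing the relevant document.")

    return {"prompt": prompt, "completion": cot_answer}
```

Fine-tuning on records like these is what teaches the model to lean on oracle documents and shrug off distractors at inference time.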

๐Ÿ˜ Enjoying so far, share it with your friends!

Tools of the Trade ⚒️

  1. Anthropic Console: Generate production-ready prompts in this Console using Claude-3 itself. Just describe the task, and Claude-3 will turn it into a high-quality prompt that works best with LLMs, specifically Claude-3. You can even invite team members to collaborate and use these prompts (a rough API-based sketch follows this list).

  2. llm-ui: A React library for building user interfaces for LLMs. It provides features like correcting broken markdown syntax, custom component integration, and output throttling for a smoother user experience. llm-ui offers code block rendering using Shiki and a headless design allowing for complete style customization.

  3. StartKit AI: Provides boilerplate code with OpenAI integration, Node.js API, Mongo Database, and Pinecone vector storage to quickly build AI tools and SaaS products. You can use it to create applications such as ChatGPT clones, PDF analysis tools, image generation apps, and more, with pre-built modules for common AI tasks.

  4. Mock-My-Mockup: Upload a screenshot of a page you're working on, and get brutally honest feedback. It'll highlight both the positives and negatives of the page, with a little roast.
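
The Console's prompt generator is a point-and-click tool, but you can approximate the same workflow directly against the Claude API. The task description and meta-prompt wording below are my own assumptions, not Anthropic's internal prompt-generation template.

```python
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Hypothetical task you want a reusable prompt for.
task = ("Summarize a customer-support ticket into severity, topic, "
        "and recommended next action.")

message = client.messages.create(
    model="claude-3-opus-20240229",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": (
            "You are an expert prompt engineer. Write a production-ready prompt "
            f"template for this task: {task} Use XML tags to delimit the input "
            "and spell out the expected output format."
        ),
    }],
)

print(message.content[0].text)  # the generated prompt template
```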

Hot Takes 🔥

  1. Our advantage currently lies not in compute, cloud, or chips. Our advantage is our population and their aspirations. This is why we have to bring down the cost of inference from Rs 100 to Rs 1……Innovate frugally to dramatically reduce the cost of AI. We can't deliver it to a billion people unless we can begin to charge 1 rupee per transaction. ~Nandan Nilekani on India's future in AI

  2. If OpenAI really is releasing a voice assistant tomorrow, it's highly likely to be a true end-to-end system. It's the natural way forward… a somewhat-better multi-model system won't be enough to really wow anyone. End-to-end with low latency would be a breakthrough. ~Matt Shumer

Meme of the Day 🤡

That's all for today! See you tomorrow with more such AI-filled content.

Real-time AI Updates 🚨

โšก๏ธ Follow me on Twitter @Saboo_Shubham for lightning-fast AI updates and never miss whatโ€™s trending!

PS: I curate this AI newsletter every day for FREE; your support is what keeps me going. If you find value in what you read, share it with your friends by clicking the share button below!
