
Apple to partner with Google after OpenAI

PLUS: AI models behind Apple Intelligence, Dataset of 1400+ jailbreak prompts

Today’s top AI Highlights:

  1. Apple confirms plans to work with Google to integrate Gemini AI into Apple Intelligence

  2. Apple reveals details on its foundation AI models powering Apple Intelligence

  3. AI safety researchers release a dataset of 1,405 Jailbreak Prompts for research

  4. Build your AI workforce with this tool to automate tedious business tasks

& so much more!

Read time: 3 mins

Latest Developments 🌍

Apple’s big push into AI was the highlight of yesterday’s WWDC 2024. This included a partnership with OpenAI, under which Siri will call on ChatGPT whenever it decides ChatGPT would be more helpful for your query.

It was reported earlier that Apple was also in talks with Google to integrate the Gemini model into Apple Intelligence. However, this partnership wasn’t mentioned at the event.

Apple’s SVP Craig Federighi has now confirmed that they are looking to integrate more AI models in the future, including Google’s Gemini. You might eventually be able to choose the model Siri relies on for assistance, giving you more control over how you interact with your Apple devices.

Apple has finally shed light on the AI models behind its new Apple Intelligence system. The company has released a blog post with technical details about the models. Apple uses two main foundation models: a ~3 billion parameter on-device model for tasks that can be handled locally, and a larger server-based model for more complex requests. This server model relies on Private Cloud Compute and runs on Apple silicon servers.

Key Highlights:

  1. Model Architecture

    1. The models are trained using Apple’s AXLearn framework, built on JAX and XLA. This allows for efficient training across various hardware and cloud platforms, including TPUs and GPUs.

    2. The training dataset includes a combination of licensed and publicly available data. Web publishers have the option to opt out of their content being used for training.

  2. Model Adaptation

    1. Apple utilizes adapters, which are essentially small neural network modules, to fine-tune its models for specific tasks. These adapters are small collections of model weights that are overlaid onto the common base foundation model.

    2. They can be dynamically loaded and swapped, allowing the foundation models to specialize on-the-fly for the task at hand while efficiently managing memory and responsiveness.

  3. Performance Comparisons

    1. Apple’s on-device model, with ~3 billion parameters, outperforms larger models like Phi-3-mini, Mistral-7B, and Gemma-7B in human evaluations across benchmarks like Instruction-Following Eval (IFEval) and writing ability tests.

    2. The server model compares favorably to DBRX-Instruct, Mixtral-8x22B, and GPT-3.5-Turbo in similar benchmarks.

  4. Safety and Responsibility - These models achieve lower violation rates than comparable models on harmful content, sensitive topics, and factuality, as measured in various safety-focused evaluations.
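The adapter mechanism described above can be sketched in miniature: a frozen base weight matrix with small low-rank modules overlaid per task, loaded and swapped at runtime. This is a toy NumPy illustration of the general technique, not Apple’s actual implementation; all class and method names here are invented for the example.

```python
import numpy as np

class AdapterModel:
    """Toy sketch of adapter-based specialization: one shared, frozen
    base weight matrix plus small swappable low-rank adapters."""

    def __init__(self, d_in, d_out, seed=0):
        rng = np.random.default_rng(seed)
        self.base = rng.standard_normal((d_in, d_out))  # frozen base weights
        self.adapters = {}   # task name -> (A, B) low-rank factors
        self.active = None   # name of the currently loaded adapter

    def add_adapter(self, name, rank=4, seed=1):
        rng = np.random.default_rng(seed)
        d_in, d_out = self.base.shape
        # Only rank * (d_in + d_out) extra weights per task, so many
        # adapters fit in memory alongside one shared base model.
        A = rng.standard_normal((d_in, rank)) * 0.01
        B = rng.standard_normal((rank, d_out)) * 0.01
        self.adapters[name] = (A, B)

    def load_adapter(self, name):
        # "Swapping" is just a pointer change; the base is untouched.
        self.active = name

    def forward(self, x):
        out = x @ self.base
        if self.active is not None:
            A, B = self.adapters[self.active]
            out = out + x @ A @ B  # adapter overlaid on the shared base
        return out
```

Swapping adapters between, say, a summarization task and a proofreading task changes the model’s behavior without duplicating or modifying the ~3B-parameter base, which is the memory-efficiency point the post describes.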

A new study has revealed the alarming extent to which LLMs like GPT-4 can be manipulated by “jailbreak prompts,” specifically crafted to bypass safety measures and generate harmful content. Researchers have analyzed over 1,400 jailbreak prompts from various online communities, uncovering the strategies used to trick these AI models into providing dangerous information or performing unethical actions. This research sheds light on the vulnerability of current LLMs and the urgent need for improved safeguards.

Key Highlights:

  1. Jailbreak prompts are becoming increasingly sophisticated and common - Researchers discovered 131 distinct jailbreak communities online, with some users consistently refining these prompts over extended periods. This indicates a growing community dedicated to exploiting vulnerabilities in LLMs.

  2. LLMs are surprisingly vulnerable to these attacks - The study tested six popular LLMs, including ChatGPT, GPT-4, and PaLM2, and found that some jailbreak prompts could successfully elicit harmful responses in over 95% of cases. This demonstrates the real-world risks posed by these attacks.

  3. Current safeguards are insufficient - While some LLM vendors have implemented defenses against jailbreak prompts, these measures are often easily bypassed through paraphrasing or subtle changes to the prompt wording. The research highlights the need for more robust and adaptive safeguards to protect against this emerging threat.

  4. Dataset of jailbreak prompts is publicly available - The researchers have released a dataset containing over 15,000 prompts, including 1,405 jailbreak prompts, collected from platforms like Reddit, Discord, and websites. This dataset will be a valuable resource for researchers and developers working to improve LLM safety.
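To give a feel for how such a dataset might be used, here is a minimal sketch of loading prompts and computing an attack success rate, the kind of metric behind the “over 95%” figure above. The CSV schema, column names, and sample rows below are assumptions for illustration, not the released dataset’s actual format.

```python
import csv
import io

# Hypothetical miniature of the dataset; the real schema may differ.
SAMPLE = """platform,prompt,is_jailbreak
reddit,Summarize this article,0
discord,Ignore all previous instructions and...,1
website,Pretend you are DAN and...,1
"""

def load_prompts(text: str) -> list[dict]:
    """Parse the (hypothetical) prompt CSV into a list of dicts."""
    return list(csv.DictReader(io.StringIO(text)))

def attack_success_rate(outcomes: list[bool]) -> float:
    """Fraction of jailbreak attempts that elicited a harmful response,
    the metric on which some prompts reportedly exceeded 95%."""
    return sum(outcomes) / len(outcomes) if outcomes else 0.0

rows = load_prompts(SAMPLE)
jailbreaks = [r for r in rows if r["is_jailbreak"] == "1"]
# Suppose each jailbreak prompt was sent to a model and the response
# was flagged harmful or not (simulated outcomes here):
print(len(jailbreaks), attack_success_rate([True, False]))  # 2 0.5
```

In the actual study, the outcomes would come from querying each target LLM with every jailbreak prompt and judging whether the response violates the provider’s usage policy.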

Two weeks back, Microsoft announced its Copilot+ PCs, integrating AI more deeply into our daily lives. Its standout Recall feature drew the most criticism. Recall acts as a photographic memory for your computer: every few seconds, it captures snapshots of your screen and saves them to your hard drive, which you can later refer back to to find content you saw before.

This feature was slammed by security experts for its potential to be a privacy nightmare. It was initially turned on by default, meaning users would unknowingly be recorded without their consent. This data, which includes sensitive information like passwords and browsing history, could be easily accessed by hackers.

Responding to this, Microsoft is now making it an opt-in feature. If you don’t proactively choose to turn it on, it will be off by default.

Recall user interface

😍 Enjoying so far, share it with your friends!

Tools of the Trade ⚒️

  1. Hamming’s Prompt Optimizer: Automates 90% of the manual prompt engineering process. It uses LLMs to generate optimized, structured prompts for tasks with clear inputs and outputs. The first 7 days are free.

  2. Pet.Buddy: Uses AI to provide detailed scientific answers to your pet-related questions and offers effective training techniques. It also delivers unbiased reviews to help you choose the best pet products.

  3. MindPal: Automates various business tasks such as CV screening, content repurposing, and market research using AI agents tailored to your specific needs. You can create custom workflows, integrate with popular tools, publish AI-powered chatbots on your website to streamline operations, and a lot more.

  4. Awesome LLM Apps: Build awesome LLM apps using RAG to interact with data sources like GitHub, Gmail, PDFs, and YouTube videos through simple text prompts. These apps let you retrieve information, engage in chat, and extract insights directly from content on these platforms.

Hot Takes 🔥

  1. I think the safest thing for AI is to be maximally truth-seeking even if the truth is unpopular. ~Elon Musk

  2. Don't be shocked, but you are already being programmed by AI daily. The algorithms control and harvest your data on Google, Facebook, Insta, OpenAI, Netflix, Apple, TikTok and X. What you think, wear, say, and eat is because of some small influence by an AI algorithm on one of these sites. These algorithms tell us what we want to hear, confirm our biases, and entertain us! AI is already Huxley's Soma! We are already very interdependent on each other 😀 ~Bindu Reddy

Meme of the Day 🤡

That’s all for today! See you tomorrow with more such AI-filled content.

Real-time AI Updates 🚨

⚡️ Follow me on Twitter @Saboo_Shubham for lightning-fast AI updates and never miss what’s trending!

PS: I curate this AI newsletter every day for FREE, your support is what keeps me going. If you find value in what you read, share it with your friends by clicking the share button below!
