🏄Apple unveils AI-capable Macs

Boston Dynamics robots talk now, and researchers leaked GPT-3.5 Turbo’s parameter count.

Aloha Surfers🏄! 

What's the connection between Apple’s new M3-powered MacBook Pros and a leak about GPT-3.5 from Microsoft researchers? They both point in the same direction: small AI models. Let me explain.

Here’s your two minutes of AI news for the day:

THE IMPORTANT STORIES

Apple unveils AI-capable Macs

Yesterday, Apple took the wraps off its latest Macs, powered by its new M3 family of chips. The chips are notable for GPUs optimized for AI workloads, delivering performance up to 2.5 times faster than the M1.

The high-end M3 Max features a 16-core CPU and a 40-core GPU, and supports up to 128GB of unified memory. Apple says this configuration will enable developers to work with AI models with billions of parameters.

How many billions? We don’t know yet.

However, competitor chipmaker Qualcomm announced earlier this month that its Snapdragon X Elite can run a 13B-parameter model on-device. Let’s hope the M3 stacks up.
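How do those parameter counts translate into memory? A rough rule of thumb: the memory needed to hold a model's weights is the parameter count times the bytes per weight, and real usage is higher once you add activations and context caches. Here's a quick back-of-the-envelope sketch in Python (my own arithmetic, not figures from Apple or Qualcomm):

```python
# Back-of-the-envelope: memory needed just to hold a model's weights.
# Real-world usage is higher (activations, KV cache), so treat these as floors.

def weight_memory_gb(params_billions: float, bytes_per_weight: float) -> float:
    """Weight storage in GB: parameter count (billions) times bytes per weight."""
    # 1e9 parameters x bytes_per_weight bytes, divided by 1e9 bytes per GB
    return params_billions * bytes_per_weight

for params in (13, 20, 70, 175):
    fp16 = weight_memory_gb(params, 2.0)   # 16-bit weights
    int4 = weight_memory_gb(params, 0.5)   # 4-bit quantized weights
    print(f"{params:>3}B params: ~{fp16:.0f} GB at fp16, ~{int4:.1f} GB at 4-bit")
```

So a 13B model needs roughly 26GB at 16-bit precision, which is exactly why "up to 128GB of unified memory" matters, while 4-bit quantization squeezes the same model into about 6.5GB.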

(Apple is a bit late to the AI party but is rumored to be spending $4.75 billion on AI servers in 2024 and is working on a new AI-powered Siri.)

If you haven’t already, check out Monday’s newsletter on how running small models on your device might be the future of AI.

Speaking of small models:

Microsoft researchers accidentally leaked GPT-3.5 Turbo’s parameter count

What is GPT-3.5 Turbo, you ask?

When the folks at OpenAI unveiled ChatGPT, powered by GPT-3.5, to the world last year, everyone’s jaws dropped. But they weren’t done yet. The OpenAI team has been tweaking and refining the 175-billion-parameter model ever since, slashing its running cost. The most efficient version, GPT-3.5 Turbo, which currently powers the free version of ChatGPT, is 10x cheaper to run.

And here’s where things get spicy. There’s a bit of a mystery shrouding this new model. However, some eagle-eyed folks spotted a paper by Microsoft (which is kind of like OpenAI’s BFF) that had a table in it. And if you squinted really hard, you could see it listed the Turbo model at just 20 billion parameters. The paper has since been taken down from the website, fueling the rumor.

Wait, what? From 175 billion to 20 billion?

Now, if this leaked parameter count is legit, it means OpenAI isn’t just in the business of making large, state-of-the-art models; they’re also the kings of optimizing them. GPT-3.5 is often used as the benchmark to compare small models against, and it’s known to be on par with or even outperform Meta’s best model, Llama 2 70B. If that’s done with less than a third of the parameters (20 billion versus 70 billion)… that’s incredible.

The cherry on top: that 20B is not far from what the Qualcomm and Mac chips will be able to run locally. Maybe with a bit more optimization we’ll be able to run a very capable AI model on our MacBooks.
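For a taste of what "running locally" already looks like, here's a minimal sketch using the open-source llama-cpp-python library. The model file name is hypothetical; in practice you'd point it at any quantized GGUF model you've downloaded (a 4-bit, 20B-class file would weigh in around 10GB):

```python
# A minimal local-inference sketch with llama-cpp-python.
# The model path below is hypothetical; substitute any quantized GGUF
# model file you actually have on disk.
from llama_cpp import Llama

llm = Llama(
    model_path="./models/capable-20b.Q4_K_M.gguf",  # hypothetical ~10GB 4-bit file
    n_ctx=2048,  # context window size
)

out = llm(
    "Q: Why are small AI models a big deal? A:",
    max_tokens=64,
    stop=["Q:"],  # stop before the model starts a new question
)
print(out["choices"][0]["text"])
```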

ONE MORE THING

Boston Dynamics gave its robots a voice, and it’s amazing and creepy at the same time.

⌚ If you have one more minute

  • 🤖How I trained an AI on my text messages to make a robot that talks like me

  • 📈How Former Googlers’ VC Firm Invests In Everything From LLMs To AI Doing Drug Discovery 

  • 🚀Alibaba launches its upgraded AI model to challenge ChatGPT

  • 💷UK to launch AI chatbot for Britons to pay taxes and access pensions

AI Art of the day 🎨

Magical eyes created with DALL-E 3. Shoutout to Twitter user @TheFirstRaph for sharing the prompt for it:

frontal closeup photograph of an eye, where a [COLOR(s)] [IRIS DESCRIPTION] within the iris, yet the eye's natural texture and form are preserved

🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄

That's all for today, folks!

If you liked it, please share this hand-crafted newsletter with a friend and make this writer happy!