🏄Alexa, let's chat

Amazon's Alexa gets more human-like with generative AI

Hello Surfers🏄! 

“Alexa! Write me an intro for this email!” I say out loud as I sip my morning brew and prop my leg up on the coffee table. This might soon be the way I write this section, but for now, I continue typing it out on my trusty keyboard.

Quick service announcement: Next week, "Riding the Wave" will take a break from arriving in your inbox as I sail the Adriatic Sea and pet dolphins, far away from civilization and 5G towers. But don’t worry; I will be back on the 2nd of October.

Here’s your one minute of AI news for the day:

ONE PIECE OF NEWS

🤖Amazon’s Alexa gets major AI boost

Amazon demoed its new, AI-boosted Alexa yesterday. No more awkwardly shouting "Alexa" every time you want to continue your chat about last night's game or your feelings on pineapple pizza.

The company claims Alexa will now be able to jump back into chats and learn your preferences, like your favourite football team or pizza topping.

In the battle of tech titans, Amazon's looking to remain king of the home device market.

In a live demo, Dave Limp, senior VP of Devices and Services, showcased a new mode, “Alexa, let’s chat,” in which you can have a long back-and-forth conversation on various subjects, with the executive pausing to address the audience and then seamlessly resuming the chat.

Although details were sparse, Amazon said experiences will be designed to protect privacy, security, and user control.

Amazon's ambition doesn't stop there; the company is already partnering with BMW to develop conversational in-car voice assistant capabilities.

The news marks a major step in bringing conversational AI into everyday life. Merging speech and language models offers a peek into the not-so-distant future of all-knowing personal AI assistants.

“We’re not done and won’t be done until Alexa is as good or better than the ‘Star Trek’ computer,” Limp says. “And to be able to do that, it has to be conversational. It has to know all. It has to be the true source of knowledge for everything.”

ONE MORE THING

OpenAI’s DALL-E 3 image generator will soon be available in ChatGPT Plus and ChatGPT Enterprise.

It’s a step towards the next generation of language models: multi-modal LLMs. If you missed the newsletter from two days ago and want to know what a multi-modal LLM is, you can read it here.
The tweet below contains a video demo; click on the tweet to watch it.

⌚ If you have one more minute:

  • What an AI-Generated Medieval Village Means for the Future of Art

  • Ray Dalio says AI will greatly disrupt our lives within a year—you should be both excited and scared of it

  • OpenAI is building a red teaming network to tackle AI safety - and you can apply

AI Art of the day 🎨

This image was generated by the soon-to-be-released DALL-E 3, OpenAI’s next-generation image generator. Last year, when text-to-image AI models began rolling out, DALL-E 2 was among the most promising platforms. However, Stable Diffusion and Midjourney soon gained a competitive edge. DALL-E 3 aims to put OpenAI back among the top competitors in the image generation race. Based on this demo image, they might have a chance.

🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄

That’s it, folks!

If you liked it, please share this hand-crafted newsletter with a friend and make this writer happy!