🏄AI regulation: Is the EU shooting itself in the foot?

The EU’s AI regulation: innovation killer or protector of society?

Hello Surfers🏄! 

I read through all the EU AI regulation shenanigans. It feels good to live in a part of the world where the protection of citizens and their rights is of paramount importance. However, the EU is being left behind on innovation.

Here’s your one minute of AI news for the day:

ONE PIECE OF NEWS

🇪🇺The EU’s AI regulation: innovation killer or protector of society?

“Ugh, EU! Again with the regulations?” I grumbled, watching Silicon Valley elites on my screen lament the European Union's AI regulation proposal, accusing it of stifling innovation. And, annoyingly, they might be onto something. A 2022 study splashed cold data on our faces: in the last six years, the US published 73% of large foundational AI models, while the EU … less than 10%.

After gas and oil, a new technological dependency threatens to emerge.

Is the proposed AI Act just the EU shooting itself in the foot?

On one hand, the AI Act seems like society’s knight in shining armor protecting the liberty of its citizens. It outright bans applications of AI, such as social scoring (giving citizens points for their behaviour) or facial recognition in public spaces. Kudos for that.

It also clearly tags AI systems as "high-risk" or "low-risk." If your AI plays in the high-risk sandbox, expect to juggle a multitude of obligations, from risk assessments to transparency requirements and human oversight.

Sounds great, right? Why the fuss, then?

Here’s the catch: compliance ain’t cheap. While the Googles and Microsofts of the world can casually open their deep pockets, the few EU AI startups pushing innovation are left gasping for air.

Then there’s the storm brewed by the European Council’s move to shove all foundational models—yep, ChatGPT, Claude, PaLM, and friends—into the high-risk bucket. With a gazillion potential use cases, asking developers to assess and micromanage each risk is like asking me to quit caffeine: unrealistic and downright crazy.

Add to this quagmire the dilemma of data accountability. Developers like OpenAI might be clueless about how other firms apply their models in practice, while users and intermediaries may lack intimate knowledge of the training data. Without some form of information-sharing mechanism, a chaotic merry-go-round of “who’s to blame” will ensue.

The EU also casts a side-eye toward copyright breaches in training data, advocating for copyright holders to be able to opt out of having models trained on their work. That sounds great on paper, but it could prove challenging to implement.

The final verdict? The EU’s at a crossroads, juggling innovation ambitions with citizen protection. As the dust settles, one can only hope they find the golden balance. Because, honestly, we need both good regulation that can set an example and room for innovation!

ONE MORE THING

Augmented reality is getting crazy.

This is where AR is at. Click on the tweet to see the video.

⌚ If you have one more minute:

  • AI Can Predict Schizophrenia Via Hidden Linguistic Patterns

  • Google Cloud Launches New Generative AI Capabilities For Healthcare

  • ‘Keep Your Paws Off My Voice’: Voice Actors Worry Generative AI Will Steal Their Livelihoods

  • German antitrust head warns AI may boost Big Tech's dominance

AI Art of the day 🎨

Even though I mostly feature pictures made by Midjourney, Stable Diffusion or DALL-E, the realm of image generators is vast and bursting with variety. This image was created by zeng_wt with the Kandinsky model.

🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄🌊🏄

That’s it folks!

If you liked it, please share this hand-crafted newsletter with a friend and make this writer happy!