Not long ago, a New Zealand-based supermarket was dismayed to find its AI meal bot going haywire. Instead of offering wholesome recipe suggestions using its products, it had begun proposing dishes such as “bleach-infused rice surprise” and “mysterious meat stew” (with the mysterious meat being human flesh).
While this may have been a bit of fun for internet pranksters who prompted the bot with ever more outlandish ingredients, it also raises a growing concern: what can happen when AI falls into the wrong hands?
Just the year before, researchers used an AI trained to search for beneficial new drugs to generate 40,000 potential chemical weapons in just six hours.
Even when AI does what it’s trained to do, we’ve already seen many examples of what can happen when algorithms are developed without oversight, from dangerous medical diagnoses to racial bias to the creation and spread of misinformation.
With the race to develop ever more powerful large language models ramping up, at TNW 2023 we asked AI experts: “Will AGI pose a threat to humanity?”
Whether you believe in an apocalyptic Terminator-esque future or not, what we can all agree on is that AI needs to be developed responsibly. However, as usual, innovation has vastly outpaced regulation. As policymakers struggle to keep up, the fate of AI largely depends on the tech community coming together to self-regulate, embrace transparency, and, perhaps most remarkably, actually work together.
Of course, this means more work for companies developing AI. How do you build a framework for developing responsible AI? And how do you balance this with the need to innovate and keep up with expectations from board members and investors?
At TNW 2023, we spoke with Lila Ibrahim, COO of Google’s AI laboratory DeepMind. She shared three essential steps for building a responsible future for AI and humanity.