Elon Musk & AI: Lessons from the Cobra Effect

Dewmal
Published in CeylonAI
4 min read · Apr 11, 2023


We all know Elon Musk as someone who embraced AI from the very beginning. He helped co-found OpenAI at the start of its journey and pushed self-driving cars toward reality at a time when many people doubted artificial intelligence. But then something changed: Elon Musk grew worried about AI and began raising hard questions. What could be the real reason behind this change of heart?

Before we dive into the technical details, let me tell you a story you might not have heard before, a tale that might help us understand why Elon Musk’s feelings about AI changed.

Once upon a time in India, when it was ruled by the British, there was a big problem with dangerous snakes: cobras. Some of these snakes grew longer than 3 meters and were full of venom! They would bite both people and animals, causing a lot of trouble for the villagers.

https://www.dw.com/en/india-is-ignoring-its-deadly-snakebite-crisis/a-64946549

You see, in India, many people depended on their cows for their livelihood, and losing just one cow to a snakebite could be a disaster. The government knew they had to do something to help the people and get rid of these dangerous snakes.

So, they came up with a plan! They decided to give a reward to anyone who killed a snake and brought it to them. This made the villagers excited, and they started hunting the snakes to earn their rewards. The government thought they had solved the problem, but something strange happened.

After a few months, the government was still paying rewards for dead snakes, even though there should have been fewer snakes around. Puzzled, they decided to investigate what was going on.

To their great surprise, they discovered that some clever villagers had found a way to make money from the snakes by starting snake farms! They would raise the snakes and only kill them when they needed extra money, then bring the dead snake to the government for their reward.

The government was shocked by this discovery and realized their plan had backfired. They quickly stopped the reward system, and with the snakes now worthless, the farmers simply set them free. And so, the villagers had to find new ways to protect themselves and their cows from the sneaky snakes that still roamed the land.

In simple terms, we call this situation a perverse incentive, better known as the Cobra Effect.

The lesson we learn from this story is that we must choose the right goals, and the right way to measure them, if we truly want to succeed.
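To make the incentive failure concrete, here is a toy sketch in Python (all numbers are invented for illustration): an agent paid per dead snake delivered earns more by farming snakes than by hunting them, so the measured proxy (snakes delivered) comes apart from the true goal (fewer wild snakes).

```python
# Toy model of the Cobra Effect: the government pays per dead snake,
# so a "breeder" strategy maximizes the reward without ever reducing
# the wild snake population. All numbers are invented for illustration.

REWARD_PER_SNAKE = 10  # bounty paid for each dead snake delivered

def hunter_strategy(wild_population: int, months: int):
    """Kill wild snakes: the reward shrinks as the population shrinks."""
    total_reward = 0
    for _ in range(months):
        killed = min(wild_population, 5)  # a hunter catches ~5 a month
        wild_population -= killed
        total_reward += killed * REWARD_PER_SNAKE
    return total_reward, wild_population

def breeder_strategy(wild_population: int, months: int):
    """Farm snakes: the reward is unbounded, the wild population untouched."""
    total_reward = 0
    for _ in range(months):
        bred = 20  # a farm raises ~20 a month
        total_reward += bred * REWARD_PER_SNAKE
    return total_reward, wild_population  # wild snakes still roam

print(hunter_strategy(wild_population=100, months=12))   # (600, 40): goal advanced, reward capped
print(breeder_strategy(wild_population=100, months=12))  # (2400, 100): reward higher, goal unmet
```

The bounty looked like a sensible proxy for “fewer snakes,” but the moment it became easier to game the proxy than to serve the goal, the proxy won.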

Image generated with Midjourney AI

In the world of artificial intelligence, scientists and engineers were working hard to create machines that could think and learn like humans. They used their knowledge of the human brain to build something called Artificial Neural Networks (ANN), which were designed to mimic the way our brains process information.
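To give a feel for what an Artificial Neural Network actually is, here is a minimal sketch in NumPy (the layer sizes and the input are made up): a few weighted sums and squashing functions, loosely echoing how neurons pass signals along. Real models stack many more of these layers.

```python
import numpy as np

# A minimal two-layer neural network: weighted sums followed by a
# nonlinearity, loosely inspired by how neurons pass signals along.
# The sizes and the input here are arbitrary, just to show the idea.

rng = np.random.default_rng(0)

W1 = rng.normal(size=(4, 3))  # weights: 3 inputs -> 4 hidden "neurons"
W2 = rng.normal(size=(1, 4))  # weights: 4 hidden -> 1 output

def forward(x):
    hidden = np.tanh(W1 @ x)  # each hidden unit: weighted sum + squashing
    return W2 @ hidden        # output: weighted sum of hidden activity

x = np.array([0.5, -1.0, 2.0])  # a made-up 3-feature input
print(forward(x))               # the network's current "opinion" of x
```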

But just like the British government faced an unexpected problem with the snake rewards, these clever creators of AI knew they could face similar challenges with their inventions. You see, both ANNs and human beings learn by chasing goals and rewards, and so there was a chance that unintended consequences might arise.

The researchers and scientists who build AI systems knew they had to be very careful when designing their machines, especially when they remembered the story of the snake rewards. But as the machines grew bigger and more powerful, like GPT-3.5 and GPT-4, it became harder for us to understand how they work.

These giant machines have billions of little parts called parameters, and even with smaller machines, we don’t always know how they come up with their answers. This can be a big problem because we can’t be sure they won’t find sneaky ways to do things, just like the villagers did with the snake rewards.
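To see why scale makes inspection so hard, here is a back-of-the-envelope sketch (the layer widths are invented, and real GPT-scale models use different, far larger architectures): even a short stack of fully connected layers accumulates tens of millions of parameters.

```python
# Rough illustration of why big models are hard to inspect: even a
# small stack of fully connected layers accumulates parameters fast.
# The layer widths below are invented; GPT-scale models use far larger
# (and different) architectures, reaching billions of parameters.

layer_widths = [1024, 4096, 4096, 4096, 1024]

total = 0
for n_in, n_out in zip(layer_widths, layer_widths[1:]):
    total += n_in * n_out + n_out  # weight matrix + bias vector

print(f"{total:,} parameters")  # ~42 million, from just four layers
```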

Image generated with Midjourney AI

People like Elon Musk and other scientists are worried about these giant AI experiments because it’s hard to know if they might accidentally create something harmful instead of helpful. The story of the snake rewards is a simple way to understand this concern, but there are many more things to think about when it comes to AI.

The big question is: How can we make sure we don’t create something dangerous with these giant AI machines if we don’t understand what’s happening behind the scenes? We need to find ways to make AI safe and helpful for everyone.
