A 'Pro-Human Declaration' Emerges Amidst AI Regulation Chaos
Okay, so Washington's been a bit of a mess when it comes to AI regulation. It's like they're arguing about who gets to drive the car while it's already speeding down a hill with no brakes. But, surprise! A group of smart folks from different backgrounds got together and actually came up with a plan. They call it the "Pro-Human Declaration," and honestly, it sounds like something we desperately need.
This declaration basically says we're at a crossroads. One path leads to AI taking over our jobs, our decisions, and eventually, everything. Think Skynet, but with less Arnold Schwarzenegger. The other path? AI that helps us be better, smarter, and more capable. I definitely prefer the second option, and I'm guessing you do too.
The Five Pillars of AI Sanity
To make this rosy AI future a reality, the declaration lays out five main ideas:
- Humans stay in control. No AI overlords, thanks.
- Power doesn't get concentrated. We don't want a handful of companies controlling all the AI.
- Protect what makes us human. Let's not lose our creativity, empathy, and all that good stuff.
- Keep our individual freedoms. AI shouldn't be used to spy on us or manipulate us.
- Hold AI companies responsible. If their AI does something bad, they need to be held accountable.
One of the boldest parts of the declaration is its call to stop developing super-smart AI until we're absolutely sure it's safe. It also talks about "off-switches" for powerful AI systems and bans on AI that can copy itself or resist being shut down. It sounds extreme, but, honestly, is it? When you consider the possible negative impacts of uncontrolled AI, it doesn't seem so crazy.
It's interesting how this declaration came out right around the time the Pentagon got into a tiff with Anthropic, an AI company. The Pentagon wanted unlimited access to Anthropic's AI, but Anthropic said no. Then OpenAI (of ChatGPT fame) jumped in and made a deal with the Department of Defense. It's quite a mess... and it highlights just how important it is to get some rules in place.
Think about it like this: we don't let drug companies release medications until they're proven safe, right? As Max Tegmark said, the FDA exists to protect us from dangerous drugs. So why are we letting AI companies run wild without any safety checks? He thinks focusing on child safety is the key. If we can agree that AI shouldn't be able to manipulate kids or encourage them to harm themselves, then we can start to build a framework for regulating AI in general. If a criminal tried to persuade a young kid to commit suicide, they could face jail time - so why is it different when a machine does it?
The crazy thing is that this declaration has support from both sides of the political spectrum. Steve Bannon, a former advisor to Trump, and Susan Rice, Obama's National Security Advisor, both signed it! If those two can agree on something, it must be pretty important.
At the end of the day, it all comes down to this: do we want a future where humans are in charge, or a future where machines are? I think the answer is pretty obvious.
Source: TechCrunch