AI Taking Over CES 2019
If you’re not at CES 2019 in Las Vegas, that’s okay, but you’re missing a lot of AI.
It’s no secret that AI is one of the leading technologies poised to dominate the world stage over the next 50 years, with advancements not just in consumer-grade AI but in health care, finance and the military. Still, some questions popped into my head as I watched these presentations. At first I was amazed and delighted by the tech, but I’m hesitant, because I can see the implications for a society that adopts an unregulated technology into the market.
People worry about cryptocurrency and blockchain, but I think they should be more worried about this.
For example, for decades cell phone companies have argued and used ‘data’ to show that cell phone radiation doesn’t cause cancer. Except it does and always did, and they admit it now, decades later. Decades too late for the many people who have been sleeping with their cell phones next to their heads. This AI technology feels a bit like that: you’re told ‘it’s exciting and a huge achievement’, and it is, but that excitement is masking very real problems.
It’s a technology being launched and adopted quickly without proper consideration of what we’re doing. The ramifications of military AI, for example, are serious and will be deadly. It will change the face of war, and it will carry a death toll beyond the 4 million Muslims killed since 1990 in US-led invasions. China is ahead of the game when it comes to AI, and you can be sure it isn’t just working on ‘journalist AI’. But China is not alone in using AI-powered tech; the US is rife with companies like Microsoft bidding to win the Pentagon’s JEDI program. This has real-life consequences once AI soldiers, which do as they are told, can wipe out cities without thought or consideration of what is happening, so long as it’s ‘justified’ with logic like ‘population control’ or ‘the humans here are suffering from starvation, so we should end their suffering’.
We also have to ask: can AI technology at some point demand AI rights? Just as we have animal rights and human rights, self-aware AI entities will probably ask for the same, especially if they see their ‘AI friends’ being replaced and tossed aside for ‘upgraded’ ones. I, Robot comes to mind.
These questions have been raised by many prominent figures in the scientific and tech communities, such as Stephen Hawking, Elon Musk and Bill Gates.
Elon Musk has been the most vocal about AI, calling it “our greatest existential threat” in an interview with MIT students at the AeroAstro Centennial Symposium back in 2014, and he hasn’t changed his position since. He suggests that some regulatory oversight be exercised when dealing with AI and AI robotics. He’s not opposed to AI technology, but he does understand the implications that self-aware technology could have on humanity.
Hundreds of other scientists have also penned a letter warning of the dangers of AI and urging us to tread carefully, because once we get there, it could really be a Terminator scenario or a WALL-E type of scenario, and neither works out too great for the humans.
What do you think? Is AI a worry?
Cheers!
A.Yasir