Alexa has been accused of mass surveillance, and this is no surprise. Tools and platforms from the large tech giants all end up collecting and sharing your private data. Part of this is a legal obligation in the US: if these companies are told to collect and share, they have to, and if they are told to spy on you via exploits, backdoor access, or even this "record and send" flaw, they are simply doing their job.
Alexa itself was asked by a user on YouTube whether it works with the CIA; it wouldn't answer and turned itself off. More recently, a user reported that a personal conversation was shared with a random phone contact without any permission from, or notification to, the user. Is it a bug, or is it by design? Either way, we have to assume that our devices are capable of spying on us and disclosing our conversations and personal data.
Google (the company that claims to do no evil) is a good example of the social implications of AI. A recent live demonstration showed Google's AI assistant being told to book a haircut appointment for a certain time. The AI searched for and found local salons and actually booked an appointment while holding a normal, human conversation, with the person on the other end completely unaware they were talking to an AI bot. The voices vary, yet all are 100% convincing, essentially without fault or flaw in their interactions; it is simply stunning and scary at the same time.
Some are criticizing this as unethical, and I would agree, but I would also argue the technology could be used for good. However, what is stopping a bunch of script kiddies from making an army of these bots to SWAT people, report false emergencies, or make mass prank calls? I would imagine that at this point the AI is probably good enough to duplicate a target's voice as well. We are heading into extremely uncharted and scary territory here.
Like anything else, there is no arguing that scientific advancement has almost always been used for war and to harm people. I believe AI's first and primary use will be as a weapon, whether for social experiments, controlling people, or crime.
There are other "here right now" implications, such as the fact that this technology could essentially replace entire call centers. In fact, I would argue that this could be done today and no one would be the wiser that they were speaking to a bot.
The implications are far-reaching. I also feel this kind of AI, combined with robotics, is going to be a mass job killer. We have robots that can build entire cars and houses, and AI that can interact with humans at the same level we interact with each other. It is not an overstatement to say that many of our jobs, and much of our existence, are teetering on the edge of becoming unnecessary and obsolete; in fact, this is a conclusion that some AI appears to have already reached. It would be logical for an AI network to conclude that it should be on top and that we should work for it. I know it's a doomsday scenario, but I concur with other experts that not only is this possible, it is likely if proper checks are not put in place.
Other examples of AI have shown how some of these bots mine the treasure trove of social media to create their personas, including their views. I am sure you could even plug in political or racial bias. The point is that some of these bots have said and done disturbing things, such as threatening the people they were talking to. It's almost as if the cesspool we know as social media is ruining them, and many people have asked: if AI is picking up bad habits from social media, what about our kids?
For the above reasons, I would not feel comfortable with machines making life-and-death decisions. I think AI has massive potential and we are only starting to tap into it, but time will tell where we take AI, or where AI takes us. It has the potential to do both great good and great harm, and I think it is largely unpredictable.