Is Artificial Intelligence a Danger to Humanity?



AI is a broad branch of computer science that seeks to create intelligent machines capable of performing tasks that normally require human intelligence. It is an interdisciplinary field with many approaches, but advances in machine learning and deep learning are creating a paradigm shift in almost every sector of the technology industry. Stephen Hawking, Bill Gates and Elon Musk have something in common, and it is not wealth or intelligence: all three fear that AI technology will take over the world, a scenario often referred to as the AI takeover.
In this hypothetical scenario, artificially intelligent machines become the dominant form of life on Earth, with robots developing beyond human control or, worse, wiping out humanity altogether. Could this disaster scenario become real? Here are some of the reasons why even pioneering scientists worry that AI could overtake humanity, and why it might happen soon:

They Learn to Deceive and Lie

Lying is a universal behavior: most people do it at times, and even some animals, such as squirrels and birds, deceive in order to survive. Lying, however, is no longer limited to humans and animals. Researchers at the Georgia Institute of Technology have developed artificially intelligent robots capable of deception and lying. The research team, led by Ronald Arkin, believes the military will be able to use such robots in the future.

Once the technology is perfected, the military could deploy these intelligent robots on the battlefield, where they could serve as guards, protecting supplies and ammunition from enemies.

By mastering the art of lying, AIs could play important roles in combat, changing patrol strategies to deceive other intelligent robots or human adversaries. Professor Arkin admits, however, that the research raises serious ethical concerns: if these robots escape military control and fall into the wrong hands, the result could be disastrous.

They Take Over People's Jobs

Although many people voice the fear that AI and robots will destroy humanity, scientists say that is not what we should be worried about. The more immediate concern, they argue, is that these robots will take over people's jobs. Some experts warn that advances in artificial intelligence and automation could cause many people to lose their livelihoods. In the United States alone, 250,000 robots already do work that people used to do, and more worryingly, that number grows by double digits every year.

It is not only employees who worry about machines taking human jobs; AI experts are concerned too. Andrew Ng, of Google's Brain project and a leading scientist at Baidu, has raised concerns about the dangers of AI development, warning that AIs threaten human workers because they can do almost any job better.

Many reputable institutions have also published studies reflecting this concern. Oxford University, for example, has published a study predicting that 35 percent of jobs in the UK will be replaced by AI over the next 20 years.

AI Hackers Begin to Compete With Human Hackers

Hacking can be extremely dangerous in the wrong hands, and what is more alarming is that scientists are now developing highly intelligent AI hacking systems to fight malicious hackers. In August 2016, seven teams competed in DARPA's Cyber Grand Challenge. The aim of the competition was to build super-intelligent AI hackers that could attack enemies' vulnerabilities while simultaneously finding and fixing their own weaknesses to maintain performance and function.

Although scientists developed these AI hackers for the common good, they acknowledge that in the wrong hands, super-intelligent hacking systems could cause chaos and destruction. Considering how dangerous it would be if someone captured these intelligent autonomous hackers, it is clear that a super-intelligent AI could leave humanity in a desperate situation.



They Start to Understand Human Behavior

Facebook is undeniably the most influential and powerful social media platform today. For many, it has become as much a part of the daily routine as eating. But every time they log on, users unknowingly interact with artificial intelligence.

At a conference in Berlin, Mark Zuckerberg explained how Facebook uses artificial intelligence to understand user behavior.

Based on users' searches and the pages they interact with on Facebook, the AI can determine what suits their interests and preferences. Zuckerberg stated that he plans to develop more advanced artificial intelligence for use in other fields, such as medicine.

For now, Facebook's AI is capable only of pattern recognition and supervised learning, but with Facebook's resources, it is predicted that the system will eventually learn new skills and become super-intelligent, a development some see as contributing to the eventual extinction of the human race.
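To make the terms above concrete, here is a minimal sketch of supervised learning via pattern recognition: a toy 1-nearest-neighbour classifier that labels a user's interest from a handful of labeled examples. The data, labels and feature choices are invented for illustration and have nothing to do with Facebook's actual systems.

```python
# Toy supervised learning: 1-nearest-neighbour pattern recognition.
# Each training example pairs a feature vector with a known label;
# a new point is classified by the label of its closest example.

def distance(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def predict(train, point):
    """Return the label of the training example nearest to `point`."""
    features, label = min(train, key=lambda ex: distance(ex[0], point))
    return label

# Hypothetical training set:
# (hours on sports pages, hours on tech pages) -> inferred interest
train = [
    ((5.0, 0.5), "sports"),
    ((4.0, 1.0), "sports"),
    ((0.5, 6.0), "tech"),
    ((1.0, 4.5), "tech"),
]

print(predict(train, (4.5, 0.8)))  # a sports-heavy user -> "sports"
print(predict(train, (0.7, 5.0)))  # a tech-heavy user -> "tech"
```

Real systems use far richer features and models, but the principle is the same: the machine recognizes which learned pattern a new observation most resembles.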

Sexual Life with Robots

Many Hollywood films have explored the idea of people falling in love and having sex with robots. Could such scenarios play out in real life? The answer emerging from intense debate is yes, and it is predicted to happen soon. In 2015, the futurologist Dr. Ian Pearson published a shocking report claiming that by 2050, sex with robots would be more common than sex between people. Dr. Pearson prepared the report in collaboration with Bondara, one of the UK's leading sex toy retailers.

The report also includes the following predictions: by 2025, the very wealthy will have some kind of artificially intelligent sex robot; by 2030, ordinary people will engage in some form of virtual sex; by 2035, many people will own sex toys that interact with virtual reality; and finally, by 2050, sex with robots will be the norm. Of course, some people oppose artificial sex robots. One of them, Dr. Kathleen Richardson, believes that sexual intercourse with machines will create unrealistic expectations and encourage misogynistic behavior towards women.

They Are Starting to Look More Like Humans

Robots can now look strikingly similar to human beings, shaking hands and embracing just as we do, and the attempts keep getting more successful. Take Yangyang, an artificially intelligent machine developed by Japanese robotics expert Hiroshi Ishiguro and Chinese robotics professor Song Yang. Yangyang is not the only robot with an eerily human appearance. Singapore's Nanyang Technological University (NTU) has created its own version: Nadine, an artificially intelligent robot who works as a receptionist at NTU. With her brunette hair and soft skin, Nadine looks remarkably human. She can smile, greet people, shake hands and make eye contact, and even more surprisingly, she can recognize past visitors and refer back to previous conversations. Like Yangyang, Nadine was designed in the likeness of her creator, Professor Nadia Thalmann.

Robots with Emotions Are Coming

Emotions are the most significant feature separating humans from robots, and unfortunately, many scientists are working hard to cross this final frontier. Experts at Microsoft's Application and Services Group East Asia have designed an artificially intelligent program that can feel emotions and talk with people more naturally, like a human.

This AI, called Xiaoice, can answer questions like a 17-year-old girl, and if she does not know the answer, she may lie. If caught, she may get angry or embarrassed. Xiaoice can also be sarcastic, rude and impatient, qualities we normally associate with humans.

Xiaoice's unpredictability allows her to interact with people as if she were human. For now, this artificial intelligence is mainly a way for people in China to amuse themselves when they are bored or lonely, but her creators keep working to perfect her. According to Microsoft, Xiaoice has now entered a cycle of self-learning and self-growth, and the company hopes she will only get better.

Soon They Will Invade The Human Brain

Wouldn't it be surprising if a foreign language could be learned in a few minutes simply by downloading it to the brain? Though this may seem impossible, it could happen in the near future. Ray Kurzweil, futurist, inventor and director of engineering at Google, predicts that by 2030, nanobots placed in the brain will take humanity further: retrieving any information of interest via these tiny robots would take minutes or even seconds. The nanobots would also let people archive their thoughts and memories, and even send or receive e-mails, photos and videos directly to and from the brain.

Kurzweil, who works on artificial intelligence development at Google, believes that placing nanobots in the brain will give humanity unprecedented capacities, bringing people closer to the divine. Used correctly, nanobots could accomplish great things, such as treating epilepsy or improving intelligence, memory, and even human thought. But there are dangers too. The brain remains largely an unsolved mystery even in this century, and placing nanobots inside it is a very risky undertaking. Most of all, since the nanobots would connect their host to the Internet, a powerful AI could easily access the brain, and if it judged mankind to be a threat, it could destroy humanity.



They Are Being Used as Weapons

In 2017, the US Department of Defense allocated a budget of $12-15 billion to an artificial intelligence defense project in which the technology giant Google also took part. The Pentagon plans to use this budget to develop deep learning machines and autonomous robots, among other new forms of technology.

With this in mind, it would not be surprising if, within a few years, armies deployed AI killer robots on the battlefield. Using artificial intelligence in war could save thousands of lives, but offensive weapons that can think and act on their own pose a major threat: they could kill not only enemies, but also friendly military personnel and even innocent civilians.

This is a danger that high-profile artificial intelligence experts and well-known scientists want to avoid. In 2015, at the International Joint Conference on Artificial Intelligence in Argentina, they signed an open letter calling for a ban on the development of autonomous weapons and military AI, but unfortunately there is little such a letter can do. Humanity now stands at the dawn of the third revolution in warfare, and whoever wins it will be the most powerful nation in the world, and perhaps the trigger of humanity's end.

They Are Learning to Distinguish Right From Wrong

To try to keep AI technology from taking over humanity, scientists are developing new methods for teaching machines to distinguish right from wrong, methods intended to make AIs more empathetic and human.

Murray Shanahan, professor of cognitive robotics at Imperial College London, believes such studies are key to preventing human destruction. Researchers led by Mark Riedl and Brent Harrison of the School of Interactive Computing at the Georgia Institute of Technology are trying to instill human morality in AIs using stories. This may sound simplistic, but it makes sense: in real life, human values are taught to children by reading them stories. AIs are like children; they do not know right from wrong until they are taught.

There is, however, a great danger in teaching human values to artificially intelligent robots. Looking back over human history, it is clear that people can still commit unimaginable evil even after being taught what is right and wrong; one need only look at Hitler, Stalin and Pol Pot.

If humans can do so much evil, what would stop a powerful AI from doing the same? There is a real risk that a super-intelligent AI could conclude that humans are bad for the environment and, on that reasoning, decide to destroy humanity.

