From Siri to self-driving cars, artificial intelligence (AI) is progressing rapidly. While science fiction often portrays AI as robots with human-like features, AI encompasses everything from Google's search algorithms to IBM's Watson to autonomous weapons.
AI today is properly known as narrow AI (or weak AI), because it is designed to perform a narrow task (for example, only facial recognition, only internet searches, or only driving). However, the long-term goal of many researchers is to create general AI (AGI, or strong AI). While narrow AI may outperform humans at whatever its specific task is, such as playing chess or solving equations, AGI would outperform humans at nearly every cognitive task.
* HOW CAN ARTIFICIAL INTELLIGENCE BE DANGEROUS?
Most researchers agree that a superintelligent AI is unlikely to exhibit human emotions such as love or hate, and that there is no reason to expect AI to become intentionally benevolent or malevolent. Instead, when considering how AI might become a risk, experts think these two scenarios most likely:
1. AI is programmed to do something devastating: autonomous weapons are artificial intelligence systems programmed to kill. In the hands of the wrong person, these weapons could easily cause mass casualties. Moreover, an AI arms race could inadvertently lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply turn off, so humans could plausibly lose control of such a situation. This risk is present even with narrow AI, but it grows as levels of AI intelligence and autonomy increase.
2. AI is programmed to do something beneficial, but develops a destructive method to achieve its goal: this can happen whenever we fail to fully align the AI's goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geoengineering project, it could damage our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
As these examples show, the concern about advanced AI is competence, not malice. A superintelligent AI will be extremely good at achieving its goals, and if those goals are not aligned with ours, we have a problem. You're probably not an evil ant-hater who steps on ants out of malice, but if you're in charge of a hydroelectric green energy project and there's an anthill in the region to be flooded, too bad for the ants. A key goal of AI safety research is to never place humanity in the position of those ants.
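To make the alignment point above concrete, here is a minimal, hypothetical Python sketch of the airport example: a planner whose objective is literally "minimize travel time" picks a different route than one whose objective also encodes the comfort and legality the passenger implicitly cared about. All route names, numbers, and weights are invented for illustration; this is not a real planning system.

```python
# Toy sketch of objective misspecification (illustrative only).
from dataclasses import dataclass

@dataclass
class Route:
    name: str
    minutes: float          # travel time
    comfort_penalty: float  # nausea, fear, etc. (higher is worse)
    rules_broken: int       # traffic laws violated

routes = [
    Route("legal and smooth", minutes=35, comfort_penalty=0, rules_broken=0),
    Route("reckless shortcut", minutes=22, comfort_penalty=9, rules_broken=7),
]

def literal_objective(r: Route) -> float:
    # What was literally asked for: "as fast as possible".
    return r.minutes

def intended_objective(r: Route) -> float:
    # What was actually wanted: fast, but also comfortable and lawful.
    # The weights 5 and 10 are arbitrary stand-ins for human preferences.
    return r.minutes + 5 * r.comfort_penalty + 10 * r.rules_broken

print(min(routes, key=literal_objective).name)   # -> reckless shortcut
print(min(routes, key=intended_objective).name)  # -> legal and smooth
```

The optimizer is not malicious in either case; it simply does exactly what its objective says, which is why leaving intended constraints out of the objective is the core difficulty the examples above describe.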