The topic comes up again and again: artificial intelligence and the existence of humanity, whether in sci-fi movies depicting the horrors of rogue AI harming human beings, or in the many books written on the subject. Is artificial intelligence (AI) research driving humanity toward an apocalypse, the demise of human civilization? This is the notion that has driven movies like The Terminator, Transcendence, The Matrix, Ex Machina, and many others. AI is likely to become “greater than the collective intelligence of every person born in the history of the world,” and others, including humans, “will exist just to serve its intelligence,” warn the characters in Transcendence.
While such movies are works of fiction, and scary enough to contemplate, the recent warnings from those who understand technology and AI are scarier. Stephen Hawking, one of the most eminent physicists of our time, said that “the development of full artificial intelligence could spell the end of the human race.” He wrote in The Independent in 2014: “Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.” Apple co-founder Steve Wozniak stated that “computers are going to take over from humans,” describing such a future as “scary and very bad for people.” Tesla’s chief executive, Elon Musk, recently said AI is “potentially more dangerous than nukes.” Musk has argued for regulatory oversight at the national or international level, and confirmed that his decision to invest in the artificial intelligence company DeepMind was made to keep an eye on AI, as he believed there is “potentially a dangerous outcome there.” Bill Gates, the co-founder of Microsoft, is also “in the camp that is concerned about super intelligence.”
Technology has already taken great leaps in warfare since the wars in Afghanistan and Iraq broke out in the early 2000s. Unmanned drones are now common, used for surveillance and rapid strikes on targets. Small robots are used to disable IEDs (Improvised Explosive Devices). The US military is funding substantial research into self-aware, autonomous robots in order to reduce the risk to human life during war.
The dangers of AI are immense. Boston Dynamics, a Google-owned company, recently released a video of Atlas, a mobile, 6-foot-tall, 320 lb humanoid robot, running through the woods. The US Department of Defense is sponsoring the project, and the company plans a more agile version in the near future. This seems to be just the beginning, with much more to follow. Today, with the rise of well-funded terror outfits across continents, it is possible that some AI technologies will end up in their hands and be misused.
Worried by the pace of growth of artificial intelligence technology, Elon Musk, an entrepreneur known for his appetite for risk, has granted roughly $7 million for global research into the beneficial use of AI. While investing in the benefits of AI is a good move, how does the world regulate the use of AI? Do we need to regulate AI research at all? These are pertinent questions.
A recent article, ‘Regulating Artificial Intelligence Systems’ by Matthew Scherer, makes the case that AI is very difficult to regulate. Here are some key challenges for AI regulation:
- The main challenge is to define AI itself and determine what is to be regulated, because the boundaries between rationality and irrationality, or between a knowledge base and genuine intelligence, are often fuzzy.
- AI research and development is often discreet and can occur within infrastructure invisible to regulators.
- Development is diffuse across continents: a program built by different programmers in America, Europe, Asia, and Africa, for example, is very difficult to control because the work occurs in geographically and jurisdictionally separate states.
- Projects may combine separate, pre-existing, off-the-shelf components and hardware, whose full effect may not be apparent until the projects are completed.
- The systems or technologies themselves may be so encapsulated that they are very difficult for regulators to reverse engineer or understand.
While the definition of AI still needs to be fine-tuned, the risk posed by the discreetness of research could be reduced in the future if such research is concentrated in large corporations like Google, Apple, or Facebook, and in governments, which are easier to monitor. The diffuseness of research is a practical problem and would require some kind of global coordination among countries and continents to ensure regulation.
However, the challenges for a regulatory framework are compounded by the general nature of AI research outcomes:
- Researchers usually do not know the result of their work when they begin; it becomes clearer as implementation proceeds. Research and development is iterative, and requirements change throughout the cycle, so the outcome is not foreseeable.
- AI might behave in ways that are no longer under the control of the legally responsible parties.
- Super intelligence could enable AI systems or robots to elude human control.
With so many challenges to resolve before AI regulation can be put in place, the question is whether humanity has enough time to control the products of its own endeavor. Like fire, AI will benefit human beings only as long as they control it. A loss of control could lead to human extinction and to the next generation of our evolution: “Humanoids”!