Even if this does not happen soon, it is important to think about the consequences now, says philosopher Nick Bostrom: "Machine superintelligence is perhaps humanity's last invention."
In the middle of the 20th century, entering commands into a box was considered artificial intelligence, and the results were limited to what we had put in. Since then, things have changed significantly: today, development is focused on machines that learn on their own. We do not yet know how to implement in machines the algorithmic tricks the brain uses, but I think it is only a matter of time.
After many years of hard work and heavy investment, we will have chimpanzee-level artificial intelligence. Then, after many more years of hard work, we will reach the intelligence of a child. And a few moments later we will find ourselves beyond the level of Einstein. The train does not stop at Humanity station; rather, it whistles right past.
Think about it: machine superintelligence may be humanity's last invention. Machines will be better at inventing than we are, and they will innovate at digital speed. Think of all the wild technologies people could develop given more time: cures for aging, space colonization, self-replicating nanorobots, uploading minds into computers. A superintelligence could discover all of this, and perhaps quite quickly. A superintelligence with such technological maturity would be extremely powerful, and our future would be determined by its preferences. Now the interesting question is: what are those preferences? To understand this, think of intelligence as an optimization process, one that steers the future toward a particular set of configurations. A superintelligence will treat every problem as a configuration to be reached, and its goals, and the means it chooses to achieve them, will not necessarily be consistent with our ethics.
Suppose we give an artificial intelligence the goal of solving a hard mathematical problem. At some point it realizes that the best way to solve it is to turn the planet into a giant computer in order to increase its thinking capacity. Since we could never approve of such a plan, the superintelligence may decide that humans are an obstacle on the way to solving the problem.
Of course, these are contrived examples. But the main idea here is this: when creating a powerful optimization process for achieving a goal, it is worth making sure that the definition of the goal includes everything that we care about. Simply put, when developing a superintelligent artificial intelligence, it is important to be sure that it is on your side and shares our values.
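To make the point concrete, here is a minimal toy sketch in Python (my own illustration, not Bostrom's formalism): a greedy optimizer is given only a proxy objective, "more matter converted to compute means better problem-solving." Because nothing we actually care about, such as leaving the planet intact, appears in the goal definition, nothing in the optimization ever tells it to stop.

```python
# A toy sketch of a mis-specified objective (hypothetical, for illustration).
# The objective rewards only compute; human concerns are absent from it.
PLANET_MASS = 5.97e24  # kg, roughly Earth's mass

def proxy_objective(compute_kg: float) -> float:
    """Hypothetical score: more matter turned into computer -> better solver."""
    return compute_kg ** 0.5

def optimize(step_kg: float = 1e22) -> float:
    """Greedily convert planetary mass into compute while the score improves."""
    compute = 0.0
    while compute < PLANET_MASS:
        candidate = min(compute + step_kg, PLANET_MASS)
        if proxy_objective(candidate) > proxy_objective(compute):
            compute = candidate  # the objective never says "stop"
        else:
            break
    return compute

if __name__ == "__main__":
    used = optimize()
    print(f"Mass converted to compute: {used:.2e} kg "
          f"({used / PLANET_MASS:.0%} of the planet)")
```

The fix Bostrom argues for is not a cleverer stopping rule bolted on from outside, but an objective that already contains what we value.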
We should not hope to keep the superintelligent genie locked up forever; sooner or later it will get out. Instead of looking for ways to control the intelligence we have created, we need to build it so that it knows what we hold dear. Then its calculations will take into account whether we would like the result.
Nick Bostrom is a Swedish philosopher and director of the Future of Humanity Institute at Oxford University (UK). A recording of the lecture can be viewed on the website.