Why superintelligence is the last invention humanity needs
Implications and possible futures of superintelligence
The first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
- Irving J. Good, 1965
The question that gives this article its title could have many answers, because it is philosophical in nature. In this article I will share my thoughts on the question and offer some possible answers. You may have your own thoughts too, so please share them to enrich the conversation.
First, we must define superintelligence: general intelligence far beyond the human level. A superintelligence would be more intelligent not only than humans, but also than an artificial general intelligence (AGI). In fact, it is thought that to create a superintelligence we would first need to create an AGI and give it the goal of creating superintelligence.
Now, since we haven't created AGI yet, it may seem premature to start considering superintelligence. In fact, there is no consensus among experts on whether superintelligence or AGI is even possible. Some experts think it could be created by 2100, others much earlier, and still others think it is impossible. The truth is that it may happen in decades, in centuries, or never. We simply don't know, and precisely because we don't know, it is better to start thinking about these topics now rather than delay.
Answer #1: It would produce a prosperous future for humankind
Imagine that a superintelligence runs society and has developed amazing technologies and advances. Thanks to this, humanity is free from poverty, disease, and all of our other problems, and humans simply enjoy a wonderful life, one spent not on work but on sports, hobbies, learning, and time with family, friends, teams, clubs, and community groups. The superintelligence would also enforce strict rules to keep order.
It seems like a utopia, right? It could be. In this scenario, the reason I think superintelligence is the last invention humanity needs is that we would no longer have to sow anything; we would just enjoy the fruits of superintelligence. I don't know whether this would be good or bad, but in this scenario it seems that a superintelligence could ensure a better future for us than we can for ourselves. Still, it is very important to keep asking: what kind of future do we want with AI?
Personally, I think this would be a great future for us. However, if we want this future, we need to solve many technical, technological, philosophical, and ethical problems. We need to solve the goal-alignment problem, ensuring that a superintelligence's goals match ours. To create superintelligence, we first need to create AGI. And many more things besides. Let me know in the comments whether you like this scenario or not.
Answer #2: We shouldn’t create superintelligence
I have a bittersweet feeling about superintelligence. The answer above was the sweet one. Now comes the bitter one.
First: I think we shouldn't create superintelligence if we don't fully understand it. The number of things in our future that a superintelligence could influence is huge. So, would we trust something when we don't fully understand how it works? This is a tricky question, since we also don't understand exactly how our own brains work, yet we trust them, and we trust other people with theirs.
Second: I think we can have a great future without superintelligence, or even without AGI. All the AI we have today is narrow, or "weak," AI, since it only accomplishes very specific goals and tasks. If we can improve weak AI so that it accomplishes its goals in an intelligent, efficient, and proper way, I think that would improve our future, and it is also a way to ensure that we stay in control. But would it be worth it to keep control of the technology at the cost of a less prosperous future than if superintelligence were in control? These are the kinds of philosophical and ethical questions we need to answer.
If we don’t create a superintelligence, could something similar to it emerge?
As I mentioned above, we simply don't know whether AGI or superintelligence is possible. It might happen, but it also might not. But if we fail at creating AGI, is there another way to get something similar to it? Mind uploading is one answer. Imagine if we found a way to upload a person's mind to the internet. That would be very similar to creating an AGI that wanders around the internet. I think mind uploading would be more trustworthy, since it is basically a person's mind, and it is easier to trust people than machines, regardless of their intelligence.
This possibility is explored in the movie Transcendence, which I recommend watching.
Although we don't know whether AGI or ASI is possible, we need to think about these topics, because we are talking about AI, a technology with huge potential. And like any other technology, it can build or destroy. With this technology we must be proactive rather than reactive. We need to plan and think very carefully about the future we want.
I would like to know your thoughts on this question. It would be interesting to discuss it. Share your thoughts in the comments section.
If you liked this article, give it a clap. Thank you for reading, and see you next time.