An artificial intelligence (AI) is a computer program that acts, according to its developers, intelligently. That could mean almost anything. Typical examples are route-finding algorithms, chess programs, first-person-shooter computer opponents and classifiers. The last category (classifiers) is huge and includes programs that try to find out what you were writing (see my bachelor's thesis), figure out who is on an image (face recognition) or what was spoken (automatic speech recognition).
Strong and Weak AIs
Science fiction literature distinguishes two kinds of AIs: weak AIs and strong AIs.
Currently, we only know weak AIs. They can do incredible things (see A.I. in Computer Games and Awesome Robots), but that is nothing compared to a strong AI. Strong AIs are capable of adapting to completely new tasks. They can do creative work. In other words, they can do any task a human could do: conduct research, compose the most beautiful art. As they are machines that will be built by somebody (hence the term 'artificial'), they are better understood than our biological brains. Strong AIs will be able to understand how they themselves work, and will very likely be able to improve themselves.
This is the point where a technological singularity happens: the machines start improving themselves faster than any individual human can comprehend, and at some point faster than all of humankind combined can understand.
Now that you know the context, I would like to share an answer to the following question I've recently seen:
What would change with strong AIs?
What happens when machines take all of our jobs including creative/research/artistic ones?
Something like that would not happen instantly; it would be a gradual process. Many answers claim that nothing would be scarce anymore. I think that is wrong for two reasons:
- Scarcity of resources: Oil, gold and diamonds are expensive. They are expensive not (only) because extracting them takes a lot of work, but also because their supply on Earth is limited. More or cheaper "work hours" (much better AIs combined with much better robots) would not solve that problem. Oh, and please don't forget energy!
- Distribution: I think we already produce enough food that nobody on Earth would have to starve, and we certainly have enough clean water for everybody. Why do so many people still starve and lack access to clean water? Because the distribution is unequal. Europe and the US consume far more resources per person than Africa does. The richest 1% of the US consume MUCH more resources than the bottom 30% (and I guess the real numbers are even more extreme). Who do you think would own the robots that take the jobs? Who would profit from these much cheaper work hours? I think AIs that could in principle solve any problem a human could solve (e.g. creating art and conducting research) would cause serious social problems if we don't adapt to the new situation. Such AIs have the potential to make the world much worse than it currently is.
However, with the right politics, it could vastly improve our world. Having insight into how such AIs work, we could make them decide, ultimately unbiased, for a greater good. They could be used to arbitrate disputes as a neutral, intelligent instance. They could accelerate research. They could help us understand ourselves.
What is the effect of a technological singularity on the job market?
In an ideal world, everybody would only do the job he or she wants to do. We would eventually work less, but I think we would still work. Humans are resources, and as such they will always be valuable in any economy.
Another, darker scenario is that AIs would gradually eliminate whole industries, starting with simple ones. Taxi and truck drivers, for example, would no longer be needed and could be replaced step by step by AIs. However, no new jobs would be created for those people. They would have to rely on government aid. As they would have less money, the economy would focus on the people who do have money: the people who own the AIs. At some point people would realize that they will never get a new job, and worse, that their children will never get a job either. Extreme poverty would rise while the state collects less tax (since fewer goods are consumed, because people have less money). The AIs would predict how every single person would most likely act. How could they do so? Well, you have a smartphone. Your conversations on WhatsApp, Twitter, Facebook, Gmail, ... are tracked and automatically analyzed. You can be predicted to a certain degree. You can be influenced by personalized advertising. A really clever AI will make itself able to act in any possible scenario, replicate itself and make itself less dependent. In this dark scenario, people would eventually try to get more money from the people controlling the AIs, but how do you force them? The police might also be replaced by AIs.
My guess is that the reality would be somewhere in between. A couple of super-rich people and the rest giving them massages.
A possible solution to those social problems
I've just described how AIs might cause serious social problems. However, I think we can solve those problems by making sure that income inequality cannot get too high. This means there should be a very high and effective inheritance tax, as well as a taxation system that prevents people from getting too rich or too poor. The most extreme measure against poverty is an unconditional income; the most extreme measure against excessive wealth is an upper limit on what somebody can own.
I think an unconditional income would be a good thing, but it's hard to tell how high it should be.
The easiest way to prevent people from getting too rich is a tax system whose marginal rate adjusts to income:
- Your first 0 - 2000 Euro / month are not taxed at all
- Your next 1000 Euro (2000 - 3000 / month) are taxed at 0% + (100%/2) = 50%
- Your next 1000 Euro (3000 - 4000 / month) are taxed at 50% + (50%/2) = 75%
- Your next 1000 Euro (4000 - 5000 / month) are taxed at 75% + (25%/2) = 87.5%
You can (and should - I think my numbers are not well-chosen!) argue about the exact numbers, but I guess you get the idea. The tax rate should never reach 100%, so earning more always remains possible, but it becomes increasingly difficult. This effectively prevents some people from getting too rich and thereby destabilizing the whole system.
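The bracket scheme above can be sketched in a few lines of Python. This is only a minimal illustration of the halving-marginal-rate idea, not a policy proposal: the 2000-Euro tax-free allowance and the 1000-Euro bracket width are the placeholder numbers from the list (which I already flagged as not well-chosen), and the function name is my own.

```python
def monthly_tax(income: float, free: float = 2000.0, width: float = 1000.0) -> float:
    """Tax under the halving-marginal-rate scheme sketched above.

    The first `free` euros per month are untaxed; each following
    `width`-euro bracket is taxed at a rate halfway between the
    previous rate and 100% (50%, 75%, 87.5%, ...), so the marginal
    rate approaches but never reaches 100%.
    """
    tax = 0.0
    rate = 0.0
    remaining = max(income - free, 0.0)
    while remaining > 0:
        rate += (1.0 - rate) / 2  # halve the remaining gap to 100%
        taxable = min(remaining, width)
        tax += taxable * rate
        remaining -= taxable
    return tax
```

For example, a monthly income of 4000 Euro pays 1000 * 50% + 1000 * 75% = 1250 Euro under these placeholder numbers, while an income of 2000 Euro or less pays nothing.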
It would be important to put such a system in place before anybody develops a strong AI.
Related books, movies and talks
Books (all fiction)
- "Zero" by Marc Elsberg: People get controlled by life improvement apps in a very indirect way.
- Out series by Andreas Eschbach: A device is developed that lets people connect their brains. A new form of consciousness develops from that.
- Brave New World by Aldous Huxley: People are distracted from real issues by consuming many goods.
Movies (all fiction)
- Transcendence: The mind of one person is uploaded into a computer.
- I, Robot: An AI is developed, and very powerful humanoid robots are controlled by it.
Talks
- The rise of the new global super-rich by Chrystia Freeland: Technology is advancing in leaps and bounds, and so is economic inequality. In this impassioned talk, Freeland charts the rise of a new class of plutocrats (those who are extremely powerful because they are extremely wealthy) and suggests that globalization and new technology are actually fueling, rather than closing, the global income gap. She lays out three problems with plutocracy … and one glimmer of hope.
And a couple of talks by people who are not active in AI / ML research themselves:
- Nick Bostrom: "Superintelligence" - Strong AI is inevitable; we should set the initial conditions up the right way
- Sam Harris and Joe Rogan talking about artificial intelligence
- Max Tegmark and Nick Bostrom speak to the UN about the threat of AI