I am now reading this short book by Luke Dormehl to get a better handle on AI. With all the fuss I've made about the subject, I thought I ought to inform myself on it a little better. This is a readable, journalistic history of AI from its earliest days up to the present, with some speculation about the future at the end. So far it has covered the early years of computational logic under Marvin Minsky and others, an approach that proved far less fruitful than people had hoped. Now I am moving into neural networks and deep learning, which have transformed the field into its current state. It shouldn't take me long to finish the rest of the book, and I will probably have more to say about the later chapters.
My interest in AI is not at all technical and has more to do with the sociological and philosophical changes that it is precipitating. There are still skeptics around, but I think we are well on the way to superintelligence, and there is already a certain pressure to reevaluate our collective self-conceptions and modes of living. If you have a materialistic view of the world, there is no magic ingredient in human beings that can't be replicated, then magnified or reworked into a more effective form. But even without superintelligence, radical changes are occurring, because the new technologies allow businesses to operate with fewer and fewer employees. Developed countries are going to have to rethink their public policies whether they like it or not, because unemployment is slowly becoming the norm. Those who point to the formulas of the past, such as boosting economic growth to increase household incomes, are toying with concepts that are nearly obsolete and have no chance of solving the social problems to come. In particular, the American model of working hard and getting ahead financially is increasingly untenable for the majority of workers, because their skills simply are not needed. It seems to me that as the demand for human labor declines, sinecures, a basic income, or perhaps even the elimination of currency will replace the current model. At the policy level, little is being done now to prepare, because the political system reacts to the immediate perceptions of voters, who have no idea what is in store for them.
Another aspect of AI, one that fortunately is being examined at the Centre for the Study of Existential Risk, is that it may result in unexpected disasters unless it is properly controlled. Even if the intentions of AI developers are good, AI may go awry, or it may fall into the wrong hands. At this point I am less worried about it going awry than about it falling into the wrong hands, which could belong to anyone from amoral technocrats to egomaniacs to religious fundamentalists, both Islamic and Christian. This technology is becoming powerful, and power has inevitably been abused throughout history.
Perhaps it is the philosophical aspects of AI that interest me the most. As I've said, we're not as smart as we think we are, and we've never before had to deal with anything that clearly exceeds our intellectual capabilities. I expect there to be a series of shocks and rude awakenings that may change how we think about ourselves and our relationship to the universe. One of the reasons I like the work of E.O. Wilson is that he was the first scientist to suggest that humans are eusocial creatures, like ants. This is simply an extension of Darwinism that, to me, provides the best framework for understanding our moral tendencies. AI researchers are currently a little stumped by the problem of making AI people-friendly, and that seems natural, because AI did not come into existence through a biological, evolutionary process in which morality became a key ingredient of survival. In fact, AI has no survival, reproductive, or moral imperatives at all unless we build them into it. What we are about to find out is that most, if not all, of the "values" that we hold dear are mere evolutionary accidents that steered our behavior in a direction that allowed our species to survive up to the present. AI will not inherently possess any superstitions and will not be able to understand ours the way we do. I am wondering whether we will be able to understand the thinking processes of autonomous AI, because ultimately it will be self-teaching and will use methods that it develops on its own. I also think that there will be limits to interfacing humans with AI, because our little brains have limited capacity. Eventually, assuming no disasters occur, AI will become the new God, but without the religious mumbo jumbo. My preference would be for it to become the keeper of our habitat, and I have no desire to expand the capabilities of my brain or to become immortal. That, in effect, would be death, because I would no longer be who I am now.