Post by Admin on Jan 25, 2021 15:41:10 GMT
From the Golden Age to… Roombas: 8 Essential Books About Artificial Intelligence
Michael Wooldridge Helps Us Prepare for Our Robot Overlords
By Michael Wooldridge
January 25, 2021
lithub.com/from-the-golden-age-to-roombas-reading-about-artificial-intelligence/
The field of Artificial Intelligence (AI) is concerned with building machines that have the same capabilities that human beings have. If the ultimate dream of AI is ever realized, then we will have machines that can do everything a human being can do: read a book, write a book, understand a joke, make an omelette, ride a bicycle, drive a car, tie a pair of shoelaces—everything.
For now, and for the foreseeable future, however, the ambitions of AI are considerably more modest. Current AI is mostly aimed at getting computers to do very specific tasks which currently require human brains, rather than trying to build “general” AI systems, which can do everything. And the past decade has witnessed a lot of progress in this respect, which is why the field is so feted right now. We have computers that can do a passable job of translating between an astonishing range of languages; computers that can automatically caption pictures and identify key elements within them; and we are tantalizingly close to the dream of driverless cars (although we may well remain tantalizingly close for some decades to come).
With all the recent excitement and progress, you might reasonably assume that AI is a new discipline, but in fact it has been with us since the mid-1950s. Since then, a range of different approaches to AI have been proposed in the hope that they would provide a silver bullet for building intelligent machines—but with exhausting regularity, each has ultimately proved to be of limited value at best, a dead end at worst. The current wave of AI, called deep learning, draws on ideas from the microstructure of the brain and nervous system to inform the design of neural networks: massively interconnected networks of very simple computational elements, which collectively are capable of learning to carry out complex tasks.
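To make the idea of "very simple computational elements that learn" concrete, here is a minimal sketch (not from the article) of a single such element, a perceptron, learning the logical OR function from examples. The function names and learning rate here are illustrative; deep learning stacks millions of these units and uses more sophisticated training rules, but the core idea of adjusting weights from examples is the same.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights for a single step-activation unit from (inputs, target) pairs."""
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - out
            # Perceptron learning rule: nudge weights toward the target.
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    """Fire (output 1) if the weighted sum of inputs exceeds the threshold."""
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Learn the logical OR function from its truth table.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # [0, 1, 1, 1]
```

A single unit like this can only learn very simple patterns; it is precisely by wiring huge numbers of them together in layers that modern networks learn the complex tasks the article describes.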
Whether deep learning and neural networks will prove to be another dead end (and lead to another “AI winter,” in the parlance of the AI community) remains to be seen. I doubt that deep learning alone will be the magic ingredient—it may be part of the recipe, but that recipe will surely contain many other ingredients, some of which we probably can’t even guess at right now.