Five Questions with A.I. Contrarian, Gary Marcus
Gary Marcus, scientist, bestselling author, entrepreneur, and AI contrarian, founded and served as CEO of the machine learning startup Geometric Intelligence, which was recently acquired by Uber. As a Professor of Psychology and Neural Science at NYU, he has published extensively in fields ranging from human and animal behavior to neuroscience, genetics, and AI. His most recent book is “The Future of the Brain.”
1. You have spoken about the lack of common sense, reasoning and planning capabilities in current AI systems. Are there areas of cognitive science that you believe could help address these issues?
Absolutely. I don’t expect next-generation AI systems to be exact replicas of human brains (which, after all, are deeply flawed in their own ways), but I do think that AI research could benefit greatly from the human cognitive sciences, using techniques drawn from areas like cognitive psychology, linguistics, and even philosophy to focus on understanding how people reason and talk about the everyday world. One of the most obvious things, which most people in machine learning have stubbornly resisted, is that humans use patterns of rules and exceptions to help navigate the world. For historical reasons, people in neural networks have an allergy to such things, but the consequence has been systems that are deeply superficial.
2. What are your thoughts on brain machine interfaces (devices that translate neuronal information into commands that can operate external technology)? If you had to draw an ethical boundary around this technology, what would it be?
These things are already starting to be built and will only become more powerful with time. One big ethical issue is fair access; I would hate to see all the rich kids using brain implants to get into Harvard, squeezing out all the poor kids who couldn’t afford them.
3. If you could erase one common media misperception of AI from the collective consciousness, what would it be?
The idea that superintelligence is about to arrive; it’s just not true. It will come eventually, but we are at least a couple of decades away, and maybe as much as a century, and the media has really distorted things. On general intelligence, as opposed to narrow intelligence, there has been very little progress.
4. If we are all potential patients of a diagnostic AI in the future, what advice would you give to us?
In the short term, look for agreement between doctor and machine, and where they differ, get a third opinion. For now, anyway, human doctors and machines are differently abled; machines are better statisticians, while humans are better at reading (e.g., patient histories) and understanding the bigger picture. In the long term, perhaps several decades hence, machine diagnosticians might well be superior to humans in most cases.
5. What would you do if a large foundation empowered you to build an A.I./M.L. platform in service of personalized healthcare? Where would you start?
By building an intellectually diverse team: not just people doing the fashionable thing (deep learning), but also people who understand the strengths of classical AI, and what a hybrid model synthesizing older and newer traditions might look like.
This interview was originally published in Tech Fancy Issue 9: Medicine Meta-Scene.