Edgy comments on social media are usually tedious. But not always. The billionaire nerd fight between Facebook CEO Mark Zuckerberg and SpaceX and Tesla’s Elon Musk concerns an exceedingly important issue. Attention should be paid.
During a Facebook Live chat, Zuckerberg took issue with technologists who are raising red flags about the risks posed by artificial intelligence (AI). He didn't mention Musk by name, but the implication was clear. In a snarky Twitter response, Musk summoned up about as much condescension as is humanly possible:
I've talked to Mark about this. His understanding of the subject is limited.
It wouldn’t be surprising if Zuckerberg blocked him on Facebook, but that’s a story for another day and another blog. The important task is to assess the dangers of AI.
It is not an either/or question. AI will be extraordinarily dangerous and extraordinarily beneficial. The key is that it will gain momentum as discoveries and inventions pile upon each other. AI can’t be stopped. It shares these traits with every invention or discovery in human history, from the discovery of fire to quantum computers.
The key here is escalating scale: The discovery of fire (which, no doubt, was quickly followed by the discovery that it’s not a good idea to touch the new discovery) introduced great advantages and dangers to those nearby. But life on earth was not threatened.
The risks continue to grow. The unleashing of the atom increased the benefits and risks by orders of magnitude. But even a nuclear reactor melting down in the Pacific didn't threaten mankind. AI is arguably more dangerous. And AI is not evolving in a vacuum. It is integrated with, and cross-pollinating with, Big Data, the IoT and other emerging technologies.
This potent wave of discovery and invention adds something different: uncertainty. The most frightening thing about AI is the possibility of a total loss of control. The machines truly can remove humans from the equation. It is in many ways the logical conclusion of the process that started when the first fire was set way back in that cave, just as The Ramones are the logical conclusion of the first melody tapped out around that first fire. Add less esoteric issues, such as uncertainty about who will control the development of AI, and we end up with something that is very scary.
It is, of course, a good idea to have as many safeguards in place as possible. At the end of the day, however, it is likely that the machines will be so much smarter than we are that they will get around any roadblocks or compartmentalization of control with ease.
There is disagreement about the framing of the debate, of course. Annalee Newitz at Ars Technica says that Musk and Zuckerberg are missing the point and that AI means different things to different people. That may be true. But it evades the key point, which is that humans are racing to roll out technology that soon will be able to operate without their knowledge, consent or even understanding.
There is no doubt that both Musk and Zuckerberg are brilliant. It's virtually impossible to believe that they don't agree that AI has massive potential benefits and raises profound and troubling questions. The social media back-and-forth is a product of the times, their almost certainly significant egos, and the fact that Zuckerberg may be thinking of running for president.
Carl Weinschenk covers telecom for IT Business Edge. He writes about wireless technology, disaster recovery/business continuity, cellular services, the Internet of Things, machine-to-machine communications and other emerging technologies and platforms. He also covers net neutrality and related regulatory issues. Weinschenk has written about the phone companies, cable operators and related companies for decades and is senior editor of Broadband Technology Report. He can be reached at email@example.com and via twitter at @DailyMusicBrk.