
Putin Weighs In on AI as Huawei Makes Announcements


Sep 7, 2017

The latest person to speak out about artificial intelligence (AI) is none other than Vladimir Putin.

The category has taken on a bit of an ominous feel. On one side, the AI ecosystem continues to expand and offer more sophisticated platforms and solutions. On the other is an increasing chorus of people, including some executives from the first group, who tell us that this has a high probability of not ending well.

First, some items from the first category: At Huawei Connect 2017 in Shanghai, the company introduced its Enterprise Intelligence (EI) platform. The concept is to bring together a number of disciplines, including AI, that generally are used to create one-off point solutions; the new platform can be applied more broadly. The features fall into several categories: basic platforms, including AI, machine learning and other tools; specific AI services; scenario-specific solutions; and “heterogeneous computing platforms.”

VentureBeat posted a Reuters report that Huawei will release the AI-powered Mate 10 and Mate 10 Pro smartphones in Munich on Oct. 16.

Google is a big player in AI as well. The Shanghai Daily reported this weekend that the company will open an AI lab in Beijing and is recruiting machine learning and cloud engineers. It’s not a surprise: In May, Google said that it would increase its AI research in China.

It’s interesting to see that the executives and others driving the research are also among those most concerned. Business is not moral or immoral. It is amoral: Companies continue to systematically make advances until stockholders, owners, lawyers, regulators or other authority centers tell them to stop or to work under different rules. That’s okay when the end goal of the research is making brakes that stop cars more quickly or televisions that deliver clearer images. The three differences with AI and related high-tech initiatives are that they are tools (that can create end products that are beneficial or dangerous), that progress comes in huge chunks and that we don’t know where this will end.

Of course, how to handle AI (or, alternately, how frightened to be) is not a new debate. Most recently, it was the subject of a public dustup between Elon Musk and Mark Zuckerberg. It’s unclear if that spat is over.

It’s hard to imagine a more important person weighing in on the topic. It has happened, though: Russian President Vladimir Putin. Here, according to RT, is what Putin told Russian kids during a broadcast science class as the school year began:

Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.

Putin pledged to share Russian AI research with other nations in “the same way we share our nuclear technologies today.” That promise, of course, is hardly reassuring.

It’s a fascinating area; it almost seems as if industry and science are creating a new race. Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence, posted an op-ed on Friday in The New York Times. He begins by repeating rules for robots written by sci-fi author Isaac Asimov in 1942: Robots must not injure people or by inaction cause them to be injured, they must obey humans except if doing so would violate the first rule, and they must protect themselves unless doing so violates the other two laws.

Etzioni builds on this with three AI laws: An AI system must follow all laws to which its human operator is liable; it must disclose “clearly” that it is not human; and it cannot “retain or disclose confidential information without explicit approval from the source of that information.”

Asimov’s rules (updated for AI) and Etzioni’s additions are a good start. The challenge will be getting those rules and others in place and universally agreed upon before the machines into which they are dictated can edit them.

Carl Weinschenk covers telecom for IT Business Edge. He writes about wireless technology, disaster recovery/business continuity, cellular services, the Internet of Things, machine-to-machine communications and other emerging technologies and platforms. He also covers net neutrality and related regulatory issues. Weinschenk has written about the phone companies, cable operators and related companies for decades and is senior editor of Broadband Technology Report. He can be reached at cweinsch@optonline.net and via Twitter at @DailyMusicBrk.

