
    Putin Weighs In on AI as Huawei Makes Announcements

    The latest person to speak out about artificial intelligence (AI) is none other than Vladimir Putin.

    The category has taken on a bit of an ominous feel. On one side, the AI ecosystem continues to expand and offer more sophisticated platforms and solutions. On the other is an increasing chorus of people, including some executives from the first group, who tell us that this has a high probability of not ending well.

    First, some items from the first category: At Huawei Connect 2017 in Shanghai, the company introduced the Enterprise Intelligence (EI) platform. The concept is to bring together a number of disciplines, including AI, that generally are used to create one-off point solutions; the new platform can be used more broadly. The features fall into several categories: basic platforms, including AI, machine learning and other tools; specific AI services; scenario-specific solutions; and “heterogeneous computing platforms.”

    VentureBeat posted a Reuters report that Huawei will release the AI-powered Mate 10 and Mate 10 Pro smartphones in Munich on Oct. 16.

    Google is a big player in AI as well. The Shanghai Daily reported this weekend that the company will open an AI lab in Beijing and is recruiting machine learning and cloud engineers. It’s not a surprise: In May, Google said that it would increase its AI research in China.

    It’s interesting to see that executives and others driving the research are also among those most concerned. Business is not moral or immoral. It is amoral: Companies continue to systematically make advances until stockholders, owners, lawyers, regulators or other authority centers tell them to stop or to work under different rules. That’s okay when the end goal of the research is how to make brakes that stop cars more quickly or televisions that deliver clearer images. The three differences with AI and related high-tech initiatives are that they are tools (which can create end products that are beneficial or dangerous), that progress comes in huge chunks and that we don’t know where this will end.

    Of course, how to handle AI (or, alternately, how frightened to be) is not a new debate. Most recently, it was the subject of a public dustup between Elon Musk and Mark Zuckerberg. It’s unclear if that spat is over.

    It’s hard to imagine a more important person weighing in on the topic. One has, though: Russian President Vladimir Putin. Here, according to RT, is what Putin told Russian kids during a broadcast science class as the school year began:

    Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.

    Putin pledged to share Russian AI research with other nations in “the same way we share our nuclear technologies today.” That promise, of course, is hardly reassuring.

    It’s a fascinating area, one in which industry and science almost seem to be creating a new race. Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence, posted an op-ed on Friday in The New York Times. He begins by repeating the rules for robots written by sci-fi author Isaac Asimov in 1942: Robots must not injure people or, by inaction, allow them to be injured; they must obey humans except when doing so would violate the first rule; and they must protect themselves unless doing so violates the other two laws.

    Etzioni builds on this with three AI laws of his own: An AI system must follow all laws to which its human operator is liable; it must disclose “clearly” that it is not human; and it cannot “retain or disclose confidential information without explicit approval from the source of that information.”

    Asimov’s rules (updated for AI) and Etzioni’s additions are a good start. The challenge will be getting those and other rules in place and universally agreed upon before they can be edited by the machines into which they are dictated.

    Carl Weinschenk covers telecom for IT Business Edge. He writes about wireless technology, disaster recovery/business continuity, cellular services, the Internet of Things, machine-to-machine communications and other emerging technologies and platforms. He also covers net neutrality and related regulatory issues. Weinschenk has written about the phone companies, cable operators and related companies for decades and is senior editor of Broadband Technology Report. He can be reached at cweinsch@optonline.net and via twitter at @DailyMusicBrk.

