
    Google’s AI Ethics Council Fails: Three Components to Address the Problems with Oversight Councils

    Apparently, Google’s brand-new AI Ethics Council is falling apart. This is bad news, because efforts like this are what stand between us and a real Terminator-like future. Companies keep making the same mistake: failing to ensure that responsibility matches authority and that quality efforts have the full backing of executive management. Now, before you ask how I got from ethics to quality, ethics is a form of quality. It is incredibly subjective, though. If an organization does something consistent with my beliefs, I may think it is ethical; if it does something contrary to those beliefs, I may believe it is unethical, even though my beliefs may have nothing to do with ethics.

    I used to run a Tiger Team for IBM. It was an experiment that actually worked, but it got killed when the executive who backed it left for another firm. An organization that stands out, like mine did, rarely survives the loss of its executive sponsor, regardless of its effectiveness. A few years later, I watched the software unit I then worked for at IBM kill its Quality Control department, claiming it was redundant and didn’t surface any problem the firm didn’t already know about. That was true, but the department made sure those problems actually got fixed. Once it was gone, problems were covered up instead, and I’ve never seen sales and profits drop so quickly as we released ever-buggier products and IT buyers lost their sense of humor about them (I doubt they ever found them funny). It got so bad across the company that IBM’s brand went negative, something I’d never seen before: folks would pay more for an unbranded product than for the same offering with the IBM name on it.

    Finally, a few years ago, the concept of a risk manager came along: a job with no real authority that was nonetheless supposed to protect a company from unnecessary risks. It didn’t work, because all it did was create people who were only great at being blamed when the stock market collapsed, largely due to way-too-risky loan policies.

    In each case, there was a problem with executive support, with the balance of responsibility and authority, and with integration into the process the organization was supposed to assure.

    That takes me to my three components.

    Integration Is Key

    An external effort, much like an external audit, operates at arm’s length from the company. This means it will be out of the loop when key decisions are made, will likely be kept in the dark about anything an executive feels it will disagree with, and, when it does come up with a recommendation, will likely be ignored.

    This is a page from the life of an analyst: we are often brought in to provide advice and told the advice is brilliant, that it should be done and will be done. As the years pass, the commitment to the advice is reconfirmed, but progress toward actually fixing the problem is non-existent. My best example of this was when Steve Ballmer took over as Microsoft CEO and brought a bunch of us in to form a council to help him do a better job. After he took over, our advice was blocked from reaching him, and when it did reach him, it seemed only to make him angry. His term failed at least partially because this hand-picked team of advisors, external to Microsoft, was totally ineffective at even getting the message to Ballmer, let alone accomplishing our mission.

    So, the effort needs to be integrated into the command-and-control process, and it needs to focus not just on saying no but on providing real help in accomplishing what the relevant executive needs done without violating the group’s charter. What I mean is that legal departments often learn there is no downside to saying “no,” so they become a huge impediment to progress, and executives learn to game them, which removes their ability to protect the company. Whether we are talking about product quality control or ethics, the effort must be aligned with what the firm wants to accomplish and focus on how to make that happen, not just on saying “no.” I’ve never seen an external group do this effectively; it is simply too far out of the loop to provide timely oversight, let alone enforce direction.

    Responsibility and Authority

    But internal organizations have issues too, the biggest being the matching of responsibility and authority while avoiding conflicts of interest. This can be a huge problem for internal audit organizations: often, employees rotate into the group and then become afraid that pointing out a big problem and upsetting an executive will kill their careers. If the group has the responsibility to assure an ethical AI but not the authority to enforce its decisions, it will be rolled over. Worse, its positions, though correct, may be seen as heretical, blocking other executives from stepping in.

    This is, I believe, one of the primary reasons internal audit has fallen off so sharply in effectiveness and findings, and why we seem to have an uptick in bad executive behavior. If the group lacks executive backing and protection, which seems to be the case with this Google AI group, it will be ineffective and likely only be in a position to say “I told you so” after something really bad happens.

    Executive Support

    The power in a group like Google’s AI Council typically comes from a top executive; given the importance of the group, that likely should have been the CEO. But Google appears to have little or no confidence in its CEO right now, given the number of internal revolts that have forced executive reversals. One recent example was the Google employees who forced a pullback from DoD work. Employees stepping in to force the CEO of the firm to reverse a decision is like privates in an army giving the general orders. It does horrid things to governance by making orders from superiors look like suggestions that can be ignored.

    It’s also foolish because, typically, some other company, likely a less ethical one, will step in and deliver an inferior weapon, putting the U.S. at greater risk. When you work on something, you can help steer it; when you don’t, you have no say whatsoever in what results. And while walking away may allow Google to avoid blame, it would have been better for Google to stay involved and help head off a dangerous mistake that is now more likely.

    If you don’t protect and back the oversight group, it will be ineffective, yet it will make it look like you are doing something, likely preventing anyone else from stepping in. That makes a lack of support potentially worse than not forming the group in the first place.

    Wrapping Up: Why We Must Get AI Ethics Right

    AI ethics matters if we don’t want machines that can think and act far faster than we can to do us harm at computer speeds. The Russian hacking of the U.S. elections did a lot of damage, particularly to our confidence in the electoral process. Now think about what deep-learning AIs could do across all media, and how they could flip an election far too quickly for any human to react. We need efforts like the Google AI Ethics Council to work. To do that, they need to be integrated with the creation process, to have both the responsibility to assure ethics and the authority to enforce their decisions, and to have the full support of the executive staff, particularly the CEO.

    In the end, getting this done right is far more critical than simply getting it done. Google just seems to struggle with that first part.

    Rob Enderle
    As President and Principal Analyst of the Enderle Group, Rob provides regional and global companies with guidance in how to create credible dialogue with the market, target customer needs, create new business opportunities, anticipate technology changes, select vendors and products, and practice zero dollar marketing. For over 20 years Rob has worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, ROLM, and Siemens.
