In today’s digital age, artificial intelligence (AI) and machine learning (ML) are emerging everywhere: facial recognition algorithms, pandemic outbreak detection and mitigation, access to credit, and healthcare are just a few examples. But do these technologies, which mirror human intelligence and predict real-life outcomes, align with human ethics? Can we create regulatory practices and new norms for AI? Above all, how can we draw out the best of AI while mitigating its potential ill effects? We are in hot pursuit of the answers.
AI/ML technologies come with their share of challenges. Leading global brands such as Amazon, Apple, Google, and Facebook have been accused of bias in their AI algorithms. For instance, when Apple introduced Apple Card, users noticed that women were offered smaller lines of credit than men. This bias seriously damaged Apple’s global reputation.
In an extreme case with serious repercussions, some U.S. courts use AI algorithms to inform prison sentences and parole terms. Unfortunately, these AI systems are built on historically biased crime data, which amplifies and perpetuates the biases embedded in them. Ultimately, this calls into question the fairness of ML algorithms in the criminal justice system.
The Fight for Ethical AI
Governments and corporations have been aggressively pursuing AI development and adoption globally. Today, AI tools that even non-specialists can set up are increasingly entering the market.
Amid this AI adoption and development spree, many experts and advocates worldwide have become skeptical about the long-term impact and implications of AI applications. They are concerned about how AI advancements will affect our productivity and the exercise of free will; in short, what it means to be “human.” The fight for ethical AI is, at its core, a fight for a future in which technology is used not to oppress humans but to uplift them.
Global technology behemoths such as Google and IBM have researched and addressed these biases in their AI/ML algorithms. One of the solutions is to create documentation for the data used to train AI/ML systems.
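As a sketch of what such documentation might look like in practice, a dataset can ship with a machine-readable “datasheet” that a training pipeline refuses to run without. The field names and dataset below are purely illustrative, loosely inspired by published dataset-documentation proposals, not any company’s actual format:

```python
# A minimal, illustrative "datasheet" for a training dataset.
# All names and fields here are hypothetical examples.
credit_data_datasheet = {
    "name": "credit-applications-2019",
    "motivation": "Train a credit-line recommendation model.",
    "collection_process": "Applications submitted via web form.",
    "known_biases": [
        "Historical approvals skew toward male applicants.",
        "Under-represents applicants without credit history.",
    ],
    "recommended_uses": ["research", "internal benchmarking"],
    "prohibited_uses": ["automated final credit decisions"],
}

def check_datasheet(sheet):
    """Refuse to proceed if the documentation is missing the
    fields reviewers need in order to audit the data for bias."""
    required = {"name", "motivation", "known_biases", "prohibited_uses"}
    missing = required - sheet.keys()
    if missing:
        raise ValueError(f"datasheet incomplete, missing: {sorted(missing)}")
    return True
```

The point of the gate is procedural rather than algorithmic: a model cannot be trained until someone has written down where the data came from and what biases it is known to carry.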
After bias in AI systems, the next most widely publicized concern is the lack of visibility into how AI algorithms arrive at a decision, a problem known as opaque algorithms or black box systems. The development of explainable AI has helped mitigate the adverse impact of black box systems. While we have overcome some ethical AI challenges, several other issues, such as the weaponization of AI, remain unsolved.
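One common explainable-AI technique probes a black box from the outside: shuffle one input feature at a time and measure how much the model’s accuracy drops, a method known as permutation importance. The sketch below, using only the standard library, assumes a toy “black box” credit model whose internal rule the caller cannot see; all names and data are illustrative, not taken from any real system:

```python
import random

# Hypothetical "black box": callers see only predictions, not the
# internal rule (which here secretly depends only on income).
def black_box_model(row):
    income, age, zip_digit = row
    return 1 if income > 50 else 0

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, n_features, seed=0):
    """Drop in accuracy when one feature's column is shuffled:
    a larger drop means the model relies more on that feature."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    drops = []
    for j in range(n_features):
        col = [x[j] for x in X]
        rng.shuffle(col)
        X_perm = [list(x[:j]) + [col[i]] + list(x[j + 1:])
                  for i, x in enumerate(X)]
        drops.append(base - accuracy(model, X_perm, y))
    return drops

# Toy data: (income, age, zip_digit); labels follow income only.
X = [(random.Random(i).uniform(0, 100), 30 + i % 40, i % 10)
     for i in range(200)]
y = [1 if x[0] > 50 else 0 for x in X]
importances = permutation_importance(black_box_model, X, y, 3)
```

Run on this toy model, shuffling the income column causes a large accuracy drop while shuffling age or zip digit causes none, revealing which feature drives the decisions without ever opening the box. The same probing logic is what lets auditors detect, for example, that a credit model leans heavily on a proxy for a protected attribute.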
Many governmental, non-profit, and corporate organizations are concerned with AI ethics and policy. For example, the Partnership on AI to Benefit People and Society, a non-profit established by Amazon, Google, Facebook, IBM, and Microsoft, formulates best practices for AI technologies, advances public understanding of AI, and serves as an open platform for discussion about it. Apple joined the organization in January 2017.
Today, there are many efforts by national and transnational governments and non-governmental organizations to ensure AI ethics. In the United States, for example, the Obama administration’s Roadmap for AI Policy of 2016 was a significant leap toward ethical AI, and in January 2020, the Trump administration released draft guidance titled “Guidance for Regulation of Artificial Intelligence Applications.” The guidance emphasizes the need to invest in AI system development, build public trust in AI, eliminate barriers to AI adoption, and keep American AI technology competitive in the international market.
Moreover, the European Commission’s High-Level Expert Group on Artificial Intelligence published “Ethics Guidelines for Trustworthy Artificial Intelligence” on April 8, 2019, and on February 19, 2020, the Commission’s Robotics and Artificial Intelligence Innovation and Excellence unit published a white paper on excellence and trust in artificial intelligence innovation.
On the academic front, the University of Oxford houses three research institutes that focus mainly on AI ethics and promote it as a structured field of study and practice. The AI Now Institute at New York University (NYU) also researches the social implications of AI, focusing on bias and inclusion, labor and automation, liberties and rights, and civil infrastructure and safety.
Some Key Worries and Hopes on Ethical AI Development
- The major AI/ML system developers and deployers are focused on profit-making and social control. There is still no consensus about what ethical AI would look like, and many experts worry that ethical AI behaviors and outcomes are hard to define, implement, and enforce.
- Powerful technology companies and governments control AI’s development, and they are driven by their own agendas rather than by ethical concerns. It has been speculated that over the next decade they will use AI/ML technologies to create ever more sophisticated methods of influencing human psychology to convince us to buy goods, services, and ideas.
- The operation of AI tools and applications as black box systems is still a concern, and how to apply ethical AI standards under such opaque conditions remains an open question.
- The technology arms race between China and the U.S. will do more to advance AI development than to address ethical AI issues. Moreover, these two superpowers define ethics in different ways, and ethics always takes a back seat when it comes to acquiring power.
- AI/ML development has clearly shown its progress and value, and human societies have so far always found ways to mitigate the problems that arise from technological evolution.
- AI tools and applications have already achieved amazing things beyond human capabilities, and further innovations and breakthroughs will only add to this.
- The limitless rollout of new AI systems is inevitable, and so is the development of AI strategies that can mitigate their harms. Indeed, ethical AI systems can be used to identify and rectify issues arising from unethical ones.
- In recent years, global initiatives on ethical AI have been productive. These initiatives move human societies toward adapting to further AI development on the basis of mutual benefit, safety, autonomy, and justice.
- Imagine a future where even more AI tools and applications emerge to make our lives easier and safer. AI will radically enhance every human system from healthcare to travel; therefore, support for ethical AI is likely to grow substantially in the coming years.
- A consensus has been building around ethical AI, particularly in the biomedical community, with the help of open-source technology. For several years, extensive study and discourse in this vital area have been bearing fruit.
- No technology survives if it broadly delivers harmful or useless results. The market and legal systems will eventually weed out unethical AI systems.
The Responsibility of Ethical AI
Tech giants like Microsoft and Google think governments should step in to regulate AI effectively. But laws are only as good as their enforcement, and so far that responsibility has fallen onto the shoulders of private watchdogs and tech company employees who are daring enough to speak up. For instance, after months of protests by its employees, Google declined to continue its work on Project Maven, a military drone AI project.
We can choose the role we want AI to play in our lives and enterprises by asking tough questions and taking firm precautions. Accordingly, many companies are appointing AI ethicists to guide them through this new terrain.
We have a long way to go before artificial intelligence and ethics are fully reconciled. Until that day, we must police ourselves in how we use AI technology.