The advances in artificial intelligence are mind-boggling, to say the least. This technology has the potential to revolutionize virtually every field including medicine, education, the military, finance, engineering, advertising, and cybersecurity. In fact, AI is changing the lives of people all over the world with phenomenal advances in graphics, video, fraud detection, voice recognition, and shopping.
However, a potential dark cloud hangs over AI: ethics. How does AI affect individuals, communities, governments, and the world? Are the decisions made by AI routines and products unbiased, fair, and transparent? Or is the software laced with hidden assumptions and racial or other biases? Does AI respect privacy and security, and can it be trusted to make the right decisions?
Artificial intelligence is already embedded throughout the lives of individuals and society. Since AI first appeared on the scene, there have been concerns that these systems are biased, lack accountability and fairness, are unsafe, and offer little to no transparency about how their decisions are made.
Naturally, when the average person thinks about artificial intelligence, they think about the worst case. What if a robotic super soldier decides it’s in its best interest to refuse to fight or to turn on its creators? How do you control an autonomous tank or aircraft when these machines make their own decisions about when and where to strike?
Let’s look at some of the questions raised by current AI technology:
- How do you ensure that personal assistants such as Google Home and Alexa don’t record and store every conversation they overhear?
- How is privacy enforced when AI can examine hundreds of thousands of patient medical records very quickly to determine new ways to fight disease?
- If an AI discovers that a person is depressed, is it appropriate for it to intervene without permission to attempt to change their emotional state?
- Is it ethical to harvest the data from millions of Internet users to provide more targeted ads so businesses can make more money?
Similar questions come to mind for every application of artificial intelligence. The limits and boundaries are still being defined and are often not obvious.
Ethical dilemmas
Let’s examine a few test cases, beginning with self-driving trucks. The trucking industry accounts for nearly 6% of the full-time jobs in the United States and generates more than $700 billion a year. Astoundingly, truck drivers travel more than 400 billion miles each year, more than 15 million trucks operate across the country, and those trucks burn 12% of the fuel used in the United States. (1) The trucking industry is one of the largest employers in the country, if not the largest.
The concept of self-driving trucks dramatically changes the model of transporting goods from one place to another, especially over long distances. The idea is to use driverless, autonomous vehicles on long-haul freeway runs. Imagine a convoy of 50 to 100 trucks, each separated by just a few feet, using artificial intelligence to plot the safest, fastest, and most fuel-efficient route while zipping along at over 100 mph. In some models, only two human drivers, one in the front and one in the back, would be on board each trip to deal with the unexpected.
There are many ethical concerns with this new paradigm of trucking. What happens to the tens of millions of people employed in the trucking industry? Do they get retrained or do they have to find new jobs? What is the role of labor unions in this transformation? Whose responsibility is it to get these questions answered?
This is only one example of how AI will affect the job market. According to a McKinsey Global Institute report, as many as 800 million workers worldwide could lose their jobs to AI-enabled robots and automation by 2030. (2) On the upside, technology will create new jobs and new opportunities, and many people will have the option to transition into new careers.
In another example, artificial intelligence can aid police work. AI-based facial recognition can help identify criminals on the run and find missing persons. But ethical questions persist. If the police use AI to analyze images and videos from street, ATM, and dash cameras, does it violate the right to privacy? For example, if you’re going to scan the faces of 20,000 people at a sporting event in a stadium, would you need a search warrant? Or can the police simply set up shop and scan at will?
As with many ethical questions, the cost – the potential loss of privacy – must be weighed against the benefit, in this case more effective policing. Who gets to make the decision and what criteria do they use?
In online shopping, the quality of the customer journey is key, and personalization is essential to a great consumer experience. Delivering it, however, requires gathering and sharing immense quantities of personal data to build a profile of each individual, which artificial intelligence then uses to customize the experience for each customer. Can this data be accessed by law enforcement without a warrant? Can it be shared without violating privacy? Who owns the information – the consumer, the advertiser, the online store, or the company managing the database?
Is AI biased?
Artificial intelligence depends on the information it receives to make decisions. Initially, AI “learns” about the environment in which it operates by examining historical data. For example, an AI created to help diagnose cancer would need to scan the medical charts of thousands of cancer patients to learn enough to draw useful conclusions. A system designed to screen job candidates would need to review thousands of resumes and related records to learn which patterns to associate with good and bad hiring decisions.
Amazon ran into exactly this problem when it designed an AI hiring and recruitment application. The system was trained on 10 years’ worth of data, and, as it turned out, most of that data came from male candidates. As a result, the application preferred male candidates over female ones. Needless to say, Amazon canceled the project as soon as the bias was understood.
The dependence on historical data is one of the weak spots of artificial intelligence, because that information can be, and often is, biased. Amazon’s experience illustrates the point: if the business had historically tended to hire mostly men, then an AI trained on those records would be biased toward hiring men.
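To make the mechanism concrete, here is a minimal, hypothetical sketch in Python. The data, the 90%/40% historical hiring rates, and the simple group-rate “model” are all invented for illustration; this is not Amazon’s system, only a demonstration of how a model trained on skewed history reproduces that skew.

```python
# Hypothetical sketch: how historical bias leaks into a learned model.
# All data and numbers below are invented purely for illustration.
import random

random.seed(42)

# Synthetic "historical" hiring records: (gender, years_experience, hired).
# The past process favored men, even among equally qualified candidates.
history = []
for _ in range(1000):
    gender = random.choice(["male", "female"])
    experience = random.randint(0, 10)
    qualified = experience >= 5  # same qualification bar for everyone
    if gender == "male":
        hired = qualified and random.random() < 0.9  # biased past outcomes
    else:
        hired = qualified and random.random() < 0.4
    history.append((gender, experience, hired))

# "Train" a naive screening rule: the hire rate per (gender, qualified) group.
def hire_rate(gender, qualified):
    group = [h for g, e, h in history if g == gender and (e >= 5) == qualified]
    return sum(group) / len(group)

# The learned model faithfully reproduces the historical bias.
print(f"qualified men:   {hire_rate('male', True):.0%} recommended")
print(f"qualified women: {hire_rate('female', True):.0%} recommended")
```

Run as written, the recommendation rate the model learns for qualified women lands far below the rate for qualified men, even though qualification is defined identically for both groups. Nothing in the code mentions bias; it simply inherits it from the data.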
AI is not perfect
Beyond bias, AI can simply make mistakes. In 2016, Microsoft released a chatbot for Twitter called Tay. Within a day, Tay had learned from other Twitter users to use racist slurs and spout Nazi propaganda. Of course, Microsoft shut the application down as soon as it learned of the problem.
Artificial intelligence can also produce unintended consequences. In the most extreme cases, dramatized in movies such as The Terminator, The Matrix, and Colossus: The Forbin Project, humanity itself is put at risk by AI-enhanced machines. How ethical is it to create an artificially intelligent machine that has the capacity to kill, harm society, or even destroy humanity?
How do we decide ethical questions such as these?
The ethical issues that arise when adopting AI are complex and multifaceted. Doing the right thing might cost a little more, take a little longer, or demand more thought, but it can prevent many of these problems from cropping up.
The Potter Box method, developed by Ralph B. Potter Jr., a professor at Harvard Divinity School, provides guidelines for making ethical decisions. In this method, you look at facts, values, principles, and loyalties, in that order. It’s a simple way to work through an ethical dilemma and arrive at a reasonable decision.
Facts: Begin by looking at the facts of the situation. What do you know to be true? Put together your facts without making any value judgments or hiding any truths. Collect as much information as you can, understand how things came about, and know the current situation.
Values: Understand the values involved and examine the goals of the AI by weighing the benefits and costs as well as the impacts on leadership. You can look at the situation from an aesthetic viewpoint, a professional angle, logically, or from a moral standpoint. For example, ask yourself how the innovation benefits humankind, impacts the bottom line, or affects the current workforce. You can also do a SWOT analysis by considering the strengths, weaknesses, opportunities, and threats of your implementation.
Principles: Which moral frameworks or ways of thinking are relevant to the circumstances? You could imagine yourself in different roles and find the best solution for each of them, or you could examine the consequences of the initiative and ask which outcome is best for the most people. Many managers simply perform a cost-benefit analysis, which is a valid way to look at things but doesn’t provide an ethical viewpoint. By looking at the innovation through different frameworks, you gain insight that helps you make better decisions.
Loyalties: List everyone who is a stakeholder and prioritize your loyalties to them. Include your vendors, customers, suppliers, stockholders, and members of the board, as well as the community. Consider the problem from the perspective of each of these groups to understand how it will affect them.
By going through this exercise, you are forced to look at a problem from different angles and viewpoints. This helps you resolve ethical questions and gives you a solid basis for making a values-based decision.
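For teams that want to make the exercise part of project planning, here is a lightweight sketch, assuming you simply want each quadrant recorded before a decision is signed off. The PotterBox class and the example entries are illustrative conventions, not part of Potter’s original formulation.

```python
# Hypothetical sketch: the Potter Box as a structured checklist.
from dataclasses import dataclass, field


@dataclass
class PotterBox:
    facts: list[str] = field(default_factory=list)       # what is known to be true
    values: list[str] = field(default_factory=list)      # goals, costs, benefits
    principles: list[str] = field(default_factory=list)  # moral frameworks applied
    loyalties: list[str] = field(default_factory=list)   # stakeholders, prioritized

    def is_complete(self) -> bool:
        """Every quadrant must be considered before a decision is made."""
        return all([self.facts, self.values, self.principles, self.loyalties])


# Example: a first pass at the self-driving truck dilemma from earlier.
analysis = PotterBox(
    facts=["Trucking employs millions of drivers",
           "Autonomous convoys cut fuel use and transit time"],
    values=["Safety", "Profitability", "Worker livelihoods"],
    principles=["Greatest good for the greatest number", "Duty to employees"],
    loyalties=["Customers", "Drivers", "Stockholders", "The community"],
)
assert analysis.is_complete()
```

The value here is not the code itself but the discipline it enforces: a decision cannot proceed until all four quadrants contain something, in the order the method prescribes.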
What about superhuman intelligence?
Currently, all artificial intelligence is narrowly focused on specific applications. However, a lot of research is being done into what is known as general AI, which is artificial intelligence that is not specifically programmed for a single task. Instead, it is intended to operate more like human intelligence.
In movies, general AI systems evolve into superhuman intelligences, becoming smarter and more capable than humans. What happens then? Will these computers become self-aware? Will they make human beings obsolete? Will they work together with humanity or will they create their own goals that are not necessarily aligned with ours?
The point at which an AI surpasses human intelligence is known as the “technological singularity.” There is some concern that this could occur as soon as 2030, given the current pace of innovation. This is a scary concept to many people because the motivations of an entirely machine-based entity would be unique and unknown.
Further, if an AI became self-aware, whether in robotic form or not, should it be granted human rights or citizenship? Way back in 1942, Isaac Asimov created the Three Laws of Robotics to address the ethics of the synthetic beings he called robots:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey orders given to it by human beings except where such orders would conflict with the first law.
- A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
This was one of the first attempts to address ethics from the viewpoint of the synthetic being. Asimov’s fascinating stories explored violations of and flaws in this ethical system, and how those failures affected individuals and society.
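Seen through an engineering lens, the three laws amount to a strict priority ordering, which can be rendered as a toy rule check. The Action flags and the permitted function below are invented for illustration; real systems cannot reduce “harm” to a boolean, but the sketch shows how the hierarchy resolves conflicts.

```python
# Hypothetical sketch: Asimov's three laws as a strict priority ordering.
from dataclasses import dataclass


@dataclass
class Action:
    harms_human: bool = False      # would injure a human, by act or inaction
    is_human_order: bool = False   # was commanded by a human
    endangers_robot: bool = False  # risks the robot's own existence


def permitted(action: Action) -> bool:
    # First law: never harm a human (highest priority, no exceptions).
    if action.harms_human:
        return False
    # Second law: obey human orders (harm was already ruled out above).
    if action.is_human_order:
        return True
    # Third law: self-preservation, subordinate to the first two laws.
    return not action.endangers_robot


# Orders outrank self-preservation: a dangerous order must be obeyed.
print(permitted(Action(is_human_order=True, endangers_robot=True)))  # True
# The first law overrides orders: a harmful order must be refused.
print(permitted(Action(harms_human=True, is_human_order=True)))      # False
```

Much of the drama in Asimov’s fiction comes from cases where these clean boolean flags break down, for example when every available action harms someone.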
Ethically, even if we can create superhuman intelligence, should we? Are we wise enough to create synthetic beings that are more capable, more intelligent, and more rational than ourselves?
These and other ethical questions beg to be answered before we take that final leap and create a possible successor to humanity. If we don’t, then we may find those new beings don’t see a place for humans anymore.
Conclusion: Ethics must be part of any AI implementation
Artificial intelligence is not just about computers, code, and data. There’s more to the picture than a cost-benefit analysis or gaining market share. The ramifications of AI on individuals, societies, and the world require resolving ethical issues as part of project planning.
You must look at the impacts of your AI implementation from several viewpoints. Is it the best solution for the business? Is it good for your customers, stockholders, vendors, and others? Are there any downsides to the implementation?
This is especially true because AI can affect the lives of everyone, for better or for worse. By keeping ethics in mind, you can work toward a solution that improves people’s lives and does more good than harm.
(1) How Big Is The Trucking Industry? Freight Contract Carriers Inc. https://www.fcc-inc.com/how-big-is-the-trucking-industry/
(2) James Vincent. Automation threatens 800 million jobs, but technology could still save us, says report. The Verge. November 30, 2017. https://www.theverge.com/2017/11/30/16719092/automation-robots-jobs-global-800-million-forecast