Which of the following scenarios depicts a researcher breaking an ethical code?

Though artificial intelligence is changing how businesses work, there are concerns about how it may influence our lives. This is not just an academic or societal concern but also a reputational risk: no company wants to be marred by the kind of data or AI ethics scandals that have affected companies like Amazon. For example, the sale of Rekognition to law enforcement caused significant backlash, followed by Amazon's decision to stop providing the technology to law enforcement for a year, anticipating that a proper legal framework would be in place by then.

This article provides insights into the ethical issues that arise with the use of AI, examples of AI misuse, and best practices for building responsible AI:

What are the ethical dilemmas of artificial intelligence?

Automated decisions / AI bias

AI algorithms and training data may contain biases, just as humans do, since both are generated by humans. These biases prevent AI systems from making fair decisions. Biases arise in AI systems for two reasons:

  1. Developers may build biased AI systems without noticing it.
  2. Historical data used to train AI algorithms may not represent the whole population fairly.

Biased AI algorithms may lead to discrimination against minority groups. For instance, Amazon shut down its AI recruiting tool after using it for one year. Amazon's developers stated that the tool penalized women: about 60% of the candidates it selected were male, a result of patterns in Amazon's historical recruiting data.

Building ethical and responsible AI requires getting rid of biases in AI systems. Yet only 47% of organizations test for bias in data, models, and human use of algorithms.

Though getting rid of all biases in AI systems is almost impossible, given the many existing human biases and the ongoing identification of new ones, minimizing them should be businesses' goal.

If you want to learn more, feel free to check our comprehensive guide on AI biases and how to minimize them using best practices and tools. Also, a data-centric approach to AI development can help address bias in AI systems.
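
As a concrete illustration of what such a bias test might look like, here is a minimal sketch of a selection-rate check on automated decisions. The DataFrame, the `gender` and `selected` columns, and the numbers are hypothetical placeholders, not taken from any specific tool or dataset mentioned above.

```python
# Minimal sketch of a selection-rate (disparate impact) check on automated decisions.
# The DataFrame and the "gender" / "selected" columns are hypothetical placeholders.
import pandas as pd

def selection_rates(df: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes (e.g., shortlisted candidates) per group."""
    return df.groupby(group_col)[outcome_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Ratio of the lowest to the highest selection rate (1.0 means parity)."""
    return rates.min() / rates.max()

# Hypothetical decisions loosely mirroring the article's example:
# 60% of the selected candidates are male even though the applicant pool is balanced.
decisions = pd.DataFrame({
    "gender":   ["male"] * 50 + ["female"] * 50,
    "selected": [1] * 30 + [0] * 20 + [1] * 20 + [0] * 30,
})

rates = selection_rates(decisions, "gender", "selected")
print(rates)                          # selection rate per group (0.6 vs 0.4 here)
print(disparate_impact_ratio(rates))  # ~0.67; values well below 1.0 warrant review
```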

Autonomous things

Autonomous Things (AuT) are devices and machines that perform specific tasks autonomously, without human interaction. These machines include self-driving cars, drones, and robots. Since robot ethics is a broad topic, we focus here on the ethical issues that arise from the use of self-driving vehicles and drones.

Self-driving cars

The autonomous vehicles market was valued at $54 billion in 2019 and is projected to reach $557 billion by 2026. However, autonomous vehicles raise various ethical issues, and people and governments still question their liability and accountability.

For example, in 2018, an Uber self-driving car hit a pedestrian who later died at a hospital. The accident was recorded as the first death involving a self-driving car. After investigations by the Arizona Police Department and the US National Transportation Safety Board (NTSB), prosecutors decided that the company was not criminally liable for the pedestrian's death, because the safety driver was distracted by her cell phone; police reports labeled the accident as "completely avoidable."

Lethal Autonomous Weapons (LAWs)

LAWs are one of the weapons in the artificial intelligence arms race. They independently identify and engage targets based on programmed constraints and descriptions. There have been debates on the ethics of using weaponized AI in the military. For example, in 2018, the United Nations gathered to discuss the issue, and countries that favor LAWs, including South Korea, Russia, and the United States, have been vocal on it.

Arguments against the use of LAWs are widely shared by non-governmental organizations. For instance, the Campaign to Stop Killer Robots wrote a letter warning about the threat of an artificial intelligence arms race. Renowned figures such as Stephen Hawking, Elon Musk, Steve Wozniak, Noam Chomsky, Jaan Tallinn, and Demis Hassabis signed the letter.

Unemployment due to automation

This is currently the most widespread fear about AI. According to a CNBC survey, 27% of US citizens believe that AI will eliminate their jobs within five years. The percentage rises to 37% for citizens aged 18 to 24.

Though these numbers may not look huge for "the greatest AI fear," keep in mind that this is only an expectation for the next five years.

According to McKinsey estimates, intelligent agents and robots could replace as much as 30% of the world's current human labor by 2030. Depending on the adoption scenario, automation could displace between 400 and 800 million jobs, requiring as many as 375 million people to switch job categories entirely.

Comparing the public's five-year expectations with McKinsey's forecast for 2030 shows that people's expectations of unemployment are more pronounced than industry experts' estimates. However, both point to a significant share of the population becoming unemployed due to advances in AI.

Misuses of AI

Surveillance practices limiting privacy

"Big Brother is watching you" is a quote from George Orwell's dystopian novel 1984. Though written as fiction, it may have become a reality as governments deploy AI for mass surveillance. The integration of facial recognition technology into surveillance systems raises privacy concerns.

According to the AI Global Surveillance (AIGS) Index, 176 countries use AI surveillance systems, and liberal democracies are major users of the technology. The same study shows that 51% of advanced democracies deploy AI surveillance systems, compared to 37% of closed autocratic states. However, this difference is likely due to the wealth gap between the two groups of countries.

From an ethical perspective, the important question is whether governments are abusing the technology or using it lawfully. “Orwellian” surveillance methods are against human rights.

Some tech giants have also raised ethical concerns about AI-powered surveillance. For example, Microsoft President Brad Smith published a blog post calling for government regulation of facial recognition. IBM likewise stopped offering the technology for mass surveillance due to its potential for misuse, such as racial profiling, which violates fundamental human rights.

Manipulation of human judgment

AI-powered analytics can provide actionable insights into human behavior; however, abusing analytics to manipulate human decisions is ethically wrong. The best-known example of such misuse is the data scandal involving Facebook and Cambridge Analytica.

Cambridge Analytica sold American voters' data harvested from Facebook to political campaigns and provided assistance and analytics to the 2016 presidential campaigns of Ted Cruz and Donald Trump. The data breach was disclosed in 2018, and the Federal Trade Commission fined Facebook $5 billion for its privacy violations.

Proliferation of deepfakes

Deepfakes are synthetically generated images or videos in which a person is replaced with someone else's likeness.

Though about 96% of deepfakes are pornographic videos, with over 134 million views on the top four deepfake pornography websites, the greater danger and ethical concern for society is that deepfakes can be used to misrepresent political leaders' speeches.

Creating false narratives with deepfakes can harm people's trust in the media, which is already at an all-time low. This mistrust is dangerous for societies, considering that mass media is still governments' primary channel for informing people about emergencies (e.g., a pandemic).

Artificial general intelligence (AGI) / Singularity

A machine capable of human-level understanding could pose a threat to humanity, and such research may need to be regulated. Although most AI experts do not expect AGI and a resulting singularity any time soon (i.e., before 2060), this is an increasingly important topic from an ethical perspective as AI capabilities grow.

When people talk about AI, they mostly mean narrow AI systems, also referred to as weak AI, which are designed to handle a single or limited task. AGI, by contrast, is the form of artificial intelligence we see in science fiction books and movies: machines that can understand or learn any intellectual task a human being can.

Robot ethics

Robot ethics, also referred to as roboethics, concerns how humans design, build, use, and treat robots. There have been debates on roboethics since the early 1940s, mostly originating in the question of whether robots should have rights, as humans and animals do. These questions have gained importance as AI capabilities have increased, and institutes such as AI Now explore them with academic rigor.

Author Isaac Asimov was the first to propose laws for robots, introducing the Three Laws of Robotics in his short story "Runaround":

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These are hard questions, and innovative, even controversial, solutions such as universal basic income may be necessary to address them. There are numerous initiatives and organizations aimed at minimizing the potential negative impact of AI. For instance, the Institute for Ethics in Artificial Intelligence (IEAI) at the Technical University of Munich conducts AI research across domains such as mobility, employment, healthcare, and sustainability.

Some best practices to navigate these ethical dilemmas are:

Transparency

AI developers have an ethical obligation to be transparent in a structured, accessible way, since AI technology has the potential to break laws and negatively impact human lives. Knowledge sharing can help make AI accessible and transparent. Some initiatives are:

  • AI research, even when it takes place in private, for-profit companies, tends to be publicly shared.
  • OpenAI is a non-profit AI research company created by Elon Musk, Sam Altman, and others to develop open-source AI beneficial to humanity. However, by exclusively licensing one of its models to Microsoft rather than releasing its source code, OpenAI has reduced its level of transparency.
  • Google developed TensorFlow, a widely used open-source machine learning library, to facilitate the adoption of AI (a brief illustrative sketch follows this list).
  • AI researchers Ben Goertzel and David Hart created OpenCog as an open-source framework for AI development.
  • Google (like other tech giants) has an AI-specific blog that enables it to share its AI knowledge with the world.
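
As a small illustration of how such open-source libraries lower the barrier to building AI systems, here is a minimal TensorFlow/Keras sketch. The tiny model and random data are hypothetical placeholders, not tied to any project mentioned above.

```python
# Minimal TensorFlow/Keras sketch; the toy model and random data are placeholders
# meant only to show how few lines an open-source library needs to train a model.
import numpy as np
import tensorflow as tf

# Toy binary-classification data.
X = np.random.rand(200, 4).astype("float32")
y = (X.sum(axis=1) > 2.0).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)

print(model.predict(X[:3], verbose=0))  # probabilities for the first three samples
```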

Explainability

AI developers and businesses need to explain how their algorithms arrive at their predictions in order to address the ethical issues that arise from inaccurate predictions. Various technical approaches can explain how these algorithms reach their conclusions and what factors affected a decision. We've covered explainable AI before; feel free to check it out.
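
One common, model-agnostic technique is permutation feature importance. The sketch below is purely illustrative: it uses scikit-learn on a synthetic dataset and is not the specific method used by any company mentioned in this article.

```python
# Illustrative sketch: estimating which inputs drive a model's predictions using
# permutation feature importance (a model-agnostic explainability technique).
# The dataset is synthetic; this is one of several possible approaches.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=5, n_informative=3, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model's predictions rely heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature_{i}: importance ~ {score:.3f}")
```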

Inclusiveness

AI research tends to be done by male researchers in wealthy countries, which contributes to the biases in AI models. Increasing the diversity of the AI community is key to improving model quality and reducing bias. There are numerous initiatives, like this one supported by Harvard, to increase diversity within the community, but their impact has so far been limited.

This can help address problems such as unemployment and discrimination, which can be caused by automated decision-making systems.

Alignment

Numerous countries, companies, and universities are building AI systems, yet in most areas there is no legal framework adapted to recent developments in AI. Modernizing legal frameworks at both the national and supranational level (e.g., the UN) would clarify the path to ethical AI development. Pioneering companies should spearhead these efforts to create clarity for their industry.

What are the three golden rules that all ethical standards have regardless of the discipline or research methods used?

Three basic principles, among those generally accepted in our cultural tradition, are particularly relevant to the ethics of research involving human subjects: the principles of respect for persons, beneficence, and justice.

What is the name of the organization that evaluates research proposals based on whether the research will potentially harm research subjects?

Under FDA regulations, an Institutional Review Board (IRB) is a group that has been formally designated to review and monitor biomedical research involving human subjects. In accordance with FDA regulations, an IRB has the authority to approve, require modifications to (in order to secure approval), or disapprove research.

Which of the following statements is correct regarding the difference between how historians and sociologists study history?

Historians are usually experts on a given time period or place, while sociologists examine variations across time and place to make sense of larger patterns.
