Ethical Issues of Artificial Intelligence (AI)

Author: Alexandre Palma

As a definition, we could say that Artificial Intelligence (AI) is the simulation of human intelligence by machines, and that Machine Learning (ML) is a subset of AI. ML essentially uses algorithms that parse data and learn from it.

Big Data, as the name suggests, consists of large volumes of data that can be analyzed to obtain key insights (patterns or trends), which mainly help board members make better decisions. Connecting these concepts, ML algorithms can use big data to learn, producing a more dynamic AI system.
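To make "algorithms that parse data and learn from it" concrete, here is a minimal sketch of learning from data: fitting a trend with ordinary least squares using only the Python standard library. The data values are hypothetical, and real ML systems use far richer models and libraries; this only illustrates the idea of extracting a pattern from data and using it to predict.

```python
# A minimal sketch of "learning from data": fit a straight line to
# observed points, then use the learned pattern to predict a new value.

def fit_line(xs, ys):
    """Learn the slope and intercept that best explain the data (least squares)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    intercept = mean_y - slope * mean_x
    return slope, intercept

# "Training data": monthly sales figures (hypothetical numbers).
months = [1, 2, 3, 4, 5]
sales = [10, 12, 14, 16, 18]

slope, intercept = fit_line(months, sales)
print(slope, intercept)               # the learned pattern: sales grow by ~2 per month
print(slope * 6 + intercept)          # applying the pattern to predict month 6
```

The "learning" here is just arithmetic over past observations, which is also why the quality of the data matters so much: the model can only reflect what the data contains.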

Today, AI is part of our everyday lives. For example, voice assistants like Alexa or Siri, or the ads that suggest products we have recently been searching for, exist only because of AI. AI has countless applications in our lives, so we need to reflect on whether this is really a good technological development, and up to what point.

Then questions arise, such as: “Is it ethical for an AI to make a real person think they are talking to another real person (as opposed to an AI system)?”; “Are artificial assistants like Siri and Alexa being abused?”; “What are the potential ramifications if an AI system like this is abused?”; “Up to what point is it permissible for AI to capture all of our information, even our voice and data?”; “If AI provokes unemployment, what happens after those jobs end?”. Despite AI being a great technological development, we need to think about these ethical aspects and always take precautionary steps to prevent major issues from arising.

Examples of Ethical Issues

Data Privacy Concerns

Privacy (and consent) in the use of data has long been an ethical dilemma of AI. We need data to train AIs, but where does this data come from, and how do we use it? We sometimes assume that all the data comes from adults with full mental capacity who can make choices about the use of their own data, but that is not always the case.

For example, Barbie now has an AI-enabled doll that children can speak to. What does this mean in terms of ethics? There is an algorithm that is collecting data from your child’s conversations with this toy. Where is this data going, and how is it being used?

Ownership

AI was created to simulate our behavior and thoughts, so who is responsible for the things that AI creates? AI systems can create art and music, so when they do, who owns that material, and who holds the intellectual property rights to it?

Automation and Jobs

Automation has already had a large impact on low-skill jobs, and AI may amplify automation's effects across industries. Tesla, for example, a leader in self-driving cars, is trying to create a self-driving vehicle that drives just as well as a human-driven one.

In the USA, close to 5 million manufacturing jobs were lost between 2000 and 2016, most of them because companies were able to use robots instead of humans to do the same work.

As AI becomes a more integrated part of society, jobs will become increasingly different from what they are today. We need to plan for when some jobs do become obsolete and try to mitigate the negative effects of job loss.

Bias in the Use of AI

Evidence suggests that AI models can embed and deploy human and social biases at scale. However, it is often the underlying data, rather than the algorithm itself, that is responsible. Models can be trained on data that contains human decisions, or on data that reflects the second-order effects of social or historical inequities. Additionally, the way data is collected and used can also contribute to bias, and user-generated data can act as a feedback loop that reinforces bias.
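The point that "the underlying data is responsible" can be illustrated with a deliberately simple sketch. The records below are hypothetical, and the "model" is just historical approval rates per group; even so, a decision rule trained on biased history reproduces that bias against new applicants.

```python
# A minimal sketch of bias inherited from training data: the "model"
# learns each group's historical approval rate and applies it forward.
from collections import defaultdict

# Historical decisions (hypothetical): biased against group "B".
history = [
    ("A", True), ("A", True), ("A", True), ("A", False),
    ("B", False), ("B", False), ("B", True), ("B", False),
]

def train(records):
    """Learn a per-group approval policy from past decisions."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, was_approved in records:
        total[group] += 1
        approved[group] += was_approved
    # Learned "policy": approve a group if its historical rate is >= 50%.
    return {g: approved[g] / total[g] >= 0.5 for g in total}

model = train(history)
print(model)  # the historical bias against "B" is now the model's policy
```

Nothing in the algorithm mentions the groups, yet the output discriminates, because the data did. If the model's rejections then generate new training records, the feedback loop described above entrenches the bias further.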

AI is moving beyond “nice-to-have” to become an essential part of modern digital systems. As we rely more and more on AI for decision-making, it becomes essential to ensure that decisions are made ethically and free from unjust biases. We therefore need Responsible AI systems that are transparent, explainable, and accountable. In healthcare, for example, AI systems are increasingly used to improve patient pathways and surgical outcomes, outperforming humans in some areas. AI is likely to merge with, co-exist with, or replace current systems, starting the healthcare age of artificial intelligence, to the point where not using AI may come to be seen as unscientific and even unethical.

Regulation and Policy

To prevent some of these ethical risks, it is important to create regulations. Some guidelines already exist, such as the General Data Protection Regulation (GDPR) and the initiatives of the Institute of Electrical and Electronics Engineers (IEEE).

The Institute of Electrical and Electronics Engineers (IEEE) is a professional organization that aims to advance technological innovation and excellence. The IEEE publishes almost one-third of the world's technical literature each year in areas such as electrical engineering, computer science, and electronics. The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems aims to place human well-being at the center of the design of automated and intelligent systems. The initiative's mission is to ensure that every stakeholder involved in developing the technology is educated, trained, and empowered.

The General Data Protection Regulation (GDPR) is a legal framework that sets guidelines for the collection and processing of personal information from individuals in the European Union (EU). Approved in 2016, the GDPR went into full effect two years later. Its aim is to give consumers control over their own personal data by holding companies responsible for the way they handle and treat this information. The regulation applies regardless of where websites are based, which means it must be heeded by all sites that attract European visitors, even if they don't specifically market goods or services to EU residents.
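One practical habit the GDPR encourages is data minimization: process only the fields needed for the stated purpose and drop direct identifiers. The sketch below illustrates the idea; the field names and the allow-list are hypothetical, and real compliance involves far more than filtering a dictionary.

```python
# A minimal sketch of GDPR-style data minimization: keep only the
# fields needed for the stated purpose, dropping direct identifiers.

ALLOWED_FIELDS = {"age_range", "country", "consent_given"}

def minimize(record):
    """Return a copy of the record containing only purpose-relevant fields."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

user = {
    "name": "Jane Doe",           # direct identifier -- dropped
    "email": "jane@example.com",  # direct identifier -- dropped
    "age_range": "25-34",
    "country": "PT",
    "consent_given": True,
}

print(minimize(user))  # only the minimized, purpose-relevant fields remain
```

A `consent_given` flag is kept here because consent is itself something a controller must be able to demonstrate; in a real system, consent records and retention periods would be handled explicitly rather than as a single boolean.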


Unpredictable events may happen through the continuous progression of AI; technology usually does not advance in a linear fashion, so disruptions will occur with little to no foresight. Industry leaders will have to take up the mantle and navigate the future of AI, trying to maximize the benefits offered by AI while minimizing any of its potential costs.

It does not take technical expertise to understand the ethical implications that AI could have. To learn more about the regulations around automation and intelligent systems, individuals can look at what their federal or state government is doing.