This article was originally published on Spiceworks.

As AI becomes more intertwined with our daily lives, the resulting ethical questions for both companies and individuals have become more complex. Businesses are coming to realize the importance of ethical AI and the reputational damage that can stem from being associated with a prejudiced algorithm or one that produces unethical outputs, and this realization is driving change. A decade ago, AI ethics were perhaps an afterthought, addressed only in the most obvious cases of harmful output. Today, ethics are increasingly considered early in the AI project lifecycle and incorporated during the requirements gathering process.

Bias: a perennial challenge in AI

A few key ethical issues have been present since the early days of AI and continue to be important in a business context as technology evolves. The first is bias.

To fully understand the problem of bias, let’s start at the beginning of the lifecycle of an algorithm – a set of instructions and logical rules that execute to achieve an outcome, essentially the building blocks of AI. One of the first stages of creating an algorithm is gathering the data on which to train the model, and the challenge lies in making that data robust.

In many cases, priority goes to the quantity of training data over its quality or representativeness (in terms of both the content itself being representative, and coming from a diverse and representative set of sources). An algorithm may be given diverse content from the internet or other public sources as training data, and as we all know, the quality of web content cannot always be ensured. Within a set of data scraped from the web, certain populations might be over- or under-represented, there may be bias in how content is presented, and content itself may even be false. If an algorithm is trained on biased data, its output is likely to be biased, and the impact can be far-reaching.
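
As a rough illustration of what a representativeness check might look like in practice, the sketch below counts how many documents in a small, hypothetical scraped corpus mention each gender group. The corpus and term lists are placeholder assumptions, not a complete fairness methodology.

```python
from collections import Counter

# Hypothetical illustration: audit a scraped text corpus for skewed
# representation before it is used as training data. The corpus and
# term lists below are placeholders.
corpus = [
    "The hero rescued the crew from the sinking ship.",
    "She was celebrated as a hero after the rescue.",
    "He became a hero to the whole town.",
    "The firefighter, a local hero, received a medal from him.",
]

GENDERED_TERMS = {
    "female": {"she", "her", "hers"},
    "male": {"he", "him", "his"},
}

counts = Counter()
for document in corpus:
    tokens = {token.strip(".,").lower() for token in document.split()}
    for group, terms in GENDERED_TERMS.items():
        if tokens & terms:
            counts[group] += 1

total = sum(counts.values()) or 1
for group, count in counts.items():
    print(f"{group}: {count} documents ({count / total:.0%} of gendered mentions)")
```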

The risk of malicious manipulation of algorithms

Another issue in AI ethics that could become more prominent as technology evolves is the malicious use of algorithms. This issue is perhaps more straightforward and less prevalent than the issue of bias, making it a less significant threat in a business context.

It’s always possible for bad actors to train an algorithm with malicious intent, and some experts warn that floods of biased data or misinformation could be deliberately released to manipulate otherwise ethical algorithms. But for the vast majority of companies using AI algorithms, if output is corrupt or unethical, it’s a result of unexpected algorithmic behavior, not the result of an intentionally malevolent action. Algorithms often function as black boxes, and even experts and data scientists are not able to control them entirely.

How can bias be corrected and prevented in AI?

As AI technology is increasingly adopted across companies of all sizes and deployed in new ways across the business, how can these ethical issues be corrected, and even prevented? With bias being such a considerable risk for companies using AI at present, we’ll focus on three main approaches to correcting for bias when training and using algorithms:

1) The first option involves retraining algorithms using a corrective data set. If an algorithm is producing false or biased information – for example, it only returns examples of male figures when prompted with the word “hero” – corrective action would involve retraining the algorithm with a more representative data set. In this example, we would give the algorithm a new data set that more prominently features female heroes from across history, literature, pop culture and more. Of course, this approach requires a human to identify skewed output in the first place, and provide a corrected training data set – which still creates opportunities for bias (see the first sketch after this list).

2) Advances in AI are not only raising new ethical questions – they’re also creating new solutions to ensure ethical AI. A second approach to correcting bias is to use AI control processes and algorithms to counter-audit original generator algorithms. These control processes ensure the output of original algorithms is correct, ethical, and in line with a company’s guidelines. While research is ongoing, this approach requires less human involvement than retraining algorithms requires. The ultimate goal would be to have these control processes fully integrated within AI models from the start to ensure ethical output. The technology isn’t there yet, but it’s certainly an interesting space to watch in AI ethics (see the second sketch below).

3) Another area of ongoing development involves breaking down algorithmic models for greater transparency, permitting potential bias to be corrected along the way. At the moment, most AI algorithms are difficult to control because they function like black boxes: their inner workings are not easily interpreted by humans, making it challenging to change a model’s structure and modify how it works from an ethics perspective. Researchers are currently working on developing milestones within the structure of an algorithm’s model. This would make it possible to clearly observe and understand how the algorithm functions at each milestone, and adjust the model or the weighting to influence the output (see the third sketch below).
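
To make the first approach concrete, here is a minimal, hypothetical sketch of corrective retraining: the original data over-represents one group, a corrective set is added, and the combined data is rebalanced before retraining. The example texts, labels, and the `retrain_model` placeholder are illustrative assumptions, not a production pipeline.

```python
import random

# Hypothetical sketch of corrective retraining: add a corrective data set
# and rebalance the labels before retraining. `retrain_model` stands in
# for whatever training routine a team already uses.
original_data = [
    ("Hercules was a hero of ancient myth.", "male_hero"),
    ("Achilles is remembered as a war hero.", "male_hero"),
    ("Perseus became a hero after slaying Medusa.", "male_hero"),
]

corrective_data = [
    ("Mulan is celebrated as a hero in legend and film.", "female_hero"),
    ("Marie Curie is often described as a hero of science.", "female_hero"),
]

def rebalance(samples):
    """Upsample under-represented labels so every label appears equally often."""
    by_label = {}
    for text, label in samples:
        by_label.setdefault(label, []).append((text, label))
    target = max(len(group) for group in by_label.values())
    balanced = []
    for group in by_label.values():
        balanced.extend(group)
        balanced.extend(random.choices(group, k=target - len(group)))
    return balanced

training_set = rebalance(original_data + corrective_data)
# model = retrain_model(training_set)  # placeholder for the actual retraining step
print(f"{len(training_set)} examples after rebalancing")
```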
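
The second approach, a control process that counter-audits a generator, might look roughly like the following. Both `generate` and the guideline check are deliberately simplistic stand-ins for a real generator model, a dedicated audit model, and a company’s actual guidelines.

```python
# Hypothetical sketch of a control process that audits a generator's output
# before it reaches the user. The blocklist check is a stand-in for a real
# audit model applying company guidelines.
BLOCKLIST = {"slur_example", "harassment_example"}
MAX_ATTEMPTS = 3

def generate(prompt):
    """Stand-in for a call to the actual generator model."""
    return f"Draft response to: {prompt}"

def audit(text):
    """Return True if the text passes the (simplified) guideline check."""
    tokens = {token.strip(".,!?").lower() for token in text.split()}
    return not (tokens & BLOCKLIST)

def controlled_generate(prompt):
    for _ in range(MAX_ATTEMPTS):
        candidate = generate(prompt)
        if audit(candidate):
            return candidate
    return "Unable to produce a response that meets the guidelines."

print(controlled_generate("Summarize this quarter's results."))
```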
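
The third approach, milestones within a model, can be illustrated with a toy pipeline in which each stage’s intermediate output is recorded so it can be inspected and adjusted in isolation. The stages here are invented stand-ins for whatever internal steps a real model would expose.

```python
# Hypothetical sketch of "milestones" inside a model pipeline: each stage's
# intermediate output is recorded in a trace so a reviewer can see where
# skew appears and adjust that stage. The stages are invented stand-ins.
def retrieve_candidates(query):
    return [f"candidate A for {query}", f"candidate B for {query}"]

def score_candidates(candidates):
    return [(candidate, 0.5 + 0.1 * rank) for rank, candidate in enumerate(candidates)]

def select_answer(scored):
    return max(scored, key=lambda pair: pair[1])[0]

def run_with_milestones(query):
    trace = {}
    trace["retrieval"] = retrieve_candidates(query)
    trace["scoring"] = score_candidates(trace["retrieval"])
    trace["selection"] = select_answer(trace["scoring"])
    return trace["selection"], trace

answer, trace = run_with_milestones("famous heroes")
for milestone, output in trace.items():
    print(milestone, "->", output)
```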

As AI evolves and advances, so do potential ethical risks

At the moment, no machine or algorithm has unequivocally managed to pass the Turing Test – the famous test of whether a machine can exhibit intelligence indistinguishable from that of a human – though some (disputed) attempts have occurred in recent years. In the next decade, we may very well witness an intelligent system able to pass this test, which would in theory mean we could no longer tell whether we were communicating with the system or with another human.

GPT-3 may be a key advancement in getting there. One of the largest language models in use and widely considered a breakthrough in AI, it’s capable of generating sentences and can even write article summaries or generate full, creative stories based on a prompt of just a few lines.

With the advances in AI signaled by the arrival of GPT-3 and other Transformer-based NLP models, certain ethical issues also surface. For example, these models’ output often follows the tone or style of the prompt, which can be problematic: even if the algorithm creator tries to remove bias and toxic language, the model is still capable of generating problematic content if fed harmful or malicious prompts.
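
One rough sketch of a common mitigation is to screen prompts before they ever reach the model, precisely because output so often mirrors the prompt’s tone. The keyword list and `call_language_model` below are hypothetical stand-ins for a real moderation step and model API.

```python
# Hypothetical sketch of screening prompts before they reach a language model,
# since generated text often mirrors the tone of the prompt. The keyword list
# and `call_language_model` are placeholders, not a real moderation system.
HARMFUL_MARKERS = {"insult", "threaten", "demean"}

def call_language_model(prompt):
    """Stand-in for the real model API call."""
    return f"Model completion for: {prompt}"

def safe_complete(prompt):
    tokens = {token.strip(".,!?").lower() for token in prompt.split()}
    if tokens & HARMFUL_MARKERS:
        return "This prompt appears to request harmful content and was not sent to the model."
    return call_language_model(prompt)

print(safe_complete("Write a short story about an unlikely hero."))
print(safe_complete("Write lines that insult my coworker."))
```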

Even with today’s version of GPT-3, it can be difficult to distinguish AI from human intelligence, but ethical issues and complexities will become even more significant as algorithms become more sophisticated and their capabilities approach that of a human. 

Transparency is the way forward for ethical AI

Minimizing ethical risk in AI and reducing bias is rooted in transparency. We must make our algorithms more transparent, we must introduce model milestones that make it possible to understand and correct the output at each stage, and we must study the diversity of biases that occur so that we can eradicate them. Of course, it’s not feasible for any one person or one team to do this alone. The entire AI community needs to collaborate to identify and implement standardized frameworks and control systems that do not exist today. We can achieve this through open-sourcing models and training mechanisms. This will allow a wider set of people to determine together how our models, and their behaviors, might need to change to ensure an ethical future for AI.
