
AI And Equality: Let's Get It Right This Time

Forbes Technology Council
POST WRITTEN BY
Sandhya Venkatachalam

Killer robots: bad. Human control: good. 

As we witness an acceleration of innovation in artificial intelligence (AI), the existential debate over whether humans will lose control of increasingly super-intelligent machines has resurfaced in full force. But this is not something that will just happen. Rather, it will be shaped by the ethical and practical decisions we make every day. The key question is, if we are making machines that think like humans, what types of humans do we want them to be?

For example, it was troubling that Microsoft’s AI bot Tay started to tweet racist and inflammatory statements in less than 24 hours or that Google’s deep learning model confused apes and dark-skinned people in photos. Microsoft admitted to a critical oversight in the release of a “chatbot,” and Google apologized, promising to take action to avoid similar mistakes in the future.

But should we even be leaving it to individual companies to define the values we infuse in AI? Letting pure market forces define ethics didn’t work so well the last time there was a technology breakthrough of this order of magnitude. Two decades after the internet revolution, which was heralded as the Great Democratizer, over half of the world is still offline. It has been primarily those already well-off and well-educated who have taken advantage of the internet to achieve greater success. And we are still working on data privacy and usage issues.

Furthermore, I would argue that the stakes are even higher this time. We are creating a world where machines will understand and anticipate what we want to do -- and in the future, they'll do it for us. And the speed with which AI is permeating every industry and every organization simultaneously is something we simply did not experience in the internet era. Those without access to AI from the very beginning will likely have no chance to catch up later, especially while income inequality continues to widen. Finally, while sometimes overdramatized, there is absolutely a basis for the concerns people have over potentially losing control of AI, especially since, right now, many of the brilliant people who are creating cutting-edge AI do not yet fully understand how it works.

So, let’s get it right this time. But what can we do?

If history is a guide, the two biggest things we got wrong in the internet era were lack of inclusion and lack of impartiality (i.e., as a global society, we didn’t get enough people access quickly enough and did not ensure that internet-based technologies and solutions were unbiased). How do we avoid this with AI?

If we are to address the first issue of inclusion, we need to understand what the equivalent of "getting everyone online" is in the context of AI. When technology revolutions occur, usually something that was previously very expensive gets much cheaper and therefore becomes ubiquitous (e.g., Moore's law). If "connecting and communicating" became dirt cheap with the internet, I would argue that "analyzing and predicting" will become dirt cheap with AI. Who will benefit from this? People and organizations that have unique access to data. It is no accident that the top five global AI vendors -- Google, Facebook, Amazon, Microsoft and Baidu -- all have massive proprietary data sets. They all open-source the AI algorithms and models they have developed but guard the data closely.

How can we improve inclusion while still promoting innovation?

  1. Encourage individuals and organizations to maintain control and security of their own data.
  2. Create the concepts of public data sets vs. private data sets (similar to private property vs. state-owned land); maintain and share public data.
  3. Support open source (e.g., Google’s TensorFlow) at every layer of the AI technology stack.
  4. Educate the general public, not just the Silicon Valley elite, on the reality of AI; encourage free technical courses.

The second, and much more subtle, issue deals with bias. If predictions are the focus of AI and data is the key to those predictions, what happens when data sets are incomplete or skewed? For example, many AI machines are currently biased to discover new drugs that work better for white men because most accumulated health data comes from white men.

Or what about when the data itself is a reflection of biased human decisions? Google’s online ad system is guilty of showing ads for high-income jobs to men much more often than to women, and at one point, a Google image search for “CEO” produced results that featured just 11% women, even though 27% of United States chief executives are female.

Or what about when entire groups are accidentally excluded from training and testing AI models, like Nikon’s confusion about Asian faces and HP’s skin tone issues in their face recognition software? What if an entire group is excluded from something like training autonomous vehicles? Would the cars respond differently to those groups?

And finally, what happens when the values that are programmed into AI machines represent biases that are different from the accepted norms of society or from societies with norms different than ours?

These issues clearly reach beyond technology and aim at the heart of who we are as a society. But they underscore the following:

  1. Diverse groups and viewpoints must be represented in all stages of AI development.
  2. Data will almost always be skewed, so methods must be developed to detect and correct for bias.
  3. Standardization of the human assumptions or rulesets implicit in any AI model is a good start to developing best practices.
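The second point above -- detecting skew before it corrupts predictions -- can be made concrete with a few lines of code. The sketch below is purely illustrative (the function name, the 20% threshold and the group labels are hypothetical, not any vendor's tool): it measures each group's share of a labeled data set and flags groups that fall below a chosen representation floor.

```python
# Illustrative sketch of a representation-skew check.
# All names and numbers here are hypothetical assumptions.
from collections import Counter


def representation_report(groups, min_share=0.2):
    """Return each group's share of the data and flag under-representation.

    groups: iterable of group labels, one per record in the data set.
    min_share: floor below which a group is flagged (assumed threshold).
    """
    counts = Counter(groups)
    total = len(groups)
    return {
        g: {
            "share": round(n / total, 3),
            "under_represented": n / total < min_share,
        }
        for g, n in counts.items()
    }


# Hypothetical demographic labels attached to training records.
sample = ["group_a"] * 70 + ["group_b"] * 20 + ["group_c"] * 10
report = representation_report(sample)
for group, stats in sorted(report.items()):
    print(group, stats)
```

A check like this is only a starting point: it catches missing or thin groups, but not the subtler case where a group is well represented yet the labels themselves encode biased human decisions.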

Obviously, these issues have no easy answers. But we do have a chance to get it right this time. If we don’t, AI will go from being potentially the greatest problem solver of all time to the greatest facilitator of inequality in history. And it will be humans, not killer robots, who will have inflicted the worst casualties.