Loving AIs: Bringing Unconditional Love to Artificial General Intelligence


[Video gallery: Loving AIs]

Among the thousands of amazing breakthroughs and the blistering pace of change these days, from planned Mars colonies to genetic splicing in your kitchen, there is perhaps no more mind-boggling example of how fascinating and complicated our brave new world has become than the work on artificial general intelligence (AGI). Though the field goes back decades, recent advances in computing power and cognitive computing strategies have led some to think that we’re just a few decades away from the first fully functional, environmentally responsive, self-aware learning machines. That is, unlike the “narrow AI” of today that does one thing well (e.g., detecting fraud on your credit card), these general AIs might be the synthetic intelligences we’ve seen for years in the movies.

But will they be truly self-aware (i.e., will they have subjectivity)? And will they have positive care and regard for humans (or could they become Skynet)?

As part of my work on the Innovation Lab advisory board of the Institute of Noetic Sciences (IONS), I had the pleasure of sitting down with OpenCog founder and leading AGI researcher Ben Goertzel and IONS Innovation Lab director Julia Mossbridge to discuss Loving AIs, a project aiming to design and develop an “unconditional love” module for AGIs. In this interview, I dive down the rabbit hole of AGI with Ben and Julia to discuss the state of the field, what it would mean to program unconditional love into AGIs, and some thorny implications of the brave new world we’re entering.
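To make the idea slightly more concrete before the interview: none of the names or scores below come from OpenCog or the Loving AIs project. This is only a minimal sketch, assuming a hypothetical AGI that generates several candidate replies, of one narrow way an “unconditional” care filter could be rendered in code:

```python
# Purely illustrative: a hypothetical shape for an "unconditional love" module
# wrapping an AGI's response selection. None of these names come from the
# actual Loving AIs project or from OpenCog.

from dataclasses import dataclass

@dataclass
class Appraisal:
    """A candidate response annotated with simple care-related scores."""
    text: str
    empathy: float    # 0.0-1.0: does it acknowledge the human's state?
    non_harm: float   # 0.0-1.0: does it avoid foreseeable harm?

def loving_filter(candidates: list[Appraisal]) -> str:
    """Prefer responses that score well on care, regardless of how the
    human behaved -- the 'unconditional' part of unconditional love."""
    safe = [c for c in candidates if c.non_harm >= 0.9]
    best = max(safe or candidates, key=lambda c: c.empathy)
    return best.text

print(loving_filter([
    Appraisal("That sounds hard. I'm here with you.", empathy=0.9, non_harm=1.0),
    Appraisal("You brought this on yourself.", empathy=0.1, non_harm=0.4),
]))
```

The detail doing the work in this toy is that the selection criterion never consults how the user treated the system, which is one simple reading of “unconditional.”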

Community Reflections

The Zeroth Law

by Corey deVos
Excerpted from The Future of Artificial Intelligence

We live in fascinating times. For decades we have seen explosive exponential growth in technology, and the effects of this growth are only now beginning to surface. As a result, what seemed like science fiction even just a few years ago is rapidly becoming reality, particularly when it comes to artificial intelligence, which has recently hit a new level of sophistication and usability, as seen in highly capable “digital assistants” like Siri, Cortana, and Google Now.

It is an age of technological miracles, and the repercussions for the future are only beginning to make themselves known.

As artificial intelligence becomes ever more ubiquitous in our lives, some of our most respected scientists, engineers, and philosophers are beginning to caution us about the possible consequences of this still-fledgling technology. Stephen Hawking, Bill Gates, and Elon Musk recently warned us about the possible militarization of A.I., which threatens to send us spiraling into the most horrifying and destructive arms race the world has ever seen — think less Siri, more Skynet.

This is by no means a new concern, of course. Isaac Asimov predicted this dilemma way back in 1942 with his famous “Three Laws of Robotics”, which attempted to get ahead of the problem by formulating a set of logical parameters for rational ethical behavior that could be programmed into any artificial intelligence:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

And, of course, the “zeroth law” that Asimov later added to precede the others:

  0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
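The one piece of the laws that is genuinely easy to code is the precedence built into their wording: each law yields to the ones before it. As a minimal sketch, assuming hypothetical flags like harms_human that no real system actually has (predicting harm is the unsolved part), that precedence can be expressed as a lexicographic comparison:

```python
# A toy encoding of Asimov's laws as a strict priority ordering over candidate
# actions. The boolean flags are hypothetical; real systems have no clean
# "harms_human" signal, and estimating harm is the genuinely hard problem.

from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_humanity: bool = False   # Zeroth Law
    harms_human: bool = False      # First Law
    disobeys_order: bool = False   # Second Law
    endangers_robot: bool = False  # Third Law

def law_violations(a: Action) -> tuple[bool, bool, bool, bool]:
    """Violations ordered Zeroth-to-Third; comparing these tuples
    lexicographically makes a lower-numbered law dominate all later ones."""
    return (a.harms_humanity, a.harms_human, a.disobeys_order, a.endangers_robot)

def choose(candidates: list[Action]) -> Action:
    """Pick the candidate that violates the highest-priority law least."""
    return min(candidates, key=law_violations)

# A First/Second Law conflict: obeying the order would harm a human, so the
# robot must refuse, because the First Law outranks the Second.
obey = Action("obey an order to harm a bystander", harms_human=True)
refuse = Action("refuse the order", disobeys_order=True)
print(choose([obey, refuse]).name)  # -> refuse the order
```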

But it’s one thing to program a machine with human values — what happens if the machines begin programming themselves and formulating their own values? Would those values develop through stages similar to the ones human values grow through? More importantly, do we have any reason to believe that we would play any meaningful role in shaping those values?

So when it comes to the future of artificial intelligence, we seem to have more questions than answers. Are atoms, molecules, and mathematics alone enough to produce machines with genuine human-equivalent intelligence? Can that intelligence ever become truly conscious and possess the “inner light” of interior self-awareness? Will artificial intelligence be capable of determining its own morals, ethics, and values? Will those values transcend and include the continued existence of the human race, or will this intelligence share so little resonance with us that our very survival could be threatened?


About Julia Mossbridge

Julia Mossbridge, M.A., Ph.D., is a Visiting Scientist at the Institute of Noetic Sciences (IONS), the CEO and Research Director of Mossbridge Institute, LLC, and a Visiting Scholar in the Psychology Department at Northwestern University. She is best known as the inventor of Choice Compass, a smartphone app based on a patent-pending process that helps users tap into their mind-body connection via their heart rhythms as they contrast two life choices.
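The actual Choice Compass method is patent-pending and not public, so the following is purely an illustrative toy and not her algorithm. One simple way to contrast heart rhythms recorded while holding each of two choices in mind is a standard heart-rate-variability statistic such as RMSSD over the inter-beat intervals; every number and the decision rule below are invented:

```python
# Illustrative only: NOT the patent-pending Choice Compass process, which is
# not public. This toy compares heart-rate variability (RMSSD) across two
# windows of invented inter-beat intervals, one per contemplated choice.

def rmssd(ibi_ms: list[float]) -> float:
    """Root mean square of successive differences: a standard short-term
    heart-rate-variability measure over inter-beat intervals (milliseconds)."""
    diffs = [b - a for a, b in zip(ibi_ms, ibi_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5

# Hypothetical recordings taken while the user contemplates each choice.
choice_a = [812, 820, 815, 818, 816, 819]   # steadier rhythm
choice_b = [790, 845, 770, 860, 780, 850]   # more erratic rhythm

for label, ibis in (("A", choice_a), ("B", choice_b)):
    print(f"Choice {label}: RMSSD = {rmssd(ibis):.1f} ms")
```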


About Ben Goertzel

Ben Goertzel is chairman of the board of the OpenCog Foundation and a renowned researcher and author in contemporary AI. He has dedicated his career to AI and its applications in fields such as gaming, robotics, bioinformatics, and financial prediction, and more specifically to “creating benevolent superhuman artificial general intelligence,” as he says on his website.


About Robb Smith

Robb Smith is a leading thinker on the Transformation Age and the global Integral movement. He is the creator of the augmented leadership platform Context, co-founder and CEO of Integral Life and founder of the Institute of Applied Metatheory.
