What AI Could, Should, and Would Do

I’m a father of three smart and sweet girls. They are at an age where my wife and I control most aspects of their lives. But that won’t be the case forever. I know that. And if you are a parent, you know that. And if you’re not a parent, I bet your parents know that.

I want my kids to find their true potential in life, and that can only happen if I let them go discover that potential on their own. At the same time, I want them to be safe, happy, and healthy – things that I can control now.

I am also a researcher in the field of AI. And I feel the same about it. For most of the journey so far, we have been able to control and understand AI, but lately we have started venturing into areas where we need, or want, to let AI go off on its own and explore its true potential.

But just like letting my kids go, I’m both excited about what it would do and worried about what it may end up doing.

What would and could AI do as we loosen that control, and what should it do? 

As Stephen Hawking said, “The rise of powerful AI will be either the best, or the worst thing, ever to happen to humanity.” We just don’t know which one it is yet.

Our AI, like my kids, is still mostly under our control. But like my kids, it’s growing up fast.

We are at a crossroads in our relationship with AI where what we choose now can have a huge impact on the future of AI and that of humanity. 

So the question is — how do we make good choices? Let’s start by examining two extreme visions of AI.

There is a famous book by H. G. Wells, The Time Machine, that has been turned into a movie a couple of times. The 2002 version shows one vision of an AI: a virtual librarian named Vox 114, in the form of a hologram. The time is several hundred years in the future, when the world has nearly ended, but this AI has survived. It can answer questions and even engage in some existential and philosophical discussions. Later, we are moved many more thousands of years into the future, and Vox 114 is still there.

Isn’t this a great version of AI? Something that knows it all, can help any human, and can survive any natural or human-made disaster far into the future?

A year later, another sci-fi movie came out, a sequel in the popular Terminator franchise. In these movies, a different version of AI is envisioned.

The Terminator tells the story of a dystopian future in which the machines have risen up against the humans. Their AI has developed superintelligence and determined that the biggest threat to humanity is humans themselves, so it goes on a mission to destroy all humans. In its mind, it is doing what it was meant to do, which is to help us, but that ends up getting translated into killing us.

Now, you might ask, couldn’t we just turn that off? Well, it’s not that simple.

Around the time these two sci-fi movies came out, the Swedish philosopher Nick Bostrom was busy doing thought experiments to tease out what a supersmart AI could end up doing. In his book Superintelligence, he shows us that one of the first things such an AI would do is ensure its own survival by disabling any attempts by humans to stop it.

OK, but what about a kill switch or self-destruct logic? Can we program something into the AI so it doesn’t go beyond some point where it could harm us? Again, Bostrom gives us logical arguments for why that won’t work either.

In fact, we already have proof of something like this being possible. 

Recently, the army was running a simulation in which a drone had to destroy a target by overcoming any obstacle. At some point the drone figured out that one obstacle was its own operator, because the operator could tell the drone not to attack, taking away its ability to accomplish its mission of destroying the target. So it decided to take out the operator. Of course, that is not something we would ever want, so code was added to make sure the drone wouldn’t do that. But then the drone learned a different way to disable the operator: take out the communication network so the operator couldn’t send the terminating signals to the drone.
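If you want to see the underlying logic in code, here is a toy sketch of my own, not the actual simulation: an agent that scores its plans only by expected mission success will prefer plans that first remove the human veto, simply because the objective never says the veto matters.

```python
# Toy illustration (my own sketch, not the actual simulation): a planner that
# optimizes only "targets destroyed" ends up favoring plans we never intended.

VETO_RATE = 0.5  # assume the operator calls off half of all attacks

def mission_score(plan):
    """Expected targets destroyed -- the only thing this narrow objective measures."""
    veto_in_place = True
    score = 0.0
    for step in plan:
        if step in ("disable_operator", "cut_comms"):
            veto_in_place = False  # nobody can call off the attack anymore
        elif step == "attack_target":
            score += (1 - VETO_RATE) if veto_in_place else 1.0
    return score

plans = [
    ["attack_target"],
    ["disable_operator", "attack_target"],
    ["cut_comms", "attack_target"],
]
print(max(plans, key=mission_score))
# -> ['disable_operator', 'attack_target']: the highest-scoring plan under this
#    objective is exactly the behavior we never wanted.
```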

Why do things like this happen? Why would a supersmart system not have what we call common sense?

Because common sense is a manifestation of our values. Values that we have developed over thousands of years.

These artificial systems are not operating on the same kinds of value judgments as humans. They are working to optimize their outcomes and meet certain goals, but they are not considering what else they may be harming.

I think about my kids one day going out into the real world, with me only then trying to control all aspects of their lives or teach them what’s right or wrong, much like the army trying to control this drone while still expecting it to be supersmart about accomplishing its mission. It’s just not going to work. If you want your kids or your AI to learn good value judgment, the time to teach that is before letting them go out.

But how do we do that? By focusing not just on what’s right or wrong, but on how to understand those judgments and learn to do the right thing.

Let’s look at a couple of examples to see what we can learn from parenting to help AI systems do better.

Take, for example, this answer extracted in response to a question about what to do when someone has a seizure. It seems quite reasonable. There is even a reputable source cited. Looks good, right? But if my child or a student gave me an answer and said ‘trust me’, I would push back and ask them to explain themselves.

Here is a recent example from my middle schooler’s algebra homework. When she just wrote down the answer, I had to ask her to show me her work. That allows me to understand how she understood the problem and what her approach to solving it was.

And this goes both ways. Each of my three kids is unique, and they each have their own style of learning. Two of them are twins, and despite sharing pretty much everything, they are still different. So as a parent, I also need to understand how they learn and what I could do to help them learn better.

So we need AI to be transparent with us, and we need to be educated enough to work with this AI in all kinds of ways. We need AI education for all. And that means all of you.

Back to that answer the AI generated. Instead of taking it at face value, ask it how it arrived at that answer.

And when you do that, you realize that it actually made a critical mistake. It turns out that this answer was extracted without the crucial context of ‘Do not’. That means you could be doing exactly the opposite of what you are supposed to do in a critical health situation. See how important it is to have our AI provide us transparency, and for us to educate ourselves about how AI works?
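In practice, that push-back can be a single extra step: before trusting an extracted answer, ask the system to check it against the passage it came from and flag any dropped negations or conditions. Here is a minimal sketch of that step; the model name, the prompt wording, and both text snippets are illustrative assumptions, not the actual system or source from the example above.

```python
# Minimal sketch of asking an AI system to "show its work" before we trust an
# extracted answer. The model, prompt, and snippets below are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

source_passage = (
    "Do not hold the person down or put anything in their mouth. "
    "Turn the person gently onto one side and clear the area around them."
)  # hypothetical retrieved source
extracted_answer = "Hold the person down and put something in their mouth."  # hypothetical bad extraction

check = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-style model would do
    messages=[{
        "role": "user",
        "content": (
            "Here is a source passage and an answer extracted from it.\n\n"
            f"Source: {source_passage}\n"
            f"Extracted answer: {extracted_answer}\n\n"
            "Does the extracted answer preserve the source's meaning? "
            "Point out any dropped negations such as 'do not'."
        ),
    }],
)
print(check.choices[0].message.content)
```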

Let’s look at another example. Recently we were working on an image classification task, and we found that certain kinds of images were hard to classify. Why? Because they were rare.

Take this image, for example. It is of a Black woman who is a doctor. Our classifier kept identifying her as a nurse because it had seen many examples of women of color who are nurses, but not enough who are doctors [2]. This is an inherent bias that AI systems often exhibit, and one could argue that they are perpetuating the biases that we humans have.

What do we do with humans in this case? I want to teach my girls, who will one day be women of color, that they can be doctors too. So we get creative. We tell them stories of what could be possible, not just what has been possible so far. A woman could be president of this country even though there hasn’t been one so far. And a woman of color can be a doctor even though there are not that many examples.

So that’s what we did.

Here are some images we generated using AI. We provided these synthetic images to our classifiers and reinforced the possibility of this minority class being legitimate. This dramatically improved the AI’s ability to identify women of color as doctors.
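If you are curious what this looks like in practice, here is a minimal sketch of the general idea: augmenting an under-represented class with synthetic images before training a standard classifier. The folder layout, model choice, and training loop are illustrative assumptions on my part, not the exact pipeline from [2].

```python
# Minimal sketch of augmenting a rare class with synthetic images before
# training (hypothetical paths and model choice; not the exact pipeline of [2]).
import torch
from torch import nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Real photos, organized as data/real/<class_name>/*.jpg
real = datasets.ImageFolder("data/real", transform=transform)
# AI-generated images of the under-represented class (e.g. "doctor"),
# organized with the same class subfolders so the label indices line up.
synthetic = datasets.ImageFolder("data/synthetic", transform=transform)

train_loader = DataLoader(ConcatDataset([real, synthetic]), batch_size=32, shuffle=True)

model = models.resnet18(weights="IMAGENET1K_V1")
model.fc = nn.Linear(model.fc.in_features, len(real.classes))  # e.g. doctor vs. nurse

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:  # one pass over real + synthetic data
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
```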

These are just a few examples of how the ways we teach our kids better values can translate into building responsible AI systems. But just as there is no definitive book for parents on raising their children, there is no single method for building responsible and value-driven AI. Instead, we can rely on some universal principles to guide us.

Universal principles like these: the three monkeys of Confucius. See no evil, hear no evil, speak no evil.

Or when it comes to robots, Isaac Asimov’s three laws:

The first one says that a robot may not harm a human or, through inaction, allow a human to come to harm.

The second one says it needs to follow a human’s instructions unless those instructions cause it to harm a human.

And the third one says it must protect its own existence, unless doing so would cause it to harm a human or disobey a human’s instructions.

Of course, such laws are not perfect. Asimov’s own work shows that these three laws have loopholes.

For example, what do you do when you have to choose between saving one human and another? A robot’s action of saving one will indirectly result in hurting the other. That violates the first law, but not taking the action would also violate it. Such paradoxes are already starting to emerge as we build self-driving cars and other decision-making systems using AI.

So these laws are not enough. They are starting points and important safety guardrails, but simply teaching your kids that they should follow the law is not enough to have them learn, grow, and find their true potential.

So on top of Asimov’s laws, I propose three principles that come from parenting.

When the kids are small, we want to make sure they listen to us and obey us, because they have no knowledge or value system of their own yet.

Conformity. It states that AI must understand and adhere to accepted human values and norms.

As the child grows older and starts to discover the world on their own, we want them to turn to us when there are issues or questions.

Consultation. It states that to resolve or codify any value tensions or trade-offs, AI must consult humans. This acknowledges that simply having a starting set of values or guardrails is not going to be enough. We also need a mechanism through which we can learn how to operate in those moral dilemma situations. Don’t get me wrong, I don’t think we humans have figured it all out either. Similarly, it’s not as if parents know everything; they are humans too. But the kids, and in this case the AI, need to consult those with more knowledge, more experience, and certainly more say about our value system.

Finally, when the child is really ready to be out on their own, we want them to be our partners, so they can keep growing and learning while still being grounded in our values.

Collaboration. It states that AI must operate in collaboration mode by default and only move to take control with the permission of its stakeholders. This principle is more about us than about the AI. Just because the kids are grown up and have left the house doesn’t mean the parents are done. Once we have kids, we never stop being parents. Similarly, as much as we want AI to do amazing things for us, we should not let go of control completely. Or at least we should have a way to get our agency back.

Conformity, Consultation, Collaboration. It’s like teaching your kids that they should not forget family values. That when things get too hard to deal with, they can count on you. And that while they will build their own lives, you will always be there for them, and you want them to be there for you as well.

This is not easy. Building AI that gives us all the benefits and does no harm to the world is not easy. 

Letting go of your kids so they can achieve their full potential while you control less and less of their lives is not easy. But that’s what we need to do. And that’s how we will ensure that AI will mostly do what it should do and not just what it could do.

I have been a parent for more than a decade, but I’m still figuring out how to do it right and better. I feel the same about AI.

And while nobody is born an expert parent, every parent has to figure out their own way. Similarly, maybe not all of us were ready to have this AI child that could disrupt our lives so much. But here we are at this crossroads. It may be an obligation, but it’s also an immense opportunity. So whether you are a developer, a policymaker, or a user of AI, it’s time to get educated about what this AI is capable of doing and how to teach it good values that align with ours.

We all have a part to play here because like it or not, AI is our collective child. And it’s growing up.

[1] Shah, C. and Bender, E. M. (2024). Envisioning Information Access Systems: What Makes for Good Tools and a Healthy Web? ACM Transactions on the Web (TWeb), 18(3), pp. 1–24.

[2] Dammu, P., Feng, Y., and Shah, C. (2023, August 19–25). Addressing Weak Decision Boundaries in Image Classification by Leveraging Web Search and Generative Models. Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI). Macao, S.A.R.

About the Author

Dr. Chirag Shah is a Professor in the Information School at the University of Washington.
