I think a human-like AI would be a good idea, but there’s a bit of a Catch-22 about it.
The big problem is that, as AI becomes increasingly sophisticated, how do we define “human”?
To put it another way, could an AI ever be fully human?
And what would the goal of such an AI be?
There’s a lot of confusion about this.
I’d like to address some of the big issues in this post.
I’ve written a lot about the ethics of AI, particularly as it relates to privacy.
But there are two main debates in AI.
One is whether we want AI to become a “superintelligence”, as it’s been called, or to remain something more modest: a tool rather than a mind.
The other is whether the AI we have today is human-like enough to be worth emulating.
I suspect the debate is less about whether “human-like” AI is achievable and more about which goal we should pursue.
There are different views on whether we should pursue “human” AI or “superhuman” AI, and we’ll come back to this.
What is human-likeness?
It’s a good question.
Humans have a characteristic way of thinking and acting that helps us understand and deal with a wide range of situations.
A good analogy is our sense of direction.
It’s what lets us know where we are and what to do next, and the same capacity for orientation underlies activities as varied as dancing and writing poetry.
The question is: what is this sense of orientation?
We’re not quite sure, but we can get some clues from the brain.
The hippocampus, a part of the brain central to memory and spatial navigation, is particularly important for this sense.
The regions that process language and thought also play a role, as do the areas of the cortex that process images, sounds and emotions.
These areas are particularly active when we’re motivated by something important.
One way to probe the nature of human nature is with MRI scans.
If we scan the brain of a healthy person and compare the images with those of people whose brains are impaired, we can get a better sense of what is distinctively human about it.
So this is not just a matter of looking at a healthy brain in isolation; it’s the comparison that is informative.
But what about the rest of the human brain?
It is very likely that some of our brain circuitry is human-specific, so it makes sense to ask whether we really have a distinctively human kind of intelligence.
And in fact, there’s some evidence to suggest that we do.
One study showed that people with a healthy amygdala, the brain’s “emotional centre”, had better verbal memory and higher verbal intelligence than people with an altered version of this region.
It’s interesting to think about what this might mean for our own AI, for example.
One of the main challenges in AI is that it’s hard to get the right kind of information about the world, so AI systems have to be adapted to work in the real world.
That means building more AI systems and better tools for them.
And once we have some of those tools, we can learn a lot from them, as well as from other human-made AI.
What are the ethical questions about AI?
If AI is a good goal for us, then what are the moral implications?
One of my colleagues and I have a paper out called The Moral Consequences of AI.
The goal is to get people to ask questions like “Why does this person behave the way they do?” or “Why is this person able to solve problems?”
It’s an interesting discussion.
It seems to me that if the goal is for AI to be useful, then it’s not enough just to say “Yes, this AI can help people”; we also have to answer “How is this AI helping people?”
It should be clear that we should not assume that the AI is good, or that we simply want a better AI than we already have.
The real question is: how do you ensure that this AI doesn’t turn into something we don’t want it to be?
I think we have a pretty good idea of what the consequences of AI might be.
But we’re still far from knowing the best way to go about that.
For instance, we are unlikely ever to have enough evidence to conclude that an AI will always be good, so it’s worth having some discussion about the ethical consequences when it isn’t.
What if the AI can be used to hurt people?
There are a number of different theories about how AI might make us more or less morally responsible.
Some of these theories are controversial, some are uncontroversial, and some have implications that are not well understood.
But most of