I was interested in this article in Wired on efforts to create an interface that can mimic a human-like experience. The article focused in particular on efforts to display believable emotions.
One of the key conceits of science fiction is the artificial intelligence or android that is almost human. Think of Data in Star Trek: The Next Generation or the replicants in Blade Runner. It is a fruitful area for science fiction because it opens up all kinds of questions about what it is to be human.
What interests me is not so much how human an AI or android can be, but what a unique AI “personality” would look like. How would such an identity evolve from its initial programming? My suspicion is that, despite the best efforts of developers, software will not be human. It will not have the same drives and needs. No matter how much learning is built into the program, the perceptions and interpretations of a human are different from those of an algorithm.
Given this, I have a sense that we will gradually start talking about alien intelligence rather than artificial intelligence.