When I point out a clear mistake made by the AI I use for research, it generally compliments me for catching it, but it doesn't actually acknowledge its mistake. Almost like it doesn't have any sense of self-awareness.
Yes, remember it is 'artificial'. It can be useful for those who embrace it, but, as others have said, it needs checking, and I don't see how that will change.
We need to remember that the difference between man and machine is that a human being can make a judgement - a machine can only provide a response based on previous or programmed data.
I remember a TV programme that Jeremy Clarkson did travelling America, and in one episode he was introduced to a robot which could 'answer any question'.
Clarkson, being Clarkson, asked his first question - 'What do you think of the Doobie Brothers?'
Result: Clarkson 1 - Robot 0
(and no, the robot couldn't even respond that it hadn't heard of the Doobie Brothers.)