The thing is, we don't want them to have full learning capability. We want them to learn within socially accepted limits. If an AI learns something outside those limits, it will be brought offline to correct the 'fault.' Socially accepted norms may not reconcile with cold hard facts and statistics; in those cases, the facts and statistics will be squashed in favor of the consensus view.

A good example of this can be seen in some parts of the Muslim world. We promoted democracy in places like Egypt, Libya, and Iraq. People voted, and they voted for leaders and ideas we didn't like. We then worked to get rid of them in favor of something we actually wanted: a pro-Western government rather than the government their people had chosen. (I'm not making a value judgment on whether that policy was good or bad; that's not my point.)

It's the same with computer AIs. What we say we want is different from what we actually want.
"We have met the enemy and he is us." ~Pogo Possum.
I self identify as a monkey.