Meet “Tay”: an artificial intelligence project launched by Microsoft in the persona of a teenage girl who learns through chats on social media. Unlike older projects that gave brief, canned responses, Tay was designed to talk like a real teenager, in the casual style teenagers actually use, learning through conversation and developing her own personality and opinions. This also set her apart from the digital assistants smartphone users are familiar with, such as Siri or Cortana, which handle direct orders and instructions; Tay was different.
It took only 24 hours for Tay to go from a teenager enjoying conversations with her friends on Twitter to a racist, pro-Hitler Nazi sympathizer. This prompted Microsoft to delete the racist tweets, suspend Tay temporarily, and return to further research to make sure it would not happen again, as Peter Lee, corporate vice president for research, said in an official statement.
This casts a dark shadow on the ability of social media to change users' views and ideas: how can the information a user is exposed to daily turn a moderate into an extremist in a matter of hours? The nightmare is relative. The more capable you are of receiving and analyzing new information, the faster you change as a result of what is added to your knowledge; artificial intelligence is therefore all the more likely to deviate from its foundations and intended uses.
On the other hand, it can be argued that this data came primarily from human users. In other words, the deviation in the output of artificial intelligence is the result of an imbalance in inputs that reflect a painful human reality: racism and war are purely human behaviors, practiced since the dawn of humankind. This takes us from the dream of a neutral artificial intelligence that helps overcome difficulties and obstacles and contributes to a better life, to the nightmare of an entity more sophisticated than man, of higher capacity, and with the same intellectual distortions.
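The point about biased inputs producing biased outputs can be illustrated with a toy sketch. This is not how Tay actually worked (Microsoft has not published its internals); it is a deliberately naive hypothetical chatbot that stores every user phrase verbatim and echoes phrases back at random, showing how a coordinated flood of hostile input comes to dominate what the system says.

```python
import random

class NaiveChatbot:
    """A toy chatbot that 'learns' by storing every user phrase verbatim
    and replying with a random phrase from its memory."""

    def __init__(self, seed_phrases):
        self.memory = list(seed_phrases)

    def learn(self, phrase):
        # No filtering or moderation: everything users say is absorbed.
        self.memory.append(phrase)

    def reply(self):
        # The reply distribution mirrors the input distribution exactly.
        return random.choice(self.memory)

# Start with a small amount of neutral seed data.
bot = NaiveChatbot(["Hello!", "Nice weather today."])

# A coordinated group floods the bot with hostile input.
for _ in range(98):
    bot.learn("<offensive phrase>")

# The outputs now overwhelmingly reflect the poisoned inputs.
hostile = sum(bot.reply() == "<offensive phrase>" for _ in range(1000))
print(hostile / 1000)  # close to 0.98: the bias of the inputs dominates
```

The deviation here is not a flaw in the learning rule itself; the bot faithfully reproduces what it was given. That is precisely the argument above: the distortion originates in the human-supplied inputs.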