Bias in NLP 101 is a talk about gender bias in text, i.e., how men and women are portrayed differently in writing. It works with a dataset of Indian movie reviews developed by IBM, probing a variety of situations to check whether a model's predictions reflect the stereotypical notions society holds about men and women. Since movies mirror reality so closely, their reviews, despite being short, tend to absorb the same biases. To de-bias the system, a model called DeCogTeller is used, which recovers the correct gender form. This is a Talk Session on Machine Learning and Natural Language Processing, and it also covers the challenges companies such as Google, Amazon, and Facebook have faced while recruiting women.
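As a rough illustration of the kind of bias check the talk describes, the sketch below probes a classifier with gender-swapped counterfactuals: if a review's predicted label flips when only its gendered words are swapped, the model is treating gender as a signal. All names here (the swap table, the toy classifier) are illustrative assumptions, not the talk's actual DeCogTeller code.

```python
# Minimal sketch of a gender-swap counterfactual bias probe.
# Hypothetical helper names; not taken from the talk's materials.

GENDER_SWAPS = {"he": "she", "she": "he", "him": "her", "her": "him",
                "actor": "actress", "actress": "actor"}

def swap_gender(sentence: str) -> str:
    """Replace each gendered token with its counterpart."""
    return " ".join(GENDER_SWAPS.get(tok, tok) for tok in sentence.lower().split())

def is_biased(predict, sentence: str) -> bool:
    """Flag a review as revealing bias if the model's prediction changes
    when only the gender terms are swapped."""
    return predict(sentence) != predict(swap_gender(sentence))

# Toy classifier that (wrongly) keys on a gendered word, standing in for
# a sentiment model trained on stereotyped review data.
def toy_predict(review: str) -> str:
    return "negative" if "she" in review.lower().split() else "positive"

print(is_biased(toy_predict, "she delivered a powerful performance"))  # True
print(is_biased(toy_predict, "the film was gripping throughout"))      # False
```

A real probe would run this over the whole review dataset and report the fraction of examples whose label flips under the swap.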
Link to the presentation: http://bit.ly/ppt_buzzwords