This talk is about how adversarial attacks can manipulate our deep learning models and drastically alter the meaning of data. It focuses on large textual datasets and on how Natural Language Processing models can end up being trained on corrupted information. Such attacks compromise deep learning models and change what the data conveys, so protecting our models against them is critical to protecting our data. The talk describes measures we can implement to defend our Natural Language Processing models.
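To make the kind of attack in question concrete, here is a minimal sketch (a hypothetical toy example, not taken from the talk): a small character-level perturbation that keeps the text readable to a human while causing a naive keyword-based sentiment classifier to miss every cue and flip its prediction. The classifier, lexicons, and perturbation strategy are all illustrative assumptions.

```python
# Toy example (illustrative assumption, not from the talk): a character-level
# adversarial perturbation against a naive lexicon-based sentiment classifier.

POSITIVE = {"good", "great", "excellent"}
NEGATIVE = {"bad", "terrible", "awful"}

def classify(text: str) -> str:
    """Count lexicon hits; ties default to 'neutral'."""
    tokens = text.lower().split()
    pos = sum(t in POSITIVE for t in tokens)
    neg = sum(t in NEGATIVE for t in tokens)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def perturb(text: str) -> str:
    """Adversarial edit: swap the two middle characters of each longer word,
    so lexicon lookups miss while a human still reads the intended meaning."""
    out = []
    for word in text.split():
        if len(word) > 3:
            mid = len(word) // 2
            chars = list(word)
            chars[mid - 1], chars[mid] = chars[mid], chars[mid - 1]
            word = "".join(chars)
        out.append(word)
    return " ".join(out)

original = "the service was terrible and awful"
attacked = perturb(original)          # e.g. "terrible" -> "terirble"
print(classify(original))             # negative
print(classify(attacked))             # neutral -- the attack flipped the output
```

Real attacks against neural NLP models use the same principle with more sophisticated search (synonym substitution, embedding-space perturbations), but the effect is identical: small, human-imperceptible edits that change the model's decision.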

Refer to the presentation:

Berlin Buzzwords
08.06.2020 19:30 – 20:00