Chapter 12 Use-Cases for NLP

Author: Matthias Aßenmacher

Since it is all well and good to know the theory and how everything works on the inside, we thought that this booklet could also benefit from showcasing an exemplary use case. Initially, we had planned to include two chapters on different use cases, but during the course the student assigned to one of these chapters dropped out. So we were left with only one (but highly motivated) student willing to pursue this topic.

The following chapter contains a use case on Natural Language Generation (NLG), a task that (oftentimes) relies heavily on an encoder-decoder style architecture. Recently, these types of models have benefited greatly from the use of attention mechanisms (Bahdanau, Cho, and Bengio 2014) and from the Transformer architecture (Vaswani et al. 2017) presented in Chapter 8. Nevertheless, the task of NLG requires more than just good architectures: it is also necessary to handle the output of these models in a suitable way in order to generate meaningful text.
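To make this last point a bit more concrete, the snippet below is a minimal sketch (in plain NumPy, not taken from the following chapter) of two common ways to turn a model's output scores into tokens: greedy decoding and temperature sampling. The vocabulary, logits, and function names are purely illustrative assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the vocabulary dimension.
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

def greedy_token(logits):
    # Greedy decoding: always pick the most probable token.
    return int(np.argmax(logits))

def sampled_token(logits, temperature=0.8, rng=np.random.default_rng(0)):
    # Temperature sampling: rescale the logits, then draw from the
    # resulting distribution (higher temperature = more diverse output).
    probs = softmax(np.asarray(logits) / temperature)
    return int(rng.choice(len(probs), p=probs))

# Toy logits over a hypothetical four-token vocabulary.
vocab = ["the", "cat", "sat", "<eos>"]
logits = [2.0, 1.5, 0.3, -1.0]

print(vocab[greedy_token(logits)])   # deterministic: "the"
print(vocab[sampled_token(logits)])  # stochastic: usually "the", sometimes "cat", ...
```

Which strategy is "suitable" depends on the task: deterministic decoding tends to be preferred for tasks with a narrow set of correct outputs, while sampling-based strategies are often used when diverse, natural-sounding text is desired.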

So without further ado: the following chapter will showcase the challenges of NLG by touching upon the two topics Chatbots and Image Captioning, where the latter represents a hybrid task of Computer Vision and NLP.

References

Bahdanau, Dzmitry, Kyunghyun Cho, and Yoshua Bengio. 2014. “Neural Machine Translation by Jointly Learning to Align and Translate.” arXiv Preprint arXiv:1409.0473.

Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. “Attention Is All You Need.” In Advances in Neural Information Processing Systems, 5998–6008.