Model Development Process 2 - Attention (Bahdanau / Luong)

Attention
https://arxiv.org/pdf/1409.0473.pdf

0. Abstract
The models proposed recently for neural machine translation often belong to a family of encoder–decoders and encode a source sentence into a fixed-length vector from which a decoder generates a translation. In this paper, we conjecture that the use of a fixed-length vector is a bottleneck in improving the performance of this basic encoder…

2023. 2. 28.
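To make the bottleneck concrete: instead of compressing the source into one fixed-length vector, Bahdanau-style (additive) attention recomputes a context vector at every decoding step from all encoder states. Below is a minimal NumPy sketch of that scoring step, assuming the paper's notation (weights W_a, U_a, v_a, encoder states h_i, previous decoder state s_{t-1}); the dimensions and random values are illustrative only, not the paper's actual setup.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def bahdanau_attention(s_prev, H, W_a, U_a, v_a):
    # Additive score: e_i = v_a^T tanh(W_a s_{t-1} + U_a h_i)
    scores = np.array([v_a @ np.tanh(W_a @ s_prev + U_a @ h) for h in H])
    alpha = softmax(scores)   # attention weights over source positions
    context = alpha @ H       # context = weighted sum of encoder states
    return context, alpha

rng = np.random.default_rng(0)
d, T = 4, 5                               # hidden size, source length (illustrative)
H = rng.standard_normal((T, d))           # encoder hidden states h_1..h_T
s_prev = rng.standard_normal(d)           # previous decoder state s_{t-1}
W_a = rng.standard_normal((d, d))
U_a = rng.standard_normal((d, d))
v_a = rng.standard_normal(d)

context, alpha = bahdanau_attention(s_prev, H, W_a, U_a, v_a)
```

The key point is that `context` is recomputed per decoder step, so no single fixed-length vector has to summarize the whole sentence. (Luong attention differs mainly in the score function, e.g. a dot product or general bilinear form instead of the additive one above.)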