
(2016. 10) Bytenet


Last updated 6 years ago

  • Submitted on 2016. 10

  • Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves and Koray Kavukcuoglu

Simple Summary

The ByteNet is a one-dimensional convolutional neural network that is composed of two parts, one to encode the source sequence and the other to decode the target sequence. The two network parts are connected by stacking the decoder on top of the encoder and preserving the temporal resolution of the sequences. To address the differing lengths of the source and the target, we introduce an efficient mechanism by which the decoder is dynamically unfolded over the representation of the encoder. The ByteNet uses dilation in the convolutional layers to increase its receptive field.

  • Seq2Seq (RNN) models' drawbacks grow more severe as the length of the sequences increases.

  • Machine Translation Desiderata

    1. the running time of the network should be linear in the length of the source and target strings.

    2. the size of the source representation should be linear in the length of the source string, i.e. it should be resolution preserving, and not have constant size.

    3. the path traversed by forward and backward signals in the network (between input and output tokens) should be short.
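
A back-of-the-envelope comparison for desideratum 3 — a sketch under simplifying assumptions (a single dilation stack whose receptive field doubles per layer, ignoring the repeated dilation sets the actual network uses), not a measurement from the paper:

```python
import math

def rnn_path(src_len, i, j):
    """Signal path from source position i to target position j in a plain
    Seq2Seq RNN: the signal must traverse the rest of the encoder and then
    the decoder, so the path grows linearly with sequence length."""
    return (src_len - i) + j

def bytenet_path(src_len):
    """In a dilated conv stack whose receptive field doubles per layer, any
    source/target pair is connected through O(log n) layers (simplified)."""
    return math.ceil(math.log2(max(src_len, 2)))

print(rnn_path(1000, 0, 1000))   # 2000 — linear in length
print(bytenet_path(1000))        # 10 — logarithmic
```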

  • ByteNet

    • Encoder-Decoder Stacking: to maximize the representational bandwidth between the encoder and the decoder

    • Dynamic Unfolding: generates variable-length outputs (maintaining high bandwidth and being resolution-preserving)

    • Input Embedding Tensor

    • Masked One-dimensional Convolutions: The masking ensures that information from future tokens does not affect the prediction of the current token.

    • Dilation: Dilation makes the receptive field grow exponentially in terms of the depth of the networks, as opposed to linearly.

    • Residual Blocks
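
The masked-convolution and dilation bullets above can be sketched together in a few lines of numpy. This is an illustrative toy, not the paper's implementation (which uses learned multi-channel kernels inside residual blocks):

```python
import numpy as np

def causal_dilated_conv1d(x, w, dilation):
    """Causal (masked) dilated 1-D convolution: the output at position t
    depends only on inputs at positions <= t, with `dilation` steps
    between kernel taps, so future tokens cannot leak into a prediction."""
    k = len(w)
    pad = (k - 1) * dilation                     # left-pad: no future leakage
    xp = np.concatenate([np.zeros(pad), x])
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

# With dilations 1, 2, 4, ... the receptive field doubles per layer:
# RF = 1 + (k - 1) * sum(dilations) — exponential in depth, vs. linear
# for undilated stacks.
k = 3
dilations = [1, 2, 4, 8, 16]
rf = 1 + (k - 1) * sum(dilations)
print(rf)  # 63
```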

  • The ByteNet also achieves state-of-the-art performance on character-to-character machine translation on the English-to-German WMT translation task, surpassing comparable neural translation models that are based on recurrent networks with attentional pooling and run in quadratic time.

  • similar in architecture to WaveNet (dilated causal convolutions in the decoder) and PixelCNN (masked convolutions)
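
The dynamic-unfolding mechanism can be sketched as follows. `decode_step` is a hypothetical per-position decoder stub (not from the paper), and the constants a = 1.20, b = 0 are the paper's reported English-to-German setting for the target-length estimate |t̂| = a|s| + b; past the source length the decoder conditions on zero vectors and simply keeps generating until end-of-sequence:

```python
import numpy as np

def dynamic_unfold(source_repr, decode_step, eos_id, a=1.20, b=0.0):
    """Hedged sketch of dynamic unfolding: the decoder is stacked on the
    resolution-preserving encoder representation (one vector per source
    token); beyond the source length it sees zero vectors, so the output
    can be longer than the source. Generation stops at EOS, with a hard
    cap well past the length estimate as a safety net."""
    n = len(source_repr)
    t_hat = int(a * n + b)                       # length estimate |t̂| = a|s| + b
    tokens, prev = [], None
    for t in range(2 * t_hat):
        enc_t = source_repr[t] if t < n else np.zeros_like(source_repr[0])
        tok = decode_step(enc_t, prev)
        tokens.append(tok)
        if tok == eos_id:
            break
        prev = tok
    return tokens
```

Because the encoder output keeps one vector per source token, this stays resolution preserving (desideratum 2) while still handling source/target length mismatch.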
