Computer Science News

UCL Machine Learning graduate to present at leading Deep Learning conference

UCL Machine Learning graduate Michal Daniluk will present his work at the 5th International Conference on Learning Representations (ICLR), one of the leading conferences in Deep Learning, to be held in Toulon, France, in April 2017.

Michal will present a paper, “Frustratingly Short Attention Spans in Neural Language Modeling”, based on his MSc Machine Learning project, supervised by Tim Rocktäschel, Johannes Welbl and Sebastian Riedel within the Machine Reading Group.

Michal is the recipient of the MSc Machine Learning Programme Director’s Award (2015/2016) for Outstanding Project Report (Second Place), and received funding from UCL to travel to ICLR in Toulon, France, to present his paper.

The abstract is below, and the full paper can be found at https://arxiv.org/abs/1702.04521.

"Neural language models predict the next token using a latent representation of the immediate token history. Recently, various methods for augmenting neural language models with an attention mechanism over a differentiable memory have been proposed. For predicting the next token, these models query information from a memory of the recent history which can facilitate learning mid- and long-range dependencies. However, conventional attention mechanisms used in memory-augmented neural language models produce a single output vector per time step. This vector is used both for predicting the next token as well as for the key and value of a differentiable memory of a token history. In this paper, we propose a neural language model with a key-value attention mechanism that outputs separate representations for the key and value of a differentiable memory, as well as for encoding the next-word distribution. This model outperforms existing memory-augmented neural language models on two corpora. Yet, we found that our method mainly utilizes a memory of the five most recent output representations. This led to the unexpected main finding that a much simpler model based only on the concatenation of recent output representations from previous time steps is on par with more sophisticated memory-augmented neural language models."

Posted 22 Feb 17 17:06