Sequential Recommendation Using Self-Attention and Knowledge Graphs
Ongoing research on recommender systems. Evaluated on the Steam reviews, Amazon Beauty/Games, and MovieLens datasets using Hit@k, NDCG@k, and AUC. Written in Python with PyTorch.
  • Implemented a translation-based recommender in NumPy and compared it against a Bayesian Personalized Ranking (BPR) baseline.
  • Implemented a self-attentive sequential recommender (SASRec) using ideas from the Transformer, achieving state-of-the-art performance.
  • Introduced a latent variable into SASRec to capture heterogeneous relationships among items.
  • Generated knowledge graph (KG) embeddings with KB4Rec; implemented a graph convolutional network to learn structural proximity among KG entities for sequential recommendation.
  • Adapting the Transformer as a graph convolutional network to learn structural proximity and long-term dependencies within a single model (in progress).
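The BPR baseline above can be sketched as one stochastic gradient step on the pairwise logistic loss; the function name `bpr_update`, the factor matrices `U`/`V`, and the hyperparameter values are illustrative assumptions, not the exact implementation:

```python
import numpy as np

def bpr_update(U, V, u, i, j, lr=0.05, reg=0.01):
    """One SGD step of Bayesian Personalized Ranking: push the score of an
    observed item i above that of a sampled negative item j for user u.
    U: user factors (n_users x d), V: item factors (n_items x d)."""
    x_uij = U[u] @ (V[i] - V[j])        # pairwise preference difference
    g = 1.0 / (1.0 + np.exp(x_uij))     # gradient of -log sigmoid(x_uij)
    # Compute all gradients before applying any update (L2-regularized).
    du = g * (V[i] - V[j]) - reg * U[u]
    di = g * U[u] - reg * V[i]
    dj = -g * U[u] - reg * V[j]
    U[u] += lr * du
    V[i] += lr * di
    V[j] += lr * dj
    return x_uij
```

Repeating this update over sampled (user, positive, negative) triples widens the score gap between observed and unobserved items, which is what the AUC metric above measures directly.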
Visual Question Answering & Image Captioning
Reimplemented and improved the VQA 2017 and 2018 champions' systems. Evaluated with BLEU-1 and BLEU-4 scores. Written in PyTorch.
  • Image captioning: Used a pre-trained ResNet-50 encoder and an LSTM decoder, trained with teacher forcing; experimented with generation strategies: deterministic (greedy), stochastic sampling, and beam search.
  • VQA 2017: Used Faster R-CNN to extract bottom-up attention features and a GRU to generate top-down attention weights; trained with stochastic teacher forcing.
  • Improvements: Experimented with pre-trained word embeddings (Word2Vec, FastText, and GloVe), fine-tuned the bottom-up attention features, used Adamax for optimization, applied dropout and gradient clipping, and experimented with different fusion methods.
  • [PDF], [CODE]
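The beam-search decoding mentioned above can be sketched independently of the captioning model; here `step_fn` is a hypothetical stand-in for the decoder, mapping a partial token sequence (conditioned on image features in the real system) to a vector of log-probabilities over the vocabulary:

```python
import numpy as np

def beam_search(step_fn, start_token, end_token, beam_width=3, max_len=10):
    """Keep the `beam_width` highest-scoring partial captions, expanding each
    by its `beam_width` most likely next tokens at every step. Scores are
    summed log-probabilities; sequences that emit `end_token` are set aside."""
    beams = [([start_token], 0.0)]
    finished = []
    for _ in range(max_len):
        candidates = []
        for seq, score in beams:
            log_probs = step_fn(seq)
            for tok in np.argsort(log_probs)[-beam_width:]:
                candidates.append((seq + [int(tok)], score + log_probs[tok]))
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for seq, score in candidates:
            if seq[-1] == end_token:
                finished.append((seq, score))
            else:
                beams.append((seq, score))
            if len(beams) == beam_width:
                break
        if not beams:
            break
    # Return the best caption found, finished or not.
    return max(finished + beams, key=lambda c: c[1])[0]
```

Greedy (deterministic) decoding is the special case `beam_width=1`, and stochastic sampling replaces the top-k expansion with draws from the decoder's distribution, which matches the three generation strategies compared above.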