Natural Language Processing
Python Library to quickly initialise custom Transformer models
AI Lab Container
GPU container for rapid and flexible prototyping of AI applications
Optimising TensorFlow Performance
Tutorial at PyCon SG 2019 on faster training with TensorFlow 2.0, covering a variety of techniques.
JupyterHub @ SUTD
NVIDIA GPU cluster supporting undergraduate deep learning courses and research: Moodle and JupyterHub running on an on-premise, multi-node Kubernetes cluster.
Setup process and configuration files are publicly available on GitHub.
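The authoritative configuration is in the GitHub repo; as an illustration only, a GPU-enabled single-user profile in a Zero to JupyterHub Helm values file typically looks like this (display name and limits are hypothetical, not taken from the actual setup):

```yaml
# Hypothetical excerpt of a Zero to JupyterHub (z2jh) Helm values file:
# offer a spawner profile that requests one NVIDIA GPU per user pod.
singleuser:
  profileList:
    - display_name: "GPU (1x NVIDIA)"
      description: "Single-user server with one GPU attached"
      kubespawner_override:
        extra_resource_limits:
          nvidia.com/gpu: "1"
```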
Python Library to easily record and graph various NVIDIA GPU metrics during model training.
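The library's internals aren't described here; a minimal sketch of the underlying idea is polling `nvidia-smi` and parsing its CSV output (function names are illustrative, not the library's API):

```python
import subprocess

def parse_gpu_metrics(csv_line):
    """Parse one line of `nvidia-smi ... --format=csv,noheader,nounits`
    output into a dict of numeric metrics."""
    util, mem_used, temp = (float(v) for v in csv_line.split(","))
    return {
        "utilization_pct": util,
        "memory_used_mib": mem_used,
        "temperature_c": temp,
    }

def sample_gpu_metrics():
    """Query the first GPU via nvidia-smi (requires an NVIDIA driver)."""
    out = subprocess.check_output(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used,temperature.gpu",
         "--format=csv,noheader,nounits"],
        text=True,
    )
    return parse_gpu_metrics(out.splitlines()[0])
```

Sampling this in a background thread during training and plotting the series gives the kind of metric graphs the library produces.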
Developed on-boarding examples to help research teams in Singapore run deep learning training on NSCC, Singapore's National Supercomputing Centre.
Open Source Contributions
NLP Library by HuggingFace
#1763, #1668, #1590
model.fit() training for TF XLNet and TF XLM models
Added automatic mixed precision support for training examples and inference benchmarks
DL Library by Google
Added docstring for tf.train.experimental.enable_mixed_precision_graph_rewrite
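Both mixed-precision items above hinge on loss scaling: multiplying the loss so small float16 gradients don't underflow to zero, then unscaling after the backward pass. A minimal numeric illustration with NumPy (the static scale of 1024 is an arbitrary example, not TensorFlow's default):

```python
import numpy as np

LOSS_SCALE = 1024.0  # illustrative static loss scale

def fp16_gradient(grad, scale=1.0):
    """Round a gradient to float16, optionally after loss scaling,
    then unscale back in float32 (as a mixed-precision optimizer would)."""
    return float(np.float32(np.float16(grad * scale)) / scale)

tiny = 1e-8                               # below float16's subnormal range
unscaled = fp16_gradient(tiny)            # underflows to exactly 0.0
scaled = fp16_gradient(tiny, LOSS_SCALE)  # scaling preserves a nonzero value
```

The graph-rewrite API documented above automates exactly this: it wraps the optimizer so scaling and unscaling happen transparently.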
Optimizing BERT Training Performance in tf.keras
Exploration of various techniques and their impact on BERT training throughput.
Chrome Extension to combat Fake News
Chrome Extension to educate users on evaluating the credibility of a web page.
Real-time object detection and classification of recyclable waste.