Machine Learning Systems (MLSys) Research
Machine learning is now used across a broad spectrum of modern application domains, including recommendation, computer vision, and natural language processing. Inference performance is crucial when deploying pretrained models into production. As new machine learning models and hardware architectures emerge, so do new challenges in executing inference jobs efficiently, for both large and small models. On the one hand, pretraining ultra-large-scale models has become increasingly popular in recent years, yet how to deploy these large models onto hardware platforms with minimal resource usage remains under-studied. On the other hand, small-scale models suffer from non-computation overheads (e.g., kernel launch, framework dispatch, and data movement), which become an increasingly important factor in end-to-end performance as the computing power of both general-purpose processors (e.g., GPUs) and customised ML accelerators continues to grow. At FSA, we are particularly interested in tackling these fundamental performance problems for both extremely large-scale and small-scale models across a diverse range of hardware platforms. Our team has a unique capability to collaborate with top industry research labs, using real-world enterprise scenarios to constrain and refine solutions that scale to millions of users.
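As a concrete illustration of such non-computation overhead, the following minimal sketch (our own illustration, not code from any publication listed below; it assumes PyTorch 1.10+ and a CUDA GPU, and the model, sizes, and iteration counts are arbitrary choices) runs a deliberately tiny model where per-kernel compute is cheap, then uses PyTorch's CUDA Graphs API to replay the same forward pass as a single graph launch, amortising the per-kernel launch and dispatch costs:

```python
# Sketch: measuring non-computation overhead on a tiny model and
# amortising it with CUDA Graphs. Assumes PyTorch >= 1.10 and a CUDA GPU.
import time
import torch

device = torch.device("cuda")

# A deliberately small model: each kernel does little work, so kernel
# launch and framework dispatch dominate end-to-end latency.
model = torch.nn.Sequential(*[torch.nn.Linear(64, 64) for _ in range(32)])
model = model.to(device).eval()
static_x = torch.randn(1, 64, device=device)

def bench_us(fn, iters=1000):
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters * 1e6

with torch.no_grad():
    # Warm up on a side stream so lazy allocations happen before capture,
    # as recommended by the PyTorch CUDA Graphs documentation.
    side = torch.cuda.Stream()
    side.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(side):
        for _ in range(3):
            model(static_x)
    torch.cuda.current_stream().wait_stream(side)

    eager_us = bench_us(lambda: model(static_x))

    # Capture the whole forward pass once; replay then launches it as a
    # single unit instead of ~32 individual kernel launches.
    graph = torch.cuda.CUDAGraph()
    with torch.cuda.graph(graph):
        static_y = model(static_x)
    graph_us = bench_us(graph.replay)

print(f"eager: {eager_us:.1f} us/iter  cuda-graph: {graph_us:.1f} us/iter")
```

Any speedup observed here comes entirely from removed launch and dispatch costs rather than faster compute, which is exactly the non-computation overhead described above.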
Together with our top international academic and industry collaborators (Google Brain, Microsoft, and Alibaba), FSA aims to explore the principles and key technologies of multi-scale, multi-dimensional machine learning inference system optimisation through cross-stack co-design (compiler, runtime, and hardware accelerators). The scope of our MLSys research includes, but is not limited to, ML compiler design and optimisation, software-hardware co-design, runtime optimisation techniques, and customised acceleration for novel deep learning models.
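To make the compiler angle concrete, the sketch below (again our own illustration, assuming PyTorch 2.0+ with a CUDA GPU; it shows the general fusion idea rather than the AStitch implementation, and the op chain and tensor sizes are arbitrary) compiles a chain of memory-intensive elementwise operations so they can execute as one fused kernel, keeping intermediates on-chip instead of round-tripping through GPU memory:

```python
# Sketch: kernel fusion of a memory-intensive elementwise chain.
# Assumes PyTorch >= 2.0 and a CUDA GPU.
import torch

def bias_gelu_scale(x, bias, scale):
    # In eager mode, each of these three ops launches its own kernel
    # and reads/writes the full tensor from/to GPU memory.
    y = x + bias
    y = torch.nn.functional.gelu(y)
    return y * scale

# torch.compile traces the chain and, via its backend compiler, can fuse
# it into a single kernel so the intermediates never leave on-chip memory.
fused = torch.compile(bias_gelu_scale)

x = torch.randn(4096, 4096, device="cuda")
bias = torch.randn(4096, device="cuda")
out = fused(x, bias, 0.5)
```

For bandwidth-bound chains like this, fusion cuts the number of full-tensor memory round trips from three to one; generalising such fusion to much larger and more irregular memory-intensive subgraphs is the kind of problem our AStitch work addresses.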
Publications
- AStitch: Enabling a New Multi-Dimensional Optimisation Space for Memory-Intensive ML Training and Inference on Modern SIMT Architectures. ASPLOS 2022.
- Randomness in Neural Network Training: Characterising the Impact of Tooling. MLSys 2022 (with Google Brain).
- COMET: A Novel Memory-Efficient Deep Learning Training Framework by Using Error-Bounded Lossy Compression. VLDB 2022.
- MalFox: Camouflaged Adversarial Malware Example Generation Based on Conv-GANs Against Black-Box Detectors. IEEE Transactions on Computers, 2022.
- Enabling Highly Efficient Capsule Networks Processing Through Software-Hardware Co-Design. IEEE Transactions on Computers, 2021.
- ClickTrain: Efficient and Accurate End-to-End Deep Learning Training via Fine-Grained Architecture-Preserving Pruning. ICS 2021.
- η-LSTM: Co-Designing Highly-Efficient Large LSTM Training via Exploiting Memory-Saving and Architectural Design Opportunities. ISCA 2021.
- Shift-BNN: Highly-Efficient Probabilistic Bayesian Neural Network Training via Memory-Friendly Pattern Retrieving. MICRO 2021.
- MAPA: Multi-Accelerator Pattern Allocation Policy for Multi-Tenant GPU Servers. SC 2021.
- Toward Efficient Interactions between Python and Native Libraries. ESEC/FSE 2021.
- A Novel Memory-Efficient Deep Learning Training Framework via Error-Bounded Lossy Compression. PPoPP 2021.
- Enabling Highly Efficient Capsule Networks Processing Through a PIM-Based Architecture Design. HPCA 2020.
- BSTC: A Novel Binarized Soft Tensor Core Design for Accelerating Bit-Based Approximated Neural Nets. SC 2019.
- LP-BNN: Ultra-Low-Latency BNN Inference with Layer Parallelism. ASAP 2019.
- SuperNeurons: Dynamic GPU Memory Management for Training Deep Neural Networks. PPoPP 2018.
- NUMA-Caffe: NUMA-Aware Deep Learning Neural Networks. TACO 2018.
- MIC-SVM: Designing a Highly Efficient Support Vector Machine for Advanced Modern Multi-Core and Many-Core Architectures. IPDPS 2014.