Prof. Yufei Ding
We named our lab PICASSO to highlight Picasso's spirit of lifelong creation and self-surmounting, and we believe "research is like art". On the one hand, both researchers and artists need to master a set of skills to make their work attractive and appealing. On the other hand, what really matters in the end is what we want to say to the world.
The long-term research goal of the PICASSO group is to build automatic, intelligent programming systems that leverage, integrate, and advance mathematics, physics, and computer science. The target domains for realizing this research vision are Machine Learning (ML) and Quantum Computing (QC). Our recent efforts cut across multiple programming-system technologies, ranging from program verification and testing, to high-level algorithmic optimization and autotuning, to domain-specific programming language design, kernel library implementation, advanced compiler construction, and computer architecture design.
In particular, two key research principles lay the foundation for our work: "abstraction" and "automation". The former is the key enabler for high-level, large-scope optimization opportunities that are not possible within a single technical stack, and it is also our muse when exploring cross-stack synergy and integration. The latter encodes our research philosophy: researchers should focus on original work rather than "incremental" work in slightly changed settings, and one of the key ingredients for avoiding such repeated work is automation.
“Learn the rules like a pro, so you can break them like an artist.”
— Pablo Picasso (1881-1973)
News
Co-PI for the NSF NQVL Pilot led by PI Eric Hudson, UCLA
Co-PI for NSF ExpandQISE with PI Sonia Lopez Alarcon, RIT
PI for an NSF FMitF award, together with co-PI Jens Palsberg, UCLA.
Subaward from the DOE Quantum Science Center (QSC) led by ORNL.
ASPLOS Program Vice Chair, 2025
ASPLOS Best Paper Selection Committee, 2024
ASPLOS Program Vice Chair, 2024
Cisco Research Grant, 2024
Amazon Research Awards Fall, 2024
Samsung MSL Research Fund, 2024.
Samsung MSL Research Fund, 2023.
Cisco Research Grant, 2022
Amazon Research Awards Fall, 2021
Co-PI for Noyce Initiative gift fund for UCSB’s Quantum Team
Review Editor for Frontiers in High-Performance Computing - Cloud Computing
Associate Editor for ACM Transactions on Quantum Computing
Yuke Wang received the NVIDIA Graduate Fellowship, 2022
SAMSUNG Research Gift Fund, 2022
Best Paper Nominee @ ISCA 2022
Cisco Research Grant, 2021
NSF Funding for "FET: NSF Workshop on Software-Hardware Co-design for Quantum Computing", 2021
Collaborative NSF funding with Tevfik Bultan for “FMitF: Track I: Scalable and Quantitative Verification for Neural Network Analysis and Design”, 2021
VMware Early Career Faculty Grant, 2021
NSF CAREER Award for "A Top-down Compilation Infrastructure for Optimization and Debugging in the NISQ era", 2020
Distinguished Paper Award @ OOPSLA 2020
Awards committee for the Grand Finals of the ACM Student Research Competition, 2021
Gushu Li received the Quantum Information Science and Engineering Network (QISE-NET) Triplet Fellow Award, 2021
Awards committee for IEEE Computer Society TCHPC Early Career Researchers Award, 2020
IEEE Computer Society TCHPC Early Career Researchers Award for Excellence in High-Performance Computing, 2019.
Collaborative NSF funding for “CC* Compute: A high-performance GPU cluster for accelerated research”, 2019.
Alibaba Gift Fund for "Domain Specific Accelerator Research", 2019, 2020
NVIDIA GPU Grant, Xilinx FPGA Grant, Intel FPGA Grant.
NCSU Computer Science Outstanding Dissertation Award, 2018.
Recent Publications
[HPCA'25] "Prepush-Multicast: A Proactive and Coherent Interconnect for Mitigating Manycore CPU Communication Bottleneck", Jiayi Huang, Yanhua Chen, Zhe Wang, Christopher J. Hughes, Yufei Ding, Yuan Xie.
[ASPLOS'25] "QECC-Synth: A Layout Synthesizer for Quantum Error Correction Codes on Sparse Architectures", Keyi Yin, Hezi Zhang, Xiang Fang, Yunong Shi, Travis Humble, Ang Li, Yufei Ding.
[MICRO'24] "Surf-Deformer: Mitigating Dynamic Defects on Surface Code via Adaptive Deformation", Keyi Yin, Xiang Fang, Travis Humble, Ang Li, Yunong Shi, Yufei Ding.
[SC'24] "RecFlex: Enabling Feature Heterogeneity-Aware Optimization for Deep Recommendation Models with Flexible Schedules", Zaifeng Pan, Zhen Zheng, Feng Zhang, Bing Xie, Ruofan Wu, Shaden Smith, Chuanjie Liu, Olatunji Ruwase, Xiaoyong Du, Yufei Ding.
[USENIX ATC'24] "OPER: Optimality-Guided Embedding Table Parallelization for Large-scale Recommendation Model", Zheng Wang, Yuke Wang, Boyuan Feng, Guyue Huang, Dheevatsa Mudigere, Bharath Muthiah, Ang Li, Yufei Ding.
[ISCA'24] "Soter: Analytical Tensor-Architecture Modeling and Automatic Tensor Program Tuning for Spatial Accelerators", Fuyu Wang, Minghua Shen, Yufei Ding, Nong Xiao.
[ASPLOS'24] "OnePerc: A Randomness-aware Compiler for Photonic Quantum Computing", Hezi Zhang, Jixuan Ruan, Hassan Shapourian, Ramana Rao Kompella, Yufei Ding.
[ASPLOS'24] "Accelerating Deep Learning Training with Epilogue Visitor Tree", Zhaodong Chen, Andrew Kerr, Richard Cai, Jack Kosaian, Haicheng Wu, Yufei Ding, Yuan Xie.
[ASPLOS'24] "MECH: Multi-Entry Communication Highway for Superconducting Quantum Chiplets", Hezi Zhang, Keyi Yin, Anbang Wu, Hassan Shapourian, Alireza Shabani, Yufei Ding.
[ASPLOS'24] "RAP: Resource-aware Automated GPU Sharing for Multi-GPU Recommendation Model Training and Input Preprocessing", Zheng Wang, Yuke Wang, Jiaqi Deng, Da Zheng, Ang Li, Yufei Ding.
[ASPLOS'24] "ZENO: A Type-based Optimization Framework for Zero Knowledge Neural Network Inference", Boyuan Feng, Zheng Wang, Yuke Wang, Shu Yang, Yufei Ding.
[MICRO'23] "QuComm: Optimizing Collective Communication for Distributed Quantum Computing", Anbang Wu, Yufei Ding, Ang Li.
[MICRO'23] "RM-STC: Row-Merge Dataflow Inspired GPU Sparse Tensor Core for Energy-Efficient Sparse Acceleration", Guyue Huang, Zhengyang Wang, Po-An Tsai, Chen Zhang, Yufei Ding, Yuan Xie.
[USENIX ATC'23] "TC-GNN: Bridging Sparse GNN Computation and Dense Tensor Cores on GPUs", Yuke Wang, Boyuan Feng, Zheng Wang, Guyue Huang, Yufei Ding.
[OSDI'23] "MGG: Accelerating Graph Neural Networks with Fine-grained intra-kernel Communication-Computation Pipelining on Multi-GPU Platforms", Yuke Wang, Boyuan Feng, Zheng Wang, Tong Geng, Ang Li, Kevin Barker, Yufei Ding.
[ISCA'23] "OneQ: A Compilation Framework for Photonic One-Way Quantum Computation", Hezi Zhang, Anbang Wu, Yuke Wang, Gushu Li, Hassan Shapourian, Alireza Shabani, Yufei Ding.
[ISCA'23] "Q-BEEP: Quantum Bayesian Error Mitigation Employing Poisson Modeling over the Hamming Spectrum", Samuel Stein, Nathan Wiebe, Yufei Ding, James Ang, Ang Li.
[ISCA'23] "ECSSD: Hardware/Data Layout Co-Designed In-Storage-Computing Architecture for Extreme Classification", Siqi Li, Fengbin Tu, Liu Liu, Jilan Lin, Zheng Wang, Yangwook Kang, Yufei Ding, Yuan Xie.
[MLSys'23] "ALCOP: Automatic Load-Compute Pipelining in Deep Learning Compiler for AI-GPUs", Guyue Huang, Yang Bai, Liu Liu, Yuke Wang, Bei Yu, Yufei Ding, Yuan Xie.
[VLDB'23] "SPG: Structure-Private Graph Database via SqueezePIR", Ling Liang, Jilan Lin, Zheng Qu, Ishtiyaque Ahmad, Liu Liu, Fengbin Tu, Trinabh Gupta, Yufei Ding, Yuan Xie.
[TACO'23] "MPU: Memory-Centric SIMT Processor via In-DRAM Near-Bank Computing", Xinfeng Xie, Peng Gu, Yufei Ding, Dimin Niu, Hongzhong Zheng, Yuan Xie.
[MICRO'22] "AutoComm: A Framework for Enabling Efficient Communication in Distributed Quantum Programs", Anbang Wu, Hezi Zhang, Gushu Li , Alireza Shabani, Yuan Xie, Yufei Ding.