Hello, It's Me

Jixin Han (kalfazed)

Never leave that till tomorrow which you can do today -- Benjamin Franklin

My Journey

Education

2017-2022

Ph.D. - Waseda University, Tokyo, Japan

While pursuing my Ph.D., I focused mainly on program logic and mathematical formal verification: building a set of semantics for programming logic and verifying the correctness of program optimizations. My supervisor was Prof. Keiji Kimura, who gave me invaluable guidance and support throughout my doctoral course.

2014-2017

Master - Waseda University, Tokyo, Japan

During my master's course, I studied automatic compiler optimization. The research involved developing a compiler that exploits the parallelism of a program at both fine and coarse granularity to achieve the best performance.

2010-2014

Bachelor - Beijing Language and Culture University, Beijing, China

I started in the Department of Media and Communications but transferred to the Faculty of Computer Science and Engineering after participating in some computer science workshops during my second year of college. Those presentations and speeches motivated me to start my journey in this area.

Experience

Jan.2024 ~ Jun. 2024

Tech Lead, T2 Auto, Perception model deployment and integration team

T2 Auto is a startup company where most of the software prototypes are still under development. At T2 Auto, I belonged to the perception team and worked as a Technical Lead, leading the team in both technical and management matters. My job was to develop a training and deployment prototype of the BEVFusion meta-architecture, then optimize the model for the NVIDIA architecture to ensure fast inference. The BEVFusion model uses LiDARs and cameras as input sensors. I tried several tactics to reach both high precision and fast inference speed: for example, developing customized BEVPool methods to accelerate the projection from camera view to the BEV grid, or using deformable attention as a substitute for BEVPool.

On the deployment side, I also developed benchmark tools for both the training model and the deployed model. These tools include layer-wise precision analysis, coarse- and fine-grained speed analysis, and so on.
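The layer-wise precision analysis above can be sketched in a few lines. This is only a minimal illustration, not the actual tool: the layer names and tensors are invented, and it simply compares per-layer activations of a reference model against a deployed one using cosine similarity and maximum absolute error.

```python
import numpy as np

def cosine_similarity(a, b):
    """Flatten two activation tensors and compute their cosine similarity."""
    a, b = a.ravel().astype(np.float64), b.ravel().astype(np.float64)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def layerwise_report(ref_outputs, deployed_outputs):
    """Compare per-layer activations of a reference model (e.g. the training
    framework) against a deployed model and collect precision statistics."""
    report = {}
    for name, ref in ref_outputs.items():
        dep = deployed_outputs[name]
        report[name] = {
            "cosine": cosine_similarity(ref, dep),
            "max_abs_err": float(np.max(np.abs(ref - dep))),
        }
    return report

# Toy demo: the "deployed" conv layer carries quantization-like noise.
rng = np.random.default_rng(0)
ref = {"conv1": rng.normal(size=(8, 16)), "head": rng.normal(size=(8, 4))}
dep = {"conv1": ref["conv1"] + rng.normal(scale=0.05, size=(8, 16)),
       "head": ref["head"].copy()}
for name, stats in layerwise_report(ref, dep).items():
    print(name, round(stats["cosine"], 4), round(stats["max_abs_err"], 4))
```

A layer whose cosine similarity drops noticeably below 1.0 is the first place to look when the deployed model loses precision.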

3D object detection and tracking in T2 perception team
BEVFusion method, which fuses camera and LiDAR to generate a feature map in BEV space
Apr.2023 ~ Present

Visiting Researcher and Part-time Lecturer, Waseda University

As a part-time lecturer at Waseda University's Graduate School of Fundamental Science and Engineering, I teach one class per week. My subject is C/C++ programming. At the same time, I am also a visiting researcher at Waseda University's research institute. My research interests include Transformer optimization and architecture design.

Sparse 3D convolution (spconv) mechanism using Implicit GEMM Convolution to optimize the processing of sparse data
Operation fusion method for quantized models, optimized via Quantization-Aware Training (QAT)
Jan.2023 ~ Present

Online Webinar Lecture (Autonomous driving, CUDA, TensorRT deployment)

I coach an online class on a Chinese platform similar to Udemy, offering a course on "High-Performance Implementation Using CUDA and TensorRT in Autonomous Driving". The motivation for starting this course is to help people quickly understand the keys to reaching high performance in edge computing. Though the course mainly focuses on NVIDIA, the ideas it describes also apply to other edge devices.

The course starts from the basics of parallel computing and computer architecture, describing, for example, how CUDA Cores and Tensor Cores work and using the Roofline Model to analyze the computational efficiency of operations. Then I go deeper into deep learning compilers and describe how they optimize models, including the quantization and pruning algorithms and how to fix the precision drop after optimization. Finally, I show how to deploy various open-source SOTA models using CUDA acceleration methods and the TensorRT API.
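The Roofline analysis described above reduces to a small amount of arithmetic: an operation's attainable performance is the minimum of the device's peak compute and its memory bandwidth times the operation's arithmetic intensity. The sketch below uses made-up hardware numbers (100 TFLOP/s peak compute, 1 TB/s bandwidth), not any specific device:

```python
def roofline_bound(flops, bytes_moved, peak_flops, peak_bw):
    """Attainable performance (FLOP/s) and arithmetic intensity (FLOP/byte)
    under the Roofline model: perf = min(peak_compute, bandwidth * intensity)."""
    intensity = flops / bytes_moved
    return min(peak_flops, peak_bw * intensity), intensity

# Hypothetical accelerator: 100 TFLOP/s peak compute, 1 TB/s memory bandwidth.
PEAK_FLOPS, PEAK_BW = 100e12, 1e12

# A low-intensity op (e.g. an elementwise or 1x1-conv-like kernel):
# 2 GFLOP of work moving 1 GB of data -> 2 FLOP/byte, memory bound.
perf, ai = roofline_bound(flops=2e9, bytes_moved=1e9,
                          peak_flops=PEAK_FLOPS, peak_bw=PEAK_BW)
print(f"intensity={ai:.1f} FLOP/B, attainable={perf / 1e12:.1f} TFLOP/s")
```

With only 2 FLOP per byte, the kernel is capped at 2 TFLOP/s of the 100 TFLOP/s peak, which is exactly why operator fusion (raising intensity by avoiding round trips to memory) matters so much on edge devices.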

Apr.2022 ~ Dec.2023

Deep Learning Engineer/Researcher, Honda Motor Co., Ltd., Intelligent Solution Laboratory

At Honda, I developed a multi-task DNN model for autonomous driving. The tasks include detection, segmentation, key points, optical flow, and tracking. To train the network with resource-limited datasets, I used several dataset enhancement techniques, such as active learning and pseudo-labeling, to optimize the training. More recently, I started developing BEV models. The frameworks used are Darknet and PyTorch.

Additionally, to perform real-time inference on in-vehicle hardware with limited computational resources, I optimized the perception multi-task model to suit the hardware structure. I used C++ to perform parallel processing and asynchronous execution, CUDA to speed up DNN pre-processing/post-processing, and TensorRT to optimize each layer of the DNN. I also created custom DNN operations in C++ and CUDA to exploit higher parallelism. Other optimizations include reducing the size of the model using quantization and pruning, and replacing low-density computations within the Transformer and CNN with other computations.
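As a rough illustration of the quantization step, here is a minimal sketch of symmetric per-tensor int8 quantization. It is not the actual deployment code and all values are synthetic; it simply shows where the precision drop comes from: each weight is rounded to one of 255 levels, so the reconstruction error is bounded by half the quantization scale.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization: map the float range
    [-max|x|, +max|x|] onto integers in [-127, 127]."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Reconstruct approximate float values from int8 codes."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(42)
w = rng.normal(scale=0.1, size=1000).astype(np.float32)   # fake weight tensor
q, s = quantize_int8(w)
w_hat = dequantize(q, s)
err = float(np.max(np.abs(w - w_hat)))
print(f"max quantization error: {err:.6f} (scale = {s:.6f})")
```

The rounding error stays below `scale / 2`; techniques such as QAT exist precisely to make the network tolerant of this bounded noise.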

Apr.2020 ~ Mar.2021

Research Assistant, Graduate School of Information Science and Technology, Waseda University

During this period, I focused on research into mathematically proving the correctness of program behaviors, where the programs must satisfy relaxed memory consistency and persistency on volatile memory (VM) and non-volatile memory (NVM). Proving the memory propagation of a parallel or concurrent program is tricky because of the program's non-deterministic behavior. For example, in a concurrent program, the order in which memory operations are issued can differ from the order in which they complete and from the order in which hardware status changes. It is common to use synchronization methods (barriers, fences, etc.) to restrict the ordering so that the program behavior becomes deterministic. However, an aggressive combination of synchronizations may lead to low performance, so that the program cannot benefit from concurrency.

To handle this problem, I published an abstracted memory model that shows the relaxed memory propagation flows, along with a set of operational semantics that formulates the non-deterministic behavior of concurrent programs. I also proposed a mechanism to fully exploit the parallelism of memory propagation in concurrent programs and proved the correctness of these programs mathematically. This research was published in a computer science journal and received an award from the computer science academic community.

Abstracted memory model design space of parallel and concurrent memory propagation
The formally designed operational semantics of memory propagation
Honored as a Key Chapter, we were invited to join the IEEE HKN Student Leadership Conference in Boston, Massachusetts, USA
The IEEE Eta Kappa Nu certificate
Apr.2019 ~ Sep.2020

President, IEEE Eta-Kappa-Nu Mu-Tau Chapter

In 2018, supervised by Prof. Hironori Kasahara, who was president of the IEEE Computer Society and vice president of Waseda University, some partners and I established the first IEEE HKN chapter in Japan, named the Mu Tau Chapter!

IEEE-HKN is short for IEEE Eta Kappa Nu, the honor society of the Institute of Electrical and Electronics Engineers (IEEE). It promotes excellence in the profession and in education with the ideals of Scholarship, Character, and Attitude, and recognizes outstanding students, alumni, and professionals who have made significant contributions to electrical engineering, computer engineering, and other IEEE-associated fields. IEEE-HKN was founded on 28 October 1904; today there are over 260 IEEE-HKN chapters worldwide, located at universities and colleges with accredited programs in electrical and computer engineering and related fields.

I was president of the Mu Tau Chapter, and our chapter achieved Key Chapter status for 2019! We were invited to HKN's Annual Student Leadership Conference, held in November 2019 in Boston, Massachusetts, USA, where we received an award.

Apr.2017 ~ Mar.2020

Research Associate, Waseda University, Faculty of Fundamental Science and Engineering

If a compiler automatically parallelizes a program, it is important to make sure the optimized program behaves the same as it did before the optimization. In industry, it is common to use unit tests to check that code works as the programmer expects. However, it is nearly impossible to cover all possible unexpected behaviors, especially corner cases. In academia, by contrast, a common approach is to use mathematics to examine whether a compiler preserves the behavior and meaning of a program. As long as we can abstractly define the grammar and semantics of parallel and sequential programs, and formally define their execution, it is possible to logically infer the execution of parallel programs without actually running them.

During this period, I focused on the validation of a parallelizing compiler, which exploits the parallelism of a sequential program using a set of techniques such as calculating the Earliest Execution Condition (EEC) of different blocks in a program, pointer analysis, loop optimization, and task fusion. This research was carried out jointly with INRIA in France and the University of Arizona in the United States, and the contributions were presented at international conferences and in a journal.

Using the Earliest Execution Condition (EEC) to show the data and control dependencies of different blocks in a program
The translation validation method and the verified translation method used to prove the correctness of a program optimization
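A toy illustration of the translation validation idea: the sketch below checks that an "optimized" (statement-reordered) version of a tiny program is observationally equivalent to the original. Here the check is done by exhaustive enumeration over a small finite domain purely for illustration; the actual research discharges this obligation with a mathematical proof over formal operational semantics instead of by running the programs. Both programs and the domain are invented.

```python
from itertools import product

def original(x, y):
    # Sequential reference: two independent computations, then a combine.
    a = x * 2
    b = y + 3
    return a + b

def optimized(x, y):
    # "Parallelized" version: the independent statements are reordered,
    # which is safe only because they carry no data dependence.
    b = y + 3
    a = x * 2
    return a + b

# Exhaustively check observational equivalence over a small finite domain.
domain = range(-8, 9)
equivalent = all(original(x, y) == optimized(x, y)
                 for x, y in product(domain, domain))
print("observationally equivalent on domain:", equivalent)
```

A real validator reasons about all inputs at once: if the semantics proves the two statements independent, the reordering is correct for every input, not just the enumerated ones.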

My Drawings

A wondering girl

Drawn on May 22, 2020

The kenomo girl

Drawn on June 27, 2020

Finally, you made it!

Drawn on September 3, 2022

Eevee playing together

Drawn on February 26, 2018

Misaki and Usui

Drawn on August 26, 2020

Marry me

Drawn on June 25, 2021