Peijie Chen

Pronounced "pay-jyeah chen"

Senior ML Engineer at Noteworthy AI

Biography

Peijie Chen completed his Ph.D. in the Department of Computer Science and Software Engineering at Auburn University. His passion for AI was ignited by AlphaGo's historic win. Building on a strong background in signal processing, with an undergraduate degree in Telecommunication Engineering and a Master's degree in Electrical Engineering, he has pursued deep learning research spanning object detection, explainable AI, and multi-modal learning. His academic journey combines rigorous engineering training with enthusiasm for the evolving field of AI.

Interests
  • Artificial Intelligence
  • Deep Learning
  • Movies
  • Cats
Education
  • Ph.D. in Computer Science, 2018 - 2024

    Auburn University

  • M.S. in Electrical Engineering, 2015 - 2017

    Auburn University

  • B.S. in Telecommunication Engineering, 2010 - 2014

    Shenzhen University

Skills

Technical
  • Python
  • PyTorch
  • TensorFlow
  • SQL

Hobbies
  • Movies
  • Cats

Experience

Machine Learning Engineer Intern
Noteworthy AI
January 2024 – April 2024, Connecticut, US

During my internship, my primary role involved:

  • Developing and enhancing object depth estimation/defect detection models
  • Collaborating with the engineering team to deploy models and algorithms

Research Assistant
Auburn University
January 2023 – December 2023, Auburn, AL, US
  • Conducting research on machine learning
  • Collaborating with students and faculty on research projects

Machine Learning Engineer Intern
Noteworthy AI
September 2022 – December 2022, Connecticut, US

During my internship, my primary role involved:

  • Developing and enhancing object detection/segmentation models
  • Designing and implementing data collection processes
  • Collaborating with the engineering team to deploy models and algorithms

Research Assistant
Auburn University
August 2021 – August 2022, Auburn, AL, US
  • Conducting research on machine learning
  • Collaborating with students and faculty on research projects

Research Assistant
NSF Research Experiences for Undergraduates on Smart UAVs
May 2021 – July 2021, Auburn University

My responsibilities included:

  • Supporting students with logistics and mentorship
  • Providing guidance on machine learning applications in UAV technology
  • Offering practical advice and problem-solving strategies

Teaching Assistant
Auburn University
January 2019 – August 2021, Auburn, AL, US

I assisted with:

  • Course preparation and grading
  • Student support and mentorship
  • Lab and project supervision

Co-founder
BooJum Studio
January 2017 – January 2018, Shenzhen, China

I co-founded BooJum Studio, where we:

  • Leveraged machine learning to enhance the art design process
  • Streamlined client-designer interactions
  • Generated a variety of design options through automation

Intern
Shenzhen Xun Fang Telecom
May 2014 – September 2014, Shenzhen, China

My work involved:

  • Maintaining and optimizing networking systems for WCDMA services

Projects

PEEB: Part-based Bird Classifier with an Explainable and Editable Language Bottleneck
We proposed a part-based bird classifier that makes predictions from part-wise descriptions. Our method directly uses human-provided descriptions (in this work, generated by GPT-4), and it outperforms CLIP and M&V by 10 points on CUB and 28 points on NABirds.
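
Below is a minimal, hypothetical sketch of the part-based scoring idea (not the released PEEB code): each detected bird part is compared against a class's human-readable description of that part in a shared vision-language embedding space, so editing a description directly changes the class score. Function names, shapes, and the embedding setup are illustrative assumptions.

    # Hypothetical sketch of part-based, description-driven classification
    # (illustrative only; names and shapes are assumptions, not the PEEB release).
    import torch.nn.functional as F

    def part_based_scores(part_embeds, class_part_text_embeds):
        """part_embeds: {part_name: (D,) image embedding of the detected part}.
        class_part_text_embeds: {class_name: {part_name: (D,) text embedding of
        that class's description for the part, e.g. written with GPT-4's help}}.
        Returns {class_name: score}; editing a description changes the score."""
        scores = {}
        for cls, text_embeds in class_part_text_embeds.items():
            total = 0.0
            for part, img_vec in part_embeds.items():
                # cosine similarity between the visual part and its textual description
                total += F.cosine_similarity(img_vec, text_embeds[part], dim=0).item()
            scores[cls] = total
        return scores
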
gScoreCAM: What objects is CLIP looking at?
We observe that CLIP ResNet-50 channels are much noisier than those of a typical ImageNet-trained ResNet-50, and that most saliency methods obtain low object localization scores with CLIP. By visualizing only the top 10% most sensitive (highest-gradient) channels, our gScoreCAM obtains state-of-the-art weakly supervised localization results with CLIP (in both the ResNet and ViT versions).
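
The sketch below illustrates the core recipe under assumed shapes and a placeholder scoring function (not the authors' implementation): rank a conv layer's channels by gradient magnitude, keep only the top 10%, and weight each kept channel's activation map by a ScoreCAM-style image-text score.

    # Illustrative gScoreCAM-style sketch; `score_fn` is a hypothetical callable
    # returning the CLIP image-text score for the image masked by a channel map.
    import torch
    import torch.nn.functional as F

    def gscorecam_sketch(feats, grads, score_fn, keep_ratio=0.1):
        """feats, grads: (C, H, W) activations of the target conv layer and the
        gradients of the image-text score with respect to them."""
        C = feats.shape[0]
        # rank channels by mean absolute gradient and keep the top 10%
        keep = grads.abs().mean(dim=(1, 2)).topk(max(1, int(C * keep_ratio))).indices
        heatmap = torch.zeros_like(feats[0])
        for c in keep:
            act = feats[c]
            mask = (act - act.min()) / (act.max() - act.min() + 1e-8)  # normalize to [0, 1]
            heatmap += score_fn(mask) * act   # ScoreCAM-style channel weighting
        return F.relu(heatmap)                # keep positively contributing regions
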
How explainable are adversarially-robust CNNs?
We perform the first large-scale evaluation of the relations among the three criteria, using 9 feature-importance methods and 12 ImageNet-trained CNNs spanning 3 training algorithms and 5 CNN architectures.
The shape and simplicity biases of adversarially robust ImageNet-trained CNNs
We study how adversarial training changes ImageNet-trained CNNs, in particular the shape bias of their predictions and the simplicity of the features their neurons prefer, compared with standardly trained counterparts.
Layer-Wise Entropy Analysis and Visualization of Neurons Activation
Understanding the inner workings of deep neural networks (DNNs) is essential for researchers who want to design and improve them. In this work, entropy analysis is leveraged to study the activation behavior of neurons in the fully connected layers of DNNs.
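
As a rough illustration of the idea (an assumed setup, not the paper's exact procedure), one can histogram a fully connected layer's activations over a batch and compute the Shannon entropy of that histogram, then compare the values layer by layer.

    # Minimal entropy estimate for one fully connected layer's activations
    # (the bin count and batch-level pooling are illustrative choices).
    import torch

    def layer_activation_entropy(activations, num_bins=30):
        """activations: (N, D) outputs of one FC layer for N inputs.
        Returns an entropy estimate in nats."""
        flat = activations.flatten().float()
        hist = torch.histc(flat, bins=num_bins,
                           min=flat.min().item(), max=flat.max().item())
        probs = hist / hist.sum()
        probs = probs[probs > 0]                    # drop empty bins
        return -(probs * probs.log()).sum().item()  # Shannon entropy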

Recent Publications

(2022). How explainable are adversarially-robust CNNs? arXiv preprint arXiv:2205.13042.

(2020). Layer-Wise Entropy Analysis and Visualization of Neurons Activation. Communications and Networking: 14th EAI International Conference, ChinaCom 2019, Shanghai, China, November 29 – December 1, 2019, Proceedings, Part II.

(2020). The shape and simplicity biases of adversarially robust ImageNet-trained CNNs. arXiv preprint arXiv:2006.09373.

Contact

Contact me if you have any questions or would like to collaborate.