
Resume

Education

2019 - present

Ph.D. candidate, University of Illinois at Urbana-Champaign

Research on robotics control and hybrid control systems that combine reinforcement learning and control theory

2019 - 2020

Master of Science, Applied Mathematics, University of Illinois at Urbana-Champaign

Coursework in statistics, optimization, and algorithms

Work Experience

2022 - Present

Research Assistant

Lead and conduct research on safe reinforcement learning and robotics control at both the software and hardware levels.

2022.05 - 2022.08

Software Engineering Intern, TigerGraph

  • Collaborated with the Graph Query Language (GSQL) team to design a cost-based optimizer, significantly improving keyword-search efficiency in large-scale databases; queries ran over 50% faster with the optimizer enabled (a toy sketch of cost-based plan selection follows this entry).

  • Devised a comprehensive quality-testing scheme for the GSQL optimizer feature to validate its performance.

  • Developed an automated testing pipeline that streamlined quality assurance for the optimizer feature, improving the efficiency and reliability of the process.
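
The GSQL optimizer itself is proprietary, so the sketch below only illustrates the general idea of cost-based plan selection: estimate a cost for each candidate plan from simple statistics and execute the cheapest. The Plan structure, statistics, and cost model are hypothetical, not TigerGraph internals.

    # Hypothetical illustration of cost-based plan selection; the Plan class,
    # statistics, and cost model are invented for this sketch and are not
    # TigerGraph's GSQL internals.
    from dataclasses import dataclass

    @dataclass
    class Plan:
        name: str
        rows_scanned: int      # estimated rows touched by the plan
        index_lookups: int     # estimated index probes

    def estimated_cost(plan: Plan, row_cost: float = 1.0, lookup_cost: float = 0.2) -> float:
        # Simple linear cost model: full scans dominate, index probes are cheap.
        return plan.rows_scanned * row_cost + plan.index_lookups * lookup_cost

    def choose_plan(candidates: list[Plan]) -> Plan:
        # A cost-based optimizer picks the candidate with the lowest estimated cost.
        return min(candidates, key=estimated_cost)

    if __name__ == "__main__":
        plans = [
            Plan("full_scan", rows_scanned=1_000_000, index_lookups=0),
            Plan("keyword_index", rows_scanned=5_000, index_lookups=5_000),
        ]
        print(choose_plan(plans).name)   # -> keyword_index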

2021.05 - 2021.09

Robotics Software Engineering Intern, Siasun Robot & Automation Co., Ltd.

  • Partnered with the behavior team to develop trajectory planning and tracking algorithms for Automated Guided Vehicles (AGVs), deployed in regional hospitals' medicine-distribution systems to improve delivery efficiency and accuracy.

  • Developed a vision-based object-tracking algorithm that enabled precise, responsive tracking of target objects, improving AGV performance in dynamic environments.

  • Engineered a LiDAR‑based obstacle avoidance algorithm for AGVs, ensuring safe navigation and reducing the risk of accidents in various operational settings.

  • Devised a Model Predictive Control (MPC)-based trajectory-following algorithm for AGVs, enabling robust, precise path execution that adapts to real-time changes in the environment; a minimal sketch of the idea follows this entry.
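
The deployed AGV controller is not public; the following is only a minimal sketch of MPC-based trajectory following on an assumed 2-D double-integrator model, solved with cvxpy. The horizon, cost weights, and actuator limit are illustrative choices.

    # Minimal receding-horizon (MPC) trajectory-following sketch on a 2-D
    # double-integrator model. Dynamics, horizon, and weights are illustrative
    # assumptions, not the AGV controller deployed in production.
    import numpy as np
    import cvxpy as cp

    dt, N = 0.1, 20                                     # step size and prediction horizon
    A = np.block([[np.eye(2), dt * np.eye(2)],
                  [np.zeros((2, 2)), np.eye(2)]])       # state: [x, y, vx, vy]
    B = np.block([[0.5 * dt**2 * np.eye(2)],
                  [dt * np.eye(2)]])                    # input: [ax, ay]

    def mpc_step(x0, x_ref):
        """Solve one MPC problem; x_ref is an (N+1, 4) reference trajectory."""
        x = cp.Variable((4, N + 1))
        u = cp.Variable((2, N))
        cost, constraints = 0, [x[:, 0] == x0]
        for k in range(N):
            cost += cp.sum_squares(x[:, k] - x_ref[k]) + 0.1 * cp.sum_squares(u[:, k])
            constraints += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                            cp.norm(u[:, k], "inf") <= 1.0]   # actuator limit
        cost += cp.sum_squares(x[:, N] - x_ref[N])
        cp.Problem(cp.Minimize(cost), constraints).solve()
        return u[:, 0].value            # apply only the first input, then re-plan

    if __name__ == "__main__":
        x0 = np.zeros(4)
        ref = np.column_stack([np.linspace(0, 2, N + 1),  # straight-line reference
                               np.zeros(N + 1),
                               np.full(N + 1, 1.0),
                               np.zeros(N + 1)])
        print(mpc_step(x0, ref))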

Research & Expertise

  • Safe Reinforcement Learning in Non‑Stationary Environments with Fast Adaptation and Disturbance Prediction

    • Devised an innovative algorithm to address the sim‑to‑real challenge in reinforcement learning, enhancing the robustness of reinforcement learning algorithms and ensuring a seamless transition from simulated to real‑world environments.

    • Formulated safety-constraint-based reinforcement learning algorithms that guarantee safe exploration during policy training, minimizing the risk of constraint violations in complex environments.

    • Created safe reinforcement learning algorithms that interact effectively with the environment while adhering to predefined safety constraints, keeping the learning process safe.

    • Developed a disturbance observer framework for rapid identification of system disturbances, enabling prompt adaptation and response to dynamic changes (see the sketch after this list).

    • Thoroughly validated the algorithm in both simulated (e.g., quadrotor) and experimental (e.g., pendubot) settings, demonstrating its effectiveness and versatility across various platforms.

    • Constructed a pendubot hardware testbed to showcase the practical performance of the developed algorithms and their real-world relevance.
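
As a rough illustration of the disturbance-observer idea referenced above, the sketch below estimates a lumped disturbance from the gap between the measured state update and the nominal model's prediction, then low-pass filters the estimate. The scalar dynamics, gains, and disturbance signal are assumptions for illustration, not the published algorithm.

    # Minimal discrete-time disturbance-observer sketch on a scalar system
    #   x[k+1] = x[k] + dt * (f(x[k]) + u[k] + d[k]),
    # where d is an unknown disturbance. The dynamics, gains, and disturbance
    # signal below are illustrative assumptions, not the published algorithm.
    import numpy as np

    dt, alpha = 0.01, 0.2          # step size and observer low-pass gain

    def f(x):                      # nominal (known) drift dynamics
        return -1.0 * x

    def observer_update(d_hat, x_prev, x_curr, u_prev):
        # The residual between the measured update and the nominal prediction
        # is attributed to the disturbance, then low-pass filtered.
        residual = (x_curr - x_prev) / dt - f(x_prev) - u_prev
        return (1 - alpha) * d_hat + alpha * residual

    if __name__ == "__main__":
        x, d_hat = 0.0, 0.0
        for k in range(2000):
            d_true = 0.5 * np.sin(0.01 * k)          # unknown disturbance
            u = -d_hat                               # cancel the estimated disturbance
            x_next = x + dt * (f(x) + u + d_true)    # "plant" step
            d_hat = observer_update(d_hat, x, x_next, u)
            x = x_next
        print(f"final disturbance estimate: {d_hat:.3f}, true: {d_true:.3f}")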

  • Synergetic Drone Delivery Network in Metropolis

    • Developed YOLO-based convolutional neural network (CNN) detectors for packages on vehicle roofs in simulated environments, improving detection accuracy and response time (see the detection sketch after this list).

    • Constructed comprehensive simulation environments in both Unity and Gazebo, offering versatile testing platforms for algorithm and model development.

    • Designed and built a quadrotor at the hardware level, demonstrating the practical applications and potential of the research in real-world scenarios.

    • Created an adaptive controller to enhance the robustness of robotic systems, ensuring reliable performance under varying conditions.
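
As a rough illustration of the detection step referenced above, the sketch below runs a pretrained YOLO model on a single frame using the ultralytics package. The weights file, image path, and confidence threshold are illustrative assumptions; the project's package-detection model and training data are not shown.

    # Minimal sketch of YOLO-based object detection on a single frame using the
    # ultralytics package. The weights file, image path, and confidence threshold
    # are illustrative assumptions; the project's package-detection model and
    # training data are not included here.
    from ultralytics import YOLO

    model = YOLO("yolov8n.pt")                      # small pretrained detector

    def detect_objects(image_path, conf=0.5):
        """Return (x1, y1, x2, y2, confidence) boxes above the threshold."""
        results = model(image_path, conf=conf)
        boxes = []
        for r in results:
            for b in r.boxes:
                x1, y1, x2, y2 = b.xyxy[0].tolist()
                boxes.append((x1, y1, x2, y2, float(b.conf[0])))
        return boxes

    if __name__ == "__main__":
        # Hypothetical frame from the simulated vehicle-roof camera view.
        for box in detect_objects("rooftop_frame.jpg"):
            print(box)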
