
      Teaching Experience - Graduate Student Instructor

      ROB 498/599 - Computational HRI - Fall 2023

      ROB 498/599 - Design HRI - Winter 2024

      2D and 3D mapping techniques, SLAM (online vs. full), Amazon Astro, Hello Robot Stretch

      Mapping and Map Representations [ Mapping slides ]:

      Covering mapping and map representations, I presented the essential concepts: 2D occupancy grids for discretizing planar environments, 3D occupancy maps for volumetric spaces, depth maps for per-pixel depth, point clouds for raw 3D data, height maps for elevation, landmark maps for distinctive features, distance maps for path planning, surface representations for geometry, and multi-map representations that combine several of these to suit different robotics and computer vision tasks.
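
      A 2D occupancy grid is compact enough to sketch directly; the snippet below is a minimal, hypothetical example (not course code) that keeps per-cell log-odds and updates them from lidar hits and pass-throughs.

```python
# Minimal 2D occupancy-grid sketch (hypothetical example, not from the course material):
# each cell stores log-odds of being occupied and is updated from range readings.
import numpy as np

class OccupancyGrid2D:
    def __init__(self, width, height, resolution):
        self.res = resolution                      # meters per cell
        self.logodds = np.zeros((height, width))   # log-odds 0 == probability 0.5

    def update_cell(self, x, y, hit, l_occ=0.85, l_free=-0.4):
        """Update one cell from a lidar return (hit) or a beam passing through (free)."""
        i, j = int(y / self.res), int(x / self.res)
        if 0 <= i < self.logodds.shape[0] and 0 <= j < self.logodds.shape[1]:
            self.logodds[i, j] += l_occ if hit else l_free

    def probability(self):
        """Convert log-odds back to occupancy probabilities."""
        return 1.0 - 1.0 / (1.0 + np.exp(self.logodds))

grid = OccupancyGrid2D(width=100, height=100, resolution=0.05)
grid.update_cell(1.0, 2.0, hit=True)   # a beam endpoint marks its cell as more occupied
```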

      • [ Mapping_Recording ]

      Simultaneous Localization and Mapping [ SLAM slides ]:

      I explained SLAM comprehensively, starting with online methods such as EKF SLAM and FastSLAM, then moving on to pose graph optimization and loop closure. I then discussed full SLAM techniques such as Graph SLAM, ORB-SLAM, factor graphs, and LIO-SAM, highlighting their role in mapping and navigation. The session culminated in a practical Amazon Astro robot demonstration, illustrating the real-world application of SLAM in autonomous systems.
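
      As a concrete illustration of the online SLAM material, the sketch below shows the EKF SLAM prediction step under a simple velocity motion model. It is a hedged NumPy example, not the lecture's code; the noise matrix `R_pose` is an assumed input.

```python
# Hedged sketch of the EKF-SLAM prediction step (illustrative only).
import numpy as np

def ekf_slam_predict(mu, Sigma, v, w, dt, R_pose):
    """Propagate the robot pose (first 3 state entries) with a velocity motion model.
    mu: state mean [x, y, theta, m1x, m1y, ...];  Sigma: full covariance.
    v, w: linear/angular velocity command;  R_pose: 3x3 motion noise."""
    n = mu.shape[0]
    theta = mu[2]

    # The motion update changes only the robot pose, not the landmarks.
    mu = mu.copy()
    mu[0] += v * dt * np.cos(theta)
    mu[1] += v * dt * np.sin(theta)
    mu[2] += w * dt

    # Jacobian of the motion model w.r.t. the full state (identity for landmarks).
    G = np.eye(n)
    G[0, 2] = -v * dt * np.sin(theta)
    G[1, 2] =  v * dt * np.cos(theta)

    # Motion noise enters only the pose block of the covariance.
    R = np.zeros((n, n))
    R[:3, :3] = R_pose

    Sigma = G @ Sigma @ G.T + R
    return mu, Sigma
```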

      • [ SLAM_Recording ]

      Autonomous Robot Algorithms

      ROB 422 / EECS 465 - Introduction to Algorithmic Robotics

      A-star Planner, RRT-Connect Planner, Point Feature Histogram with ICP | Python

      Tools: PyBullet, Python, PR2 mobile manipulator

      A-star Planner: An A-star planner over an 8-connected grid finds the shortest path from the start to the goal. Here, a PR2 mobile manipulator is shown finding the shortest path and traversing it despite obstacles in the environment.
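
      For reference, a minimal version of such an A-star search over an 8-connected grid might look like the following. This is an illustrative sketch, not the assignment solution; the grid encoding and Euclidean heuristic are assumptions.

```python
# Minimal A* over an 8-connected grid (illustrative sketch).
import heapq, math

def astar_8connected(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = obstacle; start/goal: (row, col) tuples."""
    moves = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]
    h = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])   # Euclidean heuristic
    open_set = [(h(start, goal), 0.0, start)]
    parent, g_cost, closed = {start: None}, {start: 0.0}, set()
    while open_set:
        _, g, node = heapq.heappop(open_set)
        if node in closed:
            continue
        closed.add(node)
        if node == goal:                                     # reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for dr, dc in moves:
            r, c = node[0] + dr, node[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                ng = g + math.hypot(dr, dc)                  # diagonal steps cost sqrt(2)
                if ng < g_cost.get((r, c), float('inf')):
                    g_cost[(r, c)] = ng
                    parent[(r, c)] = node
                    heapq.heappush(open_set, (ng + h((r, c), goal), ng, (r, c)))
    return None

path = astar_8connected([[0, 0, 0], [1, 1, 0], [0, 0, 0]], (0, 0), (2, 0))
```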

      RRT Planner: Rapidly-exploring Random Trees (RRT) is a motion planning algorithm that efficiently explores the configuration space of a robot to find a feasible path between the start and goal configurations. Here, a PR2 mobile manipulator uses RRT-Connect to find a feasible path, and the resulting trajectory is then post-processed to obtain a shorter path.
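
      A stripped-down, single-tree RRT in a 2D configuration space conveys the core extend step that RRT-Connect applies from both trees. The sampling bounds, step size, and `collision_free` callback below are assumptions for illustration, not the assignment's API.

```python
# Simplified single-tree RRT sketch in a 2D configuration space (RRT-Connect grows
# two such trees and repeatedly tries to connect them; this is illustrative only).
import random, math

def rrt(start, goal, collision_free, step=0.2, iters=5000, goal_bias=0.05):
    """start/goal: (x, y); collision_free(p, q) -> bool checks the segment p-q."""
    nodes, parent = [start], {start: None}
    for _ in range(iters):
        # Sample a random configuration, occasionally biased toward the goal.
        q_rand = goal if random.random() < goal_bias else (random.uniform(0, 10),
                                                           random.uniform(0, 10))
        # Find the nearest tree node and take one step toward the sample.
        q_near = min(nodes, key=lambda q: math.dist(q, q_rand))
        d = math.dist(q_near, q_rand)
        if d == 0.0:
            continue
        q_new = (q_near[0] + step * (q_rand[0] - q_near[0]) / d,
                 q_near[1] + step * (q_rand[1] - q_near[1]) / d)
        if collision_free(q_near, q_new):
            nodes.append(q_new)
            parent[q_new] = q_near
            if math.dist(q_new, goal) < step and collision_free(q_new, goal):
                parent[goal] = q_new            # reached: walk parents back to start
                path, q = [], goal
                while q is not None:
                    path.append(q)
                    q = parent[q]
                return path[::-1]
    return None

path = rrt((1.0, 1.0), (9.0, 9.0), collision_free=lambda p, q: True)  # obstacle-free toy run
```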

      Point Cloud Registration: The Point Feature Histogram (PFH), a descriptor used in point cloud analysis, characterizes the local surface properties of a point cloud. These descriptors support correspondence estimation between the source and target point clouds, which are then aligned using the Iterative Closest Point (ICP) algorithm.
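
      The alignment step of ICP reduces to a closed-form rigid-transform fit once correspondences (for example from PFH matching) are available. The sketch below uses the standard SVD/Kabsch solution and is illustrative rather than the project code.

```python
# One ICP alignment step: given corresponding points, compute the rigid transform
# with the SVD (Kabsch) method.
import numpy as np

def best_fit_transform(source, target):
    """source, target: (N, 3) arrays of corresponding points. Returns R (3x3), t (3,)."""
    src_centroid = source.mean(axis=0)
    tgt_centroid = target.mean(axis=0)
    H = (source - src_centroid).T @ (target - tgt_centroid)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # fix a possible reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

# A full ICP iteration would re-estimate correspondences (nearest neighbors or PFH
# matches), call best_fit_transform, and apply:  source = source @ R.T + t
```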

      • [ A-star_Planner ]
      • [ RRT_Planner ]
      • [ Point_Cloud_Registration ]

      Autonomous Robotic Arm [ Report ]

      ROB 550 - Robotic Systems Lab

      Computer Vision, Robotic Arm Kinematics, Motion Planning, Camera and Lidar Calibration | ROS, Python

      Tools: 5-DoF ReactorX Robotic Arm, Robot Operating System, OpenCV, Intel RealSense L515

      This project leverages an RGB-LiDAR camera system to detect blocks and estimate depth, incorporating Forward Kinematics (FK), Inverse Kinematics (IK), and motion planning to enable autonomous grasping with a 5-DoF robotic arm. The RGB stream captures object color and texture, while the lidar measures distances, enabling 3D block detection; depth estimation builds a 3D map for precise positioning. FK computes the arm's end-effector pose, while IK calculates the joint angles needed for grasping. Motion planning generates collision-free paths, culminating in autonomous block grasping, a step toward robots handling complex tasks in unstructured environments.
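
      Forward kinematics for such an arm is a chain of per-joint homogeneous transforms. The sketch below uses standard DH parameters with placeholder values; the real ReactorX link parameters differ and are not reproduced here.

```python
# Forward kinematics via standard DH parameters (placeholder values, not the real arm's).
import numpy as np

def dh_transform(theta, d, a, alpha):
    """Homogeneous transform for one joint in the standard DH convention."""
    ct, st, ca, sa = np.cos(theta), np.sin(theta), np.cos(alpha), np.sin(alpha)
    return np.array([[ct, -st * ca,  st * sa, a * ct],
                     [st,  ct * ca, -ct * sa, a * st],
                     [ 0,       sa,       ca,      d],
                     [ 0,        0,        0,      1]])

def forward_kinematics(joint_angles, dh_table):
    """joint_angles: list of joint values; dh_table: list of (d, a, alpha) per joint."""
    T = np.eye(4)
    for theta, (d, a, alpha) in zip(joint_angles, dh_table):
        T = T @ dh_transform(theta, d, a, alpha)
    return T  # 4x4 end-effector pose; position is T[:3, 3]

# Placeholder DH table (NOT the actual ReactorX parameters).
dh_table = [(0.10, 0.0, np.pi / 2), (0.0, 0.20, 0.0), (0.0, 0.20, 0.0),
            (0.0, 0.0, np.pi / 2), (0.15, 0.0, 0.0)]
pose = forward_kinematics([0.0, 0.3, -0.5, 0.2, 0.0], dh_table)
```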

      • [ Autonomous_Manipulation ]
      • [ Kinematics ]
      • [ Vision ]
      • [ Stacking ]

      Autonomous Mobile Robot [ Report ]

      ROB 550 - Robotic Systems Lab

      Sensor Fusion, Occupancy grid mapping, A-star planning, Frontier Exploration | C++

      Tools: MBot, Raspberry Pi, OpenCV, RPLidar

      In this project, we developed an MBot equipped with wheel encoders and an IMU for odometry estimation, then integrated 2D lidar data with a particle filter localization system, along with A-star path planning and frontier exploration, to achieve Simultaneous Localization and Mapping (SLAM). The wheel encoders and IMU provide the data for tracking the robot's motion and orientation, while the 2D lidar captures environmental information. Particle filter localization uses this sensor data to estimate the robot's position and orientation in real time. A-star path planning then generates optimal paths for navigating the surroundings, and frontier exploration autonomously identifies and explores unknown areas, together providing the SLAM capabilities needed for autonomous mobile robots in diverse environments.
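
      The localization loop can be summarized as predict, weight, resample. The sketch below is a toy NumPy version with a scalar range likelihood and an assumed `expected_range_fn` ray-cast callback, whereas the project's C++ filter scores full lidar scans against the occupancy grid.

```python
# Minimal particle-filter update sketch (illustrative, not the project's C++ filter).
import numpy as np

def particle_filter_step(particles, weights, odom_delta, measured_range,
                         expected_range_fn, motion_noise=0.02, range_sigma=0.1):
    """particles: (N, 3) array of [x, y, theta]; weights: (N,) normalized weights."""
    # 1. Propagate each particle with odometry plus noise (prediction).
    particles = particles + odom_delta + np.random.normal(0, motion_noise,
                                                          particles.shape)
    # 2. Weight by how well each particle explains the measurement (correction).
    expected = np.array([expected_range_fn(p) for p in particles])
    weights = weights * np.exp(-0.5 * ((measured_range - expected) / range_sigma) ** 2)
    weights = weights / np.sum(weights)
    # 3. Resample when the effective sample size drops too low.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```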

      • [ Autonomous_Navigation ]

      Masked LIOSAM: LiDAR SLAM in Dynamic Environments [ Code ]

      ROB 530 - Mobile Robotics

      Deep Learning, Point Cloud Detection, Factor Graphs, Trajectory Evaluation, SLAM | ROS, C++

      Tools: Robot Operating System, Georgia Tech Smoothing and Mapping (GTSAM) library, RViz, KITTI dataset

      Simultaneous Localization and Mapping (SLAM) is not robust in dynamic environments: when objects move and change the surroundings, its effectiveness is limited. This is particularly challenging for SLAM methods that depend on visual or point cloud data, since dynamic objects can be registered across multiple scans. To address this issue, we propose a method that combines Sparsely Embedded Convolutional Detection (SECOND) with LiDAR Inertial Odometry via Smoothing and Mapping (LIO-SAM). Our approach masks out the point cloud features detected by SECOND in each Velodyne scan, then passes the occluded scan to LIO-SAM for map building and state estimation. Our experimental results demonstrate the effectiveness of the approach, which achieves good localization accuracy compared to the original LIO-SAM method. By enabling SLAM to work more effectively in dynamic environments, this work could enhance the safety and reliability of autonomous systems.
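
      The masking idea itself is simple: drop lidar points that fall inside the detector's 3D boxes before the scan reaches the SLAM back end. The sketch below uses axis-aligned boxes as a simplification (SECOND outputs oriented boxes, and the project implements this in C++/ROS), so the box format here is an assumption.

```python
# Hedged sketch of dynamic-point masking before the scan is passed to LIO-SAM.
import numpy as np

def mask_dynamic_points(points, boxes):
    """points: (N, 3) lidar scan; boxes: list of axis-aligned (min_xyz, max_xyz)
    pairs approximating the detections. Returns only the static points."""
    keep = np.ones(len(points), dtype=bool)
    for box_min, box_max in boxes:
        inside = np.all((points >= box_min) & (points <= box_max), axis=1)
        keep &= ~inside            # remove points belonging to a detected object
    return points[keep]

scan = np.random.rand(1000, 3) * 20.0
static_scan = mask_dynamic_points(scan, [(np.array([5, 5, 0]), np.array([7, 8, 2]))])
```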

      • [ Presentation ]
      • [ Liosam_Pointcloud_Detection ]
      • [ Dynamic_Feature_Removal ]
      • [ Masked_Liosam_Birdeye ]

      Push and Grasp Manipulation

      ROB 599 - Introduction to Robotic Manipulation

      Planar Pushing, Antipodal Grasping, KUKA | Python

      Tools: PyBullet, KUKA manipulator, robot push-and-grasp simulation

      The set of objects to be stacked is fixed and consists of red, green, and blue rectangular polygons and a cyan box. Since the objects are placed very close together, they must first be pushed apart by a certain distance before they can be grasped. The poses of all the objects are given, and the pushing and grasping of the box was implemented and simulated as shown below.
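
      A rough sketch of the pushing step is shown below: given two object positions that are too close, it computes a short straight-line push that restores enough clearance to grasp. The function name, clearance, and step values are assumptions for illustration, not the assignment's API.

```python
# Illustrative sketch of planning a separating push before grasping.
import numpy as np

def plan_push(obj_pose, neighbor_pose, clearance=0.10, push_step=0.05):
    """obj_pose, neighbor_pose: (x, y) positions on the table plane.
    Returns (start, end) end-effector waypoints for the push, or None."""
    obj = np.asarray(obj_pose, dtype=float)
    nbr = np.asarray(neighbor_pose, dtype=float)
    gap = np.linalg.norm(obj - nbr)
    if gap >= clearance:
        return None                               # already far enough apart to grasp
    direction = (obj - nbr) / gap                 # push the object away from its neighbor
    start = obj - direction * push_step           # approach from the neighbor's side
    end = obj + direction * (clearance - gap)     # push until the clearance is met
    return start, end

waypoints = plan_push((0.50, 0.20), (0.46, 0.20))
```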

      • [ Push_and_Grasp_Manipulator ]

      2D to 3D Bounding Boxes using Neural Nets

      EECS 442 - Computer Vision

      Computer Vision, Convolutional Neural Network, Camera matrix | Python

      Tools: PyTorch, YOLOv5, Google Colab, KITTI dataset

      In this project, the primary challenge was to perform 3D object detection from a single 2D image, a critical task for safe navigation in autonomous driving systems. Inspired by Mousavian et al., the approach switched the 2D detector from Faster R-CNN to YOLO and the regressor backbone from VGG to MobileNetV2. The core of the project was estimating 3D bounding boxes so the system can better understand its surroundings. By using YOLO for 2D detection and MobileNetV2 as the regressor, we achieved improved performance. The effectiveness of the method was quantified by constructing a bird's-eye view of the scenes and evaluating the intersection over union (IoU).
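
      A much-simplified way to see the geometry is to back-project the 2D box center with the camera intrinsics once a depth estimate exists. Mousavian et al. instead recover the translation from the regressed dimensions and orientation plus the tight 2D-3D box constraint, so the sketch below is only an illustrative shortcut; the intrinsic values are KITTI-like assumptions.

```python
# Simplified sketch of lifting a 2D detection to a 3D box center.
import numpy as np

def backproject_center(bbox_2d, depth, K):
    """bbox_2d: (u_min, v_min, u_max, v_max) in pixels; depth: estimated distance (m);
    K: 3x3 camera intrinsic matrix. Returns the 3D box center in the camera frame."""
    u = 0.5 * (bbox_2d[0] + bbox_2d[2])
    v = 0.5 * (bbox_2d[1] + bbox_2d[3])
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])   # viewing ray through the pixel
    return depth * ray / ray[2]                      # scale so that Z equals the depth

K = np.array([[721.5, 0.0, 609.6],                   # KITTI-like intrinsics (assumed)
              [0.0, 721.5, 172.9],
              [0.0,   0.0,   1.0]])
center = backproject_center((600, 150, 700, 250), depth=15.0, K=K)
```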

      • [ 2D_to_3D_Estimation ]