Hello, Welcome

I'm Nahom Abera

A Computer Science Student at Georgia State University.
I'm interested in Software Engineering, Computer Vision, Robotics and Embedded Systems.


Education

Georgia State University | Atlanta, GA

Major: Bachelor of Science in Computer Science

Expected Graduation Date: May 2026

GPA: 4.0/4.0

Relevant Coursework: Algorithms, Data Structures, Cloud Computing, Database Systems, Machine Learning, Mobile Application Development, Linear Algebra, Probability & Statistics, Programming Language Concepts, Software Development, System-Level Programming.

Awards: GSU Campus Atlanta 100% Scholarship

Honors: GSU President’s List for Fall 2022, Spring 2023, Summer 2023, Fall 2023, and Spring 2024 Semesters

Experience

Software Engineering Intern

At: Goldman Sachs

Date: June 2025 – Aug 2025

Skills: N/A

Description:

  • Incoming Software Engineering Intern at Goldman Sachs for Summer 2025 in Dallas, TX.
  • Risk Division, Market Risk Assessment Team.

Undergraduate Researcher

At: Georgia State University

Date: June 2024 – Present

Skills: Arduino, Computer Vision, C++, Embedded Systems, OpenCV, Python, Raspberry Pi, Research, Robotics

Description:

  • Conducting research in MORSE Studio Lab under Dr. Ashwin Ashok on “Development of Autonomous Robot for Soil Temperature and Moisture Data Collection in Agricultural Fields using Robotic Algorithms, Computer Vision and Robot Operating System (ROS).”
  • Integrated the robot’s mechanical parts, including a chassis, DC motors, mecanum wheels, and a 6-DOF robotic arm carrying a DS18B20 temperature sensor, a soil moisture sensor, and a GPS module, all driven by Arduino and Raspberry Pi control units.
  • Developed C++ code for the Arduino microcontroller to acquire and process data from the sensors mounted on the robotic arm and transmit it to the Raspberry Pi for further processing.
  • Developed a Python script on the Raspberry Pi that aggregates soil temperature and moisture readings from the Arduino with the robot’s GPS coordinates and stores them in timestamped CSV files ready for analysis (a simplified sketch follows this list).
  • Integrating ROS nodes in Python on the Raspberry Pi for chassis control, robotic arm movements, camera streaming, and autonomous navigation through real-time decision-making and path planning algorithms.
  • Implementing autonomous navigation using a lawn mower pattern and waypoint-based path planning, with automatic data logging at each waypoint.
  • Applying YOLO-based computer vision algorithms with Python’s OpenCV library to improve autonomous navigation by identifying obstacles, recalculating paths, and performing obstacle avoidance maneuvers.
  • Developing interactive visualizations and analysis tools using Matplotlib, Seaborn, and Streamlit to represent and interpret collected soil data.
  • Building a Flask-based web server hosted on the Raspberry Pi that lets users initiate the data collection process and access CSV data files, visualization maps, and analytical graphs.
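
Below is a simplified sketch of the Raspberry Pi aggregation script referenced above. The serial port, baud rate, Arduino message format, and the GPS helper are assumptions for illustration, not the project's actual interfaces.

```python
# Minimal sketch of the Raspberry Pi data-logging script described above.
# The serial port, baud rate, message format ("temp_C,moisture_pct"), and the
# GPS helper are assumptions, not the project's actual interfaces.
import csv
import os
import time
from datetime import datetime

import serial  # pyserial

ARDUINO_PORT = "/dev/ttyACM0"   # assumed port for the Arduino
BAUD_RATE = 9600
CSV_PATH = "soil_log.csv"


def read_gps_fix():
    """Placeholder for the GPS module read; returns (latitude, longitude)."""
    return 33.7490, -84.3880  # dummy coordinates for illustration


def main():
    arduino = serial.Serial(ARDUINO_PORT, BAUD_RATE, timeout=2)
    write_header = not os.path.exists(CSV_PATH)
    with open(CSV_PATH, "a", newline="") as f:
        writer = csv.writer(f)
        if write_header:
            writer.writerow(["timestamp", "temp_C", "moisture_pct", "lat", "lon"])
        while True:
            line = arduino.readline().decode("utf-8", errors="ignore").strip()
            if not line:
                continue
            try:
                temp_c, moisture = (float(x) for x in line.split(","))
            except ValueError:
                continue  # skip malformed frames
            lat, lon = read_gps_fix()
            writer.writerow([datetime.now().isoformat(), temp_c, moisture, lat, lon])
            f.flush()
            time.sleep(1)


if __name__ == "__main__":
    main()
```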

Undergraduate Teaching Assistant

At: Georgia State University

Date: Jan 2024 – May 2024

Skills: Python, Object-Oriented Programming, Teaching

Description:

  • Assessed and evaluated students’ Python programming assignments and labs for the Principles of Computer Science II course.
  • Held weekly office hours to provide comprehensive support for students on programming concepts.
  • Utilized effective communication skills to address student inquiries regarding their grades and course-related issues.

Projects

Smart Raspberry Pi Garage Door Control System

Date: May - Jun 2024

Technologies used: Python, Dart, Flutter, Firebase, Raspberry Pi

  • Built a system to control and monitor a garage door using a mobile app, Firebase Cloud, and a Raspberry Pi.
  • Set up the hardware with a Raspberry Pi, a 4-channel relay, and magnetic switches connected to the garage door opener.
  • Developed a Python script running on the Raspberry Pi that reads open/close commands from Firebase Firestore, executes them through the relay to drive the garage door, and updates the door status based on the magnetic switch readings (see the sketch after this list).
  • Set up Firebase Firestore for real-time command/status updates and secured login/sign-up with Firebase Authentication.
  • Created a Flutter mobile and web app to send commands and view door status, integrated with Firebase Firestore and Authentication.
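
A minimal sketch of the Raspberry Pi-side control loop described above. The Firestore document path, field names, and GPIO pin numbers are placeholders for illustration, not the project's actual values.

```python
# Sketch of the Raspberry Pi control script described above.
# The Firestore document path, field names, and GPIO pin numbers are assumed.
import time

import firebase_admin
from firebase_admin import credentials, firestore
import RPi.GPIO as GPIO

RELAY_PIN = 17          # assumed BCM pin wired to the relay channel
DOOR_SWITCH_PIN = 27    # assumed BCM pin for the magnetic switch

cred = credentials.Certificate("serviceAccount.json")
firebase_admin.initialize_app(cred)
db = firestore.client()
door_doc = db.collection("garage").document("door")  # assumed document path

GPIO.setmode(GPIO.BCM)
GPIO.setup(RELAY_PIN, GPIO.OUT, initial=GPIO.LOW)
GPIO.setup(DOOR_SWITCH_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)


def pulse_relay():
    """Briefly close the relay to trigger the garage door opener."""
    GPIO.output(RELAY_PIN, GPIO.HIGH)
    time.sleep(0.5)
    GPIO.output(RELAY_PIN, GPIO.LOW)


try:
    while True:
        snapshot = door_doc.get().to_dict() or {}
        if snapshot.get("command") == "toggle":
            pulse_relay()
            door_doc.set({"command": "none"}, merge=True)
        # Report the door state inferred from the magnetic switch.
        is_closed = GPIO.input(DOOR_SWITCH_PIN) == GPIO.LOW
        door_doc.set({"status": "closed" if is_closed else "open"}, merge=True)
        time.sleep(2)
finally:
    GPIO.cleanup()
```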

Vexni: Vision Execution Neural Interface

Date: September 2024 - November 2024

Technologies used: Python, PLY, TensorFlow, OpenCV

  • Developing Vexni, a beginner-friendly Domain-Specific Programming Language (DSL) focused on building a simple Neural Network interface to execute Computer Vision tasks such as Object Detection, Image Classification, Face Detection/Recognition, Optical Character Recognition (OCR) and Image Generation.
  • Designing the language in Python, with a custom lexer, parser, and interpreter built with PLY that compile high-level commands into executable Python code for interacting with deep learning and image processing libraries (a toy example follows this list).
  • Integrating TensorFlow and OpenCV to perform Deep Learning-based vision tasks and real-time image processing.
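
The toy example below illustrates the PLY lexer/parser approach described above. The statement grammar here (detect objects in "img.jpg") is hypothetical; the actual Vexni syntax and code generation are not shown in this portfolio.

```python
# Toy PLY lexer/parser in the style described above; the grammar is hypothetical.
import ply.lex as lex
import ply.yacc as yacc

tokens = ("DETECT", "OBJECTS", "IN", "STRING")
t_ignore = " \t"


def t_DETECT(t):
    r"detect"
    return t


def t_OBJECTS(t):
    r"objects"
    return t


def t_IN(t):
    r"in"
    return t


def t_STRING(t):
    r'"[^"]*"'
    t.value = t.value.strip('"')
    return t


def t_error(t):
    print(f"Illegal character {t.value[0]!r}")
    t.lexer.skip(1)


def p_statement_detect(p):
    'statement : DETECT OBJECTS IN STRING'
    # A real interpreter would dispatch to a vision backend here; this toy
    # version just emits the equivalent Python call as a string.
    p[0] = f"run_object_detection({p[4]!r})"


def p_error(p):
    print("Syntax error in input")


lexer = lex.lex()
parser = yacc.yacc()
print(parser.parse('detect objects in "street.jpg"', lexer=lexer))
# -> run_object_detection('street.jpg')
```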

Jobify.AI

Date: February 2025 - April 2025

Technologies used: Python, React.js, Flask, Beautiful Soup, PostgreSQL, Pinecone, Docker, OpenAI-API, Gmail-API

  • Developing an intelligent system consisting of a web app and a Python backend server to automate job application tracking.
  • Built a Python backend server, hosted on AWS Lambda and run as a cron job, that processes incoming emails using the Gmail API and uses the OpenAI API to classify them as job-application-related and extract job details (company name, title, location, job number, and status); a sketch of this step follows the list.
  • Integrating the backend server to semantically match extracted details against existing applications using a Pinecone vector database (by comparing email embeddings) and to update or create job entries in PostgreSQL hosted on Supabase.
  • Designing a React.js web app, to be hosted on Heroku, that reads data from PostgreSQL and presents it with pie charts, bar charts, line charts, and tables so users can analyze job application status, trends, and progress over time.
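
A hedged sketch of the classification/extraction step referenced in the list above. The model name, prompt wording, and JSON field names are assumptions; the Gmail fetching, Pinecone matching, and PostgreSQL steps are omitted.

```python
# Sketch of the email classification/extraction step described above.
# Model choice, prompt, and field names are assumptions for illustration.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Decide whether the email below is about a job application. "
    'Reply with JSON: {"is_job_related": bool, "company": str, '
    '"title": str, "location": str, "status": str}.\n\nEmail:\n'
)


def extract_job_details(email_text: str) -> dict:
    """Classify an email and pull out job-application fields."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model choice
        messages=[{"role": "user", "content": PROMPT + email_text}],
        response_format={"type": "json_object"},
    )
    return json.loads(response.choices[0].message.content)


if __name__ == "__main__":
    sample = "Thank you for applying to the Software Engineer role at Acme..."
    print(extract_job_details(sample))
```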

IntelliDrone System

Date: Jan – Mar 2024

Technologies used: Python, OpenCV, NumPy

  • Developed a drone system using Python and OpenCV that enables the drone to recognize and respond to hand gestures.
  • Integrated face detection algorithms to enable the drone to lock onto and follow a specific individual's face.
  • Developed a selfie drone subsystem using real-time body pose estimation with OpenCV and NumPy, enabling the drone to autonomously start and stop tracking based on full and half-body poses.
  • Utilized Python and OpenCV to visualize the drone's flight path, providing a graphical representation of its movements for analyzing and refining the tracking algorithms (a rough sketch follows).
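
One way the flight-path visualization could look, as a rough sketch. The points below are synthetic stand-ins for the drone's logged positions, not data from the project.

```python
# Rough sketch of the flight-path visualization idea described above.
# The path data here is synthetic; the real project would feed in the
# drone's recorded positions rather than these made-up points.
import cv2
import numpy as np

# Synthetic (x, y) positions standing in for logged drone coordinates.
path = np.array(
    [[100, 400], [180, 350], [260, 300], [340, 280], [420, 220], [500, 180]],
    dtype=np.int32,
)

canvas = np.full((500, 600, 3), 255, dtype=np.uint8)  # white background
pts = path.reshape((-1, 1, 2))
cv2.polylines(canvas, [pts], isClosed=False, color=(255, 0, 0), thickness=2)
for x, y in path:
    cv2.circle(canvas, (int(x), int(y)), 4, (0, 0, 255), -1)  # waypoint markers

cv2.imwrite("flight_path.png", canvas)
```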

Hexapod Robot

Date: May 2024 - Present

Technologies used: C programming language, MSP432 microcontroller

  • Currently developing a six-legged hexapod robot with a 3D printed structure, using buck converters and separate power supplies for 12 high-torque servos and the MSP432 microcontroller.
  • Developing C code to manage servo positions, timing sequences, and state changes for various gaits and motions.
  • Implementing I2C communication and PWM control in the MSP432 microcontroller, and utilizing GPIO for button presses and LED indications.

FaceTrace

Date: Feb 2024

Technologies used: Python, OpenCV

  • Built FaceTrace, a real-time face tracking program implemented in Python using the OpenCV library.
  • The app captures video from a webcam and employs a pre-trained face cascade classifier to detect faces in each frame.
  • Detected faces are outlined with purple rectangles, and their centers are marked with black circular dots.
  • The program continuously prints the coordinates of the center and the area of the largest detected face (a compact sketch follows this list).
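
A compact sketch of the FaceTrace loop described above, using OpenCV's bundled frontal-face Haar cascade. The detection parameters and drawing sizes are assumptions rather than the program's exact values.

```python
# Sketch of the FaceTrace loop: detect faces, draw purple boxes and black
# center dots, and print the center/area of the largest face.
import cv2

cascade_path = cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
face_cascade = cv2.CascadeClassifier(cascade_path)

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    largest = None
    for (x, y, w, h) in faces:
        cx, cy = x + w // 2, y + h // 2
        cv2.rectangle(frame, (x, y), (x + w, y + h), (255, 0, 255), 2)  # purple box
        cv2.circle(frame, (cx, cy), 4, (0, 0, 0), -1)                   # black center dot
        if largest is None or w * h > largest[2]:
            largest = (cx, cy, w * h)

    if largest is not None:
        print(f"center={largest[:2]} area={largest[2]}")  # largest face info

    cv2.imshow("FaceTrace", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```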

Motion Detector

Date: Sep 2024

Technologies used: Python, OpenCV

  • Developed a real-time motion detection application using Python and OpenCV, capturing live video streams and detecting moving objects.
  • Utilized a K-Nearest Neighbors (KNN)-based background subtraction model and morphological transformations to isolate and track moving objects within the video frames (sketched below).
  • Implemented bounding box functionality to highlight and track the detected motion across frames.
  • Integrated a dynamic user interface for real-time video display, with options to terminate the program using keyboard commands.
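
A sketch of the KNN background-subtraction pipeline described above. The threshold value, kernel size, and minimum contour area are assumed values for illustration.

```python
# Sketch of the motion-detection pipeline: KNN background subtraction,
# morphological cleanup, and bounding boxes around moving objects.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)
subtractor = cv2.createBackgroundSubtractorKNN(detectShadows=True)
kernel = np.ones((5, 5), np.uint8)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)
    # Drop shadow pixels (value 127) and clean up the mask morphologically.
    _, mask = cv2.threshold(mask, 200, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for contour in contours:
        if cv2.contourArea(contour) < 800:  # ignore tiny blobs (assumed cutoff)
            continue
        x, y, w, h = cv2.boundingRect(contour)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

    cv2.imshow("Motion Detector", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```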

e-Signature

Date: August 2024

Technologies used: Python, OpenCV, Flask

  • Developed a Python-based application using Flask and OpenCV to convert handwritten signatures into transparent PNG files for digital use.
  • Implemented image processing techniques, including grayscale conversion and binary thresholding, to create high-quality digital signatures (the core step is sketched below).
  • Built a user-friendly web interface with file upload functionality and real-time processing of signature images, providing multiple output options.
  • Enabled users to download signature images in various thresholding styles for seamless integration into PDF documents.
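
A sketch of the core image processing step, with the Flask upload/download handling omitted. The threshold value and file names are assumptions.

```python
# Sketch of the signature-to-transparent-PNG step described above.
import cv2
import numpy as np


def signature_to_png(input_path: str, output_path: str, thresh: int = 180) -> None:
    """Turn a photographed signature into a transparent PNG of the ink."""
    image = cv2.imread(input_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Dark ink becomes white in the inverted binary mask.
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY_INV)

    # Build a BGRA image: black ink, alpha channel taken from the mask.
    bgra = np.zeros((*mask.shape, 4), dtype=np.uint8)
    bgra[:, :, 3] = mask  # transparent everywhere except the strokes
    cv2.imwrite(output_path, bgra)


# Example usage (assumes signature.jpg exists in the working directory).
signature_to_png("signature.jpg", "signature.png")
```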

Uber Eats Clone

Date: Mar 2024

Technologies used: Dart, Flutter, Firebase

  • Developed a food delivery app for Android and iOS devices using Flutter and Dart, featuring a dynamic restaurant listing, detailed menu views, and Mapbox integration for location mapping.
  • Developed order management features including cart review and order history tracking using Firebase Firestore database.
  • Utilized Firebase Authentication to manage user accounts and support email/password login and third-party providers.

Speech Craft

Date: Feb - Mar 2024

Technologies used: Dart, Flutter

  • Led a team of two in developing a full-stack app for Android and iOS devices targeting English speakers learning Spanish.
  • Integrated interactive lessons, quizzes, and vocabulary tools with Flutter's animation capabilities.
  • The design focuses on user engagement and functionality, featuring a login/registration page with toggling options, email and password validation, and vibrant visuals aligned with Material Design principles.

Matchtoria

Date: Mar 2024

Technologies used: Dart, Flutter

  • Developed a card matching game for Android and iOS devices using Dart and Flutter framework.
  • The game uses Flutter's GridView to create a grid of 16 cards (eight unique image pairs), the Provider package for state management, and a floating action button to restart the game at any time.
  • The objective is to flip pairs of cards and match identical images; a rotation animation and a congratulatory message are displayed upon winning.

Linear Regression and Data Visualization

Date: May 2023

Technologies used: Python, NumPy, Pandas, Matplotlib

  • Utilized Python to train a linear regression model and create data visualizations on a product dataset.
  • Cleaned the raw data using Pandas, generated descriptive statistics, and created informative charts using Matplotlib to gain insights into product stock levels and prices.
  • Developed a linear regression model to predict product prices using product stock levels as input.
  • Implemented an 80/20 train-test split and calculated mean squared error to evaluate model accuracy (see the sketch below).
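
A minimal sketch of this workflow, using scikit-learn (listed under Skills) for the split and the fit. The CSV path and column names are assumptions.

```python
# Sketch of the regression workflow described above; file and column names assumed.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("products.csv").dropna()   # assumed dataset file
X = df[["stock_level"]]                     # predictor (assumed column name)
y = df["price"]                             # target (assumed column name)

# The 80/20 split mentioned in the write-up.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = LinearRegression().fit(X_train, y_train)
mse = mean_squared_error(y_test, model.predict(X_test))
print(f"Test MSE: {mse:.2f}")
```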

Skills

Programming Languages
   
C, C++, Dart, Python, SQL

Technologies and Tools

Android Studio, Arduino, Azure, Docker, Firebase, Flask, Flutter
Git, GitHub, HTML/CSS, Linux, Matplotlib, MySQL, NumPy, OpenCV
Pandas, PostgreSQL, Raspberry Pi, React.js, Robot Operating System (ROS), Scikit-learn

Core Competencies

Advanced Mathematics, Algorithms, Data Structures
Computer Vision, Embedded Systems, Mobile App Development
Object-Oriented Programming, Robotics, Software Development, Problem Solving

Interests

What am I interested in?

Robotics is something I’ve always been fascinated by, and that fascination is reflected in the research and projects I’ve done and am currently working on. I believe robotics is one of the fastest-growing fields in the tech industry, next to AI. The immense growth of AI in the past few years has played a crucial role in the rise of robotics, as the two fields are intertwined. Many people think robotics only deals with building human-, dog-, or bird-like robots, but that's not the case. Robotics plays a crucial role in healthcare, manufacturing, agriculture, space and deep-sea exploration, and more. Take self-driving cars, for instance: without the principles and applications of robotics, it would not be possible to build them. Key robotics principles such as perception, path planning, simultaneous localization and mapping (SLAM), and motion control enable the vehicle to map and understand its environment and make real-time decisions for efficient and safe autonomous driving.

Contact

Do you want to leave a comment, a collaboration request, or any other inquiry? Write it below and I'll get back to you.