Ganesh Kommana

About Me

I am an impact-driven Machine Learning Engineer and a graduate in Electrical and Electronics Engineering from NIT Tiruchirappalli (NIT Trichy). I thrive on bridging the gap between complex theory and production-grade, end-to-end AI systems. My work is defined by a systematic, hands-on approach to mastering technology, from foundational algorithms to current state-of-the-art methods.

This passion is showcased in my detailed learning paths, where I've built everything from 17+ core ML/DL models to sophisticated multi-agent systems with CrewAI, LangGraph, and AutoGen. This deep, project-based knowledge extends to complete Agentic RAG pipelines and end-to-end MLOps on Azure, skills I now apply at o9 Solutions to automate and optimize real-world supply chain forecasting.


Work Experience

Consultant

o9 Solutions, Bengaluru | Jun 2024 – Present

As a Consultant at o9 Solutions, I optimize supply chain processes by enhancing safety stock and production planning with the o9 Supply Chain Solver, cutting manual effort by 10 hours per week. I build Python-based forecasting models that have reduced error rates by 15%, and I develop plugins that streamline workflows, reducing processing time from 3,000 to 500 seconds. My leadership in root cause analysis and collaboration with cross-functional teams have strengthened client trust, leading to project renewals and new upsell opportunities.

Skills & Technologies

🐍 Python

πŸ€– Machine Learning

🧠 Deep Learning

πŸ’¬ Generative AI

πŸ€–πŸ€– Multi-Agent AI

πŸ”— LangChain

πŸ•ΈοΈ LangGraph

πŸš€ CrewAI

βš™οΈ AutoGen

🎨 Gradio

πŸ—ƒοΈ SQL

πŸ“Š DSA

☁️ Azure

πŸ”„ CI/CD

πŸ“¦ MLOps

Featured Projects

My ML Learning Path

I used to see Machine Learning as a collection of complex theories. When I dove into the IBM Machine Learning with Python course, I realized how much more there was to learn. To solidify my understanding, I undertook this comprehensive project to build 17 core machine learning models from scratch.

πŸ”Ή Supervised Learning

| Algorithm | Description | GitHub Link |
|---|---|---|
| Linear Regression | A model to predict vehicle CO2 emissions based on features like engine size. | View Code |
| Logistic Regression | A customer churn prediction model that analyzes feature coefficients. | View Code |
| Decision Trees | A multi-class classifier to determine the appropriate medication for patients. | View Code |
| Regression Trees | A model trained on a NYC dataset to predict the tip amount for a taxi ride. | View Code |
| Random Forests & XGBoost | A performance comparison of two ensemble models for predicting house prices. | View Code |
| Support Vector Machines | A fraud detection system for classifying credit card transactions. | View Code |
| K-Nearest Neighbors (KNN) | A classifier to predict a telecom customer's service usage category. | View Code |
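To give a flavour of these notebooks, here is a minimal sketch of the linear regression setup, assuming a hypothetical `fuel_consumption.csv` with `ENGINESIZE` and `CO2EMISSIONS` columns (the actual dataset and column names in the repo may differ):

```python
# Minimal linear regression sketch (file and column names are illustrative assumptions).
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

df = pd.read_csv("fuel_consumption.csv")   # hypothetical dataset
X = df[["ENGINESIZE"]]                     # predictor: engine size
y = df["CO2EMISSIONS"]                     # target: CO2 emissions

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = LinearRegression().fit(X_train, y_train)
print("Coefficient:", model.coef_[0])
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```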

πŸ”Ή Unsupervised Learning

| Algorithm | Description | GitHub Link |
|---|---|---|
| K-Means Clustering | An implementation on a custom-generated dataset to group data points. | View Code |
| DBSCAN & HDBSCAN | Density-based models to find geographical clusters of museum locations. | View Code |
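A minimal K-Means sketch on a custom-generated dataset, in the spirit of the clustering notebooks above (the generation parameters here are illustrative):

```python
# K-Means sketch on a synthetic blob dataset (parameters are illustrative).
from sklearn.datasets import make_blobs
from sklearn.cluster import KMeans

X, _ = make_blobs(n_samples=500, centers=4, cluster_std=0.8, random_state=42)

kmeans = KMeans(n_clusters=4, n_init=10, random_state=42).fit(X)
print("Cluster centers:\n", kmeans.cluster_centers_)
print("First 10 labels:", kmeans.labels_[:10])
```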

πŸ”Ή Model Optimization & Evaluation

| Technique | Description | GitHub Link |
|---|---|---|
| Evaluating Classification Models | Predicting breast cancer tumor malignancy to practice metric interpretation. | View Code |
| Evaluating Regression Models | Using a random forest regressor to interpret performance and feature importances. | View Code |
| Evaluating Clustering Models | Assessing K-Means results on synthetic data to build intuition. | View Code |
| Regularization | Comparing Ridge, Lasso, and ElasticNet on datasets with and without outliers. | View Code |
| ML Pipelines & GridSearchCV | Building and optimizing a complex classification pipeline with automated tuning. | View Code |
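The pipeline and tuning work above can be summarized by a sketch like the following, using scikit-learn's built-in breast cancer dataset as a stand-in and an illustrative parameter grid:

```python
# Pipeline + GridSearchCV sketch (dataset and grid are illustrative choices).
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import GridSearchCV

X, y = load_breast_cancer(return_X_y=True)

pipe = Pipeline([
    ("scaler", StandardScaler()),   # scale features before the SVM
    ("clf", SVC()),
])

param_grid = {"clf__C": [0.1, 1, 10], "clf__kernel": ["linear", "rbf"]}
search = GridSearchCV(pipe, param_grid, cv=5, scoring="accuracy")
search.fit(X, y)

print("Best params:", search.best_params_)
print("Best CV accuracy:", round(search.best_score_, 3))
```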

πŸ”Ή Dimensionality Reduction

| Algorithm | Description | GitHub Link |
|---|---|---|
| Principal Component Analysis | Implemented to project data onto principal axes and reduce dimensions. | View Code |
| t-SNE & UMAP Comparison | Comparing advanced visualization techniques against PCA on a 3D dataset. | View Code |
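A minimal PCA sketch in the same spirit, using the Iris dataset as a stand-in for the data used in the notebook:

```python
# PCA sketch: project a dataset onto its first two principal axes (dataset is a stand-in).
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
X_scaled = StandardScaler().fit_transform(X)   # PCA is sensitive to feature scale

pca = PCA(n_components=2)
X_2d = pca.fit_transform(X_scaled)

print("Explained variance ratio:", pca.explained_variance_ratio_)
print("Projected shape:", X_2d.shape)
```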

My DSA Learning Path

After 160 days of sheer consistency and dedication, I completed the GFG 160 Days of DSA Challenge! This wasn’t just a coding journeyβ€”it was a test of discipline and resilience. From debugging endless errors to finally getting that green tick, every single day taught me something new.

Topics Mastered

Arrays & Strings, Searching & Sorting, Recursion, Linked Lists, Stacks & Queues, Trees & Graphs, Dynamic Programming, Sliding Window, Greedy Algorithms, Tries
View My 160-Day Streak
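As a small taste of the sliding-window pattern from the topics above (an illustrative example, not one of the GFG challenge problems):

```python
# Sliding-window sketch: maximum sum of any contiguous subarray of size k.
def max_window_sum(nums, k):
    window = sum(nums[:k])               # sum of the first window
    best = window
    for i in range(k, len(nums)):
        window += nums[i] - nums[i - k]  # slide: add the new element, drop the oldest
        best = max(best, window)
    return best

print(max_window_sum([2, 1, 5, 1, 3, 2], 3))  # -> 9
```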

My DL & RL Learning Path

This portfolio documents my systematic journey through Deep Learning and Reinforcement Learning, starting from the single neuron and culminating in advanced architectures like CNNs, RNNs, and GANs.

1. Neural Network Foundations

| Project | Description | GitHub Link |
|---|---|---|
| Neuron Computations | Modeling logic gates with single neurons to build intuition for matrix operations. | View Code |
| MLP for Digit Recognition | Implementing an MLP using Scikit-learn for handwritten digit recognition. | View Code |
| Gradient Descent Demo | Comparing Batch vs. Stochastic (SGD) approaches and the impact of learning rate. | View Code |
| Backpropagation from Scratch | A hands-on implementation of backpropagation to train an MLP. | View Code |
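A minimal sketch of the single-neuron idea from the first entry above: an AND gate realised as a weighted sum, a bias, and a step activation (the weights shown are just one valid choice):

```python
# Single-neuron AND gate sketch: weighted sum plus bias through a step activation.
import numpy as np

def neuron(x, w, b):
    return int(np.dot(w, x) + b > 0)   # step activation

w, b = np.array([1.0, 1.0]), -1.5      # one weight/bias choice that realizes AND
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", neuron(np.array(x), w, b))
```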

2. Building Models with Keras & TensorFlow

| Project | Description | GitHub Link |
|---|---|---|
| Keras API Deep Dive | Building models using Sequential, Functional, and Model Sub-classing APIs. | View Code |
| Optimizers in Gradient Descent | A comparative analysis of SGD, Momentum, RMSprop, and Adam. | View Code |
| Regularization Techniques | Exploring L1, L2, Dropout, and Batch Normalization to prevent overfitting. | View Code |
| GPU-Accelerated Deep Learning | Leveraging GPU acceleration with TensorFlow to reduce training time. | View Code |
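A compact Keras sketch combining several of the ideas above: the Sequential API, Dropout regularization, and the Adam optimizer, with MNIST as a stand-in dataset (hyperparameters are illustrative):

```python
# Keras Sequential sketch: Dropout regularization and the Adam optimizer (illustrative).
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),                  # regularization against overfitting
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=128, validation_split=0.1)
```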

3. Convolutional Neural Networks (CNNs)

| Project | Description | GitHub Link |
|---|---|---|
| Image Convolutions | Applying filters to extract features like edges from flower images. | View Code |
| CNN Core Concepts | Exploring how padding, stride, and pooling layers affect feature maps. | View Code |
| CNN for CIFAR-10 | An end-to-end project building and evaluating a CNN to classify images. | View Code |
| Transfer Learning (Waste) | Using a pre-trained VGG16 model to classify waste as organic or recyclable. | View Code |
| Transfer Learning (MNIST) | Adapting a CNN trained on digits 5-9 to classify digits 0-4. | View Code |
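A minimal CIFAR-10 CNN sketch in the spirit of the end-to-end project above (the architecture and hyperparameters here are illustrative, not the exact ones from the notebook):

```python
# Small CNN sketch for CIFAR-10 (architecture and hyperparameters are illustrative).
import tensorflow as tf

(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()
x_train = x_train.astype("float32") / 255.0

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),                # halves spatial resolution
    tf.keras.layers.Conv2D(64, 3, activation="relu", padding="same"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=3, batch_size=64, validation_split=0.1)
```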

4. Sequential Models (NLP)

| Project | Description | GitHub Link |
|---|---|---|
| Movie Review Classifier | Building a sentiment classifier using text preprocessing and the Keras Embedding layer. | View Code |
| RNN for IMDB Sentiment | Implementing a vanilla RNN to classify movie reviews, handling sequential data. | View Code |
| LSTMs and GRUs | Using gated architectures to overcome vanishing gradients in text classification. | View Code |
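A short Embedding + LSTM sketch on the Keras IMDB dataset, similar in spirit to the sentiment projects above (vocabulary size and layer widths are illustrative):

```python
# Embedding + LSTM sentiment sketch on the Keras IMDB dataset (sizes are illustrative).
import tensorflow as tf

vocab_size, max_len = 10000, 200
(x_train, y_train), _ = tf.keras.datasets.imdb.load_data(num_words=vocab_size)
x_train = tf.keras.utils.pad_sequences(x_train, maxlen=max_len)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(max_len,)),
    tf.keras.layers.Embedding(vocab_size, 64),     # learn dense word vectors
    tf.keras.layers.LSTM(64),                      # gated recurrence over the sequence
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x_train, y_train, epochs=2, batch_size=128, validation_split=0.1)
```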

5. Generative Deep Learning

| Project | Description | GitHub Link |
|---|---|---|
| Autoencoders | Implementing autoencoders for tasks like image denoising and compression. | View Code |
| Variational Autoencoder (VAE) | Building a VAE to generate novel handwritten digits from a learned latent space. | View Code |
| Intro to GANs | A hands-on implementation of the original GAN framework (Generator vs. Discriminator). | View Code |
| DCGAN for Anime Avatars | Building a Deep Convolutional GAN to generate unique, high-quality anime avatars. | View Code |
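A minimal dense autoencoder sketch on MNIST, illustrating the encode-then-reconstruct idea behind the generative projects above (layer sizes are illustrative):

```python
# Dense autoencoder sketch on MNIST (layer sizes are illustrative).
import tensorflow as tf

(x_train, _), _ = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype("float32") / 255.0

inputs = tf.keras.Input(shape=(784,))
encoded = tf.keras.layers.Dense(32, activation="relu")(inputs)        # compressed code
decoded = tf.keras.layers.Dense(784, activation="sigmoid")(encoded)   # reconstruction

autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")
autoencoder.fit(x_train, x_train, epochs=3, batch_size=256, validation_split=0.1)
```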

6. Reinforcement Learning

| Project | Description | GitHub Link |
|---|---|---|
| Predictive Agent | Training a supervised model on data from successful random-play episodes. | View Code |
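A sketch of the predictive-agent idea: run random-play episodes, keep the most successful ones, and fit a supervised classifier mapping states to the actions taken in those episodes. The CartPole environment and the 80th-percentile success cutoff are assumptions for illustration:

```python
# Predictive-agent sketch: learn actions from successful random-play episodes
# (environment choice and the "success" cutoff are assumptions).
import gymnasium as gym
import numpy as np
from sklearn.linear_model import LogisticRegression

env = gym.make("CartPole-v1")
episodes = []

for _ in range(200):                                   # random-play episodes
    obs, _ = env.reset()
    states, actions, total = [], [], 0.0
    done = False
    while not done:
        action = env.action_space.sample()
        states.append(obs)
        actions.append(action)
        obs, reward, terminated, truncated, _ = env.step(action)
        total += reward
        done = terminated or truncated
    episodes.append((total, states, actions))

threshold = np.percentile([ep[0] for ep in episodes], 80)   # top 20% count as "successful"
X = [s for total, states, actions in episodes if total >= threshold for s in states]
y = [a for total, states, actions in episodes if total >= threshold for a in actions]

clf = LogisticRegression(max_iter=1000).fit(np.array(X), np.array(y))
print(f"Kept episodes with return >= {threshold:.0f}; training samples: {len(y)}")
```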

My Advanced LLM Learning Path

A portfolio showcasing a progression from foundational prompting to sophisticated reinforcement learning for AI safety and alignment.

Module 1: Foundations in Prompt Engineering

Explored how zero-shot, one-shot, and few-shot prompting strategies can drastically alter the quality of dialogue summarization without changing model weights.

View Project 1
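A minimal sketch of how the zero-, one-, and few-shot prompts were varied: only the in-context examples change, never the model weights (the dialogues below are made up for illustration):

```python
# Few-shot prompt construction sketch for dialogue summarization (example dialogues are made up).
def build_prompt(examples, dialogue):
    prompt = ""
    for ex_dialogue, ex_summary in examples:      # in-context examples (one- or few-shot)
        prompt += f"Dialogue:\n{ex_dialogue}\n\nSummary:\n{ex_summary}\n\n"
    prompt += f"Dialogue:\n{dialogue}\n\nSummary:\n"
    return prompt

zero_shot = build_prompt([], "A: Is the report ready? B: Almost, I'll send it by noon.")
few_shot = build_prompt(
    [("A: Lunch at 1? B: Sure, the usual place.", "They agree to meet for lunch at 1.")],
    "A: Is the report ready? B: Almost, I'll send it by noon.",
)
print(few_shot)
```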

Module 2: Efficient Fine-Tuning (PEFT)

Implemented and compared full fine-tuning vs. the more efficient PEFT (LoRA) to adapt a pre-trained LLM for a custom summarization task.

View Project 2
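A hedged sketch of the LoRA setup with Hugging Face `peft`, assuming a FLAN-T5-style seq2seq base model; the rank, alpha, and target modules shown are illustrative choices:

```python
# LoRA (PEFT) sketch: wrap a seq2seq LLM with low-rank adapters
# (model name and LoRA hyperparameters are illustrative).
from transformers import AutoModelForSeq2SeqLM
from peft import LoraConfig, get_peft_model, TaskType

base_model = AutoModelForSeq2SeqLM.from_pretrained("google/flan-t5-base")

lora_config = LoraConfig(
    r=32,                       # rank of the low-rank update matrices
    lora_alpha=32,              # scaling factor
    target_modules=["q", "v"],  # attention projections to adapt (T5-style module names)
    lora_dropout=0.05,
    bias="none",
    task_type=TaskType.SEQ_2_SEQ_LM,
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()   # only a small fraction of weights are trainable
```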

Module 3: AI Alignment (RLAIF)

Tackled AI safety by using Reinforcement Learning from AI Feedback (RLAIF) and PPO to fine-tune a FLAN-T5 model to generate less toxic content.

View Project 3
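At the heart of the detoxification loop is a reward signal derived from an AI feedback model. Below is a minimal sketch of that reward function; the classifier id is a placeholder, and the PPO update itself (e.g. via the `trl` library) is omitted:

```python
# RLAIF reward sketch: score generations with a toxicity classifier and use the
# "not toxic" probability as the PPO reward (classifier id is a placeholder).
from transformers import pipeline

toxicity_clf = pipeline("text-classification",
                        model="some-org/toxicity-classifier",   # placeholder model id
                        top_k=None)                             # return scores for all labels

def reward_fn(generated_texts):
    rewards = []
    for scores in toxicity_clf(generated_texts):
        # assumes the placeholder classifier labels the benign class "not toxic"
        not_toxic = next(s["score"] for s in scores if s["label"].lower().startswith("not"))
        rewards.append(not_toxic)        # higher reward for less toxic generations
    return rewards

print(reward_fn(["Thanks for the helpful summary!", "Some rude reply..."]))
```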

End-to-End RAG Chatbot Path

A hands-on journey building a complete Retrieval-Augmented Generation (RAG) system from the ground up, from document ingestion to an intelligent, context-aware chatbot.

Module 1: Unified Document Loading

Built a robust system using LangChain to ingest and standardize data from multiple sources (PDF, CSV, DOCX) for seamless processing.

View Module 1
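A minimal sketch of the unified loading step with LangChain community loaders (file paths are placeholders; the exact loaders used in the module may differ):

```python
# Unified document loading sketch (file paths are placeholders; assumes langchain-community).
from langchain_community.document_loaders import PyPDFLoader, CSVLoader, Docx2txtLoader

loaders = [
    PyPDFLoader("docs/report.pdf"),       # placeholder paths
    CSVLoader("docs/table.csv"),
    Docx2txtLoader("docs/notes.docx"),
]

documents = []
for loader in loaders:
    documents.extend(loader.load())       # each loader returns a list of Document objects

print(f"Loaded {len(documents)} documents with unified page_content/metadata fields")
```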

Module 2: Advanced Text Splitting

Implemented various text splitting strategies (e.g., RecursiveCharacterTextSplitter) to break down documents while preserving context.

View Module 2
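A sketch of the splitting step with RecursiveCharacterTextSplitter, reusing the `documents` list from the loading sketch above (chunk sizes are illustrative):

```python
# Text-splitting sketch: overlapping chunks preserve context across boundaries
# (chunk sizes are illustrative; package layout assumes a recent LangChain split).
from langchain_text_splitters import RecursiveCharacterTextSplitter

splitter = RecursiveCharacterTextSplitter(
    chunk_size=1000,      # max characters per chunk
    chunk_overlap=150,    # overlap so context is not cut mid-thought
    separators=["\n\n", "\n", ". ", " ", ""],
)

chunks = splitter.split_documents(documents)
print(f"Split into {len(chunks)} chunks")
```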

Module 3: Document Embedding

Converted text chunks into vectors using enterprise-grade models from IBM watsonx.ai and open-source models from Hugging Face.

View Module 3
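A sketch of the open-source embedding path with a Hugging Face sentence-transformer (model choice is illustrative; the IBM watsonx.ai variant needs credentials and is omitted here):

```python
# Embedding sketch with an open-source Hugging Face model (model choice is illustrative).
from langchain_huggingface import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(model_name="sentence-transformers/all-MiniLM-L6-v2")

vector = embeddings.embed_query("What is retrieval-augmented generation?")
print("Embedding dimension:", len(vector))   # 384 for this model
```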

Module 4: Vector Database Implementation

Configured and deployed ChromaDB and FAISS to index document vectors and enable high-speed semantic similarity searches.

View Module 4
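A sketch of the indexing and semantic-search step with ChromaDB, reusing the `chunks` and `embeddings` from the earlier sketches (FAISS is a near drop-in alternative):

```python
# Vector store sketch: index the chunks in Chroma and run a semantic similarity search
# (query text and persist directory are illustrative).
from langchain_community.vectorstores import Chroma

vectordb = Chroma.from_documents(chunks, embedding=embeddings, persist_directory="chroma_db")

results = vectordb.similarity_search("What does the report say about Q3 revenue?", k=3)
for doc in results:
    print(doc.metadata.get("source"), "->", doc.page_content[:80])
```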

Module 5: Advanced Document Retrievers

Developed and compared four powerful retriever strategies, including Multi-Query, Self-Querying, and Parent Document Retrievers.

View Module 5
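A sketch of one of the retriever strategies, the Multi-Query retriever, which asks an LLM to rephrase the user's question several ways and merges the results (the chat model shown is an assumption; any LangChain-compatible LLM works):

```python
# Multi-Query retriever sketch: LLM-generated query variants, merged results
# (the ChatOpenAI model is an assumed stand-in for whatever LLM the module used).
from langchain.retrievers.multi_query import MultiQueryRetriever
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
retriever = MultiQueryRetriever.from_llm(retriever=vectordb.as_retriever(), llm=llm)

docs = retriever.invoke("How has revenue changed year over year?")
print(f"Retrieved {len(docs)} unique chunks across the generated query variants")
```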

Module 6: The "Why": Context Window

A conceptual lab demonstrating the "context window" limitation, solidifying the fundamental need for RAG systems.

View Module 6
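A tiny sketch of the limitation this module demonstrates: counting tokens shows that a realistic corpus simply does not fit into a model's context window, which is why retrieval is needed (the tokenizer choice and the stand-in corpus are illustrative):

```python
# Context-window sketch: a long corpus vs. a model's token limit (values are illustrative).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-base")
context_limit = 512                               # typical context length for T5-style models

corpus = "Quarterly report text... " * 2000       # stand-in for many loaded documents
n_tokens = len(tokenizer.encode(corpus))

print(f"Corpus tokens: {n_tokens}, context window: {context_limit}")
print("Fits in one prompt:", n_tokens <= context_limit)   # RAG retrieves only what fits
```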

Certifications

Generative AI with LLMs

Completed an in-depth specialization from DeepLearning.AI covering the fundamentals of Large Language Models.

View Credential

Advanced SQL

Mastered complex querying techniques through Coursera, focusing on window functions, CTEs, and advanced data manipulation.

View Credential

Machine Learning with Python

An IBM-certified course covering core ML algorithms, with hands-on implementation using Scikit-learn.

View Credential

Microsoft Azure Fundamentals

Gained foundational knowledge of Microsoft Azure cloud services, covering core concepts, services, security, and pricing models.

View Credential

Let's Connect

I'm always interested in discussing new opportunities and innovative projects. Feel free to send me a message!