Today was all about taking the Titanic Survival Prediction project to the next level through model experimentation and tuning.
- GridSearchCV for individual models (sketched below)
- Optuna for RandomForestClassifier (also sketched below)

Iterating across different models and tuning strategies is critical.
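A minimal sketch of the GridSearchCV step, with synthetic data standing in for the engineered Titanic features and an illustrative grid rather than the actual values searched:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for the preprocessed Titanic training set (891 rows)
X, y = make_classification(n_samples=891, n_features=8, random_state=42)

# Illustrative grid only -- not the grid actually used in the experiments
param_grid = {
    "n_estimators": [100, 200, 400],
    "learning_rate": [0.01, 0.05, 0.1],
    "max_depth": [2, 3, 4],
}

grid = GridSearchCV(
    GradientBoostingClassifier(random_state=42),
    param_grid,
    scoring="accuracy",
    cv=5,        # 5-fold cross-validation on the training data
    n_jobs=-1,   # use all cores for the exhaustive search
)
grid.fit(X, y)

print("Best params:", grid.best_params_)
print("Best CV accuracy:", round(grid.best_score_, 4))
```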
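And a hedged sketch of the Optuna study behind the tuned RandomForestClassifier (v5 in the log below); the search space and trial count here are assumptions, not the real configuration:

```python
import optuna
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the real features, as above
X, y = make_classification(n_samples=891, n_features=8, random_state=42)

def objective(trial):
    # Assumed search space -- placeholder ranges, not the actual ones used
    params = {
        "n_estimators": trial.suggest_int("n_estimators", 100, 600),
        "max_depth": trial.suggest_int("max_depth", 3, 12),
        "min_samples_split": trial.suggest_int("min_samples_split", 2, 10),
        "max_features": trial.suggest_categorical("max_features", ["sqrt", "log2"]),
    }
    model = RandomForestClassifier(random_state=42, **params)
    # Mean 5-fold CV accuracy is the value Optuna maximizes
    return cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)

print("Best params:", study.best_params)
print("Best CV accuracy:", round(study.best_value, 4))
```

Optuna's default TPE sampler steers later trials toward promising regions of the search space, which is why it can often beat an exhaustive grid on the same compute budget.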
Sometimes the best local model doesn't win on the leaderboard: always validate broadly and log everything!
| Version | Model | Val Acc | Kaggle Score | Notes |
|---|---|---|---|---|
| v1 | Soft Voting (RF + GB + XGB) | 0.8090 | 0.7775 | Baseline ensemble (sketched below) |
| v2 | Stacking (→ LogisticRegression) | 0.8034 | 0.7751 | Meta-model |
| v3 | CatBoostClassifier (single) | 0.7753 | 0.7631 | |
| v4 | GridSearchCV + VotingClassifier | 0.7921 | 0.7799 | Best Kaggle score |
| v5 | Optuna-tuned RandomForestClassifier | 0.8146 | 0.7751 | Best local accuracy |
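For reference, a minimal sketch of the v1 soft-voting ensemble, assuming xgboost is installed; the hyperparameters are library defaults here, not the tuned values behind the scores above:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

# Synthetic stand-in for the engineered Titanic features
X, y = make_classification(n_samples=891, n_features=8, random_state=42)

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=42)),
        ("gb", GradientBoostingClassifier(random_state=42)),
        ("xgb", XGBClassifier(random_state=42, eval_metric="logloss")),
    ],
    voting="soft",  # average predicted probabilities instead of counting hard votes
)

print("CV accuracy:", round(cross_val_score(ensemble, X, y, cv=5, scoring="accuracy").mean(), 4))
```

Soft voting averages each model's predicted probabilities, so one confident model can outweigh two lukewarm ones; hard voting would just count class labels.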
Data science isn't about one perfect model; it's about learning through every iteration.
On to the next breakthrough tomorrow!