🧠 Daily Study Log [2025-07-04]
Today's work included structured experimentation in the SCU_Competition, SQL fundamentals study, and implementation and review of the MobileNetV2 architecture.
📊 SCU_Competition – Submissions 51–55
Focus: Feature pruning, voting weight tuning, correlation filtering, and stacking re-evaluation
Goal: Further refine ensemble efficiency and eliminate redundant signals
Highlights:
- Submission 51: VotingClassifier (weights 7:2:1), removed 3 low-impact features from Submission 50's set (see the sketch after this list)
  → CV AUC: 0.8833 / Kaggle AUC: 0.8950
- Submission 52: LGBMClassifier only, no tuning, just a raw fit on the current features
  → CV AUC: 0.8798 / Kaggle AUC: 0.8879
- Submission 53: Removed the bottom 10% of features by importance
  → CV AUC: 0.8828 / Kaggle AUC: 0.8951
- Submission 54: Removed correlated features (threshold=0.95)
  → CV AUC: 0.8835 / Kaggle AUC: 0.8950
- Submission 55: Switched to StackingClassifier (CV AUC fell slightly)
  → CV AUC: 0.8780 / Kaggle AUC: 0.8841
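For reference, the feature-filtering steps from Submissions 53/54 and the weighted voting from Submission 51 follow standard pandas / scikit-learn patterns. The sketch below is a minimal reconstruction under assumptions: the base models, their hyperparameters, and the `X` / `y` data are placeholders, not the actual competition pipeline.

```python
# Sketch of the feature-filtering and weighted-voting steps above.
# X (pandas DataFrame) and y are assumed to be the prepared training data;
# the base models and their settings are illustrative, not the real pipeline.
import numpy as np
import pandas as pd
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier
from catboost import CatBoostClassifier
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score

def drop_correlated(X: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    """Drop one feature from every pair whose |correlation| exceeds the
    threshold (the idea behind Submission 54)."""
    corr = X.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [col for col in upper.columns if (upper[col] > threshold).any()]
    return X.drop(columns=to_drop)

def drop_low_importance(X: pd.DataFrame, y, quantile: float = 0.10) -> pd.DataFrame:
    """Drop the bottom `quantile` of features by LightGBM importance
    (the idea behind Submission 53)."""
    model = LGBMClassifier(n_estimators=300).fit(X, y)
    imp = pd.Series(model.feature_importances_, index=X.columns)
    keep = imp[imp > imp.quantile(quantile)].index
    return X[keep]

# Weighted soft voting with a 7:2:1 weighting, as in Submission 51.
voting = VotingClassifier(
    estimators=[
        ("lgbm", LGBMClassifier()),
        ("xgb", XGBClassifier(eval_metric="logloss")),
        ("cat", CatBoostClassifier(verbose=0)),
    ],
    voting="soft",
    weights=[7, 2, 1],
)

# Example usage (X, y assumed to exist):
# X_sel = drop_low_importance(drop_correlated(X), y)
# auc = cross_val_score(voting, X_sel, y, cv=5, scoring="roc_auc").mean()
```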
Next Ideas:
- Try LGBM as the meta-model in stacking (see the sketch after this list)
- Simplify cluster usage and feature combinations
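The "LGBM as meta-model" idea would amount to swapping the final estimator in scikit-learn's StackingClassifier. A minimal sketch, again with placeholder base learners rather than the real ensemble:

```python
# Sketch of the "LGBM as meta-model" idea: reuse placeholder base learners
# and let a LightGBM model combine their out-of-fold predictions.
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier
from catboost import CatBoostClassifier
from sklearn.ensemble import StackingClassifier

stacking = StackingClassifier(
    estimators=[
        ("lgbm", LGBMClassifier()),
        ("xgb", XGBClassifier(eval_metric="logloss")),
        ("cat", CatBoostClassifier(verbose=0)),
    ],
    final_estimator=LGBMClassifier(n_estimators=100, learning_rate=0.05),
    stack_method="predict_proba",  # meta-model sees class probabilities
    cv=5,
    passthrough=False,             # meta-model sees only base predictions
)
# stacking.fit(X_sel, y); preds = stacking.predict_proba(X_test)[:, 1]
```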
💾 SQL Practice – Basic Data Modeling
What I Learned:
- Key concepts: Entity, Attribute, Domain, Relationship
- Differentiated between strong vs. weak entities
- Covered examples like Student–Course (M:N) and Employee–Department (1:N)
- Understood ERD design, PK/FK logic, and how entities convert to tables
Reflection:
- This foundation is crucial before jumping into JOINs or queries
- Visualizing relationships helps in schema planning for real-world datasets
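To make the Student–Course (M:N) example and the PK/FK logic concrete, here is a small sketch using Python's built-in sqlite3. The table and column names are invented for illustration; the point is how an M:N relationship resolves into a junction table with a composite primary key made of two foreign keys.

```python
# Tiny illustration of the modeling concepts above using Python's sqlite3:
# Student and Course are strong entities (each with its own PK), and the
# M:N relationship between them becomes a junction table (Enrollment)
# whose composite PK is built from two FKs. All names are made up.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE Student (
    student_id INTEGER PRIMARY KEY,   -- PK of the Student entity
    name       TEXT NOT NULL
);
CREATE TABLE Course (
    course_id  INTEGER PRIMARY KEY,   -- PK of the Course entity
    title      TEXT NOT NULL
);
-- M:N relationship resolved into a junction table with two FKs
CREATE TABLE Enrollment (
    student_id INTEGER NOT NULL REFERENCES Student(student_id),
    course_id  INTEGER NOT NULL REFERENCES Course(course_id),
    PRIMARY KEY (student_id, course_id)
);
""")
conn.execute("INSERT INTO Student VALUES (1, 'Alice')")
conn.execute("INSERT INTO Course VALUES (10, 'Databases')")
conn.execute("INSERT INTO Enrollment VALUES (1, 10)")
print(conn.execute("SELECT COUNT(*) FROM Enrollment").fetchone())  # (1,)
```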
📘 Paper Review – MobileNetV2 (Day 4)
Focus: Final architecture summary and a complete implementation in PyTorch
Today's Progress:
- ✅ Implemented the InvertedResidual block (expand → depthwise → linear projection)
- ✅ Integrated optional residual connections
- ✅ Verified the forward pass (tensor shapes flow correctly through the block)
- 📓 Notebook: MobileNetV2_Architecture_Implementation.ipynb
Insight:
- ReLU6 is applied after the expansion and depthwise convolutions, but deliberately not after the 1x1 projection
- The linear (activation-free) bottleneck avoids destroying information in the compressed low-dimensional features
- Residual connections are applied only when stride=1 and the input/output channel dims match (see the sketch below)
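Below is a minimal sketch of the InvertedResidual block as described in the notes above (ReLU6 after the expansion and depthwise stages, linear projection, residual only when stride=1 and channels match). It is a simplified reconstruction from the paper, not the exact notebook code; the expansion ratio of 6 is the paper's default.

```python
# Minimal InvertedResidual sketch following the notes above; a simplified
# re-statement of the paper's block, not the exact notebook implementation.
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    def __init__(self, in_ch: int, out_ch: int, stride: int, expand_ratio: int = 6):
        super().__init__()
        hidden = in_ch * expand_ratio
        # residual only when stride == 1 and input/output channels match
        self.use_residual = stride == 1 and in_ch == out_ch

        layers = []
        if expand_ratio != 1:
            # 1x1 expansion: low-dim -> high-dim, followed by ReLU6
            layers += [nn.Conv2d(in_ch, hidden, 1, bias=False),
                       nn.BatchNorm2d(hidden),
                       nn.ReLU6(inplace=True)]
        # 3x3 depthwise conv (groups=hidden), again followed by ReLU6
        layers += [nn.Conv2d(hidden, hidden, 3, stride, padding=1,
                             groups=hidden, bias=False),
                   nn.BatchNorm2d(hidden),
                   nn.ReLU6(inplace=True)]
        # 1x1 linear projection back to low-dim: no activation (linear bottleneck)
        layers += [nn.Conv2d(hidden, out_ch, 1, bias=False),
                   nn.BatchNorm2d(out_ch)]
        self.block = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.block(x)
        return x + out if self.use_residual else out

# Quick shape check, as in the forward-pass validation above:
x = torch.randn(1, 32, 56, 56)
print(InvertedResidual(32, 32, stride=1)(x).shape)  # torch.Size([1, 32, 56, 56])
print(InvertedResidual(32, 64, stride=2)(x).shape)  # torch.Size([1, 64, 28, 28])
```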
✅ TL;DR
🔹 SCU: Voting weight tuning and correlation-based feature filtering simplified the feature set without hurting Kaggle AUC
🔹 SQL: Learned the essence of data modeling: entities, attributes, relationships, and schema design
🔹 CV: Implemented and tested MobileNetV2's core block in custom PyTorch code
🔹 Next: Try Neural Style Transfer, continue SQL (JOINs), and compare the MobileNetV2 and ShuffleNetV2 papers