🧠 Daily Study Log [2025-07-02]
Today's study was split across three major areas: feature engineering experiments for the SCU_Competition, continuing the MobileNetV2 paper review, and reading about DeepDream and Neural Style Transfer in computer vision.
📊 SCU_Competition – Submissions 41–45
Focus: Advanced feature engineering + LGBM tuning
Goal: Push generalization without overfitting
Highlights:
- Submission 41: Added `log1p` and ratio-based features → solid generalization
- Submission 42: Included price-income gap and wine × campaign mix → minor lift
- Submission 43: Tested early-stage interaction features
- Submission 44: Combined all advanced features → Kaggle AUC 0.8944
- Submission 45: Full stack of features (original + log1p + ratio + diff + interactions) → CV AUC 0.8835 / Kaggle AUC 0.8902 (feature sketch below)
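To make the stacking concrete, here's a minimal sketch of the four feature families; the column names (`income`, `wine_spend`, etc.) are hypothetical placeholders, not the competition's actual schema:

```python
import numpy as np
import pandas as pd

def add_stacked_features(df: pd.DataFrame) -> pd.DataFrame:
    """Stack the four feature families from Submissions 41-45.

    Column names are hypothetical placeholders for the real schema.
    """
    out = df.copy()
    # log1p: compress heavy right tails in monetary columns (Submission 41)
    out["income_log1p"] = np.log1p(out["income"])
    # ratio: wine spend as a share of total spend, guarded against /0
    out["wine_ratio"] = out["wine_spend"] / (out["total_spend"] + 1e-9)
    # diff: the price-income gap (Submission 42)
    out["price_income_gap"] = out["avg_price"] - out["income"]
    # interaction: wine spend x accepted campaigns (the "wine × campaign mix")
    out["wine_x_campaign"] = out["wine_spend"] * out["campaigns_accepted"]
    return out
```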
Takeaways:
- Layering multiple feature types can improve local CV, but diminishing returns appear in public scores
- Slight drop in Kaggle AUC from Submission 44 to 45 suggests the model may be hitting a generalization ceiling
- Time to prune weaker features or apply SHAP for model-aware selection (sketch below)
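A possible shape for that SHAP-based pruning, assuming a fitted LightGBM classifier `model` and feature frame `X`; the 80% keep-ratio is an arbitrary starting point:

```python
import numpy as np
import shap

# TreeExplainer works natively with LightGBM models
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
# Older shap versions return a per-class list for binary classifiers
if isinstance(shap_values, list):
    shap_values = shap_values[1]

# Rank features by mean |SHAP| and drop the weakest 20% (arbitrary cutoff)
importance = np.abs(shap_values).mean(axis=0)
order = np.argsort(importance)[::-1]
keep = X.columns[order[: int(len(order) * 0.8)]]
X_pruned = X[keep]
```

Any pruning decision would still need to be checked against the CV/Kaggle gap noted above.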
📄 Paper Review – MobileNetV2 (Day 2)
Continued: MobileNetV2: Inverted Residuals and Linear Bottlenecks
Today's Focus: Section 2 – Architecture design and motivation
Repo: GitHub Paper Review
Reflection:
- Depthwise separable convolutions are incredibly efficient for mobile environments
- Inverted residuals + linear bottlenecks minimize memory and computation without hurting accuracy
- Eager to visualize the full architecture block in the next session; rough block sketch below
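To prep for that, here's my own simplified PyTorch sketch of an inverted residual block (expand → depthwise filter → linear projection); it follows my reading of Section 2, not the official reference code:

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Expand with a 1x1 conv, filter with a 3x3 depthwise conv,
    then project down with a *linear* 1x1 bottleneck (no ReLU)."""

    def __init__(self, in_ch: int, out_ch: int, stride: int = 1, expand: int = 6):
        super().__init__()
        hidden = in_ch * expand
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),   # expansion
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1, groups=hidden, bias=False),  # depthwise
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),  # linear bottleneck
            nn.BatchNorm2d(out_ch),                    # deliberately no activation
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.block(x) if self.use_residual else self.block(x)
```

The linear bottleneck is the interesting part: skipping the final ReLU avoids destroying information in the low-dimensional projection.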
🧠 CV Self-Study – DeepDream & Neural Style Transfer
What I Studied:
- DeepDream: Used gradient ascent on feature maps to visualize what CNNs "see" (minimal loop sketched below)
- Neural Style Transfer (NST): Combines content from one image and style from another using VGG features
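The core gradient-ascent loop is tiny; this sketch assumes a truncated pretrained torchvision VGG16 and skips the octave/jitter tricks real DeepDream uses:

```python
import torch
from torchvision import models

# Truncate a pretrained VGG16 at an arbitrary mid-level layer
model = models.vgg16(weights="IMAGENET1K_V1").features[:20].eval()
for p in model.parameters():
    p.requires_grad_(False)

img = torch.rand(1, 3, 224, 224, requires_grad=True)  # or start from a photo
optimizer = torch.optim.Adam([img], lr=0.05)

for _ in range(100):
    optimizer.zero_grad()
    loss = -model(img).norm()  # ascent: maximize feature-map magnitude
    loss.backward()
    optimizer.step()
```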
Insights:
- DeepDream is more about interpretability than generation
- NST illustrates how CNNs separate low-level style from high-level structure
- Planning to implement NST soon with personal style + content images; Gram-matrix sketch below
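Ahead of that implementation, a sketch of the Gram-matrix style loss that makes the separation possible; `gen_feats` and `style_feats` are assumed to be feature maps pulled from matching VGG layers:

```python
import torch
import torch.nn.functional as F

def gram_matrix(feats: torch.Tensor) -> torch.Tensor:
    """Channel-correlation matrix of a (B, C, H, W) feature map;
    it encodes texture/style while discarding spatial layout."""
    b, c, h, w = feats.shape
    flat = feats.view(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)

def style_loss(gen_feats: torch.Tensor, style_feats: torch.Tensor) -> torch.Tensor:
    # Match second-order statistics, not pixel positions
    return F.mse_loss(gram_matrix(gen_feats), gram_matrix(style_feats))
```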
🎯 Next Steps
- SCU: Try SHAP-based pruning on current feature set
- CV: Implement NST with VGG and log results
- Paper: Summarize Section 3 and diagram MobileNetV2 blocks
- Blog: Consider building an auto-generated paper index by tag
✅ TL;DR
📊 SCU: Feature stacking helps, but pruning might now be more effective
🧠 CV: DeepDream and NST offer powerful ways to "see" what models learn
📄 Paper: MobileNetV2 continues to impress with its elegant, efficient design