🧠 Daily Study Log [2025-07-07]
Wasn't feeling well today, so I kept things light, but I still managed to move forward with SCU_Competition (submissions 61–65), a short SQL theory session, and a first trial of Neural Style Transfer. I also previewed the EfficientNet paper for an upcoming review.
📊 SCU_Competition – Submissions 61–65
Focus: Voting weight adjustment, cluster feature ablation, simplified LGBM tests, and extensive feature addition
Goal: Push past Kaggle AUC 0.8973 and validate feature importance balance
Highlights:
- Submission 61: VotingClassifier (LGBM:RF:LR = 6:2:2)
  → CV AUC: 0.8801 / Kaggle AUC: 0.8977 🔥 New Best
- Submission 62: Single LGBM (with the same features)
  → CV AUC: 0.8759 / Kaggle AUC: 0.8884
- Submission 63: Removed clustering features from 61
  → CV AUC: 0.8818 / Kaggle AUC: 0.8955
- Submission 64: Simple LGBM w/ clustering features
  → CV AUC: 0.8759 / Kaggle AUC: 0.8884
- Submission 65: Same Voting (6:2:2) + large feature expansion
  → CV ongoing / Kaggle pending
Takeaway:
- Voting (6:2:2) seems optimal so far (see the sketch below)
- Cluster features help but aren't essential
- Feature engineering continues to drive gains
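To make the setup concrete, here's a minimal sketch of the 6:2:2 soft-voting blend together with the cluster-feature idea that submission 63 ablates. The hyperparameters, the `X`/`y` arrays, and the KMeans stand-in for the clustering step are illustrative assumptions, not the actual competition settings:

```python
# Minimal sketch of the Submission 61 shape (hyperparameters, X/y, and the
# KMeans cluster count are assumptions, not the real competition pipeline).
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

def add_cluster_features(X_train, X_test, n_clusters=8):
    """Append a cluster id as an extra feature (what the 63 ablation removes)."""
    km = KMeans(n_clusters=n_clusters, random_state=42, n_init="auto").fit(X_train)
    return (
        np.column_stack([X_train, km.predict(X_train)]),
        np.column_stack([X_test, km.predict(X_test)]),
    )

voter = VotingClassifier(
    estimators=[
        ("lgbm", LGBMClassifier(random_state=42)),
        ("rf", RandomForestClassifier(random_state=42)),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",      # average predicted probabilities, which is what AUC needs
    weights=[6, 2, 2],  # the LGBM:RF:LR = 6:2:2 blend
)
# X_train, X_test = add_cluster_features(X_train, X_test)
# voter.fit(X_train, y_train)
# submission_proba = voter.predict_proba(X_test)[:, 1]
```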
🧮 SQL Study – Basic Concepts Review
Focus: Refreshed fundamental database design terms
- Entity vs Attribute: Defined entities as real-world objects; attributes as their descriptive fields
- Primary Key / Foreign Key: Revisited their purpose in relational schemas (see the sketch after this list)
- Normalization: Briefly touched on avoiding redundancy (1NF, 2NF concept only)
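To pin the terminology to something runnable, here's a tiny sketch using Python's built-in sqlite3; the student/enrollment tables are hypothetical examples, not from today's material:

```python
# Hypothetical two-table example: student is an entity, name is an attribute,
# id is a primary key, and enrollment.student_id is a foreign key.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("PRAGMA foreign_keys = ON")  # SQLite only enforces FKs when enabled
con.execute("""
    CREATE TABLE student (
        id   INTEGER PRIMARY KEY,  -- primary key: uniquely identifies each row
        name TEXT NOT NULL         -- attribute of the student entity
    )
""")
con.execute("""
    CREATE TABLE enrollment (
        student_id INTEGER NOT NULL REFERENCES student(id),  -- foreign key
        course     TEXT NOT NULL
    )
""")
con.execute("INSERT INTO student VALUES (1, 'Alice')")
con.execute("INSERT INTO enrollment VALUES (1, 'Databases')")
# Would raise sqlite3.IntegrityError: no student with id 99 exists
# con.execute("INSERT INTO enrollment VALUES (99, 'Ghost Course')")
```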
Reflection:
- Very light session today; focused more on terminology than coding
- Helped reinforce database theory before diving into query practice later
🖼️ CV – Neural Style Transfer (1st Trial)
Task: Implement baseline Neural Style Transfer
- Style image: Starry Night
- Content image: A Cat
- Backbone: VGG19 (layer slicing for style/content losses)
Next Plan:
- Show results & accuracy tomorrow
- Evaluate how well the style was preserved with different weight settings
Reflection:
- Early attempts were noisy, but results improved after tuning `style_weight` (see the sketch below)
- Learned how content/style balance dramatically affects texture transfer
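For reference, a minimal PyTorch sketch of the baseline setup described above: VGG19 sliced for style/content activations, Gram-matrix style loss, and the `style_weight`/`content_weight` balance. The layer indices and default weights are common tutorial choices, assumed rather than taken from today's run:

```python
# Sketch of NST losses via VGG19 slicing (layer picks and weights are assumptions).
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv1_1 .. conv5_1, a common slicing
CONTENT_LAYER = 21                 # conv4_2, a common choice

def features(x):
    """Slice VGG19: collect style activations and the content activation."""
    style, content = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style.append(x)
        if i == CONTENT_LAYER:
            content = x
    return style, content

def gram(f):
    """Normalized Gram matrix of a 1xCxHxW feature map (style statistics)."""
    _, c, h, w = f.shape
    f = f.reshape(c, h * w)
    return (f @ f.T) / (c * h * w)

def nst_loss(gen, style_grams, content_target, style_weight=1e6, content_weight=1.0):
    """The knob tuned today: style_weight vs content_weight sets the balance."""
    s_feats, c_feat = features(gen)
    style_loss = sum(F.mse_loss(gram(s), g) for s, g in zip(s_feats, style_grams))
    content_loss = F.mse_loss(c_feat, content_target)
    return style_weight * style_loss + content_weight * content_loss

# Usage, assuming content_img / style_img are preprocessed 1x3xHxW tensors:
# style_grams = [gram(f).detach() for f in features(style_img)[0]]
# content_target = features(content_img)[1].detach()
# gen = content_img.clone().requires_grad_(True)
# opt = torch.optim.Adam([gen], lr=0.02)
# for _ in range(300):
#     opt.zero_grad(); nst_loss(gen, style_grams, content_target).backward(); opt.step()
```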
📄 Paper Preview – EfficientNet
Paper: EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks (ICML 2019)
- Core Idea: Rather than increasing depth/width/resolution arbitrarily, use a compound scaling method to balance all three (see the sketch after this list)
- Architecture: Based on MobileNetV2's inverted bottlenecks
- Impact: Achieves higher accuracy with fewer parameters than previous models
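To make the compound scaling idea concrete: the paper scales depth, width, and resolution together as α^φ, β^φ, and γ^φ under the constraint α · β² · γ² ≈ 2, so total FLOPs grow roughly 2^φ. A tiny sketch using the coefficients reported in the paper (the φ → B1/B2/... mapping below is only approximate):

```python
# Compound scaling sketch (alpha/beta/gamma are the grid-searched values from
# the paper; the phi -> Bx correspondence here is only approximate).
ALPHA, BETA, GAMMA = 1.2, 1.1, 1.15  # constraint: ALPHA * BETA**2 * GAMMA**2 ≈ 2

def compound_scale(phi: int) -> tuple[float, float, float]:
    """Return (depth, width, resolution) multipliers relative to the B0 baseline."""
    return ALPHA ** phi, BETA ** phi, GAMMA ** phi

for phi in range(4):
    d, w, r = compound_scale(phi)
    print(f"phi={phi}: depth x{d:.2f}, width x{w:.2f}, resolution x{r:.2f}")
```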
Preview Summary:
- Todayโs focus was just grasping the big picture: EfficientNet proposes a principled way to scale models
- Will review B0 architecture and scaling coefficients in detail next time
✅ TL;DR
📊 SCU: Submission 61 hit a new best (Kaggle AUC 0.8977) → Voting (6:2:2) proves solid
🧮 SQL: Light review of entities, attributes, and keys → basic DB schema theory
🖼️ CV: NST first trial done → images improving after tuning, results tomorrow
📄 Paper: EfficientNet concept preview → compound scaling = key innovation