🧠 Daily Study Log [2025-07-03]
Today's study spanned advanced ensemble experimentation in the SCU_Competition, hands-on DeepDream visualization in computer vision, a continuation of MobileNetV2 paper reading, and TOEIC grammar/listening/writing practice.
📊 SCU_Competition – Submissions 46–50
Focus: Feature pruning, cluster reintroduction, ensemble tuning
Goal: Refine generalization and ensemble strength via clean feature sets
Highlights:
- Submission 46: Removed all `log1p` features → no drop in performance
  → CV AUC: 0.8859 / Kaggle AUC: 0.8866
- Submission 47: Used only key cluster features (removed purchase-power clustering)
  → CV AUC: 0.8852 / Kaggle AUC: 0.8906
- Submission 48: Soft Voting (LGBM, RF, LR), based on Submission 47
  → CV AUC: 0.8826 / Kaggle AUC: 0.8958
- Submission 49: Weighted Voting (LGBM:RF:LR = 5:3:2; see the sketch after this list)
  → CV AUC: 0.8826 / Kaggle AUC: 0.8961
- Submission 50: Added 3 meaningful engineered features on top of Submission 49
  → CV AUC: 0.8788 / Kaggle AUC: 0.8958
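A minimal sketch of the Submission 48/49 ensembles, assuming scikit-learn's `VotingClassifier` with untuned placeholder models (the tuned hyperparameters aren't recorded in this log):

```python
from lightgbm import LGBMClassifier
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression

# Soft voting averages predicted probabilities; weights=[5, 3, 2]
# reproduces the LGBM:RF:LR = 5:3:2 scheme of Submission 49.
ensemble = VotingClassifier(
    estimators=[
        ("lgbm", LGBMClassifier()),
        ("rf", RandomForestClassifier()),
        ("lr", LogisticRegression(max_iter=1000)),
    ],
    voting="soft",
    weights=[5, 3, 2],  # omit for the uniform soft voting of Submission 48
)
# ensemble.fit(X_train, y_train)
# auc = roc_auc_score(y_valid, ensemble.predict_proba(X_valid)[:, 1])
```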
Takeaways:
- `log1p` features are dispensable in the current setup
- Cluster-based features still boost performance
- Weighted Voting improves AUC slightly vs uniform voting
- Overengineering features can slightly hurt CV AUC → next: feature selection phase!
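As a reference for the pruning in Submissions 46–47, a hedged pandas sketch on a toy frame; the `_log1p` suffix and the `purchase_power_cluster` column are hypothetical stand-ins for the real feature names:

```python
import pandas as pd

# Toy frame standing in for the competition data (hypothetical columns)
train = pd.DataFrame({
    "income": [1.0, 2.0],
    "income_log1p": [0.69, 1.10],
    "purchase_power_cluster": [0, 1],
})

# Submission 46: drop every log1p-transformed column (assumes a "_log1p" suffix)
train = train.drop(columns=[c for c in train.columns if c.endswith("_log1p")])

# Submission 47: also drop the purchase-power cluster label (hypothetical name)
train = train.drop(columns=["purchase_power_cluster"], errors="ignore")
print(train.columns.tolist())  # ['income']
```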
📄 Paper Review – MobileNetV2 (Day 3)
Continued: MobileNetV2: Inverted Residuals and Linear Bottlenecks
Today's Focus: Final part of Section 2 – understanding inverted residuals with linear bottlenecks
Reflections:
- Learned the reasoning behind removing non-linearities at the bottleneck layer
- The `expansion → depthwise → projection` structure feels extremely efficient (see the block sketch after this list)
- Will now move on to Section 3 for experiments and evaluation metrics
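To make that structure concrete, here is a minimal PyTorch sketch of one inverted residual block as I understand it from Section 2 (my own reconstruction, not the official implementation; channel counts and the expansion factor are illustrative):

```python
import torch
import torch.nn as nn

class InvertedResidual(nn.Module):
    """Expansion -> depthwise -> linear projection, with a skip when shapes match."""
    def __init__(self, in_ch, out_ch, stride=1, expand=6):
        super().__init__()
        hidden = in_ch * expand
        self.use_residual = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, hidden, 1, bias=False),   # 1x1 expansion to a wide space
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, hidden, 3, stride, 1,    # 3x3 depthwise: per-channel filtering
                      groups=hidden, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU6(inplace=True),
            nn.Conv2d(hidden, out_ch, 1, bias=False),  # 1x1 projection: deliberately linear,
            nn.BatchNorm2d(out_ch),                    # no ReLU at the narrow bottleneck
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out

x = torch.randn(1, 24, 56, 56)
print(InvertedResidual(24, 24)(x).shape)  # torch.Size([1, 24, 56, 56])
```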
🎨 CV Practice – DeepDream (Hands-on)
What I Did:
- Built DeepDream in PyTorch using `VGG16` from `torchvision.models`
- Captured intermediate activations from deep layers (`features[22]`)
- Applied gradient ascent to the input image over 100 iterations
- Visualized the result with `matplotlib`
Code Summary:
```python
from torchvision import models

# Load a pretrained VGG16 and hook a deep layer (features[22])
# so its activations are captured on every forward pass.
model = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()
features = {}
model.features[22].register_forward_hook(lambda m, i, o: features.update(target=o))

def deep_dream(img, iterations=100, lr=0.15):
    img.requires_grad_(True)  # we ascend on the input image, not the weights
    for _ in range(iterations):
        model.zero_grad()
        model(img)
        loss = features['target'].mean()  # maximize the hooked layer's mean activation
        loss.backward()
        grad = img.grad.data
        img.data += lr * grad / (grad.std() + 1e-8)  # std-normalized gradient ascent step
        img.grad.data.zero_()
    return img
```
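A short usage sketch to go with it, starting from a random-noise input (the image size is illustrative):

```python
import torch
import matplotlib.pyplot as plt

# Start from a blurry, meaningless input (random noise)
img = torch.rand(1, 3, 224, 224)

dreamed = deep_dream(img, iterations=100, lr=0.15)

# Display: drop the batch dim, move channels last, clamp to a valid pixel range
out = dreamed.detach().squeeze(0).permute(1, 2, 0).clamp(0, 1)
plt.imshow(out.numpy())
plt.axis("off")
plt.show()
```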
Insights:
- The deeper the CNN layer, the more abstract and surreal the visual output becomes
- The choice of layer (e.g., `features[22]`) drastically changes the generated patterns and textures
- Tweaking the number of iterations and learning rate gives fine control over the level of visual detail
- It was fascinating to see a blurry, meaningless image gradually evolve into structured, dreamlike visuals
- This hands-on experiment helped me intuitively understand which features the model is focusing on
- Next up: Implement Neural Style Transfer using my own content and style images
📚 TOEIC – Light Review
Today's Practice:
- Listening: Unit 1 – Focused on basic dialogue and announcement comprehension
- Writing: Unit 1 – Practiced sentence structure and formal responses in business contexts
Reflections:
- Starting off felt smooth, especially since the early units reinforced grammar fundamentals
- Writing section helped me review subject–verb agreement and clarity in tone
- Planning to keep TOEIC as a low-intensity but consistent daily routine (~30 min/day)
🎯 Next Steps
- SCU: Apply SHAP to identify and remove weak features in the current feature set (see the sketch after this list)
- CV: Implement Neural Style Transfer using custom style/content images and document results
- Paper: Begin Section 3 of MobileNetV2 – experiments, benchmarks, and architecture comparison
- Blog: Create an index page for paper reviews sorted by tags or model types
- TOEIC: Move on to Listening Unit 2 and Writing Unit 2
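For the SHAP step above, a minimal sketch on synthetic stand-in data (the real competition frame and tuned model aren't reproduced here):

```python
import numpy as np
import pandas as pd
import shap
from lightgbm import LGBMClassifier
from sklearn.datasets import make_classification

# Synthetic stand-in for the competition data
X, y = make_classification(n_samples=500, n_features=12, random_state=0)
X_train = pd.DataFrame(X, columns=[f"f{i}" for i in range(12)])
model = LGBMClassifier().fit(X_train, y)

# TreeExplainer handles LightGBM tree ensembles directly
shap_values = shap.TreeExplainer(model).shap_values(X_train)
sv = shap_values[1] if isinstance(shap_values, list) else shap_values  # older shap returns per-class lists

# Rank features by mean |SHAP|; the lowest-ranked are pruning candidates
importance = np.abs(sv).mean(axis=0)
print("weakest features:", list(X_train.columns[np.argsort(importance)[:3]]))
```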
✅ TL;DR
📊 SCU: VotingClassifier with weighted soft voting performed well; `log1p` features no longer useful
🎨 CV: DeepDream implemented successfully – surreal outputs and layer insights gained
📄 Paper: MobileNetV2's efficiency lies in its inverted residual design and linear bottlenecks
📚 TOEIC: Completed Listening/Writing Unit 1 – daily 30-min routine is now in place