📌 Paper Info

- Title: MobileNetV2: Inverted Residuals and Linear Bottlenecks
- Authors: Mark Sandler, Andrew Howard, Menglong Zhu, Andrey Zhmoginov, Liang-Chieh Chen (Google)
- Published: CVPR 2018 (arXiv:1801.04381)


🧠 Day 3 Review – Experiments, Applications, and Conclusions

✅ Step 1: Architecture Expansion

The final MobileNetV2 network consists of:

Initial 3×3 conv (32 filters, stride 2)
→ 19 inverted residual bottleneck blocks
→ Final 1×1 conv (1280 channels) + global average pooling + FC layer
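
The cost of one of those bottleneck blocks can be sketched with a quick MAC count over its three stages (1×1 expand → 3×3 depthwise → 1×1 linear projection). The formula ignores BatchNorm/ReLU6, and the shapes in the example are illustrative, not taken from the paper's tables:

```python
def bottleneck_macs(h, w, c_in, c_out, t=6, stride=1):
    """Approximate multiply-accumulates (MACs) for one MobileNetV2-style
    inverted residual block: 1x1 expand -> 3x3 depthwise -> 1x1 project.
    BatchNorm/ReLU6 costs ignored; a back-of-the-envelope sketch only."""
    c_exp = t * c_in                            # expansion widens channels by factor t
    h_out, w_out = h // stride, w // stride     # the depthwise conv applies the stride
    expand = h * w * c_in * c_exp               # 1x1 pointwise expansion
    depthwise = h_out * w_out * c_exp * 3 * 3   # 3x3 depthwise (one filter per channel)
    project = h_out * w_out * c_exp * c_out     # 1x1 linear bottleneck projection
    return expand + depthwise + project
```

For a hypothetical 56×56×24 input with t = 6, most of the cost sits in the two pointwise convolutions, which is why keeping the bottleneck (non-expanded) width small matters so much.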

✅ Step 2: Experimental Results Summary

šŸ“ ImageNet Classification

| Model       | Top-1 Acc | MACs | Params |
|-------------|-----------|------|--------|
| MobileNetV1 | 70.6%     | 575M | 4.2M   |
| MobileNetV2 | 71.8%     | 300M | 3.4M   |

→ V2 achieves better accuracy with nearly half the computation.
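
The "nearly half the computation" claim follows directly from the table's own numbers; a quick sanity check:

```python
# Back-of-the-envelope check using the figures reported in the table above.
v1_macs, v2_macs = 575e6, 300e6
v1_params, v2_params = 4.2e6, 3.4e6

mac_ratio = v2_macs / v1_macs        # ~0.52: roughly half the compute
param_ratio = v2_params / v1_params  # ~0.81: about 19% fewer parameters
```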

šŸ“ Object Detection (COCO, with SSDLite)

| Model                 | mAP  | Latency |
|-----------------------|------|---------|
| MobileNetV1 + SSD     | 19.3 | 27 ms   |
| MobileNetV2 + SSDLite | 22.1 | 19 ms   |

→ V2 provides higher mAP and faster inference.
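
Much of SSDLite's saving comes from replacing the standard 3×3 convolutions in SSD's prediction layers with depthwise separable ones. A rough MAC comparison shows why (the 10×10×256 head shape is a hypothetical example, not a figure from the paper):

```python
def conv_macs(h, w, c_in, c_out, k=3):
    """MACs of a standard k x k convolution over an h x w feature map."""
    return h * w * c_in * c_out * k * k

def separable_macs(h, w, c_in, c_out, k=3):
    """MACs of a depthwise separable conv (k x k depthwise + 1x1 pointwise),
    the substitution SSDLite makes in the SSD prediction layers."""
    return h * w * c_in * k * k + h * w * c_in * c_out
```

For c_in = c_out = 256 and k = 3, the separable version costs about 11% of the standard one, an ~8.7× reduction per prediction layer.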

šŸ“ Memory Efficiency

In the paper's memory comparison (Table 2; Fig. 2 in the paper), MobileNetV2 keeps peak memory usage below 400K during inference.
This is lower than ResNet-50, VGG, Inception, and the other baselines.
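
A rough sketch of why the inverted residual design allows this: the big expanded tensor inside a block can be processed in channel chunks, because the depthwise conv has no cross-channel interaction, so only the input, the output, and one chunk need to be resident at once. The shapes and chunk count below are illustrative assumptions, not the paper's exact accounting:

```python
def naive_peak(h, w, c_in, c_out, t=6):
    """Peak activation count if the expanded t*c_in tensor between the
    expand and project convs is fully materialized."""
    expanded = h * w * t * c_in
    return max(h * w * c_in, h * w * c_out) + expanded

def chunked_peak(h, w, c_in, c_out, t=6, n_chunks=8):
    """Peak activation count if the expanded tensor is processed in
    n_chunks channel groups: only input, output, and one chunk live
    at once. n_chunks=8 is an illustrative choice."""
    chunk = (h * w * t * c_in) // n_chunks
    return h * w * c_in + h * w * c_out + chunk
```

For a hypothetical 14×14×96 block with t = 6, chunked execution cuts the peak activation count to well under half of the naive figure.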


✅ Step 3: Applications

- Image classification: a drop-in mobile backbone for ImageNet-scale recognition
- Object detection: paired with SSDLite as a lightweight detector (COCO)
- Semantic segmentation: used as the backbone for Mobile DeepLabv3 in the paper

✅ Key Insights (3-Line Summary)

1. V2 beats V1 on ImageNet (71.8% vs 70.6% top-1) with roughly half the MACs.
2. With SSDLite, V2 delivers both higher mAP and lower latency on COCO detection.
3. Peak inference memory stays under 400K, below ResNet-50, VGG, and Inception.

📘 New Terms

- MACs: multiply-accumulate operations, a standard proxy for compute cost
- SSDLite: an SSD variant whose prediction layers use depthwise separable convolutions
- mAP: mean average precision, the standard COCO detection metric

🗂 GitHub Repository

Visual summary + experimental table:
🔗 github.com/hojjang98/Paper-Review


💭 Reflections

The experiments confirm that MobileNetV2 is lightweight not only in theory but also in practice.
Its memory efficiency and inference speed make it one of the most impactful mobile architectures of its time.
I'm especially impressed by how well it balances accuracy against hardware constraints.