Today marked another big step forward for my food image classifier.
After reaching ~41% accuracy yesterday with a 36-class dataset I collected using Selenium,
I decided to push the dataset further, aiming for 100+ images per class.
And the result?
Accuracy has officially broken the 50% barrier, hitting 56%. 🔥
That's no longer random guessing; it's learning.
I retrained the MobileNetV2 model with the expanded dataset, and classes like sweet potato and watermelon showed recall over 0.6.

"Better data beats better models, every time."

No architectural changes were made. The only change was more and better data, and the result was a significant accuracy jump. It reminded me again that data engineering is just as important as modeling.
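The per-class recall numbers mentioned above come straight from the confusion matrix. A minimal sketch of that computation, using made-up counts for three classes (not my actual results):

```python
import numpy as np

# Hypothetical confusion matrix for 3 of the 36 classes.
# Rows = true class, columns = predicted class. Counts are illustrative only.
classes = ["sweet_potato", "watermelon", "ramen"]
cm = np.array([
    [13, 3, 4],   # sweet_potato: 13 of 20 true samples predicted correctly
    [2, 14, 4],   # watermelon:   14 of 20 correct
    [5, 6, 9],    # ramen:         9 of 20 correct
])

# Recall for class i = correct predictions for i / total true samples of i
recall = cm.diagonal() / cm.sum(axis=1)
for name, r in zip(classes, recall):
    print(f"{name}: recall = {r:.2f}")
```

With these toy counts, sweet potato and watermelon land at 0.65 and 0.70, while ramen sits at 0.45, which is exactly the kind of per-class gap that tells you where to collect more data next.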
Next steps:
- Handle class imbalance (class_weight='balanced', oversampling)
- Try stronger backbones like EfficientNetB0 or ConvNeXt
- Use Grad-CAM to understand model focus

This phase taught me something critical:
You can't tune your way out of bad data.
You have to build a solid foundation first, and that starts with the dataset.
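As a quick sketch of the class-imbalance step from the next-steps list: the 'balanced' heuristic weights each class inversely to its frequency, so rare classes contribute more to the loss. The class names and counts below are hypothetical, not my real dataset:

```python
from collections import Counter

# Hypothetical per-class image counts (illustrative only)
labels = ["pizza"] * 120 + ["sweet_potato"] * 80 + ["watermelon"] * 40

counts = Counter(labels)
n_samples = len(labels)
n_classes = len(counts)

# 'balanced' heuristic: weight_c = n_samples / (n_classes * count_c)
# Common classes get weight < 1, rare classes get weight > 1.
weights = {c: n_samples / (n_classes * n) for c, n in counts.items()}
for c, w in sorted(weights.items()):
    print(f"{c}: weight = {w:.2f}")
```

Here the rarest class (watermelon, 40 images) gets weight 2.0 and the most common (pizza, 120 images) gets about 0.67, so the training signal is evened out without collecting a single new image.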