The application applies transfer learning for dog breed identification, implemented with TensorFlow and Keras:
from a pre-trained CNN model (VGG16, VGG19, ResNet50, or InceptionV3) the last layer is removed, then new fully connected (FC) layers are added and trained on the dog breed dataset.
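The setup above can be sketched as follows with the Keras API. This is a minimal illustration, not the module's exact code: the head size (one 256-unit layer) and the use of global average pooling are assumptions; 133 is the number of breeds in the dataset below. In practice you would pass `weights='imagenet'` to download the pre-trained weights; `weights=None` is used here only to keep the sketch self-contained.

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.layers import GlobalAveragePooling2D, Dense
from tensorflow.keras.models import Model

# Load the convolutional base without its original classifier layer
# (use weights='imagenet' in practice to get the pre-trained weights).
base = VGG16(weights=None, include_top=False, input_shape=(224, 224, 3))
base.trainable = False  # freeze the pre-trained convolutional layers

# New fully connected (FC) head, trained on the dog breed dataset
x = GlobalAveragePooling2D()(base.output)
x = Dense(256, activation='relu')(x)
out = Dense(133, activation='softmax')(x)  # one unit per breed

model = Model(inputs=base.input, outputs=out)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```

Only the new FC head is trained; the frozen base acts as a fixed feature extractor, which is what makes training feasible on a dataset of this size.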
The original dataset consists of 8351 dog images covering 133 breeds, divided into:
- training set (6680 images)
- validation set (835 images)
- test set (836 images)
It amounts to 1080 MB in zipped format (see the dataset link).
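As a quick sanity check, the three split sizes add up to the full dataset:

```python
# Split sizes as stated above; the three subsets partition the dataset.
splits = {"train": 6680, "valid": 835, "test": 836}
total = sum(splits.values())
print(total)  # 8351
```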
N.B.: the pre-trained weights can be found here.
CNN articles:
- VGG: Simonyan, K., Zisserman, A.: Very Deep Convolutional Networks for Large-Scale Image Recognition. CoRR abs/1409.1556 (2014). http://arxiv.org/abs/1409.1556
- ResNet: He, K., Zhang, X., Ren, S., Sun, J.: Deep Residual Learning for Image Recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770-778 (2016). https://arxiv.org/abs/1512.03385
- InceptionV3: Szegedy, C., Vanhoucke, V., Ioffe, S., Shlens, J., Wojna, Z.: Rethinking the Inception Architecture for Computer Vision. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2818-2826 (2016). https://arxiv.org/abs/1512.00567
Run locally on your computer
You can run this module directly on your computer, assuming that you have Docker installed, by following these steps:
$ docker pull deephdc/deep-oc-dogs_breed_det
$ docker run -ti -p 5000:5000 deephdc/deep-oc-dogs_breed_det
If you do not have Docker available or you do not want to install it, you can use udocker within a Python virtualenv:
$ virtualenv udocker
$ source udocker/bin/activate
$ git clone https://github.com/indigo-dc/udocker
$ cd udocker
$ pip install .
$ udocker pull deephdc/deep-oc-dogs_breed_det
$ udocker create deephdc/deep-oc-dogs_breed_det
$ udocker run -p 5000:5000 deephdc/deep-oc-dogs_breed_det
Once running, point your browser to http://127.0.0.1:5000 (the port published by the run command above)
and you will see the API documentation, where you can test the module's
functionality, as well as perform other actions (such as training).
For more information, refer to the user documentation.