This service is deprecated; please refer to the newer TensorFlow version.
The deep learning revolution has brought significant advances in a number of fields, primarily linked to image and speech recognition [1]. The standardization of image classification tasks like the ImageNet Large Scale Visual Recognition Challenge [2] has provided a reliable way to compare top-performing architectures.
This Docker container contains a trained Convolutional Neural Network optimized for seed identification from images. The architecture is a ResNet50 [3], implemented in Lasagne on top of Theano.
The training dataset is a collection of images from the Royal Botanical Garden of Spain, consisting of around 28K images covering 743 species and 493 genera.
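As a sketch of what the input pipeline for such a network typically looks like, the snippet below follows the usual convention for Lasagne's pretrained ResNet-50 ImageNet weights: 224x224 BGR images, channels-first layout, with the per-channel ImageNet mean subtracted. The exact preprocessing baked into this container is not documented here, so these constants and steps are an assumption for illustration.

```python
import numpy as np

# Per-channel ImageNet mean in BGR order, as commonly used with
# Caffe-converted ResNet-50 weights (an assumption about this module).
IMAGENET_BGR_MEAN = np.array([103.939, 116.779, 123.68], dtype=np.float32)

def preprocess(rgb_image):
    """Turn a (224, 224, 3) uint8 RGB image into a (1, 3, 224, 224)
    float32 batch suitable for a channels-first ResNet-50."""
    x = rgb_image.astype(np.float32)
    x = x[:, :, ::-1]                 # RGB -> BGR
    x -= IMAGENET_BGR_MEAN            # subtract per-channel mean (no /255 scaling)
    return x.transpose(2, 0, 1)[None] # HWC -> CHW, add batch dimension
```

For real use, the image would first be resized so the shorter side is 224 pixels and then center-cropped to 224x224 before this function is applied.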
[1] Yann LeCun, Yoshua Bengio, and Geoffrey Hinton. Deep learning. Nature, 521(7553):436–444, May 2015.
[2] Olga Russakovsky et al. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision (IJCV), 115(3):211–252, 2015.
[3] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
Run locally on your computer
You can run this module directly on your computer, assuming that you have Docker installed, by following these steps:
$ docker pull deephdc/deep-oc-seeds-classification
$ docker run -ti -p 5000:5000 deephdc/deep-oc-seeds-classification
If you do not have Docker available or you do not want to install it, you can use udocker within a Python virtualenv:
$ virtualenv udocker
$ source udocker/bin/activate
$ git clone https://github.com/indigo-dc/udocker
$ cd udocker
$ pip install .
$ udocker pull deephdc/deep-oc-seeds-classification
$ udocker create deephdc/deep-oc-seeds-classification
$ udocker run -p 5000:5000 deephdc/deep-oc-seeds-classification
Once running, point your browser to http://127.0.0.1:5000 (the port published in the commands above) and you will see the API documentation, where you can test the module's functionality as well as perform other actions (such as training).
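If you prefer to call the prediction endpoint programmatically rather than through the browser UI, a small helper like the one below can rank the returned classes. The response schema used here ({"labels": [...], "probabilities": [...]}) is an assumption for illustration; inspect the actual JSON your container version returns via its API documentation page.

```python
def top_predictions(response, k=3):
    """Return the k most probable (label, probability) pairs from a
    prediction response shaped like:
        {"labels": [...], "probabilities": [...]}
    This shape is a hypothetical example, not the documented schema."""
    pairs = sorted(
        zip(response["labels"], response["probabilities"]),
        key=lambda pair: pair[1],
        reverse=True,
    )
    return pairs[:k]
```

For example, a response listing three species with probabilities 0.1, 0.7 and 0.2 would yield the 0.7 species first.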
For more information, refer to the user documentation.