DEEP Data Science template

A logical, reasonably standardized, but flexible project structure for doing and sharing data science work. Based on the more general data science template.


Published by DEEP-Hybrid-DataCloud Consortium

Tool Description


To simplify development and make it easy to integrate a model with the DEEPaaS API, a project template, the DEEP Data Science template, is offered.

Requirements to use the cookiecutter template:

  • Python 3.5
  • Cookiecutter Python package >= 1.4.0: this can be installed with pip or conda, depending on how you manage your Python packages:
$ pip install cookiecutter


$ conda config --add channels conda-forge
$ conda install cookiecutter

To start a new project, run:
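A typical invocation points cookiecutter at the template repository. The URL below is an assumption for illustration; substitute the actual location of the DEEP Data Science template:

```shell
# The template URL is an assumption -- replace it with the real
# location of the DEEP Data Science template repository.
cookiecutter https://github.com/deephdc/cookiecutter-data-science
```

Cookiecutter then asks a series of questions (project name, author, license, etc.) and generates the project from your answers.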


The resulting directories

Once you answer all the questions, two directories will be created: <your_project> and DEEP-OC-<your_project>.

Each directory is a git repository and has two branches: master and test.

The directory structure of <your_project> looks like this:

├── README.md              <- The top-level README for developers using this project.
├── data
│   └── raw                <- The original, immutable data dump.
│
├── docs                   <- A default Sphinx project; see sphinx-doc.org for details
│
├── models                 <- Trained and serialized models, model predictions, or model summaries
│
├── notebooks              <- Jupyter notebooks. Naming convention is a number (for ordering),
│                             the creator's initials (if many users develop),
│                             and a short `_` delimited description
│
├── references             <- Data dictionaries, manuals, and all other explanatory materials.
│
├── reports                <- Generated analysis as HTML, PDF, LaTeX, etc.
│   └── figures            <- Generated graphics and figures to be used in reporting
│
├── requirements.txt       <- The requirements file for reproducing the analysis environment, e.g.
│                             generated with `pip freeze > requirements.txt`
├── test-requirements.txt  <- The requirements file for the test environment
│
├── setup.py               <- Makes the project pip installable (pip install -e .) so {{cookiecutter.repo_name}} can be imported
├── {{cookiecutter.repo_name}}    <- Source code for use in this project.
│   ├── __init__.py        <- Makes {{cookiecutter.repo_name}} a Python module
│   ├── dataset            <- Scripts to download or generate data
│   ├── features           <- Scripts to turn raw data into features for modeling
│   ├── models             <- Scripts to train models and make predictions
│   │   └── deepaas_api.py <- Main script for the integration with the DEEPaaS API
│   ├── tests              <- Scripts to perform code testing
│   └── visualization      <- Scripts to create exploratory and results-oriented visualizations
│
└── tox.ini                <- tox file with settings for running tox; see tox.readthedocs.io
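Because the template ships a setup.py, the generated package can be installed in editable mode and then imported by name. A minimal sketch, where `your_project` is a placeholder for the repository name you chose:

```shell
# Run from inside the generated <your_project> directory.
# 'your_project' below is a placeholder for the repo name you chose.
pip install -e .                     # editable install via setup.py
python -c "import your_project"      # verify the package is importable
```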

The directory structure of DEEP-OC-<your_project> looks like this:

├─ Dockerfile             Describes the main steps for integrating the DEEPaaS API and the
│                         <your_project> application in one Docker image

├─ Jenkinsfile            Describes basic Jenkins CI/CD pipeline

├─ LICENSE                License file

├─ README.md              README for developers and users.

├─ docker-compose.yml     Allows running the application with various configurations via docker-compose

└─ metadata.json          Defines information propagated to the DEEP Open Catalog
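With the files above in place, the Docker image can typically be built and the service started through docker-compose. A sketch only; the exact service names and configurations depend on the generated docker-compose.yml:

```shell
# Build the image from the Dockerfile and start the service(s)
# defined in docker-compose.yml; exact service names vary.
docker-compose up --build
```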


More extensive documentation is available online.


Keywords: tools, template, data science


License: MIT

Get the code

  • GitHub
  • Docker Hub