Title says it all - just looking to set a few of these up for various reasons.
For instance, I may be teaching a bioinformatics course soon. I would love to have the students all download their data from GEO or somewhere, run their pipeline in a Docker container, and then independently verify (or disconfirm) a published report.
Thus, if there is an already-written vignette that does the leg-work of identifying the experiment etc., that would be best; but honestly, just dockerized pipelines (of good quality) would be close. I could write the vignette from there.
Thanks v much!
Hello Vincent Laufer, please check the following link: https://www.tutorialspoint.com/docker/index.htm
Briefly, if you have already built your pipeline, all you have to do is make your own Docker image. This is done with a `Dockerfile`: you describe the structure of the image, include the files and pipeline scripts you need, and build it. Once the image is built, you start a Docker container, which is a running instance of that image, and mount your own data to be analyzed inside the container. Alternatively, you can have the pipeline start analyzing the data as soon as the container is deployed, using the
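To make that concrete, here is a minimal sketch of such a `Dockerfile`. The base image, script name, and paths are illustrative assumptions, not taken from any particular pipeline; adapt them to your own setup:

```
# Illustrative example only: a small image wrapping an R/Bioconductor pipeline.
# Base image tag, script name, and directories are assumptions.
FROM bioconductor/bioconductor_docker:RELEASE_3_18

# Copy your pipeline script into the image
COPY run_pipeline.R /opt/pipeline/run_pipeline.R

# Directory where the host's data will be mounted at run time
VOLUME /data

WORKDIR /data
```

Each student would then only need Docker installed, the built image, and their downloaded data; everything else (R, packages, the pipeline itself) travels inside the image, which is exactly what makes the analysis reproducible across machines.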
`--entrypoint` argument (or an `ENTRYPOINT` instruction in the Dockerfile). A lot of things can be done with Docker!
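As a sketch of the two usage styles described above, the build and run commands might look like the following. The image name, host data path, and script path are placeholders:

```
# Build the image from the directory containing the Dockerfile
docker build -t my-pipeline .

# Style 1: start an interactive container, mounting host data into /data,
# then run the pipeline by hand inside it
docker run -it -v /path/on/host/data:/data my-pipeline bash

# Style 2: analyze the data immediately on startup by overriding the entrypoint
docker run -v /path/on/host/data:/data --entrypoint Rscript \
    my-pipeline /opt/pipeline/run_pipeline.R
```

Style 1 is handy for teaching, since students can poke around inside the container; Style 2 turns the container into a one-shot, push-button analysis.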