I get that the point of using a Docker image is reproducibility, but the key selling point of Docker is its modularity! It seems slightly blasphemous to put everything into one Docker image. Why not just a VM image if you're going to make a multi-gigabyte offline reproducibility archive?
Instead, I would give some serious thought to what genomax said, which is to have a Docker image that automates downloading, decompressing, etc., all the raw public data and turning it into the final result. This way your Docker image stays tiny, and you don't have issues with the Docker data and the public data falling out of sync if corrections are ever needed (as a general rule of thumb, there should only be one place to download the data from). And of course, updating a 100 MB Docker image to fix a typo in a script is much easier than updating a 100 GB one.
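For illustration, here is a minimal sketch of what such a "code-only" image could look like. The base image, script name, mount path, and image tag are all placeholder assumptions for the sake of the example, not anything from this thread:

```dockerfile
# Sketch of a pipeline-only image: the image carries the code, not the data.
# All names below (run_analysis.sh, my-pipeline, /results) are placeholders.
FROM ubuntu:22.04

# Tools needed to fetch and process the public data at run time
RUN apt-get update && apt-get install -y --no-install-recommends \
        wget ca-certificates python3 \
    && rm -rf /var/lib/apt/lists/*

# Only the analysis scripts live in the image (kilobytes, not gigabytes)
COPY run_analysis.sh /opt/pipeline/run_analysis.sh
RUN chmod +x /opt/pipeline/run_analysis.sh

# At run time the container downloads the raw data from its single
# authoritative source and writes the final results, e.g.:
#   docker run -v "$PWD/results:/results" my-pipeline
ENTRYPOINT ["/opt/pipeline/run_analysis.sh"]
```

The point being that the image itself only pins the tool versions and the script; the data always comes from the one authoritative public source.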
As an aside, it seems very popular these days, when met with the question of "how can I reproduce this in 10 years' time?", to imagine the future as some cataclysmic hellscape where nothing works anymore. Some poor future bioinformatician slumped over a green-and-black cathode-ray monitor, mumbling about "the wisdom of the ancients" while his buddies pedal bicycles to generate power. All to reproduce the RNA-Seq findings of some 10-year-old study.
As someone typing this while playing Pokemon Red via a Game Boy emulator on his phone, I'd say the chances of bad code still working 10 years from now are fairly high, so long as the code was fairly popular at the time :)
Maybe quay.io? I'm not sure what sort of size limits they have.
What if you send it to the journal as a supplementary file? Some bioinformatics-related journals have no size limits for hosted files.