Implementation Lessons

There are many companies that will host Shiny applications on their servers, and there are enterprise-level Shiny Server products you can buy that will run multiple applications with user authentication and similar features. But if you just need to host a self-contained application that doesn't need authentication, running Shiny Server inside a container on GCP is one of the easiest and most cost-effective ways to get your application in front of users.

Overwrite defaults

Shiny Server ships with a default demo application in the /srv/shiny-server/ directory. You need to overwrite it so that your application, not the demo, is served from the container when people visit the URL.
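A minimal Dockerfile sketch of this step, assuming your application lives in a local app/ directory (the directory name is illustrative):

```dockerfile
FROM rocker/shiny:latest

# Remove the default demo content so it is not served at the root URL
RUN rm -rf /srv/shiny-server/*

# Copy your application (app.R or ui.R/server.R plus any data) into the served directory
COPY app/ /srv/shiny-server/
```

With the demo removed and your app copied to the root of /srv/shiny-server/, visitors reach your application directly instead of having to navigate to a subdirectory.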

Add system level packages

The Shiny Server container image is published by the Rocker organization with express permission from RStudio. It includes the basic system-level necessities to run Shiny Server, but you will often need to add system packages of your own: libraries for unit and type conversions, tools for handling geospatial raster and vector files, or tools for handling fonts that are not tied to a graphical interface.
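A sketch of what those system-level installs might look like in the Dockerfile; the specific packages here are examples for the categories mentioned above, and you should swap in whatever your R packages actually require:

```dockerfile
# Illustrative system-level dependencies:
#   libudunits2-dev                      - unit/type conversions (R "units" package)
#   libgdal-dev libgeos-dev libproj-dev  - geospatial raster/vector support (sf, terra)
#   libfreetype6-dev libfontconfig1-dev  - font handling without a graphical interface
RUN apt-get update && apt-get install -y --no-install-recommends \
        libudunits2-dev \
        libgdal-dev libgeos-dev libproj-dev \
        libfreetype6-dev libfontconfig1-dev \
    && rm -rf /var/lib/apt/lists/*
```

Cleaning the apt lists at the end keeps the image layer smaller.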

Add R application specific libraries

Along with the system packages, you will also need to install any R libraries your code calls. This should be done after the system-level packages are installed, and it should be done as an R script rather than as a bash script. Also be sure to set the dependencies flag so that everything your application needs to run is inside of the container.
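One way to sketch this in the Dockerfile is an inline Rscript call; the package names below are examples, so list your application's actual imports:

```dockerfile
# Install R libraries after the system packages above.
# dependencies = TRUE pulls in everything each package needs,
# so the application is fully self-contained in the image.
RUN Rscript -e 'install.packages(c("dplyr", "sf", "ggplot2"), dependencies = TRUE, repos = "https://cloud.r-project.org")'
```

For longer package lists, the same idea works by COPYing a small install_packages.R script into the image and running it with Rscript.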

Allocate appropriate resources

When you deploy the container you will need to set the number of vCPUs and the amount of RAM each container instance gets. If you don't allocate enough of these resources, you may get some erratic behavior, so it can be worth over-allocating if you don't expect long run times or you aren't cost sensitive.
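A hedged sketch of the deploy command; the service name, image path, and region are placeholders, and the resource values are just a starting point:

```shell
# Illustrative Cloud Run deploy; adjust --cpu/--memory to your workload.
# --port 3838 is needed because Shiny Server listens on 3838,
# not Cloud Run's default of 8080.
gcloud run deploy my-shiny-app \
  --image us-central1-docker.pkg.dev/my-project/my-repo/my-shiny-app:latest \
  --region us-central1 \
  --port 3838 \
  --cpu 2 \
  --memory 2Gi \
  --allow-unauthenticated
```

If the app behaves erratically under load, bumping --cpu and --memory is usually the first thing to try.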


Retrospective

This works well for small, simple projects. You package up your code, your data, and all the necessary libraries so the application can function. You upload the image to Artifact Registry so that Cloud Run can pull it from private storage. Then you can set basic details like scaling with the number of requests. If I were to update this, I might try setting up a code repository with build and deployment triggers. That way you could have a kind of automated deployment and be able to easily keep apps updated if your data or analysis needs change. I might also look into ways to restrict access with things like identity-aware proxies or other authentication services. That would open up a lot more opportunities for internal use cases.
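The build-and-deploy trigger idea could be sketched as a Cloud Build config; everything here is hypothetical (repository, region, and service names are placeholders), just to show the shape of an automated pipeline:

```yaml
# Hypothetical cloudbuild.yaml wired to a repository trigger.
steps:
  # Build the container image from the repo's Dockerfile
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-shiny-app:$COMMIT_SHA', '.']
  # Push it to Artifact Registry
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-shiny-app:$COMMIT_SHA']
  # Deploy the new image to Cloud Run
  - name: 'gcr.io/google.com/cloudsdktool/cloud-sdk'
    entrypoint: gcloud
    args: ['run', 'deploy', 'my-shiny-app',
           '--image', 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-shiny-app:$COMMIT_SHA',
           '--region', 'us-central1']
images:
  - 'us-central1-docker.pkg.dev/$PROJECT_ID/my-repo/my-shiny-app:$COMMIT_SHA'
```

With a trigger on the main branch, every merge would rebuild the image and roll the new version out automatically.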