I have been involved in developing web-based software for quite some time now, and I have been lucky enough to contribute to building web software solutions for government, enterprise, and medium and small businesses. I’ve talked a little about the different server infrastructure we have used over time to fulfil the needs of our customers, but I felt it wasn’t enough. This post is about how we have structured our server infrastructure to support our web-based software.
When we started building Vuid, I had been doing server admin for about 15 years, and one of the things I do not enjoy about server admin is always being on call. I don’t want to be on call 24/7/365, I don’t want anyone on my team to have to be on call 24/7/365, and my customers do not want the infrastructure their important software runs on to be reliant on an overworked sysadmin.
When we started developing Vuid, one of the requirements we set for our server infrastructure was that it must not require a sysadmin on call 24/7/365; we wanted our sysadmins to only need to work 9-5 and to have a life outside of Vuid.
To do this, we developed what we called “the shotgun approach” to server admin, which, thinking about it now, probably isn’t a name that inspires confidence. That said, it was descriptive of how we structured our servers: we replicated our server infrastructure across providers, countries and data centers. To do this, we use Docker (previously we used Tredly) and run a number of Docker manager nodes that do not run any containers except Traefik. Before we discovered Traefik, we used a custom NGINX container we developed to route traffic between all of our Docker servers. We then replicate our Docker servers across providers, data centers and countries, and when bringing up each Docker service, we specify in its Docker stack file that it should be replicated across providers, data centers and countries. This means that no matter what happens, our infrastructure stays up. There are certainly other ways to attain uptime above 99.99% using a single provider, but we prefer to be able to move our software to a different provider without needing to rewrite significant portions, if and when the need arises.
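As a rough sketch of what that kind of stack file can look like, Docker Swarm lets you spread a service’s replicas across node labels. The labels (`provider`, `datacenter`), the service name and the image below are illustrative assumptions, not our actual configuration:

```yaml
# Hypothetical stack file: spread replicas across provider and
# data-center node labels, so losing any one provider or data
# center still leaves replicas of the service running elsewhere.
version: "3.8"

services:
  web:
    image: example/web-app:latest  # placeholder image
    deploy:
      mode: replicated
      replicas: 6
      placement:
        # Spread replicas as evenly as possible across each
        # distinct value of these node labels.
        preferences:
          - spread: node.labels.provider
          - spread: node.labels.datacenter
```

Deployed with `docker stack deploy`, the scheduler balances the six replicas across whatever `provider` and `datacenter` labels the swarm’s nodes carry, which is what makes the “shotgun” replication declarative rather than something a sysadmin has to manage by hand.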
The above solution works well for containers that use or manipulate data, but a web app of any size needs to store data. Our go-to storage solution has always been PostgreSQL, though we have also used several other storage systems. One of the problems with running PostgreSQL servers across multiple providers has been connection security, i.e. the need to use SSL to encrypt traffic between our replicated servers. The main issue with using SSL is that we need to manage SSL certificates, which is laborious if you are doing it correctly. Instead of using PostgreSQL’s built-in SSL encryption, we now use encrypted overlay networks in Docker to connect our PostgreSQL servers, which completely negates the need to manage the many different PostgreSQL SSL certificates and users. By running PostgreSQL within a Docker container we do lose some performance, but we also gain the ability to configure the containers to heal automatically.
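For reference, Docker overlay networks support IPsec encryption of traffic between swarm nodes via the `encrypted` driver option, which is the mechanism this approach relies on. A minimal sketch, assuming a swarm with an external secret for the password (the network, service and secret names here are made up for illustration):

```yaml
# Hypothetical sketch: PostgreSQL attached to an encrypted overlay
# network, so traffic between nodes is encrypted by Docker's IPsec
# tunnels instead of per-server PostgreSQL SSL certificates.
version: "3.8"

services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD_FILE: /run/secrets/pg_password
    networks:
      - pg_net
    secrets:
      - pg_password

networks:
  pg_net:
    driver: overlay
    driver_opts:
      # Enables IPsec encryption for this overlay network's
      # inter-node traffic.
      encrypted: ""

secrets:
  pg_password:
    external: true
```

Note that this only encrypts traffic between swarm nodes on that network; connections from outside the overlay network would still need their own transport security.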