Automatic HTTPS with Azure Container Instances (ACI)

Johann Gyger
Published in ITNEXT · 4 min read · Jan 31, 2021

Let’s assume you want to deploy a simple containerized application or service to the Azure cloud. Additionally, your service needs to be reachable publicly via HTTPS. This technical article shows you how to achieve this goal.

Azure Container Instances

According to the architecture guide Choosing an Azure compute service, you have several options for deploying your containerized service; one of them is Azure Container Instances (ACI):

Container Instances: The fastest and simplest way to run a container in Azure, without having to provision any virtual machines and without having to adopt a higher-level service.

Simple also means that you don’t get all the options and features of a full-blown orchestration solution such as Azure Kubernetes Service (AKS). ACI still provides features like sidecars and persistent volumes. With ACI, however, you have to live with some downtime when upgrading your deployment.

And you have to set up TLS manually. There is a guide, Enable TLS with a sidecar container, which tells you how to set up HTTPS with Nginx and a self-signed certificate. Ugh. The guide also mentions Caddy as an alternate TLS provider but doesn’t provide more details.

Caddy

Caddy 2 is a powerful, enterprise-ready, open source web server with automatic HTTPS written in Go.

Ok, sounds nice! Automatic HTTPS sounds really intriguing. What does it mean? “Caddy obtains and renews TLS certificates for your sites automatically. It even staples OCSP responses.” Wow! But how is this done?

“Caddy serves public DNS names over HTTPS using certificates from a public ACME CA such as Let’s Encrypt”. This means you just need a public DNS record, and Caddy needs to be reachable via ports 80 and 443. Nice!

Setup Instructions

So let’s combine ACI and Caddy to achieve our goal. I’ll use Terraform to set up the infrastructure in Azure. We’ll start with a new Terraform file and configure it with the Azure Provider (azurerm) and a local value for the Azure region:
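A minimal starting point could look like the following sketch (the provider version and the region `westeurope` are assumptions; pick whatever region suits you):

```hcl
terraform {
  required_providers {
    azurerm = {
      source = "hashicorp/azurerm"
    }
  }
}

provider "azurerm" {
  features {}
}

locals {
  # Azure region used by all resources below
  location = "westeurope"
}
```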

Next, we are going to define three resources so that we can provide persistent storage for Caddy:
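A sketch of those three resources, assuming placeholder names (`my-service`, `myservicecaddy`, `caddy-data`):

```hcl
resource "azurerm_resource_group" "rg" {
  name     = "my-service"
  location = local.location
}

resource "azurerm_storage_account" "caddy" {
  name                     = "myservicecaddy" # must be globally unique
  resource_group_name      = azurerm_resource_group.rg.name
  location                 = local.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}

# Azure Files share that will back Caddy's /data directory
resource "azurerm_storage_share" "caddy" {
  name                 = "caddy-data"
  storage_account_name = azurerm_storage_account.caddy.name
  quota                = 1 # GB
}
```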

This is needed so that the certificate from Let’s Encrypt isn’t lost between deployments. If you deploy frequently and Caddy can’t reuse the previous certificate, you will probably run into one of Let’s Encrypt’s rate limits, which means you won’t be able to obtain a new certificate for your domain for some time.

Now we’re ready to define our main resource, the container instance (called container group in Terraform):
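A sketch of the container group, with placeholder names and sizes; the line references in the following paragraphs map onto this block:

```hcl
resource "azurerm_container_group" "service" {
  name                = "my-service"
  resource_group_name = azurerm_resource_group.rg.name
  location            = local.location
  os_type             = "Linux"
  dns_name_label      = "my-service"
  ip_address_type     = "Public"

  container {
    name   = "app"
    image  = "nginxinc/nginx-unprivileged:stable"
    cpu    = "0.5"
    memory = "0.5"
  }

  container {
    name   = "caddy"
    image  = "caddy:2"
    cpu    = "0.5"
    memory = "0.5"

    ports {
      port     = 80
      protocol = "TCP"
    }

    ports {
      port     = 443
      protocol = "TCP"
    }

    volume {
      name                 = "caddy-data"
      mount_path           = "/data"
      storage_account_name = azurerm_storage_account.caddy.name
      storage_account_key  = azurerm_storage_account.caddy.primary_access_key
      share_name           = azurerm_storage_share.caddy.name
    }

    commands = ["caddy", "reverse-proxy", "--from", "my-service.westeurope.azurecontainer.io", "--to", "localhost:8080"]
  }
}
```

The `--from` address follows the pattern `<dns_name_label>.<region>.azurecontainer.io`; adjust it to your label and region.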

Note that we define two containers. On line 9, we use an Nginx unprivileged image which serves as a surrogate for our real service and listens on port 8080.

On line 16, we define another container (sidecar) which contains our Caddy server. As mentioned previously, Caddy needs ports 80 and 443, so we assign those ports. Also, note that we are using a public IP (line 7) and we define a DNS subdomain (line 6).

Lines 32–38 contain the configuration for the shared volume, which references the storage resources we defined before. Caddy stores its data in the /data directory.

Line 40 contains all the magic to start Caddy. We tell it to act as a reverse proxy for our main service, the address to listen to (from parameter), and the forwarding address for our main service (to parameter), which is localhost:8080. That’s it. Caddy can be started with a one-liner and requires almost no configuration! (This is a concept I call zero config which I will treat in a future article.)

Finally, we print the address of our new service which should be accessible via HTTPS with a valid certificate from Let’s Encrypt.
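A sketch of that output, using the `fqdn` attribute exported by the container group resource (the resource name `service` matches the sketch above and is otherwise an assumption):

```hcl
output "url" {
  value = "https://${azurerm_container_group.service.fqdn}"
}
```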

Let’s log in with the Azure CLI (az), and let’s initialize and apply our new Terraform config:
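Assuming the Azure CLI and Terraform are installed, the commands look like this:

```shell
az login        # authenticate against Azure
terraform init  # download the azurerm provider
terraform apply # create the resources (review the plan, then confirm)
```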

Nice work! Let’s test our service in a browser by invoking the URL provided in the output. If the page displays “Welcome to nginx!” and the browser doesn’t complain about invalid certificates, then we’ve achieved our goal.

There are some restrictions you need to be aware of: First, you have to spin up a separate Caddy container for each service, which consumes extra resources. Second, you have to make sure that your service doesn’t listen on ports 80 and 443, as those are reserved for Caddy. Third, Caddy requires a public IP.

Conclusion

In this technical guide, I demonstrated how you can overcome one of the shortcomings of ACI when it comes to managing TLS certificates for an HTTPS connection.
