Deploy to EC2 Container Service from Jenkins

In the past few months I worked on a project where one of my tasks was setting up the project's CI infrastructure. Actually, it would have been sufficient to just set up a plain Jenkins with a few jobs and leave it at that. However, that would have been too boring, so I started thinking about how to make things more interesting. After some thought I decided to combine this task with a small AWS exploration and try to deploy the project to an AWS cluster.

Building

As far as the build was concerned, everything stayed the same… you know… executing some scripts, running some Maven builds here and there… the usual stuff.

However, one step had to be added to the build: the Docker build. To avoid setting up application servers and databases on every host over and over again, I had chosen Docker right from the start.

The only things I needed to add to our Jenkins installation were two plugins: one for Docker build support and one for Amazon ECR credentials.

After adding these two, everything was pretty straightforward: add the “Docker Build and Publish” step to the build job and configure it to push the built image to an Amazon ECR repository.

[Screenshot: Docker build configuration]

Note that the ECR repositories must be created manually, either via the Management Console or via the AWS CLI.
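For example, creating a repository via the CLI looks roughly like this (the repository name my-app is just a placeholder):

```bash
# Create the ECR repository up front; "my-app" is a placeholder name
aws ecr create-repository --repository-name my-app
```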

So far, so good. Now I had perfectly deployable containers ready.

Deploying

As Docker containers don't integrate themselves automagically into the deployment job, this job had to be updated as well. Though “mostly rewritten from scratch” is the more appropriate term.

The existing (simple) job deployed the application to an application server and database right on the Jenkins host. That is a bad setup for various reasons, e.g. scalability and separation of concerns.

What I wanted to achieve was that my Jenkins job could deploy the Docker containers to a cluster, which was set up via ECS. After quickly looking through the available Jenkins plugins I couldn't find any that provided the needed functionality… well… I hadn't much hope in the first place.

It wasn't that big of a problem, as the AWS development team provides a neat suite of scripts (the AWS CLI) to trigger all kinds of actions for their cloud services. One of these actions is running an ECS task on a cluster of my choice, which is pretty much what I needed.

Note: An ECS task (more precisely, its task definition) is basically a template defining how your Docker containers should be deployed. Think of it as the AWS version of docker-compose.
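To give a rough idea, registering a minimal task definition via the CLI could look like the sketch below; the family name, image URI, memory and ports are placeholders, not the project's actual values:

```bash
# Sketch: register a minimal task definition with a single container
# (family, image URI, memory and ports are placeholder values)
aws ecs register-task-definition \
  --family my-app \
  --container-definitions '[
    {
      "name": "app",
      "image": "123456789012.dkr.ecr.eu-central-1.amazonaws.com/my-app:latest",
      "memory": 512,
      "portMappings": [{ "containerPort": 8080, "hostPort": 8080 }]
    }
  ]'
```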

The deployment itself was quickly done and could be controlled by two (job) parameters, Task and Cluster, which basically define what to deploy where.
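The original deployment script isn't reproduced here, but its core boils down to a single CLI call driven by those two parameters. A minimal sketch, assuming the job parameters are exposed to the shell as $TASK and $CLUSTER:

```bash
# Sketch: start the task on the chosen cluster
# ($TASK and $CLUSTER are assumed to come from the Jenkins job parameters)
aws ecs run-task --cluster "$CLUSTER" --task-definition "$TASK" --count 1
```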

Note: AWS credentials need to be set up first for the Jenkins OS user (or provided in the script).
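For reference, the credentials for the Jenkins OS user can be set with aws configure (the values below are placeholders):

```bash
# Configure credentials and default region for the user running the Jenkins jobs
aws configure set aws_access_key_id     <jenkins-access-key-id>
aws configure set aws_secret_access_key <jenkins-secret-access-key>
aws configure set default.region        eu-central-1
```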

After the task was started, I could reuse my previous deployment steps by just replacing every 127.0.0.1 with $EC2_INSTANCE_ID.
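How $EC2_INSTANCE_ID gets filled isn't shown here; one way to resolve the address of the container instance the task landed on is a chain of describe calls like the following sketch (it assumes a single task per cluster and uses the instance's private IP):

```bash
# Sketch: resolve the address of the instance running the freshly started task
TASK_ARN=$(aws ecs list-tasks --cluster "$CLUSTER" \
  --query 'taskArns[0]' --output text)
CONTAINER_INSTANCE=$(aws ecs describe-tasks --cluster "$CLUSTER" --tasks "$TASK_ARN" \
  --query 'tasks[0].containerInstanceArn' --output text)
INSTANCE_ID=$(aws ecs describe-container-instances --cluster "$CLUSTER" \
  --container-instances "$CONTAINER_INSTANCE" \
  --query 'containerInstances[0].ec2InstanceId' --output text)
EC2_INSTANCE_ID=$(aws ec2 describe-instances --instance-ids "$INSTANCE_ID" \
  --query 'Reservations[0].Instances[0].PrivateIpAddress' --output text)
```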

At this point you might ask yourself how to deal with the tear-down… that's a fair question. To tear down all running tasks (without using the Management Console, obviously) I run a small script after all deployment steps have finished.
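That script isn't embedded here; a minimal sketch of such a tear-down, assuming the same $CLUSTER job parameter, could look like this:

```bash
# Sketch: stop every task currently running on the cluster
for task in $(aws ecs list-tasks --cluster "$CLUSTER" \
                --query 'taskArns[]' --output text); do
  aws ecs stop-task --cluster "$CLUSTER" --task "$task"
done
```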

Logging

Experienced readers might have noticed something. All applications run in Docker containers, which are managed by a different Docker host (the ECS instances in this case). “How do you provide logs for the devs who need information about the deployment?” THAT is an excellent question. I actually struggled the most with this particular problem until I found a solution I could live with.

The problem is the following: the logs are kept by the Docker host and are not directly available via the AWS CLI.

One option is to connect directly to the particular instance (I already had its IP in $EC2_INSTANCE_ID) and invoke docker logs on that host to gain access to the containers' logs. That would imply opening up the Docker port for the Jenkins slaves, but most of all it would give the job knowledge that there IS actually Docker somewhere in the background. I wanted to avoid that and use some abstraction mechanism instead, like… AWS CloudWatch.

The integration is as simple as it can get: just configure your ECS task to log to a CloudWatch log stream.

[Screenshot: ECS task logging configuration]
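In the task definition JSON this boils down to a logConfiguration block on the container definition; the group name, region and stream prefix below are assumptions:

```json
"logConfiguration": {
  "logDriver": "awslogs",
  "options": {
    "awslogs-group": "my-app-logs",
    "awslogs-region": "eu-central-1",
    "awslogs-stream-prefix": "jenkins"
  }
}
```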

However, this doesn't solve anything on its own. The only thing it does is redirect the containers' output to an AWS service. To fetch the containers' logs I use the script below and archive the produced log artifacts in a post-build action.

Note: Due to the nature of CloudWatch it is (currently) not possible to retrieve the plain logging output directly. Therefore I used the JSON API and parsed the output.
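The original script isn't embedded here, but a minimal sketch of fetching and flattening the logs with the AWS CLI and jq, assuming the log group from the configuration above, might look like this:

```bash
# Sketch: pull every log stream of the group and store the plain messages as artifacts
# ("my-app-logs" is an assumed group name; adjust it to your task definition)
LOG_GROUP="my-app-logs"

for stream in $(aws logs describe-log-streams --log-group-name "$LOG_GROUP" \
                  --query 'logStreams[].logStreamName' --output text); do
  # get-log-events returns JSON; extract the plain messages with jq
  aws logs get-log-events --log-group-name "$LOG_GROUP" --log-stream-name "$stream" \
    --output json | jq -r '.events[].message' > "${stream//\//_}.log"
done
```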

Conclusion

With the methods above I was able to deploy our application to an ECS cluster without much effort. The scripts could easily be extended to create persistent, managed tasks that would be accessible by (and personalized for) the developers.

After running this setup for a few weeks now, I can say that it works pretty nicely. The ECS cluster can be scaled on demand to allow for more tasks. I used the same scripts to deploy our release cluster, which is controlled via its own ECS service with its own Elastic Load Balancer.

Though the scripts work, there is one thing I noticed in the weeks since I made the above changes: sometimes the infrastructure seems to get hiccups, like tasks not running due to random AWS issues. Don't get me wrong… these things don't happen all the time, but occasionally. I'd estimate about once every three weeks.

Leave a comment if you would like to share your opinion on this topic or need some additional information about the scripts and technologies used.

Best wishes.
