GitLab is an integral part of our day-to-day workflow on the AWS platform, as we manage well over 110K lines of CloudFormation code across close to one hundred AWS accounts.
In addition to using it for our internal development purposes, we also use it to deploy and manage solutions for our managed and OrbitOps customers.
In this article, I’ll walk you through how we currently have GitLab deployed and how we integrate it into our customers’ AWS accounts.
First, let’s have a look at the architecture. We’ve deployed GitLab in what we call our ‘support’ account. Primarily, we use this AWS account as a ‘jumping-off point’ to get to all other AWS accounts - internal or external.
We deployed it in a private subnet with an Application Load Balancer (ALB) as the entry point. We’re currently running a single EC2 instance in a ‘steady-state’ autoscaling group - this means the minimum, maximum, and desired capacity are all set to one.
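A ‘steady-state’ group like this can be sketched in CloudFormation roughly as follows (the logical IDs, subnet, target group, and launch-template references are illustrative, not our actual stack):

```yaml
# Illustrative snippet - resource names are placeholders.
Resources:
  GitLabAutoScalingGroup:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      # min = max = desired = 1 keeps exactly one instance running
      # and replaces it automatically if it fails health checks.
      MinSize: '1'
      MaxSize: '1'
      DesiredCapacity: '1'
      HealthCheckType: ELB
      HealthCheckGracePeriod: 300
      VPCZoneIdentifier:
        - !Ref PrivateSubnetA
        - !Ref PrivateSubnetB
      TargetGroupARNs:
        - !Ref GitLabTargetGroup   # registered behind the ALB
      LaunchTemplate:
        LaunchTemplateId: !Ref GitLabLaunchTemplate
        Version: !GetAtt GitLabLaunchTemplate.LatestVersionNumber
```

Using `HealthCheckType: ELB` means the group also replaces an instance that stops answering the ALB’s health checks, not just one that fails EC2 status checks.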
We do this for two main reasons: first, to reduce costs; second, if the EC2 instance were to fail, the autoscaling service would automatically replace it. We consider our GitLab instance a ‘long-running’ deployment, which means we maintain the OS and application in place rather than replacing the instance with an updated AMI.
We keep the OS and GitLab updated with a simple automation document using Systems Manager (SSM). We keep the AMI up-to-date by using the EC2 Image Builder service.
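An SSM Automation document for this kind of patching can be sketched as below. The document content, package manager, and GitLab package name are assumptions (an Amazon Linux host running the Omnibus `gitlab-ee` package), not our exact document:

```yaml
# Illustrative SSM Automation document - steps and commands are assumptions.
schemaVersion: '0.3'
description: Patch the OS and upgrade GitLab on the running instance
parameters:
  InstanceId:
    type: String
mainSteps:
  - name: PatchAndUpgrade
    action: aws:runCommand
    inputs:
      DocumentName: AWS-RunShellScript
      InstanceIds:
        - '{{ InstanceId }}'
      Parameters:
        commands:
          - yum update -y                # OS packages
          - yum install -y gitlab-ee     # picks up the latest packaged GitLab release
```

A document like this can then be run on a schedule with an SSM maintenance window or State Manager association.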
GitLab does require a PostgreSQL database, and early on, we decided to use the Relational Database Service (RDS). By doing this, we’re able to offload much of the administration. We also get the benefits of patching, backups, and automatic failover should the primary node become unavailable for any reason.
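A Multi-AZ RDS instance along these lines provides the failover and backup behaviour described above. The identifiers, sizing, and secret reference here are placeholders, not our production values:

```yaml
# Illustrative snippet - instance class, storage, and secret name are assumptions.
Resources:
  GitLabDatabase:
    Type: AWS::RDS::DBInstance
    Properties:
      Engine: postgres
      DBInstanceClass: db.m5.large
      AllocatedStorage: '100'
      MultiAZ: true                  # automatic failover to a standby node
      StorageEncrypted: true
      BackupRetentionPeriod: 7       # daily automated backups, kept 7 days
      AutoMinorVersionUpgrade: true  # managed minor-version patching
      MasterUsername: gitlab
      MasterUserPassword: '{{resolve:secretsmanager:GitLabDbSecret:SecretString:password}}'
      VPCSecurityGroups:
        - !Ref DatabaseSecurityGroup
```

With `MultiAZ: true`, RDS keeps a synchronous standby in another Availability Zone and fails over to it automatically.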
To protect data on the EC2 instance itself, we use the AWS Backup service and create snapshots of the Elastic Block Store (EBS) volumes every night. GitLab natively supports the Simple Storage Service (S3), and we use this to make daily copies of necessary GitLab configuration. If anything were to go wrong, we can quickly recover all the components and get up and running again.
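The nightly EBS snapshots can be driven by an AWS Backup plan roughly like this. The plan name, schedule, vault, and retention are assumptions for illustration:

```yaml
# Illustrative AWS Backup plan - names, schedule, and retention are assumptions.
Resources:
  GitLabBackupPlan:
    Type: AWS::Backup::BackupPlan
    Properties:
      BackupPlan:
        BackupPlanName: gitlab-nightly
        BackupPlanRule:
          - RuleName: NightlyEbsSnapshots
            TargetBackupVault: Default
            ScheduleExpression: cron(0 5 * * ? *)  # every night at 05:00 UTC
            Lifecycle:
              DeleteAfterDays: 30                  # expire snapshots after 30 days
```

A companion `AWS::Backup::BackupSelection` resource would then target the GitLab instance’s volumes, for example by tag.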
The architecture described above is entirely deployed and maintained through several CloudFormation stacks. When it comes to our stack strategy, we tend to break stacks into the smallest chunks that make sense, which provides several benefits.
Now that we have our GitLab server up and running, let’s discuss how we connect to other AWS accounts. Our general approach to ensuring the security of AWS is to always start with a role. GitLab integration is no different.
In each AWS account, we create an IAM Role with a trust policy that limits who (or what) can assume the role, and we add a randomly generated external ID. While we always practice ‘least privilege,’ it can be challenging with CloudFormation - we never really know in advance which AWS services it may need to access. Generally, we use the AWS-managed PowerUser policy plus a customer managed policy that restricts IAM access to make everything work.
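A cross-account role of this shape can be sketched as follows. The account ID, external ID, role names, and the restricting policy reference are placeholders for the values generated per account:

```yaml
# Illustrative cross-account role - all identifiers are placeholders.
Resources:
  GitLabDeployRole:
    Type: AWS::IAM::Role
    Properties:
      RoleName: gitlab-deploy
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              # Only the runner's role in the central 'support' account may assume this.
              AWS: arn:aws:iam::111111111111:role/gitlab-runner
            Action: sts:AssumeRole
            Condition:
              StringEquals:
                sts:ExternalId: example-external-id  # randomly generated per account
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/PowerUserAccess
        - !Ref LimitedIamAccessPolicy  # customer managed policy restricting IAM actions
```

The external ID condition is the standard guard against the confused-deputy problem: the caller must present the shared secret in addition to holding a trusted principal.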
Once the IAM Roles have been deployed, we record their Amazon Resource Names (ARNs) and external IDs, then update the AWS credentials configuration for the GitLab runner, which in turn assumes the role and deploys the stacks.
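In a `.gitlab-ci.yml` job this assume-role step might look like the sketch below. The variable names, image, and stack name are assumptions (values such as the role ARN and external ID would live in CI/CD variables):

```yaml
# Illustrative .gitlab-ci.yml job - variables and stack name are placeholders.
deploy:
  image: amazon/aws-cli
  script:
    # Assume the per-account role with its ARN and external ID, then
    # export the temporary credentials for the CloudFormation deploy.
    - >
      CREDS=$(aws sts assume-role
      --role-arn "$TARGET_ROLE_ARN"
      --external-id "$TARGET_EXTERNAL_ID"
      --role-session-name gitlab-deploy
      --query 'Credentials.[AccessKeyId,SecretAccessKey,SessionToken]'
      --output text)
    - export AWS_ACCESS_KEY_ID=$(echo "$CREDS" | cut -f1)
    - export AWS_SECRET_ACCESS_KEY=$(echo "$CREDS" | cut -f2)
    - export AWS_SESSION_TOKEN=$(echo "$CREDS" | cut -f3)
    - aws cloudformation deploy --template-file template.yaml --stack-name example-stack
```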
And there you have it - how we deploy, manage, and leverage GitLab to handle well over 110K lines of CloudFormation code across many AWS accounts, all from a single centralized AWS account.
Like what you read? Why not subscribe to the weekly Orbit newsletter and get content before everyone else?