Atlassian did Kubernetes “the hard way” over the past three years, refining its setup through the tool’s infancy and growth, but it’s a path at least one of those involved can no longer recommend.

Kubernetes is an increasingly popular orchestration tool for Linux containers. It groups containers into logical units for easy management and discovery, and was built by Google before being open-sourced.
Nick Young, principal engineer in Atlassian’s Kubernetes team, told KubeCon 2018 in Seattle last month that the company began its Kubernetes journey before most of the tooling and industry knowledge in the space existed.
“The ‘stand me up a cluster’ tooling was very incomplete,” Young said.
“To build our clusters, we literally started from ‘Kubernetes the hard way’ and built from there.
“Our clusters are as close to artisanal as you can get, I would say.”
Kubernetes is a key technology component of Atlassian’s internal platform-as-a-service (PaaS), which is targeted at running “95 percent or more of compute” workloads inside the company.
Atlassian actually started building a Docker-based PaaS in 2013/14 but quickly ran into issues that were (mostly) solved a few years later by adopting the Kubernetes container orchestration tool.
The path to adopting Kubernetes at the time was clear, since there were considerably fewer options than today.
“Everything was still pretty new at the time and so we wanted to make sure that we knew the system as well as possible,” Young said.
“That was why we started from Kubernetes the hard way, which was just to make sure that we knew this whole platform every which way.”
Work on the PaaS remains ongoing.
“We have a whole big team building [the] internal PaaS,” Young said.
“The point of that internal PaaS is to have developers not need to care about how their stuff gets deployed.
“They’ll be able to say, ‘I want a Docker image, a Postgres database, and I want it to be exposed on the internet please’ in a really simple declarative file and then there is a whole bunch of magic that the PaaS team are building that will then translate all of that through a variety of translations into actual Kubernetes things, wire it all up for you, generate you a Postgres database, make sure that the Postgres connection strings and details are all wired up to your [Kubernetes] pod for you.
“So that if you’re a developer, the experience of using the platform is as close as possible to the experience of running stuff on your laptop.”
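Atlassian has not published the descriptor format or the tooling behind it, but a minimal sketch of the idea in Python, assuming a hypothetical descriptor with name, image, database and expose fields, might translate it into Kubernetes manifests along these lines:

    # Hypothetical sketch only: the real descriptor format and translation
    # layer are internal to Atlassian. This shows the general idea of turning
    # a simple declarative file into Kubernetes objects.
    import yaml  # PyYAML

    # Roughly "a Docker image, a Postgres database, exposed on the internet".
    descriptor = {
        "name": "my-service",
        "image": "docker.example.com/my-service:1.0",
        "database": "postgres",
        "expose": "internet",
    }

    def to_kubernetes(desc):
        """Translate the simple descriptor into Kubernetes manifests."""
        deployment = {
            "apiVersion": "apps/v1",
            "kind": "Deployment",
            "metadata": {"name": desc["name"]},
            "spec": {
                "selector": {"matchLabels": {"app": desc["name"]}},
                "template": {
                    "metadata": {"labels": {"app": desc["name"]}},
                    "spec": {
                        "containers": [{
                            "name": desc["name"],
                            "image": desc["image"],
                            # Database connection details get wired into the
                            # pod as environment variables, from a secret the
                            # platform would create when it provisions the
                            # Postgres instance.
                            "envFrom": [{"secretRef": {"name": desc["name"] + "-postgres"}}],
                        }],
                    },
                },
            },
        }
        service = {
            "apiVersion": "v1",
            "kind": "Service",
            "metadata": {"name": desc["name"]},
            "spec": {
                # "Exposed on the internet" becomes a load-balanced service.
                "type": "LoadBalancer" if desc["expose"] == "internet" else "ClusterIP",
                "selector": {"app": desc["name"]},
                "ports": [{"port": 80, "targetPort": 8080}],
            },
        }
        return [deployment, service]

    print(yaml.dump_all(to_kubernetes(descriptor)))

A real platform would also provision the Postgres database itself and create the secret holding its connection details; the sketch only shows where those details would be attached to the pod.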
Learning past lessons
Young’s team - the Kubernetes Infrastructure Technology Team or KITT, one of many unashamed Knight Rider puns used in naming conventions - in part came from the team that ran Atlassian’s now retired OnDemand platform, which was used to run hosted versions of Confluence and Jira.
“It was a pretty big platform that used an early form of containerisation in OpenVZ, and so we had experience at running quite a big fleet of stuff,” Young said.
“We wanted to make sure we took as many of the learnings from that as possible and built them into this new platform.”
One of those learnings was to set up the Kubernetes infrastructure in such a way that the “blast radius” of future changes to it could be limited to specific subsets.
“The number one thing we have learned from running a big platform earlier is that you really need to make sure that when you make changes, you can make changes to some subset of your stuff,” Young said.
“If you’re managing a full stack like we are, then having that subset be manageable is super important and it frees you up.
“Keeping it orthogonal frees you up from stepping on each other’s toes when you’re working together.”
The result is a layered design somewhat reminiscent of the classic OSI model, Young said, keeping “strong isolation between the layers with a very clearly defined boundary so that if I’m working on the lowest layer, someone else can work on the next layer up and someone else can work on the top layer.”
The Kubernetes environment
Atlassian has around 20 Kubernetes clusters running its internal workloads.
“Our biggest cluster size so far is about 14,000 vCPUs and about 50TB of RAM, though most of them are not usually that big,” Young said.
“The biggest one runs a whole bunch of the internal Atlassian CI [continuous integration and delivery to build and deploy new products].
“It’s pretty common that [it] will have 2500 builds dropped onto it in a minute or two, and so it will need to scale really quickly to ensure people don’t have to wait too long for their builds.
“I don’t know about other sysadmins, but if there’s one thing that will make your devs come and poke you real hard, it’s when their builds don’t start for like 10 or 15 minutes without a good reason, so we’ve spent a lot of effort to make sure that time is as short as possible.”
But the path to Kubernetes had not been completely smooth.
Atlassian ran into problems with the size limit on etcd, a critical component that stores configuration and other data for each Kubernetes cluster.
“If you ever hit the 2.1GB limit on the size of your etcd database then your cluster will flip into read only mode and you will have a pretty bad day,” Young said.
“I say that as someone who had a cluster accidentally have 80,000 namespaces [Kubernetes parlance for virtual clusters].
“The good news is Kubernetes does work when you have 80,000 namespaces. Etcd does not; 80,000 namespaces 100 percent will take up your 2.1GB.
“At about 50,000 namespaces your etcd will start slowing down. Eventually when your etcd fills up you will have a bad day and you will have to - if you’re lucky - flip traffic to another cluster while you sort out that problem, which is what we actually managed to do.
“I was very glad that we talked that [internal] customer into having a failover cluster ready to go.”
That was but one problem Young’s team had encountered with etcd.
“You don’t want to run etcd,” he said.
“Running your etcd is like running your own database 30 years ago. The software is excellent, well-built, runs really well but in the documentation with the software there are pages and pages and pages of caveats.
“If you don’t know that they’re there, they will catch you, so that’s what I mean by it’s like a database 30 years ago. There’s no years and years of best practices and things you’ve got to watch out for and industry knowledge [to fall back on].”
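Young did not describe what monitoring Atlassian put in place after the incident, but a basic check of the kind that would have flagged it early might look like the following sketch, which uses the official Kubernetes Python client with thresholds taken from the figures he quoted:

    # Illustrative sketch only: Young did not describe Atlassian's monitoring.
    # Thresholds come from the figures he quoted: etcd slows down at around
    # 50,000 namespaces and the 2.1GB limit was hit at about 80,000.
    from kubernetes import client, config

    SLOWDOWN_THRESHOLD = 50_000
    READ_ONLY_RISK_THRESHOLD = 80_000

    def check_namespace_count() -> int:
        """Count namespaces and warn before etcd gets into trouble."""
        config.load_kube_config()  # use config.load_incluster_config() inside a pod
        count = len(client.CoreV1Api().list_namespace().items)
        if count >= READ_ONLY_RISK_THRESHOLD:
            print(f"CRITICAL: {count} namespaces; etcd may be near its size limit")
        elif count >= SLOWDOWN_THRESHOLD:
            print(f"WARNING: {count} namespaces; etcd performance may degrade")
        else:
            print(f"OK: {count} namespaces")
        return count

    if __name__ == "__main__":
        check_namespace_count()

Object counts are only a proxy, of course; watching etcd’s own reported database size would give an earlier and more direct warning.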
Young said that security was also a challenge.
“Security is really hard,” he said.
“Lots of Kubernetes security things can be pretty scary.
“There are great options in Kubernetes to tune all this stuff but the defaults are not very secure yet.”
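Young did not say which defaults Atlassian tightened, but the kind of per-pod hardening that Kubernetes leaves opt-in looks roughly like the following sketch, written against the official Python client models with illustrative values:

    # Illustrative only: Kubernetes leaves most of these settings open by
    # default, so tightening them is up to the operator. Field names are from
    # the official Python client; the values shown are just examples.
    from kubernetes import client

    hardened_container = client.V1Container(
        name="app",
        image="docker.example.com/app:1.0",  # hypothetical image
        security_context=client.V1SecurityContext(
            run_as_non_root=True,              # refuse to run as root
            allow_privilege_escalation=False,  # block setuid-style escalation
            read_only_root_filesystem=True,    # no writes to the container filesystem
            capabilities=client.V1Capabilities(drop=["ALL"]),  # drop Linux capabilities
        ),
    )

    pod_spec = client.V1PodSpec(
        containers=[hardened_container],
        automount_service_account_token=False,  # don't expose API credentials by default
        security_context=client.V1PodSecurityContext(run_as_user=1000, fs_group=1000),
    )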
Don't completely DIY
Though Atlassian had managed to get its Kubernetes clusters working successfully by running everything internally and “the hard way”, Young noted there were plenty of “boring tools” now available that could spare others the pain Atlassian went through.
“Managed Kubernetes is pretty great now. When we started there was no such thing as managed Kubernetes really, but now, between the three major cloud providers and everyone else, the managed Kubernetes clusters are 100 percent usable, especially if you’re just starting out,” he said.
“Then think about doing your own [thing] if you find you need to.”
There were tooling options in that space as well that companies could now take advantage of.
“If you do decide you need to, use the existing tooling: kops, kubicorn and kubeadm are all really good,” Young advised.
“Use one of those as opposed to doing what we did of literally Kubernetes the hard way.”
As for Atlassian, Young said the company was too heavily invested in its Kubernetes infrastructure - and in the tweaks it had made to it - to move to a newer deployment option.
“The managed platforms don’t claim to be all things to all people and they’re not. There are definitely things you just cannot do,” he said.
“For us, we’ve twiddled every API server [Kubernetes’ central management entity] knob you can twiddle.
“So for us we can’t go back to not being able to twiddle those things. You get addicted to the twiddling once you actually start to do it.
“The problem for us now is we’ve got this whole toolchain working and it’s as good as kubeadm for our specific use case, and there’d [be] a huge engineering effort involved in actually swapping over to everything else.
“It’s a bit like being addicted to it because we can’t give it up.”