Starting 7 years ago, the CoreOS team built a number of influential technologies. The pursuit was always better automation, security, and efficiency.

So, I went through the CoreOS blog from the beginning and highlighted a number of the technologies that we built together. 1/x
If you have a memory from the last 7 years of containerization, I would love to see it in the replies here.
Summer ‘13: Two projects are launched: @CoreOS Linux and @etcdio. We were tackling two very ambitious parts of infrastructure (OSes, DBs) with a small team at the time including two people who would be leaders of those projects for years to come: @marineam and @xiangli0227. 🚢
Winter ‘13/‘14: We introduced Fleet to schedule services across a cluster of @CoreOS machines. The system was simple and was used by many tools at the time, including @opendeis, and by many production customers over the following years. Soon, though, there would be competition… ☸️
Summer ‘14: @Kubernetesio was announced. Also, @CoreOS Linux made its first stable channel release. Docker v1.0 was released that summer as well, and our CoreOS Linux alpha users would have it in the alpha channel just hours later via automated updates. ⚡️
Summer ‘14: The @quayio team (👋 @jacobmoshenko and Joseph Schorr) joins @CoreOS. Under their leadership the Quay product (and OSS project) continues to gain users and hit milestones like going OSS in Nov’19. 📦
Summer ‘14: To make Kubernetes run on all cloud providers, we introduced (👋 @eyakubovich) Flannel overlay networking. It enabled a quick setup of the Kubernetes pod-to-pod networking model in environments that only offered one IP per server, like AWS, Azure, and bare metal. 🕸️
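For context, the model flannel enabled looks roughly like this: carve one cluster-wide pod CIDR into per-node subnets and encapsulate cross-node traffic, so every pod keeps its own IP even when the underlying network only routes a single address per host. Here is a tiny Go sketch of the addressing idea only (the CIDR and node names are made up; this is not flannel's code, which leases subnets via etcd and tunnels traffic with backends like VXLAN/UDP):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// Hypothetical cluster-wide pod CIDR.
	_, clusterCIDR, err := net.ParseCIDR("10.244.0.0/16")
	if err != nil {
		panic(err)
	}
	base := clusterCIDR.IP.To4()

	nodes := []string{"node-a", "node-b", "node-c"}
	for i, name := range nodes {
		// Node i gets 10.244.<i>.0/24; pods keep their own IPs and
		// cross-node pod traffic is encapsulated between hosts.
		subnet := net.IPNet{
			IP:   net.IPv4(base[0], base[1], byte(i), 0),
			Mask: net.CIDRMask(24, 32),
		}
		fmt.Printf("%s -> pod subnet %s\n", name, subnet.String())
	}
}
```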
Winter ‘14/’15: CVE-2014-6271 (Shellshock) was published, and @CoreOS Linux was able to roll out an automated patch to the entire alpha/beta/stable fleet within a few hours of disclosure. 🤖
Winter ‘14: rkt container runtime was introduced as a model for runtime architecture and container format standardization. I think this nudged the industry to refactor and introduce standards including the @OCI_ORG, Kube CRI, containerd, and cri-o. Retired in Feb’20. 🚀
Summer ‘15: We introduced the first Enterprise Kubernetes platform: Tectonic. The team would focus on introducing automated operations tooling into the core of Kube. One-click, platform-wide upgrades were a big engineering challenge. 🥳
Winter ‘15: @Quayio introduced Clair container image security scanning. Clair and similar tools kicked off a discussion around container security and software delivery lifecycle tooling that continues today. 🔒
Winter ‘15: Enterprise users at Tectonic Summit were making it clear that Kubernetes was on a path of rapid adoption. Teams were adopting the technology to replace existing bespoke systems or to containerize and deploy applications they had developed over the previous two years. 🙌
Winter ‘15: Peak container runtime confusion! The sheer volume of technologies and the rapid evolution of the interfaces were leaving many users confused or frustrated. We spent so much time explaining OCI, docker, rkt, appc and many others I likely forgot. 🤦‍♂️
Winter ‘15: We introduced “Tectonic Distributed Trusted Computing” (👋 @mjg59) which integrated Kubernetes and trusted computing, secure boot, and TPMs. The idea was to utilize hardware/firmware protections to ensure nodes only ran workloads their operators expected. 🔐
Winter ‘16: “Kubernetes can’t scale” was becoming a meme. So, part of the team (👋 @hongchaod) dove in to show that the scaling limits weren’t a deep architectural flaw, just a lack of urgency and focus on profiling. Kubernetes scales just fine these days. 📈
Spring ‘16: CoreOS Linux had been delivering updates for 1,000 days. 🌐
Spring ‘16: We built “Stackanetes” to run OpenStack on top of Kubernetes. A major goal of that project was showing Kubernetes was capable of running complex existing applications. 🥞
Spring ‘16: Prometheus was becoming a solid foundation to build metrics and monitoring tools for Kubernetes. So, we began contributing heavily with the help of @fabxc, @fredbrancz and others. 🔥
Summer ‘16: etcd v3.0 was introduced. Based on feedback and observations from Kubernetes, the API and storage engine were redesigned to be more efficient and to use @grpcio. 🐙
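For a rough sense of what the v3 API feels like from Go (the endpoint, keys, and timeouts below are placeholders, and the client import path has moved over the years):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Connect to an etcd v3 endpoint over gRPC (address is a placeholder).
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"localhost:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), time.Second)
	defer cancel()

	// Flat, revisioned keyspace: write a key, then range-read by prefix.
	if _, err := cli.Put(ctx, "/config/example", "v1"); err != nil {
		log.Fatal(err)
	}
	resp, err := cli.Get(ctx, "/config/", clientv3.WithPrefix())
	if err != nil {
		log.Fatal(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Printf("%s = %s (mod revision %d)\n", kv.Key, kv.Value, kv.ModRevision)
	}
}
```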
Winter ‘16: The Kubernetes Operator concept was introduced alongside Operators for etcd and Prometheus. The goal was to give third-party ISVs an operating model, and a term for it, so they could start targeting Kubernetes natively with automated operations. 💽
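At its core an Operator is a control loop: watch a custom resource that declares desired state and keep nudging the real system toward it. A minimal, dependency-free sketch with hypothetical types (real operators use CRDs and client-go/controller-runtime, and react to watch events rather than polling):

```go
package main

import (
	"fmt"
	"time"
)

// EtcdCluster is a hypothetical custom resource: the user declares how many
// members they want, and the operator makes reality match.
type EtcdCluster struct {
	Name        string
	DesiredSize int
	RunningPods int
}

// reconcile compares desired vs. observed state and takes one corrective step.
func reconcile(c *EtcdCluster) {
	switch {
	case c.RunningPods < c.DesiredSize:
		c.RunningPods++ // in a real operator: create a pod / add a member
		fmt.Printf("%s: scaled up to %d/%d members\n", c.Name, c.RunningPods, c.DesiredSize)
	case c.RunningPods > c.DesiredSize:
		c.RunningPods-- // in a real operator: remove a member safely
		fmt.Printf("%s: scaled down to %d/%d members\n", c.Name, c.RunningPods, c.DesiredSize)
	default:
		fmt.Printf("%s: steady state (%d members)\n", c.Name, c.RunningPods)
	}
}

func main() {
	cluster := &EtcdCluster{Name: "example", DesiredSize: 3, RunningPods: 0}
	// Polling stands in for watch events in this sketch.
	for i := 0; i < 5; i++ {
		reconcile(cluster)
		time.Sleep(10 * time.Millisecond)
	}
}
```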
Jan ‘17: The first “self-hosted” patch upgrade to Tectonic clusters was rolled out. A key goal of Tectonic was to deliver a complex enterprise platform that stayed up to date like a cloud service, which was a huge engineering undertaking! 🔼
Summer ‘17: We started managing updates to the entire software stack, from the Container Linux OS to the Kubernetes control plane, with the introduction of a “Container Linux Operator”. 🔄
Winter ‘17: Getting feedback from users on Tectonic was difficult because it required deploying a cluster on AWS or Azure (💵). So, we introduced Tectonic Sandbox as a Vagrant image and started to get wider feedback and lots of users.
Winter ‘17: Kubernetes needs tooling to manage what software is available to a cluster and tooling to deploy instances of that software. To put it another way: what is the Kubernetes equivalent of having Amazon RDS and deploying a new RDS instance? 📝
Jan ‘18: CoreOS is acquired by Red Hat. 🤠 /thread
You can follow @BrandonPhilips.