3 results tagged: scaling
We operate Kubernetes as follows to try to minimise it:
- We run multiple production clusters, and teams can choose which cluster to run their applications in. We don’t use Federation yet (we’re waiting on AWS support); instead we use Envoy to load-balance across the different clusters’ Ingress load-balancers. Much of this is automated with our Continuous Delivery pipeline (we use Drone) and other AWS services.
- All clusters are configured with the same Namespaces. These map approximately 1:1 with teams.
- We use RBAC to control access to Namespaces. All access is authenticated and authorised against our corporate identity in Active Directory.
- Clusters are auto-scaled and we do as much as we can to optimise node start-up time.
- Applications auto-scale using application-level metrics exported from Prometheus.
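Autoscaling on application-level Prometheus metrics, as described above, is commonly wired up with a metrics adapter and a `HorizontalPodAutoscaler` targeting a custom metric. A minimal sketch follows; the namespace, Deployment name, metric name, and thresholds are illustrative assumptions, not details from the post:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
  namespace: team-payments        # hypothetical per-team namespace
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 3
  maxReplicas: 30
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_second   # hypothetical app metric, exposed via a Prometheus metrics adapter
      target:
        type: AverageValue
        averageValue: "100"
```

This assumes something like the Prometheus adapter is installed so the custom metrics API can serve the application metric to the HPA controller.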
An easy-to-use tool that automatically replaces some or even all on-demand AutoScaling group members with identically configured spot instances of the same size or larger, generating significant cost savings on AWS EC2. It behaves much like an AutoScaling-backed spot fleet.
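The replacement strategy the tool implements can be sketched conceptually in a few lines. This is not the tool's actual implementation (which drives the real EC2 and AutoScaling APIs); the instance types and prices below are illustrative assumptions:

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Instance:
    instance_type: str
    lifecycle: str        # "on-demand" or "spot"
    hourly_price: float

def replace_with_spot(group, spot_prices, keep_on_demand=0):
    """Replace on-demand group members beyond `keep_on_demand` with
    identically typed spot instances, when a spot price is available."""
    result, on_demand_kept = [], 0
    for inst in group:
        if (inst.lifecycle == "on-demand"
                and on_demand_kept >= keep_on_demand
                and inst.instance_type in spot_prices):
            # Swap in a spot instance of the same type at the spot price.
            result.append(replace(inst, lifecycle="spot",
                                  hourly_price=spot_prices[inst.instance_type]))
        else:
            if inst.lifecycle == "on-demand":
                on_demand_kept += 1
            result.append(inst)
    return result

# Hypothetical group of three on-demand m5.large instances.
group = [Instance("m5.large", "on-demand", 0.096)] * 3
cheaper = replace_with_spot(group, {"m5.large": 0.035}, keep_on_demand=1)
```

With `keep_on_demand=1`, one on-demand member is retained as a safety floor while the rest are swapped for spot, which is roughly the "some or even all" behaviour the description mentions.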