Case study: GFS replaces manual, insecure and complex pipelines with GitOps
Learn how GFS, a logistics and supply chain organization, implemented GitOps to reduce operational overhead by 90%, while improving security, observability and deployment cadence.
Global Freight Solutions (GFS) is the UK’s biggest provider of multi-carrier eCommerce delivery solutions, giving online retailers access to 1,000+ carrier services and multi-carrier technology to maximize customer experience, simplify operational processes and support business growth. As a small development team with big ambitions, GFS were looking for tools and patterns they could implement efficiently to increase their capabilities, specifically by automating operational tasks to free up development time for creating great solutions for their customers.
After a successful test implementation, GFS settled on Weaveworks GitOps to solve their wider team challenges.
“I’d recommend Weave’s GitOps at any chance I get! For me the key factors are the low cost of entry vs. the level to which it empowers teams. Once teams see GitOps in practice, I think it rapidly sells itself. GitOps isn’t something just for the unicorns and startups; I really feel that teams of any size in any environment can benefit from it.” - John Clarke, Director of Software Development
Who is GFS?
As the pioneer of Enterprise Carrier Management (ECM), GFS’ goal is to provide unique and affordable shipping solutions for leading retailers and B2B brands worldwide. GFS optimizes delivery from checkout to doorstep, allowing their customers to boost sales and grow their business.
Slow, manual deployments
A major challenge was deployment time, especially when new code was released into production. The platform team had been looking at automated deployments within GFS for several years. Initial work used immutable VM images, as that fit GFS’s technology at the time. The process had shown great promise, but they ran into classic issues: image building was long, tedious and error-prone, and the resulting images were large, making them difficult to store and move around.
The team then looked at containers, which resolved their early VM issues and fit the bill very well. At that point they decided they needed an orchestrator, and Kubernetes rapidly showed itself to be the ideal choice. However, the platform team was still deploying into the cluster manually, which remained slow and time-consuming.
Pipeline complexity and lack of observability
GFS found that their existing Azure DevOps pipelines were often complex to build, and as their estate grew, they were becoming more of a maintenance overhead. Adding a new environment meant updating many pipelines, and without a ‘no touch’ deployment process it was very difficult to run quick tests. Their cluster configuration had also become fragmented across their source repository, making it onerous to see how each environment was configured.
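With a GitOps approach, this kind of fragmented configuration is typically consolidated into a single declarative repository with one overlay per environment, so adding an environment means adding a directory rather than editing many pipelines. A minimal sketch of such a layout (the repository and directory names here are illustrative, not GFS’s actual structure):

```
cluster-config/                  # hypothetical configuration repository
├── base/                        # manifests shared by all environments
│   ├── deployment.yaml
│   └── kustomization.yaml
└── environments/
    ├── staging/                 # per-environment overlay
    │   └── kustomization.yaml   # patches the base for staging
    └── production/
        └── kustomization.yaml   # patches the base for production
```

Because every environment lives in one place under version control, reviewing how an environment is configured becomes a matter of reading one directory rather than tracing several pipelines.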
A risky authentication process
The platform team did not favour allowing their pipelines to authenticate directly with their clusters, especially given the level of access required. Keeping cluster credentials outside the cluster, in the CI/CD system, is not best practice and increases the pipeline’s attack surface. The team also wanted to limit direct cluster access by engineers.
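Pull-based GitOps inverts this model: an agent such as Flux (the open source engine underlying Weave GitOps) runs inside the cluster and pulls the desired state from Git, so cluster credentials never leave the cluster and the pipeline only needs permission to push to a Git repository. A minimal sketch, assuming a hypothetical repository URL and environment path:

```yaml
# The Flux controllers run in-cluster and reconcile from Git;
# the CI pipeline holds no cluster credentials at all.
# Repository URL, names and paths below are placeholders.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: cluster-config           # hypothetical name
  namespace: flux-system
spec:
  interval: 1m                   # how often to poll Git for changes
  url: https://github.com/example-org/cluster-config
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: production
  namespace: flux-system
spec:
  interval: 5m                   # how often to re-apply desired state
  sourceRef:
    kind: GitRepository
    name: cluster-config
  path: ./environments/production  # per-environment overlay
  prune: true                      # remove resources deleted from Git
```

Under this model, engineers change infrastructure through reviewed Git commits rather than direct cluster access, which addresses both of the team’s concerns at once.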
Download the case study to learn more about how GFS implemented GitOps, reducing operational overhead by 90% while also improving security, observability and deployment cadence.