Our Exclusive 2-for-1 Sale is LIVE!
For the next 24 hours only, you can secure 2 seats for the price of 1 at Generative AI in Action (Nov 11-13)!
📅 Sale ends tomorrow at 10 AM ET
Bring a colleague, friend, or your team and dive into everything this conference has to offer—from expert insights and hands-on sessions to valuable networking opportunities.
Today we will talk about:
⭐Masterclass
Building Lightweight Kubernetes Dev Ephemeral Environments
From which Kubernetes pod (and namespace!) is this process that I see on my host?
Argo Workflows: Simplify parallel jobs: Container-native workflow engine for Kubernetes
Using SimKube 1.0: Comparing Kubernetes Cluster Autoscaler and Karpenter
🔍Secret Knowledge
Behind the scenes of the OpenTelemetry Governance Committee
⚡Techwave
EC2 Image Builder now supports building and testing macOS images
Grafana 11.3 release: Scenes-powered dashboards, visualization and panel updates, and more
Sonar Details OpenAPI Generator Flaw That Creates Source Code Vulnerability
HashiCorp Updates Terraform; Wider Cloud Infrastructure Developer Toolsets
🛠️Hackhub
kubectl-guard: Accidentally modifying production instead of a local cluster? kubectl-guard helps prevent such critical mistakes.
kubesafe: Safely manage multiple Kubernetes clusters by defining safe contexts and protected commands.
tfreveal: An open-source tool that enhances Terraform plan visibility by showing all resource and output differences, including sensitive values.
SyncLite: A low-code platform for relational data consolidation, ideal for building data-intensive apps across edge, desktop, and mobile environments.
Cheers,
Editor-in-Chief
Building Lightweight Kubernetes Dev Ephemeral Environments
Kardinal is an open-source tool for creating lightweight, temporary development environments on Kubernetes clusters. It’s designed to minimize resource usage by deploying only the services you need for testing while reusing existing resources when possible. Kardinal introduces “flows”—ephemeral environments that can be spun up for specific features or testing needs, which saves time and costs by avoiding redundant deployments.
From which Kubernetes pod (and namespace!) is this process that I see on my host?
To find which Kubernetes pod and namespace a process on your host belongs to, you can use crictl together with cgroups. First, get the process ID (PID) of the containerized process, then look up its cgroup path, which embeds the container’s unique identifier. Once you have that ID, run crictl inspect with go-template formatting to retrieve the pod’s namespace and name directly.
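Here’s a minimal sketch of that lookup, assuming a containerd-based node where the container ID shows up as a 64-character hex segment in the cgroup path (the exact layout varies by runtime and cgroup driver):

```
# PID of the mystery process on the host (replace with yours)
PID=12345

# The cgroup path embeds the container ID as a 64-char hex string
CONTAINER_ID=$(grep -oE '[0-9a-f]{64}' "/proc/$PID/cgroup" | head -n1)

# Ask the CRI runtime for the pod's namespace and name via a go-template
crictl inspect --output go-template \
  --template '{{ index .status.labels "io.kubernetes.pod.namespace" }}/{{ index .status.labels "io.kubernetes.pod.name" }}' \
  "$CONTAINER_ID"
```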
Argo Workflows: Simplify parallel jobs: Container-native workflow engine for Kubernetes
This guide focuses on Argo Workflows, an open-source, container-native engine for orchestrating parallel tasks on Kubernetes. Each step of a workflow runs in its own container, making it a good fit for complex pipelines like data processing or machine learning. Argo Workflows integrates with Kubernetes primitives (e.g., volumes, secrets, and RBAC) and uses Directed Acyclic Graphs (DAGs) to sequence tasks. The guide then walks through deploying Argo on Amazon EKS and integrating it with Argo Events to handle data-driven tasks triggered by messages from Amazon SQS, creating a scalable, event-driven Spark job processing platform on Kubernetes.
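To make the DAG idea concrete, here is a small illustrative Workflow (hypothetical names, and it assumes Argo is installed in the `argo` namespace) where `transform` waits on `extract` and `load` waits on `transform`:

```
kubectl create -n argo -f - <<'EOF'
apiVersion: argoproj.io/v1alpha1
kind: Workflow
metadata:
  generateName: dag-demo-
spec:
  entrypoint: main
  templates:
    - name: main
      dag:
        tasks:
          - name: extract
            template: step
            arguments: {parameters: [{name: msg, value: extract}]}
          - name: transform
            dependencies: [extract]
            template: step
            arguments: {parameters: [{name: msg, value: transform}]}
          - name: load
            dependencies: [transform]
            template: step
            arguments: {parameters: [{name: msg, value: load}]}
    - name: step
      inputs:
        parameters:
          - name: msg
      container:
        image: alpine:3.20
        command: [echo, "{{inputs.parameters.msg}}"]
EOF
```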
Using SimKube 1.0: Comparing Kubernetes Cluster Autoscaler and Karpenter
SimKube 1.0, a Kubernetes simulator, was used to test two popular cluster autoscaling solutions: Kubernetes Cluster Autoscaler (KCA) and Karpenter. Both tools add nodes to a Kubernetes cluster based on workload demands, but they differ significantly in approach. KCA, originally designed for homogeneous clusters, must be configured with specific instance types, which can make it slower when there are many options. By contrast, Karpenter, designed by AWS, optimizes across all available EC2 instances by default and uses both a "fast" loop for quick scheduling and a "slow" loop for optimization, which made it faster in this simulation.
Upgrading an outdated AKS cluster from version 1.21 to 1.30 without downtime requires a careful approach, especially since rolling back AKS upgrades isn't possible. A Blue-Green deployment is a good option here but is complex at the cluster level. One way to approach it is to create a new cluster on AKS 1.30, deploy and test the application there, and then redirect production traffic to the new cluster via DNS or load balancer once it is confirmed stable. First, validate the application’s compatibility with version 1.30 in your QA environment and ensure no critical API changes break functionality. If creating a new cluster is challenging due to resource limitations, consider a controlled maintenance window with a staged in-place upgrade (e.g., from 1.21 to 1.22, then 1.23, and so on, since AKS control planes can't skip minor versions), but remember that this path still carries risk from deprecated APIs and other breaking changes accumulated across releases.
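As a rough sketch of the blue-green path (hypothetical resource group, cluster, and manifest names; not a full runbook):

```
# Stand up the "green" cluster on the target version
az aks create \
  --resource-group my-rg \
  --name myapp-green \
  --kubernetes-version 1.30.0 \
  --node-count 3

# Deploy and smoke-test the application there
az aks get-credentials --resource-group my-rg --name myapp-green
kubectl apply -f ./manifests/

# Once stable, cut production traffic over via DNS or the load balancer,
# then retire the old 1.21 "blue" cluster.
```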
Dokku is an open-source platform as a service (PaaS) that lets you turn a virtual private server (VPS) into a serverless platform, similar to Heroku, but with more control and no subscription costs. It allows easy deployment of web apps using Docker containers, GitHub Actions, or simple git commands. With features like auto-scaling, built-in SSL from Let’s Encrypt, and password protection, Dokku is ideal for hosting both applications and static sites from private repositories. Additionally, it offers flexible deployment options and can integrate with Cloudflare for HTTPS if needed, making it a powerful, budget-friendly solution for personal or small-scale app hosting.
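A taste of the workflow, with hypothetical app and host names (HTTPS here comes from the dokku-letsencrypt plugin, which must be installed separately):

```
# On the VPS
dokku apps:create myapp
dokku letsencrypt:enable myapp   # requires the dokku-letsencrypt plugin

# From your machine: deploying is just a git push
git remote add dokku dokku@my-vps.example.com:myapp
git push dokku main
```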
Yelp has implemented multi-metric autoscaling on its PaaSTA platform, enabling services to scale based on multiple factors (like CPU and request load) rather than just one, improving stability and speeding recovery during high-demand periods. Since PaaSTA is an 11-year-old platform on Kubernetes, updating it safely was challenging. The team spent weeks understanding the codebase, gathering input, and defining a clear, gradual update plan. They used snapshot testing and strict validation to confirm stability at each step, made minimal yet crucial API adjustments, and improved monitoring through Grafana. Ultimately, the update rolled out smoothly, enhancing scaling options without causing any service interruptions.
Policy as Code (PaC) allows organizations to enforce rules and guidelines on infrastructure automatically by defining policies as code, ensuring resources meet security, compliance, and operational standards. Tools like HashiCorp Sentinel and Open Policy Agent (OPA) are popular frameworks for PaC, working with infrastructure as code (IaC) tools like Terraform. Unlike traditional IaC, which configures infrastructure, PaC sets up policy rules that are enforced whenever infrastructure changes are proposed. This approach helps maintain a secure, compliant cloud environment by preventing risky configurations.
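As an illustrative sketch of the OPA flavor of this (hypothetical file and resource names; the rule assumes Terraform's JSON plan format):

```
cat > policy.rego <<'EOF'
package terraform

import rego.v1

# Illustrative rule: flag S3 buckets created without tags
deny contains msg if {
    rc := input.resource_changes[_]
    rc.type == "aws_s3_bucket"
    rc.change.actions[_] == "create"
    not rc.change.after.tags
    msg := sprintf("bucket %s has no tags", [rc.address])
}
EOF

# Render the plan as JSON and evaluate the policy against it
terraform plan -out plan.out
terraform show -json plan.out > plan.json
opa eval --data policy.rego --input plan.json 'data.terraform.deny'
```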
Behind the scenes of the OpenTelemetry Governance Committee
The OpenTelemetry Governance Committee (GC) guides the OpenTelemetry project strategically, ensuring its growth as a vendor-neutral observability framework. While the Technical Committee (TC) focuses on technical aspects, the GC's role includes setting project goals, updating policies, and overseeing SIG (Special Interest Group) sponsorships, ensuring alignment with community needs. GC members also represent OpenTelemetry at events, mediate conflicts, and check in with SIG maintainers to address challenges and gather feedback.
EC2 Image Builder now supports building and testing macOS images
AWS EC2 Image Builder now supports creating macOS images, enabling users to streamline their image management and automate the creation of "golden images" (customized bootable OS images) for macOS in addition to Windows and Linux. This is particularly helpful for developers using macOS tools like Xcode and Fastlane, which are essential in CI/CD pipelines. With Image Builder, users can create components for specific tools, define a recipe for a base macOS image, configure infrastructure (like EC2 Mac Dedicated Hosts), and set up pipelines that automatically test and validate each image.
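In AWS CLI terms, the moving parts look roughly like this (hypothetical names; the ARNs are placeholders for your own base image, component, and infrastructure configuration):

```
# A recipe ties a base macOS image to the components that customize it
aws imagebuilder create-image-recipe \
  --name macos-xcode-recipe \
  --semantic-version 1.0.0 \
  --parent-image <base-macos-image-arn> \
  --components componentArn=<install-xcode-component-arn>

# The pipeline builds, tests, and validates images on a schedule; its
# infrastructure configuration points at an EC2 Mac Dedicated Host
aws imagebuilder create-image-pipeline \
  --name macos-golden-image \
  --image-recipe-arn <recipe-arn> \
  --infrastructure-configuration-arn <infra-config-arn>
```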
Anthropic's latest updates to the Claude 3.5 model family in Amazon Bedrock include an upgraded Claude 3.5 Sonnet, which enhances the model’s ability to handle complex software engineering tasks, knowledge-based Q&A, data extraction, and task automation at the same cost as previous versions. Additionally, a new "computer use" feature, available in public beta, allows Claude 3.5 Sonnet to interact with computer interfaces, like opening applications, typing, and clicking, opening up possibilities for AI-driven automation in software testing and administrative workflows. Lastly, the upcoming Claude 3.5 Haiku will offer faster response times paired with strong reasoning abilities, ideal for applications requiring both speed and intelligence, such as customer service and data processing in sectors like finance and healthcare.
Grafana 11.3 release: Scenes-powered dashboards, visualization and panel updates, and more
Grafana 11.3 introduces a range of new features and improvements, with a highlight on the new Scenes-powered dashboards, enhancing stability, flexibility, and organization of dashboard elements. This release also includes visual and functional updates, like a redesigned inspect feature for table cells, enabling quick data analysis, and the new "Actions" option, allowing users to trigger API calls directly from elements on canvas panels. The update further enhances alerting with simplified rule creation and RBAC for notifications, and Explore Logs is now a default feature, making log troubleshooting more accessible.
Sonar Details OpenAPI Generator Flaw That Creates Source Code Vulnerability
Sonar recently identified a vulnerability in the OpenAPI Generator, a popular tool for creating API libraries, that could allow attackers to read or delete files in certain directories. Although a patch has been released, many existing APIs built with older, unpatched versions might still be at risk, requiring DevSecOps teams to locate and update them. This vulnerability underscores the challenge of detecting security flaws in auto-generated code, where developers may be less involved in the underlying code creation process. With cybercriminals actively searching for such vulnerabilities, DevSecOps teams must prioritize remediating high-risk code while balancing limited resources.
HashiCorp Updates Terraform; Wider Cloud Infrastructure Developer Toolsets
HashiCorp, now under IBM's ownership, announced significant updates to Terraform at HashiConf, focusing on streamlining multi-cloud infrastructure management. Terraform's new "stacks" feature allows developers to manage complex, interdependent infrastructure configurations, making it easier to scale and control cloud resources across multiple environments. Additionally, HCP Waypoint provides a structured portal for internal development, using templates to standardize application deployment and updates. Other enhancements include new lifecycle management capabilities for HCP Vault, GPU resource sharing in Nomad, and an automation tool for migrating Terraform workflows, all designed to optimize and automate infrastructure in an increasingly complex cloud landscape.
To set up *kubectl-guard*, first create a file named *kubectl-guard* for the script, then make it executable by running `chmod +x kubectl-guard`. Next, open your shell configuration file (e.g., `~/.zshrc`) in a text editor, and add an alias with the command `alias kubectl='full-path-to/kubectl-guard'`, replacing "full-path-to" with the actual path where the script is saved. Save and close the file, then restart your terminal session for changes to take effect. This setup will help ensure safety by requiring the production cluster name to include "prod," though you can adjust this by modifying the `PROD_IDENTIFIER` variable.
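Condensed into commands (assuming you saved the script at `~/bin/kubectl-guard`):

```
chmod +x ~/bin/kubectl-guard

# Route every kubectl invocation through the guard
echo "alias kubectl='$HOME/bin/kubectl-guard'" >> ~/.zshrc
exec zsh   # reload the shell so the alias takes effect
```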
*Kubesafe* is a tool designed to help you avoid running risky commands on the wrong Kubernetes cluster by marking certain contexts as "safe" and defining commands that need confirmation before execution. It works with any Kubernetes CLI tool (like `kubectl` or `helm`) by wrapping the command to add this layer of protection. For instance, running `kubesafe kubectl delete pod my-pod` will prompt for confirmation if the context is marked as protected. You can set up aliases, such as `alias kubectl='kubesafe kubectl'`, to automatically use Kubesafe each time you run a command.
tfreveal: An open-source tool that enhances Terraform plan visibility by showing all resource and output differences, including sensitive values.
*tfreveal* is an open-source tool that lets you see all changes, including sensitive values, in Terraform plan files, bringing transparency to infrastructure updates. Terraform hides sensitive data by default, and it can otherwise only be dug out of hard-to-read JSON output, especially when changes hide inside large encoded values. tfreveal instead displays clear diffs with every value visible, which is also handy for spotting drift between Terraform state and actual infrastructure. To use it, generate a plan file with `terraform plan -out plan.out`, then pipe it to tfreveal via `terraform show -json plan.out | tfreveal`.
SyncLite: A low-code platform for relational data consolidation, ideal for building data-intensive apps across edge, desktop, and mobile environments.
SyncLite is an open-source, low-code platform for creating data-intensive applications that seamlessly consolidate and synchronize data across edge, desktop, and mobile environments. It supports real-time, transactional data replication from various sources, like embedded databases (e.g., SQLite, DuckDB) and IoT message brokers, and integrates with popular data destinations, such as databases, data warehouses, and data lakes.
`pg_replicate` is a Rust library designed to help developers quickly set up data replication from PostgreSQL to various data systems. It simplifies the use of PostgreSQL’s logical streaming replication protocol, letting users focus on building data pipelines without dealing with protocol details. To get started, users create a PostgreSQL publication, run the stdout example to replicate data to standard output, and connect using simple commands.
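The happy path looks roughly like this (hypothetical publication and table names; check the pg_replicate README for the exact flags its examples expect):

```
# In Postgres: publish the tables you want to replicate
psql -c "CREATE PUBLICATION my_pub FOR TABLE users, orders;"

# From a clone of the pg_replicate repo: stream changes to stdout
cargo run --example stdout
```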
📢 If your company is interested in reaching an audience of developers, technical professionals, and decision makers, you may want to advertise with us.
If you have any comments or feedback, just reply back to this email.
Thanks for reading and have a great day!