Blog Archive

Tuesday, September 5, 2023

Navigating Tech Roles: Unveiling Distinctions between Site Reliability Engineering, Cloud Engineering, DevOps, and Software Engineering

In the intricate realm of technology, distinct roles and methodologies shape the landscape of software development and infrastructure management. Site Reliability Engineering (SRE), Cloud Engineering, DevOps, and Software Engineering are four key pillars that converge to drive innovation, efficiency, and reliability. In this enlightening blog post, we dissect the nuances of each role, unraveling the differences and highlighting the unique contributions they bring to the table.


Site Reliability Engineering (SRE): Balancing Reliability and Innovation

Site Reliability Engineering is a discipline that blends software engineering with operations. Its core mission is to ensure the reliability, performance, and availability of systems and applications. SREs set and measure Service Level Objectives (SLOs) to maintain optimal user experiences. They leverage automation, incident response strategies, and capacity planning to achieve operational excellence. SREs bridge the gap between development and operations by infusing reliability into every stage of the software lifecycle.
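The arithmetic behind SLOs is worth seeing once. Here is a minimal sketch of how an availability target translates into an error budget; the 99.9% target and 30-day window are illustrative numbers, not a recommendation:

```python
# Error-budget arithmetic for an availability SLO (illustrative numbers).
def error_budget_minutes(slo: float, window_days: int = 30) -> float:
    """Minutes of allowed downtime in the window for a given SLO."""
    total_minutes = window_days * 24 * 60
    return total_minutes * (1 - slo)

# A 99.9% monthly SLO leaves roughly 43 minutes of downtime budget.
print(round(error_budget_minutes(0.999), 1))   # 43.2
```

This is why "how many nines" is never an abstract question for an SRE: each extra nine cuts the budget for deploys, migrations, and incidents by a factor of ten.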


Cloud Engineering: Pioneering Scalable Infrastructures

Cloud Engineering revolves around designing, building, and managing cloud-based infrastructures. Cloud engineers leverage cloud services and platforms to create scalable, flexible, and cost-effective solutions. They architect systems to harness the power of cloud computing, enabling organizations to scale on demand, optimize resources, and achieve business goals. Cloud engineers work with diverse cloud providers, ensuring seamless integration, security, and high availability of applications and services.


DevOps: Orchestrating Collaboration and Continuous Delivery

DevOps is a cultural and technical approach that aims to foster collaboration between development and operations teams. DevOps emphasizes automating processes, breaking down silos, and streamlining workflows to enable continuous integration and continuous delivery (CI/CD). DevOps engineers focus on tools, practices, and methodologies that enhance the speed and reliability of software deployment. They enable fast-paced development cycles, rapid feedback loops, and iterative improvements, fostering agility and innovation.


Software Engineering: Crafting Code with Precision and Creativity

Software Engineering encompasses the art and science of designing, developing, and maintaining software applications. Software engineers architect solutions, write code, and create software that meets functional requirements and user needs. They harness programming languages, design patterns, and software development methodologies to build robust, scalable, and user-friendly applications. Software engineers collaborate with cross-functional teams, translating concepts into code that powers the digital experiences we rely on daily.


Navigating the Differences: A Comparative Overview

  • Focus and Expertise: SRE prioritizes reliability and performance, cloud engineering emphasizes scalable infrastructures, DevOps centers on collaboration and automation, and software engineering crafts functional and efficient code.
  • Responsibilities: SREs ensure systems' reliability, cloud engineers design cloud architectures, DevOps engineers drive automation and collaboration, and software engineers create application code.
  • Mindset: SREs focus on reliability, cloud engineers optimize infrastructure, DevOps engineers value automation and teamwork, and software engineers craft code with precision.
  • Methodologies: SRE relies on SLOs, cloud engineering leverages cloud services, DevOps emphasizes CI/CD, and software engineering employs coding practices and design patterns.


In Conclusion: Navigating Roles for a Cohesive Tech Ecosystem

In the intricate tapestry of technology roles, each pillar contributes distinct expertise and methodologies. While Site Reliability Engineering ensures systems' reliability, Cloud Engineering architects scalable infrastructures, DevOps fosters collaboration and automation, and Software Engineering crafts the code that powers digital innovations. By understanding the unique contributions of each role, organizations can forge a harmonious ecosystem where reliability, innovation, scalability, and creativity converge to shape the future of technology. 

Monday, September 4, 2023

Unveiling Docker Swarm and Kubernetes: Navigating the Choice for Container Orchestration

In the realm of container orchestration, Docker Swarm and Kubernetes emerge as two prominent contenders, each offering unique capabilities to streamline the deployment and management of containerized applications. In this insightful blog post, we embark on a journey to demystify the distinctions between Docker Swarm and Kubernetes, while also illuminating the scenarios where each shines, enabling you to make informed choices for your container orchestration needs.


Docker Swarm: Simple Scalability with Built-In Simplicity

Docker Swarm is an orchestration tool that is tightly integrated with Docker, the industry-standard containerization platform. It focuses on ease of use and simplicity, making it an excellent choice for smaller teams or organizations seeking a straightforward container management solution. Docker Swarm embraces a "batteries included" philosophy, providing essential features out of the box, without the complexity associated with larger orchestrators.


Strengths of Docker Swarm:

  • Ease of Setup: Docker Swarm's simplicity shines through in its straightforward setup and configuration, enabling rapid adoption.
  • Integrated Experience: Since Docker Swarm is part of the Docker ecosystem, transitioning from local development to production is seamless.
  • Ideal for Smaller Teams: Teams with limited resources or those new to container orchestration benefit from Docker Swarm's user-friendly interface and manageable learning curve.


Kubernetes: Enterprise-Grade Scalability and Flexibility

Kubernetes, often referred to as K8s, is an open-source container orchestration platform known for its unparalleled scalability, flexibility, and robust feature set. It excels in managing complex microservices architectures and large-scale applications. Kubernetes introduces a rich ecosystem of components, enabling intricate deployments, rolling updates, scaling, and advanced networking configurations.


Strengths of Kubernetes:

  • Scalability and Resilience: Kubernetes is designed for handling massive workloads and is particularly effective for orchestrating applications with complex microservices architectures.
  • Advanced Networking: Kubernetes offers comprehensive networking capabilities, allowing you to control and optimize traffic flow between containers.
  • Ecosystem and Extensibility: The Kubernetes ecosystem boasts a wide array of tools and extensions that can be integrated to address various requirements.


Choosing the Right Fit: Appropriate Scenarios


When to Use Docker Swarm:

  • Simplicity Matters: Opt for Docker Swarm when you seek a straightforward, easy-to-use solution for smaller applications or projects with limited complexity.
  • Rapid Deployment: Docker Swarm is ideal when speed is a priority, making it a valuable choice for quickly bringing up containerized applications.
  • Familiarity with Docker: If your team is already well-versed in Docker, leveraging Docker Swarm maintains continuity and minimizes the learning curve.


When to Use Kubernetes:

  • Complex Applications: Choose Kubernetes when dealing with intricate, large-scale applications requiring advanced deployment, scaling, and networking configurations.
  • Microservices Architectures: Kubernetes excels in managing microservices architectures, ensuring resilience and effective communication among services.
  • Customization and Extensibility: If you require a highly customizable and extensible orchestration platform that can adapt to evolving needs, Kubernetes is an ideal fit.


In Conclusion: Strategic Selection for Container Orchestration

In the dynamic world of container orchestration, the choice between Docker Swarm and Kubernetes hinges on your project's complexity, scalability demands, and team expertise. Docker Swarm excels in simplicity and rapid deployment scenarios, while Kubernetes shines in managing large-scale applications with intricate architectures. By understanding the strengths and appropriate use cases of each orchestrator, you can make strategic decisions that align with your organization's goals, ensuring a seamless journey in embracing containerization and efficient application management. 

Friday, September 1, 2023

Weekend Project: How to Build a Public Facing, Automated, Cloud-Hosted Plex Streaming Service Stack

Introduction

Over a year ago, my quest for a new weekend project led me to an intriguing idea. Inspired by a cinephile friend's extensive collection of RAW HD media and hindered by pandemic-induced supply chain issues when it came to buying computer parts, I embarked on a journey to leverage my cloud infrastructure skills to construct a comprehensive, cloud-based media library solution that would be more manageable, highly accessible, and reliable compared to traditional home setups. The solution needed to stream content from anywhere, on any device, and make adding new content as easy as humanly possible.

The Conundrum of Cost

A valid concern arose: the cost associated with cloud-hosted media content. The apprehension was well-founded, given the potential for expenses to spiral out of control, especially considering the hidden data egress charges that accompany many popular cloud platforms. However, the reality is more nuanced. Upon closer examination and cost analysis, cloud-based solutions often prove more economical than their hardware counterparts, particularly over a few years. The allure of increased flexibility, customization, and scalability only bolsters this advantage. Additionally, it's possible to opt for cloud providers that offer favorable data egress pricing, mitigating the impact of marathon viewing sessions spanning several seasons. It's worth noting that the only apparent drawback is the inability to indulge in gaming escapades like Forza on your cloud compute during off-hours—although that might change in the future depending on the hardware you are able to attain on the cloud. 
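To make the egress concern concrete, here is the back-of-envelope style of math involved. Every number below (bitrate, monthly viewing hours, and per-GB price) is an illustrative assumption, not a quote from any provider:

```python
# Back-of-envelope egress cost for streaming (all figures are assumptions).
BITRATE_MBPS = 25          # assumed 4K stream bitrate in megabits/second
HOURS_STREAMED = 100       # assumed monthly viewing across all users
EGRESS_PER_GB = 0.09       # assumed hyperscaler egress price, USD/GB

# megabits/s -> megabytes/s -> MB/hour -> MB/month -> GB/month
gb_transferred = BITRATE_MBPS / 8 * 3600 * HOURS_STREAMED / 1000
hyperscaler_cost = gb_transferred * EGRESS_PER_GB
vps_cost = 0.0             # many VPS plans bundle generous or unmetered transfer

print(f"{gb_transferred:.0f} GB/month -> ${hyperscaler_cost:.2f} vs ${vps_cost:.2f}")
```

Run the numbers for your own viewing habits: at 4K bitrates, metered egress alone can rival the monthly cost of an entire VPS, which is exactly why egress pricing ended up driving the hosting decision later in this post.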

Defining the Requirements

My initial requirements were straightforward yet demanding. I sought a solution that seamlessly combined ease of use and management, global reliability, and cost-effectiveness. A fundamental prerequisite was that media data should never be stored on my personal hardware, a vital consideration for security and privacy. Furthermore, I aspired to establish a workflow that would let me discover and add new content effortlessly, with the added convenience of executing these actions from my iOS/Android devices.

In essence, my quest to create a cloud-based media library was motivated by the convergence of technology prowess, remote access necessity, and prudent cost analysis. The result was a solution that harnessed cloud infrastructure to craft a digital haven for media enthusiasts. Stay tuned for the upcoming segments, where we will delve into the specifics of architecture, management, and global accessibility, and explore the technical aspects and benefits of constructing a cloud-based media library tailored to the demands of media-hungry friends and family in the digital era.

Picking the Right Tools

Creating a fully automated media server entails a multitude of tasks, one of the most crucial being media consolidation. This process involves the seamless acquisition, organization, and preparation of media content for optimal user experience. Think of it this way: imagine when a new TV show episode becomes available. The ideal scenario involves automatic downloading of the episode, collection of associated metadata like posters and fan art, subtitle integration, proper folder organization, updating the media library, and culminating in a user notification confirming the availability of the episode for viewing.

Here's a comprehensive breakdown of the services required to accomplish this:

  • Automated Media Download and Organization:
    • Sonarr: For TV show management, downloading, and organization.
    • Radarr: For movie management, downloading, and organization.
    • Readarr: For eBook management, downloading, and organization.
    • Bazarr: Handles subtitle management for media content.
    • Organizr: Handles service consolidation into a single UI with SSO for users.
  • Automated Media Requests and Downloads:
    • Overseerr: A platform for automating media requests and triggering content downloads.
    • Put.io: Handles torrent downloads.
    • Prowlarr: Manages indexers and sources for torrents and NZBs.
  • Media Streaming and Access:
    • Plex: A renowned media streaming platform compatible with various devices.
    • xTeVe: Manages Live TV integration.
  • E-book Management:
    • Calibre Server: Manages eBook metadata and library.
    • Calibre Web: Provides user access to eBooks and facilitates sending to Kindle devices.
  • Administration and Backend:
    • Portainer: Facilitates container orchestration and administration.
    • Nginx Proxy Manager: Manages reverse proxy for SSL termination and load balancing.
    • Let's Encrypt: Generates SSL certificates for secure connections.
    • Datadog: Monitors and provides telemetry data for various services.
    • Google Domains: Manages custom domain for the server.
    • Jenkins: Handles updates and automation through pipelines.
    • Filebrowser: Allows users to view and edit files on the server.
    • Slack: Sends notifications using webhooks and integrations.
    • PagerDuty: Manages incident response.

In selecting the appropriate services, I opted for a combination that offered reliability, functionality, and compatibility. Among these, Plex emerged as the primary media streaming platform, catering to a variety of devices. The open-source *Arr helper services (Sonarr, Radarr, and friends) played a pivotal role, facilitating media requests, management, and more.
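With this many moving parts running side by side, a tiny reachability probe is handy while wiring everything up. The sketch below uses the common default ports for a few of these services; treat the port numbers as assumptions and adjust them to your own deployment:

```python
# Quick TCP reachability probe for the stack's web UIs.
# Port numbers are the services' usual defaults (assumptions; adjust as needed).
import socket

DEFAULT_PORTS = {
    "plex": 32400,
    "sonarr": 8989,
    "radarr": 7878,
    "overseerr": 5055,
}

def is_up(host: str, port: int, timeout: float = 1.0) -> bool:
    """True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def probe_all(host: str = "127.0.0.1") -> dict:
    """Map each service name to whether its port is currently reachable."""
    return {name: is_up(host, port) for name, port in DEFAULT_PORTS.items()}
```

A probe like this only proves the port is open, not that the service behind it is healthy, but it catches the most common misconfiguration (a container that failed to start or join the network) in seconds.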

The journey toward constructing an automated media library infused my project with an element of experimentation. From choosing the right hosting solution to integrating services that catered to my requirements, the process was iterative. Stay tuned for the upcoming segments, where we'll delve into the technical intricacies and the seamless synergy between various services that gave rise to a fully functional cloud-based media library solution, ensuring accessibility, reliability, and an unparalleled media experience.

Building the platform to host the services

Having carefully selected the ideal tools and services for our media library project, the next pivotal step is translating these choices into a cohesive, reliable, secure, and maintainable solution suitable for cloud deployment. The journey toward this goal required meticulous planning and strategic implementation to ensure optimal performance and user experience.

Choosing the Right Hosting Solution

To embark on this implementation journey, I spent time evaluating various hosting solutions, including industry giants like AWS, Google Cloud, and DigitalOcean. Additionally, the need to store media content led me to explore object storage services such as S3 and Wasabi. After a comprehensive assessment, a key realization emerged: a Virtual Private Server (VPS) provided the most compelling solution. A more detailed breakdown of the differences can be found here. VPS hosting became the preferred choice for a multitude of reasons, but primarily because of:
  • Cost Optimization: VPS solutions offer an impressive balance between storage costs and performance. This optimization ensures efficient resource utilization, keeping expenses in check.
  • Egress Network Traffic: An essential consideration, particularly when dealing with media streaming in high resolutions like 4K, is the potential for excessive egress network traffic charges. Remarkably, VPS providers typically do not levy additional fees for egress network traffic, ensuring cost predictability, especially crucial when inviting friends and family to share in the media experience.
When it comes to securing a VPS provider to accommodate your data storage and streaming requirements, precision in resource selection is paramount. The choice of an ideal provider hinges on aligning available storage and compute capabilities with your unique needs. While a configuration featuring a 6-core processor and 10 TB of storage proves advantageous, remember that your specific use case and projected demands will ultimately drive this decision-making process.

Mapping Services to Cloud Infrastructure

Having established the hosting framework, the next phase involved mapping the chosen tools and services onto the cloud infrastructure. This required a thoughtful orchestration of components to ensure seamless interaction and optimal utilization, which brings us to containerization.

What is containerization?

Containerization has revolutionized the way applications are developed, deployed, and managed, offering a streamlined approach to packaging, distributing, and running software applications. It's a technology that enables developers to encapsulate an application along with its dependencies, libraries, and configuration files into a single unit known as a container. This container can then be consistently deployed across various computing environments, be it development, testing, or production, without worrying about compatibility issues.

At its core, containerization addresses the challenges of software deployment by providing a lightweight, isolated, and reproducible environment for applications. The concept draws inspiration from shipping containers used in logistics, where goods are packed and shipped in standardized containers that can be easily transported and handled across different modes of transportation without requiring modification. Similarly, containerization standardizes the packaging of applications, making them portable and consistent across different infrastructure environments, such as local development machines, virtual machines, or cloud servers.

Key Aspects of Containerization

  • Isolation: Containers offer process-level isolation, ensuring that applications run independently of each other. This isolation prevents conflicts between different applications and their dependencies, making it easier to manage and maintain software.
  • Portability: Containers abstract away the underlying infrastructure, ensuring that applications can run consistently across various environments without modification. This portability simplifies the process of moving applications between development, testing, and production environments.
  • Resource Efficiency: Containers share the host operating system's kernel, allowing them to use resources more efficiently than traditional virtual machines. This lightweight approach reduces overhead and increases the density of applications on a single physical or virtual host.
  • Version Control: Containers can be versioned, allowing developers to manage and reproduce application states easily. This is particularly useful for maintaining consistent environments during development and troubleshooting.
  • Dependency Management: Containers encapsulate an application's dependencies, eliminating the common "it works on my machine" problem. This ensures that applications run the same way regardless of the host environment.
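To make "encapsulating an application with its dependencies" concrete, a container image is typically described by a short Dockerfile. The sketch below simply assembles that text for a hypothetical Python service; the base image tag, file names, and entrypoint are all placeholder assumptions:

```python
# Assemble a minimal Dockerfile for a hypothetical Python service.
# The base image tag and file names are placeholder assumptions.
def make_dockerfile(base: str = "python:3.11-slim", entry: str = "app.py") -> str:
    lines = [
        f"FROM {base}",                         # pinned base image = reproducibility
        "WORKDIR /app",
        "COPY requirements.txt .",
        "RUN pip install -r requirements.txt",  # dependencies baked into the image
        "COPY . .",
        f'CMD ["python", "{entry}"]',           # one process per container
    ]
    return "\n".join(lines)

print(make_dockerfile())
```

Those few lines are the whole point: the application, its dependencies, and its start command travel together, so the image runs identically on a laptop, a VPS, or a cloud VM.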

In essence, containerization brings efficiency, consistency, and flexibility to modern application development and deployment. It empowers developers to focus on building and shipping applications, knowing that the deployment environment will remain consistent across various stages of the development lifecycle. The result is a more agile, scalable, and manageable approach to software development that aligns well with the demands of today's dynamic computing landscape.


Launch the service containers

To help you on your journey, I have published the code on my GitHub. Feel free to use and modify it to your liking as needed. The repositories are linked below.

This process initiates the deployment of all services by fetching the Docker images for each, creating a virtual private network to enable seamless communication among containers, and configuring Nginx to act as a reverse proxy, facilitating networking with these containers.
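That process boils down to a handful of docker commands. The snippet below assembles those command lines rather than executing them, so you can see the shape of the deployment; the network name and image tags are assumptions standing in for my actual configuration:

```python
# Assemble the docker commands that launch the stack (sketch; not executed here).
NETWORK = "media_net"                      # assumed private network name
SERVICES = {                               # image tags are assumptions
    "plex": "plexinc/pms-docker:latest",
    "sonarr": "linuxserver/sonarr:latest",
    "nginx-proxy-manager": "jc21/nginx-proxy-manager:latest",
}

def launch_plan(network: str, services: dict) -> list:
    """Return the ordered docker commands: create the network, then run
    each container attached to it so Nginx can reach them by name."""
    cmds = [f"docker network create {network}"]
    for name, image in services.items():
        cmds.append(f"docker run -d --name {name} --network {network} {image}")
    return cmds

for cmd in launch_plan(NETWORK, SERVICES):
    print(cmd)
```

Attaching every container to one user-defined network is the key design choice: Docker's embedded DNS then lets the reverse proxy address each service by its container name instead of a brittle hard-coded IP.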

Set up the services


Set up torrent download and shipping

You can set up a cron job that checks, on a recurring basis, for new torrent files generated by Sonarr, Radarr, Readarr, etc. If any are found, convert them into magnet links and tell Put.io to download them. You can use my script for this.
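For illustration, here is a minimal sketch of the conversion step (not the exact script linked above): the BitTorrent infohash is the SHA-1 of the raw bencoded "info" dictionary inside the .torrent file, and the magnet link is built from that hash. Real-world magnet links usually also carry a display name and tracker parameters:

```python
# Minimal .torrent -> magnet conversion (sketch, not the linked script).
# The infohash is the SHA-1 of the raw bencoded "info" dictionary.
import hashlib

def _decode(data: bytes, i: int):
    """Parse one bencoded value starting at i; return (value, end_index).
    Dict values keep their raw byte spans so the info dict can be hashed."""
    c = data[i:i + 1]
    if c == b"i":                              # integer: i<digits>e
        end = data.index(b"e", i)
        return int(data[i + 1:end]), end + 1
    if c in (b"l", b"d"):                      # list or dict
        out = [] if c == b"l" else {}
        i += 1
        while data[i:i + 1] != b"e":
            if isinstance(out, dict):
                key, i = _decode(data, i)
                start = i
                val, i = _decode(data, i)
                out[key] = (val, start, i)     # record the value's byte span
            else:
                val, i = _decode(data, i)
                out.append(val)
        return out, i + 1
    colon = data.index(b":", i)                # byte string: <len>:<bytes>
    length = int(data[i:colon])
    start = colon + 1
    return data[start:start + length], start + length

def torrent_to_magnet(raw: bytes) -> str:
    top, _ = _decode(raw, 0)
    _, start, end = top[b"info"]               # span of the bencoded info dict
    infohash = hashlib.sha1(raw[start:end]).hexdigest()
    return f"magnet:?xt=urn:btih:{infohash}"
```

The crucial detail is hashing the original bytes of the info dictionary rather than re-encoding a parsed copy, since any re-encoding difference would change the hash and break the link.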


Set up the user portal


Set up IPTV through Plex (xTeVe)

  • Coming soon


Set up notifications


Monitoring, Observability and Telemetry

  • Coming soon

Special thanks to Smart Home Beginner for getting me started with Docker.
