
Blog

Welcome to my personal blog.

See the latest content below, and navigate using the Categories menu.

Enjoy! o/


Deploy OpenShift on AWS using custom IPv4 address

Exciting news for admins who want more control of public IP addresses in the public cloud! =]

Starting with 4.16, OpenShift/OKD can use custom public IPv4 addresses (Elastic IPs (EIPs)) when deploying a cluster on AWS. This can help you in several ways:

  • Knowing which address range the nodes will use to egress traffic from the VPC to the internet, so you can refine firewall rules on target services, such as on-premises systems or internet-facing services with restricted access
  • Controlling which address the API server will use
  • Decreasing the public IPv4 charges applied to Elastic IPs by using an IPv4 CIDR that you brought to your AWS account

To begin, take a look at the following guides:

  • Install OCP/OKD on AWS using Public IPv4 Pool
  • Install OCP/OKD on AWS using existing Elastic IPs
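
For illustration, here is a minimal install-config.yaml sketch using a public IPv4 pool brought to your account (BYOIP); the pool ID, domain, and cluster name below are hypothetical placeholders:

```yaml
# install-config.yaml (excerpt): the installer assigns public IPv4
# addresses from the given BYOIP pool instead of Amazon-provided ones.
# Pool ID, domain, and names are hypothetical placeholders.
apiVersion: v1
baseDomain: example.com
metadata:
  name: mycluster
platform:
  aws:
    region: us-east-1
    publicIpv4Pool: ipv4pool-ec2-0123456789abcdef0
publish: External
```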

Deploy a Cost-Effective OpenShift/OKD Cluster on Azure

Are you looking to deploy a cheaper OpenShift/OKD cluster on Azure without sacrificing performance? Keep reading this post!

Starting with version 4.17, OpenShift/OKD has transitioned to using the Cluster API as the provisioning engine in the installer. This change allows for greater flexibility in customizing control plane resources.

Key Steps in the Deployment Process

This guide walks you through the following steps to optimize your Azure deployment:

  • Patch the AzureMachine Manifests: Inject an additional data disk to host etcd, reduce the size of the OS disk, and upgrade the VM generation. These adjustments can cut the disk sizes in half compared to the default values (see the sketch after this list).
  • Add MachineConfig Manifests: Additional manifests are included to mount the data disk at the etcd path. This setup isolates the database from OS disk operations, improving overall performance.
  • Utilize Premium Storage: The guide recommends using the new PremiumV2_LRS storage account type, which offers performance characteristics similar to AWS's gp3. This configuration provides higher IOPS and throughput without the need for high capacity, ensuring efficient resource utilization.
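
To make this concrete, here is a minimal sketch of the two manifest changes, assuming the CAPI manifests generated by `openshift-install create manifests`; the file paths, names, disk sizes, and mount device are illustrative, and the unit that formats the disk is omitted for brevity:

```yaml
# Excerpt of a patched AzureMachine manifest (e.g. cluster-api/machines/*):
# a smaller OS disk plus a dedicated data disk for etcd.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureMachine
metadata:
  name: mycluster-master-0        # illustrative name
spec:
  osDisk:
    diskSizeGB: 64                # reduced from the default
    managedDisk:
      storageAccountType: Premium_LRS
  dataDisks:
  - nameSuffix: etcd
    diskSizeGB: 32
    lun: 0
    managedDisk:
      storageAccountType: PremiumV2_LRS   # gp3-like performance profile
---
# MachineConfig mounting the data disk at the etcd path on control plane nodes.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: master
  name: 98-var-lib-etcd
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
      - name: var-lib-etcd.mount
        enabled: true
        contents: |
          [Unit]
          Before=local-fs.target
          [Mount]
          What=/dev/disk/azure/scsi1/lun0
          Where=/var/lib/etcd
          Type=xfs
          [Install]
          WantedBy=local-fs.target
```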

To explore more about these steps and how to implement them, take a look at the guide titled Installing on Azure with etcd in Data Disks (CAPI).

If you have any questions or need further assistance, feel free to reach out!

Hands-on steps to install restricted OpenShift clusters on AWS | Solutions

This post references tutorials/solutions with hands-on steps to install OpenShift clusters on restricted/private networks on AWS.

Solutions 1 - Restricted with proxy

Options:

  • Installing OCP on AWS with proxy
  • Installing OCP on AWS with proxy and STS
  • Installing OCP on AWS in disconnected clusters (no internet access)
  • Installing OCP on AWS in disconnected clusters with STS

Solution 1A) Hands-on steps to install a restricted OpenShift cluster in an existing VPC on AWS

The steps in this section show, step by step (copy/paste approach), how to deploy a private cluster on AWS without exposing any service to the internet.

The approach is based on the product documentation "Installing a cluster on AWS in a restricted network".

This guide introduces nested CloudFormation stacks, which reduce coupling and increase cohesion when developing infrastructure as code (IaC) with CloudFormation templates.
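
To illustrate the idea, a parent template can compose child templates as nested stacks; the bucket URL, template names, and parameters below are hypothetical:

```yaml
# parent.yaml: composes network and proxy child stacks so each
# component can be developed, tested, and reused independently.
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  NetworkStack:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://my-bucket.s3.amazonaws.com/templates/vpc.yaml    # hypothetical
      Parameters:
        VpcCidr: 10.0.0.0/16
  ProxyStack:
    Type: AWS::CloudFormation::Stack
    DependsOn: NetworkStack
    Properties:
      TemplateURL: https://my-bucket.s3.amazonaws.com/templates/proxy.yaml  # hypothetical
      Parameters:
        VpcId: !GetAtt NetworkStack.Outputs.VpcId
```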

This guide also introduces a bastion host in a private subnet, used to jump into the private VPC through AWS Systems Manager Session Manager, without needing to create a VPN, expose/ingress internet traffic to the nodes, etc. Alternatively, you can forward traffic to the internal API load balancer from a client outside the VPC using AWS SSM Session Manager port forwarding, letting you quickly access the OpenShift cluster without leaving your "home". =]

Last but not least, this guide shows how to deploy a highly available and scalable proxy service using an Auto Scaling group to spread the nodes across zones, a Network Load Balancer to distribute the traffic equally between nodes, and Spot EC2 instances to reduce costs (capacity managed and balanced natively by ASG/Fleet).
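
A minimal CloudFormation sketch of that proxy Auto Scaling group; the launch template, subnet list, and NLB target group (`ProxyLaunchTemplate`, `PublicSubnetIds`, `ProxyTargetGroup`) are assumptions defined elsewhere in the stack:

```yaml
# Excerpt: Auto Scaling group spreading proxy nodes across zones,
# registered in an NLB target group, running 100% on Spot capacity.
ProxyASG:
  Type: AWS::AutoScaling::AutoScalingGroup
  Properties:
    MinSize: '2'
    MaxSize: '6'
    VPCZoneIdentifier: !Ref PublicSubnetIds    # assumed parameter: one subnet per zone
    TargetGroupARNs:
      - !Ref ProxyTargetGroup                  # assumed NLB target group (e.g. TCP 3128)
    MixedInstancesPolicy:
      InstancesDistribution:
        OnDemandPercentageAboveBaseCapacity: 0 # run all capacity on Spot
        SpotAllocationStrategy: capacity-optimized
      LaunchTemplate:
        LaunchTemplateSpecification:
          LaunchTemplateId: !Ref ProxyLaunchTemplate   # assumed launch template
          Version: !GetAtt ProxyLaunchTemplate.LatestVersionNumber
```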

Pros:

  • Cheaper cluster:
    • No NAT Gateway charges
    • No public IPv4 address
    • No public Load Balancer for API
  • Restricted web access with Proxy
  • Private access to clusters using native AWS services (AWS SSM tunneling), reducing the need for a VPN or extra connectivity
  • More controlled environment
  • HA and scalable Proxy service
  • (Optional) Shared HA proxy service using AWS PrivateLink [TODO]

Cons:

  • More manual steps to set up the entire environment (including the proxy service) compared with a regular IPI installation

Steps:

Solutions/Architectures/Deployments:

S1) Deploy OpenShift in a single-stack IPv4 VPC with a dedicated proxy in public subnets

Solution 1B) Hands-on steps to install a restricted OpenShift cluster in an existing VPC on AWS with STS

TODO

Requires a fix for ccoctl to use HTTP_PROXY

Solution 1C) Deploy OpenShift in a single-stack IPv4 VPC with a shared proxy server (IPv4)

Step 1) Deploy shared proxy service

  • Create Service VPC
  • Deploy Proxy Server
  • Deploy Custom VPC Service

Step 2) Create VPC with private subnets

  • Create VPC
  • Create

Step 2A) Deploy OpenShift cluster in private mode

  • Deploy jump server using IPv6
  • Deploy OpenShift using shared proxy service

Step 2B) Deploy OpenShift cluster in private mode

  • Deploy jump server using private IPv4 and SSM access
  • Deploy OpenShift using shared proxy service

Solution 1D) Deploy OpenShift in a single-stack IPv4 VPC with a shared proxy server (IPv6)

Steps to deploy a dual-stack VPC, with the proxy running in the dual-stack VPC and egressing IPv6 traffic to the internet, and the OpenShift cluster running in single-stack IPv4 on private subnets.

Read the IPv6 deployment guide.
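
For illustration, the IPv6 egress path can be provided by an egress-only internet gateway; a minimal CloudFormation sketch, assuming a dual-stack `ServiceVpc` and a `ProxyRouteTable` defined elsewhere:

```yaml
# Excerpt: outbound-only IPv6 internet access for the proxy subnet,
# with no inbound connections allowed from the internet.
EgressOnlyIGW:
  Type: AWS::EC2::EgressOnlyInternetGateway
  Properties:
    VpcId: !Ref ServiceVpc                 # assumed dual-stack VPC
DefaultIpv6Route:
  Type: AWS::EC2::Route
  Properties:
    RouteTableId: !Ref ProxyRouteTable     # assumed proxy subnet route table
    DestinationIpv6CidrBlock: ::/0
    EgressOnlyInternetGatewayId: !Ref EgressOnlyIGW
```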

Solutions 2 - Private clusters with shared services

2A) Shared Proxy services

TODO: steps to deploy a service VPC sharing the Proxy and Image Registry through AWS PrivateLink

2B) Deploy hub/spoke service using Transit Gateway

TODO: describe how to deploy a hub/spoke topology using Transit Gateway to centralize egress OpenShift traffic in a management VPC.

Option 1) Public clusters ingressing traffic in the VPC, egressing through Transit Gateway
Option 2) Private clusters using ingress and egress traffic through the internal network
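
As a starting point, here is a minimal CloudFormation sketch of the hub and one spoke attachment; the spoke VPC and subnet references (`ClusterVpc`, `ClusterPrivateSubnetIds`) are hypothetical:

```yaml
# Excerpt: attach a spoke (cluster) VPC to a Transit Gateway so its
# egress traffic can be routed through the management (hub) VPC.
TransitGateway:
  Type: AWS::EC2::TransitGateway
  Properties:
    Description: Hub for centralized egress traffic
SpokeAttachment:
  Type: AWS::EC2::TransitGatewayAttachment
  Properties:
    TransitGatewayId: !Ref TransitGateway
    VpcId: !Ref ClusterVpc                    # assumed spoke VPC
    SubnetIds: !Ref ClusterPrivateSubnetIds   # assumed private subnets
```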

See the reference guide.