Exciting news for admins who want more control over public IP addresses in the public cloud! =]
Starting with 4.16, OpenShift/OKD can use custom public IPv4 addresses (Elastic IPs (EIPs)) when deploying a cluster on AWS. This can help you in different ways (an illustrative install-config sketch follows this list):
Allowing you to know which address range the nodes will use to egress traffic from the VPC to the internet, so you can refine firewall rules on target services, such as on-premises systems or services published on the internet with restricted access.
Allowing you to control which public addresses the API server will be exposed on
Allowing you to decrease the public IPv4 charges applied to Elastic IPs by using an IPv4 CIDR that you brought to your AWS account (BYOIP)
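To make this concrete, here is a minimal, illustrative install-config.yaml fragment. It assumes a BYOIP public IPv4 pool is already provisioned in your AWS account; the pool ID, domain, and cluster name are placeholders, and the publicIpv4Pool field should be checked against the installer documentation for your exact release.

```yaml
# Illustrative fragment only: all values are placeholders.
apiVersion: v1
baseDomain: example.com            # placeholder domain
metadata:
  name: byoip-cluster              # placeholder cluster name
platform:
  aws:
    region: us-east-1
    publicIpv4Pool: ipv4pool-ec2-0123456789abcdef0  # BYOIP pool the installer allocates EIPs from
pullSecret: '<your-pull-secret>'
sshKey: '<your-ssh-public-key>'
```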
Are you looking to deploy a cheaper OpenShift/OKD cluster on Azure without sacrificing performance? Keep reading this post!
Starting with version 4.17, the OpenShift/OKD installer uses the Cluster API as its provisioning engine. This change allows for greater flexibility in customizing control plane resources.
This guide walks you through the following steps to optimize your Azure deployment:
Patch the AzureMachine Manifests: inject an additional data disk to hold etcd, reduce the size of the OS disk, and upgrade the VM generation. These adjustments can cut the provisioned disk size roughly in half compared to the default values (a sketch of the patched manifests follows this list).
Add MachineConfig Manifests: additional manifests mount the etcd path on the data disk. This setup isolates the database from OS disk operations, improving overall performance.
Utilize Premium Storage: The guide recommends using the new PremiumV2_LRS storage account type, which offers performance characteristics similar to AWS's gp3. This configuration provides higher IOPS and throughput without the need for high capacity, ensuring efficient resource utilization.
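As a rough illustration of the first two steps, here is a hedged sketch of the kind of manifests involved. Field names follow the Cluster API Provider Azure (AzureMachine) and MachineConfig APIs, but the sizes, names, device path, and Ignition version are assumptions; the exact values, and the additional unit that creates the filesystem on the data disk (omitted here), are covered in the full guide.

```yaml
---
# Patched control plane AzureMachine (CAPI manifest generated by the installer):
# smaller OS disk plus a dedicated PremiumV2_LRS data disk for etcd.
apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
kind: AzureMachine
metadata:
  name: mycluster-master-0            # placeholder name
spec:
  vmSize: Standard_D8s_v5             # placeholder VM size
  osDisk:
    diskSizeGB: 64                    # reduced OS disk
    managedDisk:
      storageAccountType: Premium_LRS
  dataDisks:
    - nameSuffix: etcd
      lun: 0
      diskSizeGB: 16                  # small capacity, high IOPS/throughput
      cachingType: None               # required for PremiumV2_LRS
      managedDisk:
        storageAccountType: PremiumV2_LRS
---
# MachineConfig that mounts the data disk on the etcd data path.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 98-var-lib-etcd-mount
  labels:
    machineconfiguration.openshift.io/role: master
spec:
  config:
    ignition:
      version: 3.4.0
    systemd:
      units:
        - name: var-lib-etcd.mount
          enabled: true
          contents: |
            [Unit]
            Before=local-fs.target
            [Mount]
            # LUN 0 data disk; verify this device path on your nodes
            What=/dev/disk/azure/scsi1/lun0
            Where=/var/lib/etcd
            Type=xfs
            Options=defaults
            [Install]
            WantedBy=local-fs.target
```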
To explore more about these steps and how to implement them, take a look at the guide titled Installing on Azure with etcd in Data Disks (CAPI).
If you have any questions or need further assistance, feel free to reach out!
This section shows, step by step (copy/paste approach), how to deploy a private cluster on AWS without exposing any service to the internet.
This guide introduces nested CloudFormation stacks, reducing coupling and increasing cohesion when developing infrastructure as code (IaC) with CloudFormation templates.
This guide also introduces a bastion host in a private subnet used to jump into the private VPC with AWS Systems Manager Session Manager, without needing to create a VPN or expose/ingress internet traffic to the nodes. Alternatively, you can forward traffic to the internal API load balancer from a client outside the VPC using AWS SSM Session Manager port forwarding, letting you quickly access the OpenShift cluster without leaving your "home". =] (A sketch of the port-forwarding command follows.)
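A hedged sketch of the port-forwarding approach, using the AWS-StartPortForwardingSessionToRemoteHost document that ships with AWS Systems Manager; the instance ID, region, and API hostname below are placeholders:

```bash
# Forward the cluster's internal API endpoint through the SSM-managed bastion.
# Placeholders: instance ID, region, and API hostname.
aws ssm start-session \
  --region us-east-1 \
  --target i-0123456789abcdef0 \
  --document-name AWS-StartPortForwardingSessionToRemoteHost \
  --parameters '{"host":["api.mycluster.example.com"],"portNumber":["6443"],"localPortNumber":["6443"]}'

# In another terminal, point the client at the forwarded port. An /etc/hosts
# entry resolving the API hostname to 127.0.0.1 is typically needed so the
# TLS certificate still matches.
oc --kubeconfig auth/kubeconfig get nodes
```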
Last but not least, this guide also shows how to deploy a highly available and scalable proxy service: an Auto Scaling group spreads the nodes across zones, a Network Load Balancer distributes traffic evenly between them, and Spot EC2 instances reduce costs (capacity managed and balanced natively by ASG/EC2 Fleet). A CloudFormation sketch of this layer follows.
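Below is a minimal, hypothetical CloudFormation fragment of that proxy layer, only to make the moving parts concrete. The resource shapes follow the AWS::AutoScaling and Elastic Load Balancing v2 resource types, but the names, instance types, proxy port (assumed to be squid on 3128), and Spot settings are illustrative; the nested stacks in the guide are more complete (user data, health checks, security groups).

```yaml
Parameters:
  VpcId:
    Type: AWS::EC2::VPC::Id
  ProxySubnetIds:
    Type: List<AWS::EC2::Subnet::Id>      # subnets for the proxy nodes and the internal NLB
  ProxyAmiId:
    Type: AWS::EC2::Image::Id             # AMI bootstrapped with the proxy (e.g. squid) via user data

Resources:
  ProxyLaunchTemplate:
    Type: AWS::EC2::LaunchTemplate
    Properties:
      LaunchTemplateData:
        ImageId: !Ref ProxyAmiId
        InstanceType: t3.micro

  ProxyTargetGroup:
    Type: AWS::ElasticLoadBalancingV2::TargetGroup
    Properties:
      VpcId: !Ref VpcId
      Protocol: TCP
      Port: 3128                          # assumed proxy port

  ProxyNLB:
    Type: AWS::ElasticLoadBalancingV2::LoadBalancer
    Properties:
      Type: network
      Scheme: internal                    # the cluster reaches the proxy through this NLB
      Subnets: !Ref ProxySubnetIds

  ProxyListener:
    Type: AWS::ElasticLoadBalancingV2::Listener
    Properties:
      LoadBalancerArn: !Ref ProxyNLB
      Protocol: TCP
      Port: 3128
      DefaultActions:
        - Type: forward
          TargetGroupArn: !Ref ProxyTargetGroup

  ProxyASG:
    Type: AWS::AutoScaling::AutoScalingGroup
    Properties:
      MinSize: "2"
      MaxSize: "3"
      VPCZoneIdentifier: !Ref ProxySubnetIds        # spread nodes across zones
      TargetGroupARNs:
        - !Ref ProxyTargetGroup
      MixedInstancesPolicy:
        InstancesDistribution:
          OnDemandPercentageAboveBaseCapacity: 0    # run the fleet on Spot (assumption)
          SpotAllocationStrategy: capacity-optimized
        LaunchTemplate:
          LaunchTemplateSpecification:
            LaunchTemplateId: !Ref ProxyLaunchTemplate
            Version: !GetAtt ProxyLaunchTemplate.LatestVersionNumber
          Overrides:
            - InstanceType: t3.micro
            - InstanceType: t3a.micro
```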
Pros:
Cheaper cluster:
No NAT Gateway charges
No public IPv4 address
No public Load Balancer for API
Restricted web access with Proxy
Private access to clusters using native AWS services (AWS SSM tunneling), reducing the need for a VPN or extra connectivity
More controlled environment
HA and scalable Proxy service
(Optional) Shared HA proxy service using AWS PrivateLink [TODO]
Cons:
More manual steps to set up the entire environment (including the proxy service) compared with a regular IPI installation
Steps:
Solutions/Architectures/Deployments:
S1) Deploy OpenShift in a single-stack IPv4 VPC with a dedicated proxy in public subnets
Steps to deploy a dual-stack VPC, with the proxy running in the dual-stack VPC and egressing IPv6 traffic to the internet, and the OpenShift cluster running single-stack IPv4 on private subnets (an illustrative install-config fragment for these private-cluster scenarios appears after this list).
TODO: describe how to deploy a hub/spoke topology using Transit Gateway to centralize OpenShift egress traffic in a management VPC.
Option 1) Public clusters ingressing traffic in the VPC, egressing through Transit Gateway
Option 2) Private clusters with both ingress and egress traffic flowing through the internal network
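To show what these private-cluster-with-proxy scenarios look like from the installer's side, here is a hedged install-config.yaml fragment. Subnet IDs, the proxy endpoint (assumed to be the internal proxy NLB on port 3128), and domains are placeholders; field names follow the AWS install-config schema used by recent 4.x releases, so check the documentation for your exact version.

```yaml
# Illustrative fragment only: all values are placeholders.
apiVersion: v1
baseDomain: example.com
metadata:
  name: private-cluster
publish: Internal                    # no public endpoints are created
platform:
  aws:
    region: us-east-1
    subnets:                         # existing private subnets, one per zone
      - subnet-0aaaaaaaaaaaaaaaa
      - subnet-0bbbbbbbbbbbbbbbb
      - subnet-0cccccccccccccccc
proxy:
  httpProxy: http://proxy-nlb.example.internal:3128
  httpsProxy: http://proxy-nlb.example.internal:3128
  noProxy: .example.internal
pullSecret: '<your-pull-secret>'
sshKey: '<your-ssh-public-key>'
```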