Oracle Cloud (OKE)

1.0 Setup Overview

There are two ways to create a cluster when deploying the LightBeam application:

  • Quick create - the VCN is set up automatically during cluster creation

  • Custom create - the VCN must be set up from scratch as a prerequisite before using custom create

We need a 3-node OKE cluster and one micro Linux VM as a jump box to access the OKE cluster and perform installation/upgrade of the LightBeam application.

2.0 OCI Deployment Requirements for LightBeam

  1. Compartment:

    1. Dedicated Compartment for LightBeam installation.

  2. Networking (VCN & Subnets):

    1. VCN: /16 CIDR Block (Recommended: 10.0.0.0/16)

  3. Subnets (Regional):

    • Private/29 CIDR Block for Kubernetes API Endpoint. Example: 10.0.0.0/29

    • Private/24 CIDR Block for Worker Nodes. Example: 10.0.1.0/24

    • Private/19 CIDR Block for Pods. Example: 10.0.32.0/19

    • Public/Private/24 CIDR Block for Load Balancer.

    • Public/Private/29 CIDR Block for JumpVM.

  4. Compute (Kubernetes Cluster):

    • Number of Worker Nodes: 3

    • Node Type: Managed OKE

    • Shape: VM.Standard.E3.Flex (4 OCPUs, 32 GB Memory)

    • Pods per Node: 80

    • Boot Volume: 100 GB

    • OS: Oracle Linux 8.10

  5. Gateways:

    • Internet Gateway – Outbound communication.

    • NAT Gateway – With Public IP Address (for outbound access)

    • Service Gateway – Accessing All Region Services in the Oracle Services Network

  6. JumpVM:

    • OS: Ubuntu 24.04 (Canonical)

    • Shape: VM.Standard.E4.Flex (default)

    • Packages: OCI CLI

    • Public IP: Required (if used as bastion host)

  7. DNS & Load Balancer:

    • Private DNS Zone – Attach to VCN

      • Create DNS A Record for Load Balancer

    • Reserved Public IPs (optional, if LightBeam UI should be public):

      • 1 for LightBeam Spectra Load Balancer

      • 1 for LightBeam PrivacyOps Load Balancer

      • 1 for JumpBox VM

Compartment Creation:

  1. Create a compartment: Go to the Compartments page in the Oracle Cloud console (https://cloud.oracle.com/identity/compartments) and click the Create button.

  2. Fill in the compartment details: Give the compartment a name, choose the appropriate parent compartment and hit Create.
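
If you prefer the CLI over the console, a minimal sketch of the same step (assuming the OCI CLI is already configured on your workstation and <parent-compartment-ocid> is the OCID of the chosen parent compartment):

# Create a dedicated compartment for the LightBeam installation
oci iam compartment create \
  --compartment-id <parent-compartment-ocid> \
  --name lightbeam \
  --description "Dedicated compartment for the LightBeam installation"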

VCN creation:

  1. Create a VCN: Go to the VCNs page in the Oracle Cloud console (https://cloud.oracle.com/networking/vcns). In the compartment filter, select the compartment created for LightBeam and then hit Create VCN.

  2. Fill in the VCN fields: Fill in the required fields for the VCN and hit Create:

    • Name: Choose the appropriate name for the VCN.

    • Create in Compartment: This should show the correct compartment.

    • IPv4 CIDR block: Add 10.0.0.0/16 to the CIDR block list.

    • Use DNS hostnames in this VCN: Checkbox must be enabled.

    • DNS label (optional): A DNS label can be entered; otherwise the VCN name is used.
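
The same VCN can be created from the CLI; a minimal sketch (the display name, DNS label and compartment OCID are placeholders):

# Create the VCN with the /16 CIDR; DNS hostnames are available once a DNS label is set
oci network vcn create \
  --compartment-id <lightbeam-compartment-ocid> \
  --display-name lightbeam-vcn \
  --dns-label lightbeamvcn \
  --cidr-block 10.0.0.0/16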

Internet Gateway Creation:

  1. Create an internet gateway: On the page of your created VCN, go to the Gateways tab and click the Create internet gateway button.

  2. Fill out the details below: Fill out the following fields and values, leaving other fields at their defaults.

    1. Name: internet-gateway-0

NAT Gateway Creation:

  1. Create a NAT gateway: On the page of your created VCN, go to the Gateways tab and click the Create NAT gateway button.

  2. Fill out the details below: Fill out the following fields and values, leaving other fields at their defaults.

    1. Name: nat-gateway-0

Service Gateway Creation:

  1. Create a service gateway: On the page of your created VCN, go to the Gateways tab and click the Create service gateway button.

  2. Fill out the details below: Fill out the following fields and values, leaving other fields at their defaults.

    1. Name: service-gateway-0

    2. Services: All <region> Services in Oracle Services Network
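
For reference, the three gateways can also be created from the CLI; a minimal sketch (all OCIDs are placeholders):

# Internet gateway for the public load balancer / jumpbox subnets
oci network internet-gateway create \
  --compartment-id <lightbeam-compartment-ocid> --vcn-id <vcn-ocid> \
  --is-enabled true --display-name internet-gateway-0

# NAT gateway for outbound access from the private subnets
oci network nat-gateway create \
  --compartment-id <lightbeam-compartment-ocid> --vcn-id <vcn-ocid> \
  --display-name nat-gateway-0

# Service gateway to "All <region> Services in Oracle Services Network"
oci network service list        # note the OCID of the all-services entry for your region
oci network service-gateway create \
  --compartment-id <lightbeam-compartment-ocid> --vcn-id <vcn-ocid> \
  --display-name service-gateway-0 \
  --services '[{"serviceId": "<all-services-ocid>"}]'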

Security List Creation:

  1. Create the security lists: On the page of your created VCN, go to the Security tab and click the Create security list button.

  2. Fill in the security list details: We need 5 security lists with the names and rules below.

seclist-KubernetesAPIendpoint

Ingress Rules:

State | Source | Protocol / Dest. Port | Description
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | TCP/6443 | Kubernetes worker to Kubernetes API endpoint communication.
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | TCP/12250 | Kubernetes worker to Kubernetes API endpoint communication.
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | ICMP 3,4 | Path Discovery.
Stateful | 10.0.32.0/19 (Pods CIDR) | TCP/6443 | Pod to Kubernetes API endpoint communication (when using VCN-native pod networking).
Stateful | 10.0.32.0/19 (Pods CIDR) | TCP/12250 | Pod to Kubernetes API endpoint communication (when using VCN-native pod networking).
Stateful | Bastion subnet CIDR, or specific CIDR | TCP/6443 | (optional) External access to Kubernetes API endpoint.

  • Bastion subnet CIDR when access is made through OCI Bastion

  • Specific CIDR when access is made from another specific CIDR

Egress Rules:

State | Destination | Protocol / Dest. Port | Description
Stateful | All <region> Services in Oracle Services Network | TCP/ALL | Allow Kubernetes API endpoint to communicate with OKE.
Stateful | All <region> Services in Oracle Services Network | ICMP 3,4 | Path Discovery.
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | TCP/10250 | Allow Kubernetes API endpoint to communicate with worker nodes.
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | ICMP 3,4 | Path Discovery.
Stateful | 10.0.32.0/19 (Pods CIDR) | ALL/ALL | Allow Kubernetes API endpoint to communicate with pods.

seclist-workernodes

Ingress Rules:

State | Source | Protocol / Dest. Port | Description
Stateful | 10.0.0.0/29 (Kubernetes API Endpoint CIDR) | TCP/10250 | Allow Kubernetes API endpoint to communicate with worker nodes.
Stateful | 0.0.0.0/0 | ICMP 3,4 | Path Discovery.
Stateful | Bastion subnet CIDR, or specific CIDR | TCP/22 | (optional) Allow inbound SSH traffic to managed nodes.
Stateful | Load balancer subnet CIDR (10.0.2.0/24) | ALL/30000-32767 | Load balancer to worker node node ports.
Stateful | Load balancer subnet CIDR (10.0.2.0/24) | ALL/10256 | Allow load balancer to communicate with kube-proxy on worker nodes.

Egress Rules:

State | Destination | Protocol / Dest. Port | Description
Stateful | 10.0.32.0/19 (Pods CIDR) | ALL/ALL | Allow worker nodes to access pods.
Stateful | 0.0.0.0/0 | ICMP 3,4 | Path Discovery.
Stateful | All <region> Services in Oracle Services Network | TCP/ALL | Allow worker nodes to communicate with OKE.
Stateful | 10.0.0.0/29 (Kubernetes API Endpoint CIDR) | TCP/6443 | Kubernetes worker to Kubernetes API endpoint communication.
Stateful | 10.0.0.0/29 (Kubernetes API Endpoint CIDR) | TCP/12250 | Kubernetes worker to Kubernetes API endpoint communication.
Stateful | 0.0.0.0/0 | TCP/443 | Allow nodes to pull images from the internet.

seclist-pods

Ingress Rules:

State | Source | Protocol / Dest. Port | Description
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | ALL/ALL | Allow worker nodes to access pods.
Stateful | 10.0.0.0/29 (Kubernetes API Endpoint CIDR) | ALL/ALL | Allow Kubernetes API endpoint to communicate with pods.
Stateful | 10.0.32.0/19 (Pods CIDR) | ALL/ALL | Allow pods to communicate with other pods.

Egress Rules:

State | Destination | Protocol / Dest. Port | Description
Stateful | 10.0.32.0/19 (Pods CIDR) | ALL/ALL | Allow pods to communicate with other pods.
Stateful | All <region> Services in Oracle Services Network | ICMP 3,4 | Path Discovery.
Stateful | All <region> Services in Oracle Services Network | TCP/ALL | Allow pods to communicate with OCI services.
Stateful | 0.0.0.0/0 | TCP/443 | (optional) Allow pods to communicate with the internet.
Stateful | 10.0.0.0/29 (Kubernetes API Endpoint CIDR) | TCP/6443 | Pod to Kubernetes API endpoint communication (when using VCN-native pod networking).
Stateful | 10.0.0.0/29 (Kubernetes API Endpoint CIDR) | TCP/12250 | Pod to Kubernetes API endpoint communication (when using VCN-native pod networking).

seclist-loadbalancers

Ingress Rules:

State | Source | Protocol / Dest. Port | Description
Stateful | Application specific (Internet or specific CIDR) | Application specific (for example, TCP, UDP - 443, 8080) | (optional) Load balancer listener protocol and port. Customize as required.
Stateful | 10.0.0.8/29 (Jumpbox CIDR) | Application specific (for example, TCP, UDP - 443, 8080) | (optional) For UI access through the jumpbox.

Egress Rules:

State | Destination | Protocol / Dest. Port | Description
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | ALL/30000-32767 | Load balancer to worker node node ports.
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | ALL/10256 | Allow load balancer to communicate with kube-proxy on worker nodes.

seclist-jumpbox

Ingress Rules:

State | Source | Protocol / Dest. Port | Description
Stateful | 34.198.104.197/32 (LightBeam jumpbox IP) | TCP/22 | Allow LightBeam jumpbox access.
Stateful | 34.198.104.197/32 (LightBeam jumpbox IP) | ICMP 3,4 | ICMP type 3, code 4 (Destination Unreachable: Fragmentation Needed and Don't Fragment was Set).
Stateful | 10.0.0.0/16 | ICMP 3 | ICMP type 3 (Destination Unreachable).

Egress Rules:

State | Destination | Protocol / Dest. Port | Description
Stateful | 10.0.0.0/29 (Kubernetes API Endpoint CIDR) | TCP/6443 | Allow jumpbox to access the Kubernetes API endpoint.
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | TCP/22 | (optional) Allow SSH traffic to worker nodes.
Stateful | 0.0.0.0/0 | ALL/ALL | Allow outbound internet access from the jumpbox.
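
The security lists above can also be created from the CLI by passing the ingress and egress rules as JSON; a minimal sketch for seclist-KubernetesAPIendpoint with one rule per direction (extend the JSON arrays with the remaining rows from the tables):

# Ingress: worker nodes to the Kubernetes API endpoint on TCP/6443
cat > ingress-k8sapi.json <<'EOF'
[
  {
    "source": "10.0.1.0/24",
    "protocol": "6",
    "isStateless": false,
    "tcpOptions": { "destinationPortRange": { "min": 6443, "max": 6443 } }
  }
]
EOF

# Egress: API endpoint to worker nodes on TCP/10250
cat > egress-k8sapi.json <<'EOF'
[
  {
    "destination": "10.0.1.0/24",
    "protocol": "6",
    "isStateless": false,
    "tcpOptions": { "destinationPortRange": { "min": 10250, "max": 10250 } }
  }
]
EOF

oci network security-list create \
  --compartment-id <lightbeam-compartment-ocid> --vcn-id <vcn-ocid> \
  --display-name seclist-KubernetesAPIendpoint \
  --ingress-security-rules file://ingress-k8sapi.json \
  --egress-security-rules file://egress-k8sapi.json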

Route table creation:

  1. Create Route Tables: On the page of your created VCN, go to the Routing tab and click Create route table.

  2. Fill in the route table details: We need the following route tables for our requirement, with the names and rules below.

Default (used by the Kubernetes API endpoint subnet)

Two route rules defined as follows:

  • Rule for traffic to internet:

    • Destination CIDR block: 0.0.0.0/0

    • Target Type: NAT Gateway

    • Target: nat-gateway-0

  • Rule for traffic to OCI services:

    • Destination: All <region> Services in Oracle Services Network

    • Target Type: Service Gateway

    • Target: service-gateway-0

routetable-workernodes

Two route rules defined as follows:

  • Rule for traffic to OCI services:

    • Destination: All <region> Services in Oracle Services Network

    • Target Type: Service Gateway

    • Target: service-gateway-0

  • Rule for traffic to internet:

    • Destination CIDR block: 0.0.0.0/0

    • Target Type: NAT Gateway

    • Target: nat-gateway-0

routetable-pods

Two route rules defined as follows:

  • Rule for traffic to internet:

    • Destination CIDR block: 0.0.0.0/0

    • Target Type: NAT Gateway

    • Target: nat-gateway-0

  • Rule for traffic to OCI services:

    • Destination: All <region> Services in Oracle Services Network

    • Target Type: Service Gateway

    • Target: service-gateway-0

routetable-serviceloadbalancers

One route rule defined as follows:

  • Destination CIDR block: 0.0.0.0/0

  • Target Type: Internet Gateway

  • Target Internet Gateway: internet-gateway-0

routetable-jumpbox

One route rule defined as follows:

  • Destination CIDR block: 0.0.0.0/0

  • Target Type: Internet Gateway

  • Target Internet Gateway: internet-gateway-0
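
Equivalent route tables can be created from the CLI; a minimal sketch for routetable-serviceloadbalancers (the gateway OCIDs are placeholders):

# Single rule sending 0.0.0.0/0 to the internet gateway
oci network route-table create \
  --compartment-id <lightbeam-compartment-ocid> --vcn-id <vcn-ocid> \
  --display-name routetable-serviceloadbalancers \
  --route-rules '[{"destination": "0.0.0.0/0", "destinationType": "CIDR_BLOCK", "networkEntityId": "<internet-gateway-ocid>"}]'

# For the private route tables, use the NAT gateway OCID for the 0.0.0.0/0 rule and the
# service gateway OCID with destinationType SERVICE_CIDR_BLOCK for the Oracle Services Network rule.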

Subnet Creation:

  1. Create subnets: Go to the VCNs page and click on your created VCN. Then go to the Subnets tab and hit Create.

  2. Fill in the subnet details: We need 5 subnets for our requirement, with the details given in the table below; other options can be left at their defaults.

Name | CIDR block | Subnet Access | Security List | Route Table
KubernetesAPIendpoint | 10.0.0.0/29 | Private | seclist-KubernetesAPIendpoint | Default
workernodes | 10.0.1.0/24 | Private | seclist-workernodes | routetable-workernodes
pods | 10.0.32.0/19 | Private | seclist-pods | routetable-pods
loadbalancers | 10.0.2.0/24 | Private/Public | seclist-loadbalancers | routetable-serviceloadbalancers
jumpbox | 10.0.0.8/29 | Private/Public | seclist-jumpbox | routetable-jumpbox
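
A subnet from the table can equally be created from the CLI; a minimal sketch for the workernodes subnet (repeat with the CIDR, access, security list and route table of each row):

# Private subnet for worker nodes; setting --prohibit-public-ip-on-vnic false would make it public
oci network subnet create \
  --compartment-id <lightbeam-compartment-ocid> --vcn-id <vcn-ocid> \
  --display-name workernodes \
  --cidr-block 10.0.1.0/24 \
  --prohibit-public-ip-on-vnic true \
  --security-list-ids '["<seclist-workernodes-ocid>"]' \
  --route-table-id <routetable-workernodes-ocid>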

OKE Cluster Creation (Custom Create):

Now that we have our VCN setup ready we can start creating our cluster:

  1. Create a cluster with custom create: Go to https://cloud.oracle.com/containers/clusters, select the correct compartment and then click the Create cluster button. A popup should appear with quick create and custom create options; select custom create and hit Submit.

  2. Fill in basic cluster details: In the resulting form, fill in the cluster name; other options can be left at their defaults.

  3. Fill in the network setup details: In the resulting form fill the following fields:

    • Network Type: VCN native (flannel can also be chosen, in which case the pod subnet resources are not needed)

    • VCN in the cluster compartment: Choose the VCN we had setup earlier.

    • Kubernetes API endpoint subnet: Choose KubernetesAPIendpoint from the dropdown.

    • Automatically assign public IPv4 address: Should be unchecked.

    • Specify load balancer subnets: Should be checked

    • Kubernetes service LB subnets: Choose loadbalancers from the dropdown.

    • Hit next to move to the next section.

  4. Fill in the node pool details: In the resulting form fill in the following fields:

    • Node type: Choose Managed.

    • Node placement Configuration: Choose the availability domain from the dropdown and choose worker node subnet as workernodes from the dropdown.

    • Node Shape: Can be kept default (VM.Standard.E3.Flex)

    • Select Number of OCPUs: 4

    • Amount of memory(GB): 32

    • Node Count: 3

    • Specify Custom Boot Volume checkbox: Must be checked, with the boot volume specified as 100 GB for each node.

    • In the Pod communication section (for VCN-native pod networking): Choose pods as the pod subnet from the dropdown.

    • In the Pod communication advanced options: The number of pods per node must be set to 93.

    • Hit Next to move to the review section and submit to start cluster creation.

    • Once cluster creation starts, we can go to the clusters page and monitor its progress.
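
The same cluster can be bootstrapped from the CLI if preferred; a rough sketch of the control-plane part only (the Kubernetes version and OCIDs are placeholders, and the node pool would still need to be created to match the values above):

# Create the OKE control plane against the custom VCN and subnets
oci ce cluster create \
  --compartment-id <lightbeam-compartment-ocid> \
  --name lightbeam-oke \
  --vcn-id <vcn-ocid> \
  --kubernetes-version <kubernetes-version> \
  --endpoint-subnet-id <KubernetesAPIendpoint-subnet-ocid> \
  --service-lb-subnet-ids '["<loadbalancers-subnet-ocid>"]'

# Check the provisioning state
oci ce cluster list --compartment-id <lightbeam-compartment-ocid> --output table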

OKE Cluster Creation (Quick Create):

This abstracts away most of the effort required to create the VCN for OKE cluster creation.

  1. Create a cluster with quick create: Go to https://cloud.oracle.com/containers/clusters, select the correct compartment and then click the Create cluster button. A popup should appear with quick create and custom create options; select quick create and hit Submit.

  2. Fill in the cluster details: Fill in the following fields:

    1. Name: Enter the cluster name

    2. Compartment: Choose the correct compartment for LightBeam

    3. Kubernetes API endpoint: as private

    4. Node Type: as managed

    5. Kubernetes Worker Nodes: as private

    6. Select the node shape for each node (4 OCPUs & 32 GB each)

    7. Increase the boot volume for each node to 100 GB

    8. Keep other options as default and hit create.

    Note: some additional modifications need to be made to the cluster to make it suitable for LightBeam deployment (hence custom create is preferable).

  3. Add a pod subnet: Since quick create uses the VCN-native CNI, we need to add a separate pod subnet to provide sufficient IPs (refer to the VCN pod subnet setup above).

  4. Edit the node pool: Go to the cluster page and open its node pool. We want to add our pod subnet and increase the pods per node to 93.

  5. Cycle the nodes: The changes are not reflected on the nodes unless we cycle each node one by one; scroll down to the nodes section and click Cycle nodes.

In the popup menu, click Replace nodes to start replacing the nodes with the new configuration.

Jumpbox Creation:

  1. Create a jumpbox: Go to the https://cloud.oracle.com/compute/instances page and choose the compartment where the LightBeam cluster is located. Click the Create instance button.

  2. Fill in the instance details:

    • Name: Give an appropriate name (Example: lightbeam-jumpbox).

    • Image section: Click the Change image button, change the image to Ubuntu and choose Canonical Ubuntu 24.04 as the flavor.

    • Shape section: Click Change shape; for Instance type, keep the default Virtual machine selected.

    • Shape series: Choose AMD.

    • Shape name: VM.Standard.E4.Flex (with 1 OCPU and 2 GB memory)

  3. Instance Security section: We can keep the default options.

  4. Instance Networking section: Fill in the details:

    • VNIC name: Give an appropriate name (Example: lightbeam-jumpbox-vnic)

    • Primary network: With the Select existing virtual cloud network option selected, choose the VCN and compartment where LightBeam is deployed.

    • Subnet: With the Select existing subnet option selected, choose the jumpbox subnet.

    • We can keep the other options at their defaults; in the Add SSH keys section, download the private key.

  5. Other sections can be kept at their defaults; keep clicking Next and the jumpbox will be created. We can then connect to it over SSH through its public IP.
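
The jumpbox can also be launched from the CLI; a minimal sketch (the image, subnet and availability-domain values are placeholders):

# Launch the jumpbox with a public IP in the jumpbox subnet
oci compute instance launch \
  --compartment-id <lightbeam-compartment-ocid> \
  --availability-domain <availability-domain-name> \
  --display-name lightbeam-jumpbox \
  --shape VM.Standard.E4.Flex \
  --shape-config '{"ocpus": 1, "memoryInGBs": 2}' \
  --image-id <canonical-ubuntu-24.04-image-ocid> \
  --subnet-id <jumpbox-subnet-ocid> \
  --assign-public-ip true \
  --ssh-authorized-keys-file ~/.ssh/id_rsa.pub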

Setup Jumpbox for cluster access

  1. Connect to the jumpbox: Connect to the jumpbox with the downloaded private key and its public IP:

ssh -i ~/.ssh/private.key ubuntu@<jumpbox-public-ip>

By default root SSH login is disabled, but the ubuntu user has sudo access; we will work as the root user:

sudo su
  2. We need to install the packages required for the LightBeam installation:

# Base packages
sudo apt-get update
sudo apt-get install -y unzip build-essential

# kubectl (latest stable release)
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client

# Helm
wget https://get.helm.sh/helm-v3.3.4-linux-amd64.tar.gz
tar -xvf helm-v3.3.4-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/

# pyenv and Python 3.7.10 (pyenv compiles Python from source and may need extra dev libraries)
curl https://pyenv.run | bash
export PATH="$HOME/.pyenv/bin:$PATH"
eval "$(pyenv init -)"
pyenv install 3.7.10
pyenv global 3.7.10
  3. Next we need to install the OCI CLI: Follow the steps here to install the OCI CLI: https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm#InstallingCLI__linux_and_unix

  4. Generate an API key for OCI CLI access:

# An API key is an RSA key pair in PEM format used for signing API requests. Generate it below
openssl genrsa -out oci_api_key.pem 2048
# Generate the public key from the private key
openssl rsa -pubout -in oci_api_key.pem -out oci_api_key_public.pem
# copy the public key
cat oci_api_key_public.pem
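
If needed, the key fingerprint that the console will display for this key can be computed locally and compared:

# Fingerprint of the API signing key (should match the value shown in the console)
openssl rsa -pubout -outform DER -in oci_api_key.pem | openssl md5 -c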
  5. Go to user settings in the OCI console: Click on user settings.

  6. Add the API key: Go to the Tokens and keys tab and add the copied public key there.

Once the API key is added, we will see a configuration file preview like the one below.

Make note of the values or copy the whole configuration, as we will need it during the OCI CLI setup.

  7. Run oci setup config: From the jumpbox, as the root user, execute:

oci setup config

Enter the values for user, fingerprint, tenancy, region and the key file path. Hit Enter for the other prompts to use the default values. This sets up OCI CLI access for us.
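
For reference, the resulting ~/.oci/config should look roughly like this (all values below are illustrative placeholders):

# cat ~/.oci/config
[DEFAULT]
user=ocid1.user.oc1..<unique-id>
fingerprint=<key-fingerprint>
tenancy=ocid1.tenancy.oc1..<unique-id>
region=<region, e.g. us-ashburn-1>
key_file=/root/.oci/oci_api_key.pem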

Connecting to the OKE cluster

As a final step to complete the cluster setup, we need to connect the jumpbox to our previously created OKE cluster.

  1. Go to the created OKE cluster page: Go to https://cloud.oracle.com/containers/clusters and select our LightBeam cluster.

  2. Access Cluster: Click the Access Cluster button, which opens a popup; keep Local Access selected and follow the steps provided. Choose the VCN-native private endpoint access option and paste the generated command into the jumpbox shell as root.

  3. Verify the setup: Run the command below; if the setup was successful, the OKE cluster nodes should be displayed.

kubectl get nodes

This should return a list of the nodes in your OKE cluster.

Set the LightBeam namespace as current context using the command

kubectl config set-context --current --namespace lightbeam

LightBeam Cluster Deployment

Once kubectl works against the provisioned OKE cluster, install the LightBeam chart. The --spectra flag selects the Spectra deployment; use the --privacy_ops flag for the PrivacyOps deployment. First export the registry credentials:

export DOCKER_USERNAME="lbcustomers" DOCKER_REGISTRY_PASSWORD="<DOCKER_REGISTRY_TOKEN>" KBLD_REGISTRY_HOSTNAME="docker.io" KBLD_REGISTRY_USERNAME="lbcustomers" KBLD_REGISTRY_PASSWORD="<DOCKER_REGISTRY_TOKEN>"
Publicly Accessible LightBeam Endpoint Deployment
  1. Create reserved public IPs for the public LightBeam endpoint (if required): Go to https://cloud.oracle.com/networking/ip-management/public-ips and select the LightBeam compartment. Click on Reserve public IP address.

  2. Fill in the details: Give the IP address a suitable name and click Reserve public IP address.

  3. Update the LightBeam chart: In values.yaml, update the field oke.loadBalancerIP to the reserved public IP.

Run the install command from the jumpbox:

./installer/lb-install.sh --install --spectra --values charts/lightbeam/values.yaml --final_values  charts/lightbeam/values.yaml --oke
Privately Accessible LightBeam Endpoint Deployment
  1. Update the LightBeam chart: In values.yaml, update the field oke.privateLoadBalancerSubnet to the OCID of the load balancer subnet.

Run the install command from the jumpbox:

./installer/lb-install.sh --install --spectra --values charts/lightbeam/values.yaml --final_values  charts/lightbeam/values.yaml --oke --internal_oke_load_balancer
  1. Provision a FQDN for the load balancer (if required): This enables us to access LightBeam using a domain name instead of the load balancer IP.

    1. Create a private DNS zone: Go to https://cloud.oracle.com/dns/private-zones and select the correct compartment and click on Create Zone

    2. Fill in the details: Add the details for below fields

      1. Zone Name: Give an appropriate zone name ( Example: company.com)

      2. DNS private view: Select existing DNS private view and select the VCN where Lightbeam is situated.

      3. Finally hit the create button.

    3. Go to the created private DNS zone and click on the Records tab.

    4. Click Manage records and then click Add record on the page that opens.

    5. Fill the required fields:

      1. Name: Give the record a name (Example: lightbeam, which results in the FQDN lightbeam.company.com).

      2. Type: Select type as A - IPv4 record address.

      3. Address: The IP address of the load balancer obtained from the ingress.

      4. Other options can be left at their defaults; save the changes.

    6. Click on review changes and then publish the records.

  2. Update the FQDN in the cluster: We need to update the load balancer to use the newly created FQDN, for which we need to run:

kubectl edit ingress kong-proxy

# In the editor, add a host entry under the rules section of the ingress spec

rules:
  - host: lightbeam.company.com   # replace with your FQDN
    http:
      paths:
        - path: /auth
          pathType: Prefix
          backend:
            service:
              name: lb-keycloak
              port:
                number: 80
                
# Next update the configmap to use the new FQDN
kubectl patch cm/lightbeam-common-configmap -n lightbeam --type merge -p '{"data": {"AUTH_BASE_URL": "http://<FQDN>"}}'
kubectl delete pods -l app=lightbeam-api-gateway -n lightbeam
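
To verify the change, confirm that the FQDN resolves inside the VCN and that the ingress answers on it (lightbeam.company.com is the example FQDN used above):

# DNS resolution from the jumpbox, ingress status, and a header-only request
getent hosts lightbeam.company.com
kubectl get ingress kong-proxy -n lightbeam
curl -kI https://lightbeam.company.com/auth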

Wait for ~20 minutes for the installation to complete. After installation, copy the default username and password for the LightBeam instance.

Note:

  • To check the storage class use the command: kubectl get sc

  • Verify that the OKE cluster worker nodes have internet access.

  • Copy the Address from the ingress and run the following commands:

kubectl patch cm/lightbeam-common-configmap -n lightbeam --type merge -p '{"data": {"AUTH_BASE_URL": "http://<INGRESS_ADDRESS>"}}'
kubectl delete pods -l app=lightbeam-api-gateway -n lightbeam
  • Wait for API-gateway pod to be in an up and running state, monitor the gateway state using the command:

kubectl get pods -n lightbeam -o wide | grep api-gateway
