Oracle Cloud (OKE)
1.0 Setup Overview
There are two ways to create a cluster when deploying the LightBeam application:
Quick create – the VCN is set up automatically during cluster creation.
Custom create – the VCN must be set up from scratch as a prerequisite before using custom create.
We need a 3-node OKE cluster and one micro Linux VM as a jump box to access the OKE cluster and perform installation/upgrade of the LightBeam application.

2.0 OCI Deployment Requirements for LightBeam
Compartment:
Dedicated Compartment for LightBeam installation.
Networking (Virtual Cloud Network & Subnets):
Virtual Cloud Network (VCN): /16 CIDR Block (Recommended: 10.0.0.0/16)
Subnets (Regional):
Private – /29 CIDR Block for Kubernetes API Endpoint. Example: 10.0.0.0/29
Private – /24 CIDR Block for Worker Nodes. Example: 10.0.1.0/24
Private – /19 CIDR Block for Pods. Example: 10.0.32.0/19
Public/Private – /24 CIDR Block for Load Balancer. Example: 10.0.2.0/24
Public/Private – /29 CIDR Block for JumpVM. Example: 10.0.0.8/29
Compute (Kubernetes Cluster):
Number of Worker Nodes: 3
Node Type: Managed OKE
Shape: VM.Standard.E3.Flex (4 OCPUs, 32 GB Memory)
Pods per Node: 93
Boot Volume: 100 GB
OS: Oracle Linux 8.10
Gateways:
Internet Gateway – Outbound communication.
NAT Gateway – With Public IP Address (for outbound access)
Service Gateway – Accessing All Region Services in the Oracle Services Network
JumpVM:
OS: Ubuntu 24.04 (Canonical)
Shape: VM.Standard.E4.Flex (default)
Packages: OCI CLI
Public IP: Required (if used as bastion host)
DNS & Load Balancer:
Private DNS Zone – Attach to Virtual Cloud Network (VCN)
Create DNS A Record for Load Balancer
Reserved Public IPs (optional, if LightBeam UI should be public):
1 for LightBeam Spectra Load Balancer
1 for LightBeam PrivacyOps Load Balancer
1 for JumpBox VM
Security list recommendations for each subnet are detailed in the Security List Creation section below.
Compartment Creation:
Create a compartment: Go to the Compartments page on the Oracle Cloud console at https://cloud.oracle.com/identity/compartments and click the Create button.

Fill in the compartment details: Give the compartment a name, choose the appropriate parent compartment, and hit Create.
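The compartment can also be created with the OCI CLI; a minimal sketch, with the parent compartment OCID as a placeholder:
  # Create the dedicated LightBeam compartment under the chosen parent.
  oci iam compartment create --compartment-id <parent-compartment-ocid> \
    --name lightbeam --description "Dedicated compartment for LightBeam"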

Virtual Cloud Network(VCN) creation:
Create a Virtual Cloud Network (VCN): Go to the VCNs page on the Oracle Cloud console at https://cloud.oracle.com/networking/vcns. In the compartment filter, select the compartment created for LightBeam, then hit Create VCN.

Fill in the Virtual Cloud Network (VCN) fields: Fill in the required fields for the VCN and hit Create:
Name: Choose an appropriate name for the VCN.
Create in Compartment: This should show the correct compartment.
IPv4 CIDR block: Add 10.0.0.0/16 to the CIDR block list.
Use DNS hostnames in this VCN: Checkbox must be enabled.
DNS label (optional): A DNS label can be entered; otherwise the VCN name is used.
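Equivalently, the VCN can be created via the OCI CLI; a sketch with placeholder values:
  # Create the VCN with the recommended /16 CIDR block.
  oci network vcn create --compartment-id <compartment-ocid> \
    --cidr-block 10.0.0.0/16 --display-name lightbeam-vcn --dns-label lightbeam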

Internet Gateway Creation:
Create an internet gateway: On the page of your created VCN, go to the Gateways tab and click the Create internet gateway button.
Fill out the details below, leaving other fields default:
Name: internet-gateway-0
NAT Gateway Creation:
Create a NAT gateway: On the page of your created VCN, go to the Gateways tab and click the Create NAT gateway button.
Fill out the details below, leaving other fields default:
Name: nat-gateway-0
Service Gateway Creation:
Create a service gateway: On the page of your created VCN, go to the Gateways tab and click the Create service gateway button.
Fill out the details below, leaving other fields default:
Name: service-gateway-0
Services: All <region> Services in Oracle Services Network
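The three gateways can also be created with the OCI CLI; a minimal sketch with placeholder OCIDs (look up the Oracle Services Network OCID with oci network service list):
  # Internet gateway for the public subnets.
  oci network internet-gateway create --compartment-id <compartment-ocid> \
    --vcn-id <vcn-ocid> --is-enabled true --display-name internet-gateway-0
  # NAT gateway for outbound-only access from private subnets.
  oci network nat-gateway create --compartment-id <compartment-ocid> \
    --vcn-id <vcn-ocid> --display-name nat-gateway-0
  # Service gateway for the Oracle Services Network.
  oci network service-gateway create --compartment-id <compartment-ocid> \
    --vcn-id <vcn-ocid> --display-name service-gateway-0 \
    --services '[{"serviceId":"<all-services-ocid>"}]'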
Security List Creation:
Create the security lists: On the page of your created VCN, go to the Security tab and click the Create security list button.

Fill in the security list details: We want five security lists with the names and rules below. Each rule is written as State | Source (ingress) or Destination (egress) | Protocol/Port | Description.
seclist-KubernetesAPIendpoint
Ingress Rules:
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | TCP/6443 | Kubernetes worker to Kubernetes API endpoint communication.
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | TCP/12250 | Kubernetes worker to Kubernetes API endpoint communication.
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | ICMP 3,4 | Path Discovery.
Stateful | 10.0.32.0/19 (Pods CIDR) | TCP/6443 | Pod to Kubernetes API endpoint communication (when using VCN-native pod networking).
Stateful | 10.0.32.0/19 (Pods CIDR) | TCP/12250 | Pod to Kubernetes API endpoint communication (when using VCN-native pod networking).
Stateful | Bastion subnet CIDR, or specific CIDR | TCP/6443 | (optional) External access to Kubernetes API endpoint. Use the bastion subnet CIDR when access goes through OCI Bastion, or a specific CIDR when access comes from another specific CIDR.
Egress Rules:
Stateful | All <region> Services in Oracle Services Network | TCP/ALL | Allow Kubernetes API endpoint to communicate with OKE.
Stateful | All <region> Services in Oracle Services Network | ICMP 3,4 | Path Discovery.
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | TCP/10250 | Allow Kubernetes API endpoint to communicate with worker nodes.
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | ICMP 3,4 | Path Discovery.
Stateful | 10.0.32.0/19 (Pods CIDR) | ALL/ALL | Allow Kubernetes API endpoint to communicate with pods.
seclist-workernodes
Ingress Rules:
Stateful | 10.0.0.0/29 (Kubernetes API Endpoint CIDR) | TCP/10250 | Allow Kubernetes API endpoint to communicate with worker nodes.
Stateful | 0.0.0.0/0 | ICMP 3,4 | Path Discovery.
Stateful | Bastion subnet CIDR, or specific CIDR | TCP/22 | (optional) Allow inbound SSH traffic to managed nodes.
Stateful | 10.0.2.0/24 (Load Balancer CIDR) | ALL/30000-32767 | Load balancer to worker nodes node ports.
Stateful | 10.0.2.0/24 (Load Balancer CIDR) | ALL/10256 | Allow load balancer to communicate with kube-proxy on worker nodes.
Egress Rules:
Stateful | 10.0.32.0/19 (Pods CIDR) | ALL/ALL | Allow worker nodes to access pods.
Stateful | 0.0.0.0/0 | ICMP 3,4 | Path Discovery.
Stateful | All <region> Services in Oracle Services Network | TCP/ALL | Allow worker nodes to communicate with OKE.
Stateful | 10.0.0.0/29 (Kubernetes API Endpoint CIDR) | TCP/6443 | Kubernetes worker to Kubernetes API endpoint communication.
Stateful | 10.0.0.0/29 (Kubernetes API Endpoint CIDR) | TCP/12250 | Kubernetes worker to Kubernetes API endpoint communication.
Stateful | 0.0.0.0/0 | TCP/443 | Allow nodes to pull images from the internet.
seclist-pods
Ingress Rules:
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | ALL/ALL | Allow worker nodes to access pods.
Stateful | 10.0.0.0/29 (Kubernetes API Endpoint CIDR) | ALL/ALL | Allow Kubernetes API endpoint to communicate with pods.
Stateful | 10.0.32.0/19 (Pods CIDR) | ALL/ALL | Allow pods to communicate with other pods.
Egress Rules:
Stateful | 10.0.32.0/19 (Pods CIDR) | ALL/ALL | Allow pods to communicate with other pods.
Stateful | All <region> Services in Oracle Services Network | ICMP 3,4 | Path Discovery.
Stateful | All <region> Services in Oracle Services Network | TCP/ALL | Allow pods to communicate with OCI services.
Stateful | 0.0.0.0/0 | TCP/443 | (optional) Allow pods to communicate with the internet.
Stateful | 10.0.0.0/29 (Kubernetes API Endpoint CIDR) | TCP/6443 | Pod to Kubernetes API endpoint communication (when using VCN-native pod networking).
Stateful | 10.0.0.0/29 (Kubernetes API Endpoint CIDR) | TCP/12250 | Pod to Kubernetes API endpoint communication (when using VCN-native pod networking).
seclist-loadbalancers
Ingress Rules:
Stateful | Application specific (internet or specific CIDR) | Application specific (for example, TCP/UDP 443, 8080) | (optional) Load balancer listener protocol and port. Customize as required.
Stateful | 10.0.0.8/29 (Jumpbox CIDR) | Application specific (for example, TCP/UDP 443, 8080) | (optional) For UI access through the jumpbox.
Egress Rules:
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | ALL/30000-32767 | Load balancer to worker nodes node ports.
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | ALL/10256 | Allow load balancer to communicate with kube-proxy on worker nodes.
seclist-jumpbox
Ingress Rules:
Stateful | 34.198.104.197/32 (LightBeam jumpbox IP) | TCP/22 | Allow LightBeam jumpbox access.
Stateful | 34.198.104.197/32 (LightBeam jumpbox IP) | ICMP 3,4 | Path Discovery (Destination Unreachable: Fragmentation Needed and Don't Fragment was Set).
Stateful | 10.0.0.0/16 | ICMP 3 | ICMP traffic for: 3 Destination Unreachable.
Egress Rules:
Stateful | 10.0.0.0/29 (Kubernetes API Endpoint CIDR) | TCP/6443 | Allow jumpbox to access the Kubernetes API endpoint.
Stateful | 10.0.1.0/24 (Worker Nodes CIDR) | TCP/22 | (optional) Allow SSH traffic to worker nodes.
Stateful | 0.0.0.0/0 | ALL/ALL | Allow all outbound traffic from the jumpbox.
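For reference, security lists can also be created with the OCI CLI by passing the rules as JSON; a sketch for one list (OCIDs and file names are placeholders, and the JSON files must encode the rules above):
  # Example ingress rule JSON, e.g. TCP/10250 from the API endpoint subnet:
  # [{"protocol":"6","source":"10.0.0.0/29","isStateless":false,
  #   "tcpOptions":{"destinationPortRange":{"min":10250,"max":10250}}}]
  oci network security-list create --compartment-id <compartment-ocid> \
    --vcn-id <vcn-ocid> --display-name seclist-workernodes \
    --ingress-security-rules file://workernodes-ingress.json \
    --egress-security-rules file://workernodes-egress.json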
Route table creation:
Create Route Tables: On the page of your created VCN, go to the Routing tab and click Create route table.

Fill in the route table details: We need five route tables for our requirement, with the names and details below.
routetable-KubernetesAPIendpoint
Two route rules defined as follows:
Rule for traffic to internet:
Destination CIDR block: 0.0.0.0/0
Target Type: NAT Gateway
Target: nat-gateway-0
Rule for traffic to OCI services:
Destination: All <region> Services in Oracle Services Network
Target Type: Service Gateway
Target: service-gateway-0
routetable-workernodes
Two route rules defined as follows:
Rule for traffic to OCI services:
Destination: All <region> Services in Oracle Services Network
Target Type: Service Gateway
Target: service-gateway-0
Rule for traffic to internet:
Destination CIDR block: 0.0.0.0/0
Target Type: NAT Gateway
Target: nat-gateway-0
routetable-pods
Two route rules defined as follows:
Rule for traffic to internet:
Destination CIDR block: 0.0.0.0/0
Target Type: NAT Gateway
Target: nat-gateway-0
Rule for traffic to OCI services:
Destination: All <region> Services in Oracle Services Network
Target Type: Service Gateway
Target: service-gateway-0
routetable-serviceloadbalancers
One route rule defined as follows:
Destination CIDR block: 0.0.0.0/0
Target Type: Internet Gateway
Target Internet Gateway: internet-gateway-0
routetable-jumpbox
One route rule defined as follows:
Destination CIDR block: 0.0.0.0/0
Target Type: Internet Gateway
Target Internet Gateway: internet-gateway-0
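A CLI sketch for one of the tables (placeholder OCIDs; the service destination string is region specific):
  # routetable-workernodes: default route via the NAT gateway, OCI services via the service gateway.
  oci network route-table create --compartment-id <compartment-ocid> --vcn-id <vcn-ocid> \
    --display-name routetable-workernodes \
    --route-rules '[{"destination":"0.0.0.0/0","destinationType":"CIDR_BLOCK","networkEntityId":"<nat-gateway-ocid>"},{"destination":"<all-region-services-cidr-label>","destinationType":"SERVICE_CIDR_BLOCK","networkEntityId":"<service-gateway-ocid>"}]'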
Subnet Creation:
Create subnets: Go to the VCNs page and click on your created VCN. Then go to the Subnets tab and hit Create.

Fill in the subnet details: We want five subnets with the details in the table below (Name | CIDR | Access | Security List | Route Table); other options can be left default.
KubernetesAPIendpoint | 10.0.0.0/29 | Private | seclist-KubernetesAPIendpoint | routetable-KubernetesAPIendpoint
workernodes | 10.0.1.0/24 | Private | seclist-workernodes | routetable-workernodes
pods | 10.0.32.0/19 | Private | seclist-pods | routetable-pods
loadbalancers | 10.0.2.0/24 | Private/Public | seclist-loadbalancers | routetable-serviceloadbalancers
jumpbox | 10.0.0.8/29 | Private/Public | seclist-jumpbox | routetable-jumpbox
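A CLI sketch for one subnet (placeholders throughout; --prohibit-public-ip-on-vnic true makes the subnet private):
  oci network subnet create --compartment-id <compartment-ocid> --vcn-id <vcn-ocid> \
    --cidr-block 10.0.1.0/24 --display-name workernodes \
    --prohibit-public-ip-on-vnic true \
    --security-list-ids '["<seclist-workernodes-ocid>"]' \
    --route-table-id <routetable-workernodes-ocid>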
OKE Cluster Creation (Custom Create):
Now that we have our VCN setup ready we can start creating our cluster:
Create a cluster with custom create: Go to https://cloud.oracle.com/containers/clusters, select the correct compartment, and click the Create cluster button. A popup should appear with quick create and custom create options; select custom create and hit Submit.

Fill in basic cluster details: In the resulting form, fill in the cluster name; other options can be left default.

Fill in the network setup details: In the resulting form fill the following fields:
Network Type: VCN native (Flannel can be chosen as well, in which case the pod subnet resources do not need to be created).
VCN in the cluster compartment: Choose the VCN we set up earlier.
Kubernetes API endpoint subnet: Choose KubernetesAPIendpoint from the dropdown.
Automatically assign public IPv4 address: Should be unchecked.
Specify load balancer subnets: Should be checked.
Kubernetes service LB subnets: Choose loadbalancers from the dropdown.
Hit next to move to the next section.

Fill in the node pool details: In the resulting form fill in the following fields:
Node type: Choose Managed.
Node placement configuration: Choose the availability domain from the dropdown, and choose workernodes as the worker node subnet.
Node Shape: Can be kept default (VM.Standard.E3.Flex)
Select Number of OCPUs: 4
Amount of memory(GB): 32
Node Count: 3
Specify Custom Boot Volume checkbox: Must be checked, with the boot volume specified as 100 GB for each node.
In the Pod communication section (for VCN-native pod networking): Choose pods as the pod subnet from the dropdown.
In the Pod communication advanced options: Set the number of pods per node to 93.
Hit Next to move to the review section, then hit Next again to start cluster creation.
Once cluster creation starts, we can go to the Clusters page and monitor its progress.
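For reference, the cluster itself can also be created from the CLI; a hedged sketch (flag set trimmed to the essentials, placeholders throughout):
  oci ce cluster create --compartment-id <compartment-ocid> \
    --name lightbeam-oke --kubernetes-version <k8s-version> \
    --vcn-id <vcn-ocid> \
    --endpoint-subnet-id <KubernetesAPIendpoint-subnet-ocid> \
    --service-lb-subnet-ids '["<loadbalancers-subnet-ocid>"]'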


OKE Cluster Creation (Quick Create):
This abstracts away most of the effort required to set up the VCN for OKE cluster creation.
Create a cluster with quick create: Go to https://cloud.oracle.com/containers/clusters, select the correct compartment, and click the Create cluster button. A popup should appear with quick create and custom create options; select quick create and hit Submit.

Fill in the cluster details: Fill in the following fields:
Name: Enter the cluster name
Compartment: Choose the correct compartment for LightBeam
Kubernetes API endpoint: Private
Node type: Managed
Kubernetes worker nodes: Private
Select the node shape for each node (4 OCPUs & 32 GB memory each)
Increase the boot volume for each node to 100 GB
Keep other options as default and hit create.
Note: some additional modifications need to be made to the cluster to make it suitable for LightBeam deployment (hence custom create is more favourable).
Add pod subnet: Since quick create uses the VCN-native CNI, we need to add a separate pod subnet for sufficient IPs (refer to the VCN pod subnet setup).
Edit the node pool: Go to the cluster page and open the node pool page from there. We want to add our pod subnet and increase the pods per node to 93.

Cycle the nodes: The changes are not reflected on the nodes unless we cycle each node one by one; scroll down to the Nodes section and click Cycle nodes.

In the popup menu, click Replace nodes to start replacing nodes with the new configuration.
Jumpbox Creation:
Create a jumpbox: Go to https://cloud.oracle.com/compute/instances and choose the compartment where the LightBeam cluster is located. Click the Create instance button.
Fill in the instance details:
Name: Give an appropriate name (Example: lightbeam-jumpbox).
Image section: Click the Change image button, change the image to Ubuntu, and choose Canonical Ubuntu 24.04 as the flavor.
Shape section: Click Change shape; for Instance type, keep the default Virtual machine selected.
Shape series: Choose AMD.
Shape name: VM.Standard.E4.Flex (with 1 OCPU and 2 GB memory)
Instance Security section: Keep the default options.
Instance Networking section: Fill in the details:
VNIC name: Give an appropriate name (Example: lightbeam-jumpbox-vnic)
Primary network: With the Select existing virtual cloud network option selected, choose the VCN and compartment where LightBeam is deployed.
Subnet: With the Select existing subnet option selected, choose the jumpbox subnet.
Keep other options default, and in the Add SSH keys section download the private key.
Other sections can be kept default; keep clicking Next and the jumpbox will be created. We can then connect to it over SSH using its public IP.

Setup Jumpbox for cluster access
Connect to the jumpbox: Connect to the jumpbox with the downloaded private key and its public IP.
By default root SSH is disabled, but the ubuntu user has sudo access; we will need to create a user with non-root privileges, as in the sketch below.
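A minimal sketch, assuming the key was saved as lightbeam-jumpbox.key and lbadmin is the (hypothetical) user name you choose:
  # Connect as the default ubuntu user with the downloaded private key.
  chmod 600 lightbeam-jumpbox.key
  ssh -i lightbeam-jumpbox.key ubuntu@<jumpbox-public-ip>
  # On the jumpbox: create a non-root user (the name is an example).
  sudo adduser lbadmin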
Install packages: We need to install the packages required for LightBeam installation; a sketch follows.
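The exact package list comes from LightBeam's installation prerequisites; kubectl and Helm, both used later in this guide, can be installed on Ubuntu via snap, for example:
  # Assumed package set; confirm against LightBeam's prerequisites.
  sudo snap install kubectl --classic
  sudo snap install helm --classic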
Install the OCI CLI: Follow the steps at https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm#InstallingCLI__linux_and_unix to install the OCI CLI.
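Oracle's documented quickstart installer for Linux can be run directly:
  bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"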
Generate an API key for OCI CLI access: First generate an API signing key pair on the jumpbox (sketch below), then upload the public key in the console.
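A typical OpenSSL sequence for generating the key pair on the jumpbox (the paths shown are the OCI CLI defaults):
  mkdir -p ~/.oci
  openssl genrsa -out ~/.oci/oci_api_key.pem 2048
  chmod 600 ~/.oci/oci_api_key.pem
  openssl rsa -pubout -in ~/.oci/oci_api_key.pem -out ~/.oci/oci_api_key_public.pem
  cat ~/.oci/oci_api_key_public.pem   # copy this public key for the console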
Go to user settings in the OCI console: Click on user settings.

Add the API key: Go to the Tokens and keys tab and add the copied public key there:

Once the API key is added, we will see a configuration preview like the one below:

Make note of the values, or copy the whole configuration, as we will need it during OCI setup.
Run oci setup: From the jumpbox, as the root user, execute:
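  # Interactive setup; writes ~/.oci/config.
  oci setup config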
Enter the values for user, fingerprint, tenancy, region, and the key file path. Hit Enter for other inputs to use default values. This will set up OCI CLI access for us.
Connecting to OKE cluster
As the final step to complete the cluster setup, we need to connect the jumpbox to our previously created OKE cluster.
Go to the created OKE cluster page: Go to https://cloud.oracle.com/containers/clusters and select our LightBeam cluster.

Access Cluster: Click the Access Cluster button, which opens a popup; Local access should be selected. Follow the steps provided: choose the VCN-native private endpoint access and paste the command into the jumpbox shell with root access.

Verify the setup: Run the command below; if the setup was successful, the OKE cluster nodes should be displayed.
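  kubectl get nodes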
This should return a list of the nodes in your OKE cluster.
Set the LightBeam namespace as the current context using the command below.
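Assuming the LightBeam components live in a namespace named lightbeam (adjust if yours differs):
  kubectl config set-context --current --namespace=lightbeam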
LightBeam Cluster Deployment
Once kubectl starts working with the above provisioned OKE cluster, install the LightBeam chart using the install command. The --spectra flag specifies the Spectra deployment; use the --privacy_ops flag to specify the PrivacyOps deployment.
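The exact installer invocation ships with your LightBeam release; the lines below are a purely hypothetical sketch of how the flags are used (the installer name is a placeholder, not the real binary):
  # Hypothetical placeholder; substitute the actual install command from your LightBeam package.
  ./lightbeam-installer --spectra        # Spectra deployment
  ./lightbeam-installer --privacy_ops    # PrivacyOps deployment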
Publicly Accessible LightBeam Endpoint Deployment
Create reserved public IPs for the public LightBeam endpoint (if required): Go to https://cloud.oracle.com/networking/ip-management/public-ips and select the LightBeam compartment. Click Reserve public IP address.
Fill in the details: Give the IP address a suitable name and click Reserve public IP address.

Update the LightBeam chart: In the values.yaml, update the field oke.loadBalancerIP to the created public IP.
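Assuming the field path given above, the values.yaml fragment would look like this (the IP is an example):
  oke:
    loadBalancerIP: 203.0.113.10   # your reserved public IP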
Run the install command from the jumpbox (see LightBeam Cluster Deployment above).
Privately Accessible LightBeam Endpoint Deployment
Update the LightBeam chart: In the values.yaml, update the field oke.privateLoadBalancerSubnet to the OCID of the load balancer subnet.
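Again assuming the stated field path (the OCID is a placeholder):
  oke:
    privateLoadBalancerSubnet: ocid1.subnet.oc1..<unique-id>   # OCID of the loadbalancers subnet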
Run the install command from the jumpbox (see LightBeam Cluster Deployment above).
Provision an FQDN for the load balancer (if required): This enables us to access LightBeam using a domain name instead of the load balancer IP.
Create a private DNS zone: Go to https://cloud.oracle.com/dns/private-zones, select the correct compartment, and click Create Zone.
Fill in the details: Add the details for the fields below:
Zone Name: Give an appropriate zone name (Example: company.com)
DNS private view: Select an existing DNS private view and select the VCN where LightBeam is situated.
Finally hit the create button.
Go to the created private DNS zone and click on the Records tab.

Click Manage records, then click Add record on the page that opens.
Fill the required fields:
Name: Give the record a name (Example: lightbeam, which results in the FQDN lightbeam.company.com).
Type: Select type as A - IPv4 record address.
Address: The IP address of the load balancer obtained from the ingress.
Other options can be left default; save the changes.
Click on review changes and then publish the records.
Update the FQDN in the cluster: We need to update the load balancer to use the newly created FQDN, for which we need to run the command below.
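The exact mechanism depends on how the LightBeam chart exposes its ingress; as a purely hypothetical example, if the UI is served by an Ingress named lightbeam-ingress in the lightbeam namespace, the host could be patched like this (all resource names are placeholders):
  # Hypothetical resource names; adjust to your actual ingress and namespace.
  kubectl -n lightbeam patch ingress lightbeam-ingress --type=json \
    -p '[{"op":"replace","path":"/spec/rules/0/host","value":"lightbeam.company.com"}]'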
Wait ~20 minutes for the installation to complete. After installation, copy the default username and password for the LightBeam instance.
Note:
To check the storage class, use the command:
kubectl get sc
Verify that the OKE cluster worker nodes have internet access.
Copy the Address from the ingress (it can be read as shown below) and run the following commands:
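The Address column can be read from the ingress (namespace assumed to be lightbeam):
  kubectl get ingress -n lightbeam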
Wait for the API-gateway pod to be in an up and running state; monitor the gateway state using the command:
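Assuming the LightBeam pods run in the lightbeam namespace:
  kubectl get pods -n lightbeam | grep api-gateway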