Oracle Cloud (OKE)
1.0 Setup Overview
There are two ways to create a cluster when deploying the LightBeam application:
Quick create - the VCN is set up automatically during cluster creation
Custom create - the VCN needs to be set up from scratch as a prerequisite before using custom create
We need a 3-node OKE cluster and one micro Linux VM as a jump box to access the OKE cluster and perform installation/upgrade of the LightBeam application.
2.0 OCI Deployment Requirements for LightBeam
Compartment:
Dedicated Compartment for LightBeam installation.
Networking (VCN & Subnets):
VCN: /16 CIDR Block (Recommended: 10.0.0.0/16)
Subnets (Regional):
Private – /29 CIDR Block for Kubernetes API Endpoint. Example: 10.0.0.0/29
Private – /24 CIDR Block for Worker Nodes. Example: 10.0.1.0/24
Private – /19 CIDR Block for Pods. Example: 10.0.32.0/19
Public/Private – /24 CIDR Block for Load Balancer. Example: 10.0.2.0/24
Public/Private – /29 CIDR Block for JumpVM. Example: 10.0.0.8/29
Compute (Kubernetes Cluster):
Number of Worker Nodes: 3
Node Type: Managed OKE
Shape: VM.Standard.E3.Flex (4 OCPUs, 32 GB Memory)
Pods per Node: 93
Boot Volume: 100 GB
OS: Oracle Linux 8.10
Gateways:
Internet Gateway – Outbound communication.
NAT Gateway – With Public IP Address (for outbound access)
Service Gateway – Accessing All Region Services in the Oracle Services Network
JumpVM:
OS: Ubuntu 24.04 (Canonical)
Shape: VM.Standard.E4.Flex (default)
Packages: OCI CLI
Public IP: Required (if used as bastion host)
DNS & Load Balancer:
Private DNS Zone – Attach to VCN
Create DNS A Record for Load Balancer
Reserved Public IPs (optional, if LightBeam UI should be public):
1 for LightBeam Spectra Load Balancer
1 for LightBeam PrivacyOps Load Balancer
1 for JumpBox VM
Security list recommendations for each subnet are as mentioned in the doc below -
Compartment Creation:
Create a compartment: Go to the Compartments page on the Oracle Cloud console https://cloud.oracle.com/identity/compartments and click on the Create Compartment button.

Fill in the compartment details: Give the compartment a name, choose the appropriate parent compartment, and hit create.
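Alternatively, as a hedged sketch, the same compartment can be created with the OCI CLI; the name, description, and parent OCID below are illustrative placeholders:
# Sketch: create the compartment via OCI CLI (all <...> values are placeholders)
oci iam compartment create \
  --compartment-id <PARENT_COMPARTMENT_OCID> \
  --name lightbeam \
  --description "Dedicated compartment for LightBeam"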

VCN creation:
Create a VCN: Go to the VCNs page on the Oracle Cloud console https://cloud.oracle.com/networking/vcns. In the compartment filter, select the compartment created for LightBeam, and then hit Create VCN.

Fill in the VCN fields: Fill in the required fields for the VCN and hit create -
Name: Choose an appropriate name for the VCN.
Create in Compartment: This should show the correct compartment.
IPv4 CIDR block: Add 10.0.0.0/16 to the CIDR block list.
Use DNS hostnames in this VCN: Checkbox must be enabled.
DNS label (optional): A DNS label can be entered; otherwise the VCN name is used.
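The equivalent CLI call, as a minimal sketch (the display name and DNS label here are illustrative assumptions):
# Sketch: create the VCN via OCI CLI; names are placeholders
oci network vcn create \
  --compartment-id <COMPARTMENT_OCID> \
  --cidr-block 10.0.0.0/16 \
  --display-name lightbeam-vcn \
  --dns-label lightbeam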

Internet Gateway Creation:
Create an internet gateway: On the page of your created VCN, go to the Gateways tab and click the Create Internet Gateway button.
Fill out the fields below, leaving the others at their defaults.
Name: internet-gateway-0
NAT Gateway Creation:
Create a NAT gateway: On the page of your created VCN, go to the Gateways tab and click the Create NAT Gateway button.
Fill out the fields below, leaving the others at their defaults.
Name: nat-gateway-0
Service Gateway Creation:
Create a Service gateway: On the page of your created VCN, go to the Gateways tab and click the Create Service Gateway button.
Fill out the fields below, leaving the others at their defaults.
Name: service-gateway-0
Services: All <region> Services in Oracle Services Network
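For completeness, a hedged CLI sketch covering all three gateways; the OCIDs are placeholders and the service OCID lookup is an assumption about how you find the "All Services" entry:
# Sketch: create the three gateways via OCI CLI (all <...> values are placeholders)
oci network internet-gateway create \
  --compartment-id <COMPARTMENT_OCID> --vcn-id <VCN_OCID> \
  --is-enabled true --display-name internet-gateway-0
oci network nat-gateway create \
  --compartment-id <COMPARTMENT_OCID> --vcn-id <VCN_OCID> \
  --display-name nat-gateway-0
# Look up the OCID of "All <region> Services in Oracle Services Network" first
oci network service list
oci network service-gateway create \
  --compartment-id <COMPARTMENT_OCID> --vcn-id <VCN_OCID> \
  --services '[{"serviceId": "<ALL_SERVICES_OCID>"}]' \
  --display-name service-gateway-0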
Security List Creation:
Create the security lists: On the page of your created VCN, go to the Security tab and click on the Create Security List button.

Fill in the security list details: We want 5 security lists with the below names and information (see the CLI sketch after this step).
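As a hedged sketch, one security list could be created via CLI like this. The actual ingress/egress rules must come from the security list recommendations doc referenced above; the rules shown here are placeholder assumptions only:
# Sketch: create one security list; the rules below are illustrative placeholders
oci network security-list create \
  --compartment-id <COMPARTMENT_OCID> --vcn-id <VCN_OCID> \
  --display-name seclist-workernodes \
  --ingress-security-rules '[{"protocol": "all", "source": "10.0.0.0/16"}]' \
  --egress-security-rules '[{"protocol": "all", "destination": "0.0.0.0/0"}]'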
Route table creation:
Create Route Tables: On the page of your created VCN, go to the Routing tab and click on Create Route Table.

Fill in the route table details: We will need 5 route tables for our requirement, with the names and details below (one per subnet, as listed in the subnet table).
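A hedged CLI sketch for one route table. Which rule targets which gateway depends on the subnet (private subnets typically route 0.0.0.0/0 to the NAT gateway plus Oracle services traffic to the service gateway; public subnets route to the internet gateway); the OCIDs and the services CIDR label are placeholder assumptions:
# Sketch: route table for a private subnet; rules and targets are assumptions
oci network route-table create \
  --compartment-id <COMPARTMENT_OCID> --vcn-id <VCN_OCID> \
  --display-name routetable-workernodes \
  --route-rules '[{"destination": "0.0.0.0/0", "destinationType": "CIDR_BLOCK", "networkEntityId": "<NAT_GATEWAY_OCID>"}, {"destination": "<ALL_SERVICES_CIDR_LABEL>", "destinationType": "SERVICE_CIDR_BLOCK", "networkEntityId": "<SERVICE_GATEWAY_OCID>"}]'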
Subnet Creation:
Create subnets: Go to the VCNs page and click on your created VCN. Then go to the Subnets tab and hit Create Subnet.

Fill in the subnet details: We want 5 subnets for our requirement, with the details as mentioned in the table below; other options can be left as default. A CLI sketch follows the table.
Name | CIDR Block | Access | Security List | Route Table
KubernetesAPIendpoint | 10.0.0.0/29 | Private | seclist-KubernetesAPIendpoint | routetable-KubernetesAPIendpoint
workernodes | 10.0.1.0/24 | Private | seclist-workernodes | routetable-workernodes
pods | 10.0.32.0/19 | Private | seclist-pods | routetable-pods
loadbalancers | 10.0.2.0/24 | Private/Public | seclist-loadbalancers | routetable-serviceloadbalancers
jumpbox | 10.0.0.8/29 | Private/Public | seclist-jumpbox | routetable-jumpbox
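As a hedged sketch, one of the private subnets could be created via CLI like this (the OCIDs and DNS label are placeholders; set --prohibit-public-ip-on-vnic to false for the public variants):
# Sketch: create the workernodes subnet via OCI CLI (placeholders throughout)
oci network subnet create \
  --compartment-id <COMPARTMENT_OCID> --vcn-id <VCN_OCID> \
  --cidr-block 10.0.1.0/24 --display-name workernodes \
  --prohibit-public-ip-on-vnic true \
  --security-list-ids '["<SECLIST_WORKERNODES_OCID>"]' \
  --route-table-id <ROUTETABLE_WORKERNODES_OCID> \
  --dns-label workernodes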
OKE Cluster Creation (Custom Create):
Now that we have our VCN setup ready we can start creating our cluster:
Create a cluster with custom create: Go to https://cloud.oracle.com/containers/clusters, select the correct compartment, and then click on the Create cluster button. A popup should appear with quick create and custom create options; select custom create and hit submit.
Fill in basic cluster details: In the resulting form, fill in the cluster name; other options can be left default.
Fill in the network setup details: In the resulting form, fill in the following fields:
Network Type: VCN native (Flannel can be chosen as well, in which case the pod subnet resources are not needed)
VCN in the cluster compartment: Choose the VCN we had setup earlier.
Kubernetes API endpoint subnet: Choose KubernetesAPIendpoint from the dropdown.
Automatically assign public IPv4 address: Should be unchecked.
Specify load balancer subnets: Should be checked
Kubernetes service LB subnets: Choose loadbalancers from the dropdown.
Hit next to move to the next section.
Fill in the node pool details: In the resulting form fill in the following fields:
Node type: Choose Managed.
Node placement configuration: Choose the availability domain from the dropdown and choose workernodes as the worker node subnet from the dropdown.
Node Shape: Can be kept default (VM.Standard.E3.Flex)
Select Number of OCPUs: 4
Amount of memory(GB): 32
Node Count: 3
Specify custom boot volume checkbox: Must be checked, and the boot volume specified as 100 GB for each node.
In the Pod communication section (for VCN-native pod networking): Choose pods as the pod subnet from the dropdown.
In the Pod communication advanced options: The number of pods per node must be given as 93.
Hit Next to move to the review section and hit next again to start cluster creation.
Once cluster creation starts, we can go to the clusters page and monitor its progress.
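A small sketch for watching the cluster state from the CLI instead of the console (the compartment OCID is a placeholder):
# Sketch: list clusters and their lifecycle state in the compartment
oci ce cluster list --compartment-id <COMPARTMENT_OCID> \
  --output table --query 'data[*].{Name:"name", State:"lifecycle-state"}'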


OKE Cluster Creation (Quick Create):
This abstracts away most of the effort required to create the VCN for OKE cluster creation.
Create a cluster with quick create: Go to https://cloud.oracle.com/containers/clusters, select the correct compartment, and then click on the Create cluster button. A popup should appear with quick create and custom create options; select quick create and hit submit.

Fill in the cluster details: Fill in the following fields:
Name: Enter the cluster name
Compartment: Choose the correct compartment for LightBeam
Kubernetes API endpoint: Private
Node Type: Managed
Kubernetes Worker Nodes: Private
Select the node shape for each node (4 OCPUs & 32 GB each)
Increase the boot volume for each node to 100 GB
Keep other options as default and hit create.
Note: some additional modifications need to be made on the cluster to make it suitable for LightBeam deployment (hence custom create is more favourable).
Add pod subnet: Since quick create uses the VCN-native CNI, we need to add a separate pod subnet for sufficient IPs (refer to the VCN pod subnet setup).
Edit the node pool: Go to the cluster page and then to the node pool page. We want to add our pod subnet and increase the pods per node to 93; a CLI sketch follows.
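A hedged sketch of the same node pool change via CLI; the flag names here (notably --pod-subnet-ids and --max-pods-per-node) are assumptions to verify against your OCI CLI version:
# Sketch: point the node pool at the pods subnet and raise pods per node (flags assumed)
oci ce node-pool update --node-pool-id <NODE_POOL_OCID> \
  --pod-subnet-ids '["<PODS_SUBNET_OCID>"]' \
  --max-pods-per-node 93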

Cycle the nodes: The changes made are not reflected on the nodes unless we cycle each node one by one. Scroll down to the nodes section and click on Cycle nodes.

In the popup menu, click Replace nodes to start replacing nodes with the new configuration.
Jumpbox Creation:
Create a jumpbox: Go to the https://cloud.oracle.com/compute/instances page and choose the compartment where the LightBeam cluster is located. Click on the Create instance button.
Fill in the instance details:
Name: Give an appropriate name (Example: lightbeam-jumpbox).
Image section: Click on the Change image button, change the image to Ubuntu, and choose Canonical Ubuntu 24.04 as the flavor.
Shape section: Click on Change shape and, for Instance type, keep the default Virtual machine selected.
Shape series: Choose AMD.
Shape name: VM.Standard.E4.Flex (with 1 OCPU and 2 GB Memory)
Instance Security section: We can keep the default options.
Instance Networking section: Fill in the details:
VNIC name: Give an appropriate name (Example: lightbeam-jumpbox-vnic)
Primary network: With the Select existing virtual cloud network option selected, choose the VCN and compartment where LightBeam is deployed.
Subnet: With the Select existing subnet option selected, choose the jumpbox subnet.
We can keep other options default, and in the Add SSH keys section download the private key.
Other sections can be kept default; keep clicking next and the jumpbox will be created. We can then connect to it using SSH through its public IP.
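The same launch as a hedged CLI sketch; the availability domain, image OCID, key path, and other <...> values are placeholder assumptions:
# Sketch: launch the jumpbox via OCI CLI (all <...> values are placeholders)
oci compute instance launch \
  --availability-domain <AD_NAME> \
  --compartment-id <COMPARTMENT_OCID> \
  --display-name lightbeam-jumpbox \
  --shape VM.Standard.E4.Flex \
  --shape-config '{"ocpus": 1, "memoryInGBs": 2}' \
  --image-id <UBUNTU_2404_IMAGE_OCID> \
  --subnet-id <JUMPBOX_SUBNET_OCID> \
  --assign-public-ip true \
  --ssh-authorized-keys-file ~/.ssh/id_rsa.pub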

Setup Jumpbox for cluster access
Connect to the jumpbox: Connect to the jumpbox with the downloaded private key and its public IP:
ssh -i ~/.ssh/private.key ubuntu@<JUMPBOX_PUBLIC_IP>
By default root SSH login is disabled; however, the ubuntu user has sudo access, so we will switch to the root user:
sudo su
We need to install the packages required for the LightBeam installation:
# Refresh package lists and install unzip (needed for the OCI CLI installer)
sudo apt-get update
sudo apt-get install -y unzip
# Install the latest stable kubectl
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client
# Install Helm
wget https://get.helm.sh/helm-v3.3.4-linux-amd64.tar.gz
tar -xvf helm-v3.3.4-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/
# Install pyenv and Python 3.7.10; the build dependencies and PATH exports
# below are additions pyenv needs to build Python and work in this shell
sudo apt-get install -y build-essential libssl-dev zlib1g-dev libbz2-dev \
  libreadline-dev libsqlite3-dev libffi-dev liblzma-dev
curl https://pyenv.run | bash
export PYENV_ROOT="$HOME/.pyenv"
export PATH="$PYENV_ROOT/bin:$PATH"
eval "$(pyenv init -)"
pyenv install 3.7.10
pyenv global 3.7.10
Next we need to install the OCI CLI: Follow the steps here to install the OCI CLI: https://docs.oracle.com/en-us/iaas/Content/API/SDKDocs/cliinstall.htm#InstallingCLI__linux_and_unix
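Per that page, the Linux quick install typically boils down to running Oracle's bundled install script:
# Install the OCI CLI using Oracle's install script; accept the defaults when prompted
bash -c "$(curl -L https://raw.githubusercontent.com/oracle/oci-cli/master/scripts/install/install.sh)"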
Generate an api key for oci cli access:
# An API key is an RSA key pair in PEM format used for signing API requests. Generate it below
openssl genrsa -out oci_api_key.pem 2048
# Generate the public key from the private key
openssl rsa -pubout -in oci_api_key.pem -out oci_api_key_public.pem
# copy the public key
cat oci_api_key_public.pem
Go to user settings in the OCI console: Click on user settings.
Add the API key: Go to the Tokens and keys tab and add the copied public key there:

Once the API key is added we will see a configuration preview like the one below:

Make note of the values or copy the whole configuration, as we will need it during the OCI setup.
Run oci setup: From the jumpbox, as the root user, execute:
oci setup config
Enter the values for user, fingerprint, tenancy, region, and the key file path. Hit enter for other inputs to use the default values. This will set up OCI CLI access for us.
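The resulting ~/.oci/config should end up looking roughly like this sketch (all values are placeholders):
# ~/.oci/config (placeholder values)
[DEFAULT]
user=ocid1.user.oc1..<USER_OCID>
fingerprint=<API_KEY_FINGERPRINT>
tenancy=ocid1.tenancy.oc1..<TENANCY_OCID>
region=<REGION>
key_file=/root/oci_api_key.pem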
Connecting to OKE cluster
As a final step to complete the cluster setup, we need to connect the jumpbox to our previously created OKE cluster.
Go to the created OKE cluster page: Go to https://cloud.oracle.com/containers/clusters and select our lightbeam cluster.

Access Cluster: Click on the Access Cluster button, which should open a popup; Local Access should be selected. Follow the steps provided, choosing the VCN-native private endpoint access, and paste the command into the jumpbox shell with root access.
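The command the console provides typically looks like the sketch below (the cluster OCID and region are placeholders):
# Sketch: generate a kubeconfig pointing at the cluster's private endpoint
oci ce cluster create-kubeconfig \
  --cluster-id <CLUSTER_OCID> \
  --file $HOME/.kube/config \
  --region <REGION> \
  --token-version 2.0.0 \
  --kube-endpoint PRIVATE_ENDPOINT
export KUBECONFIG=$HOME/.kube/config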
Verify the setup: Run the below command, and if the setup was successful, the OKE cluster nodes should be displayed.
kubectl get nodes
This should return a list of the nodes in your OKE cluster.
Set the LightBeam namespace as the current context using the command:
kubectl config set-context --current --namespace lightbeam
LightBeam Cluster Deployment
Once kubectl starts working with the above-provisioned OKE cluster, install the LightBeam chart using the command. The --spectra flag specifies the Spectra deployment; use the --privacy_ops flag to specify the PrivacyOps deployment.
export DOCKER_USERNAME="lbcustomers" DOCKER_REGISTRY_PASSWORD="<DOCKER_REGISTRY_TOKEN>" KBLD_REGISTRY_HOSTNAME="docker.io" KBLD_REGISTRY_USERNAME="lbcustomers" KBLD_REGISTRY_PASSWORD="<DOCKER_REGISTRY_TOKEN>"
Wait ~20 minutes for the installation to complete. After installation, copy the default username and password for the LightBeam instance.
Copy the Address from the ingress and run the following commands:
kubectl patch cm/lightbeam-common-configmap -n lightbeam --type merge -p '{"data": {"AUTH_BASE_URL": "http://<INGRESS_ADDRESS>"}}'
kubectl delete pods -l app=lightbeam-api-gateway -n lightbeam
Wait for the API-gateway pod to be in an up and running state; monitor the gateway state using the command:
kubectl get pods -n lightbeam -o wide | grep api-gateway