Installer Guide for Google Cloud (GKE)
1.0 Setup Overview
There are 2 scenarios when deploying the LightBeam application:
The GKE cluster is already present on the customer account.
The GKE cluster is to be deployed by the LightBeam team.
We need a 3-node GKE cluster and one micro Linux VM as a jumpbox to access the GKE cluster and perform installation/upgrade of the LightBeam application.
2.0 Create GKE cluster in Google Cloud
To create a GKE cluster:
1. Create a new Google Cloud account: If you do not already have a Google Cloud account, create one at https://cloud.google.com/.
2. Enable billing: To use GKE, you need to enable billing for your Google Cloud account. Follow the instructions in the Google Cloud Console to enable billing.
3. Create a new project: In the Google Cloud Console, open the Select a project drop-down menu at the top of the screen and click New Project. Give the project a name and click Create.
4. Enable the Kubernetes Engine API: In the Google Cloud Console, navigate to the APIs & Services section and search for Kubernetes Engine API. Click on the API and click Enable.
5. Create a new GKE cluster: In the Google Cloud Console, navigate to the Kubernetes Engine section and click on Clusters, then click CREATE.
Modes of operation
When you create a cluster in GKE, you do so by using one of the following modes of operation:
Autopilot: Provides a fully-provisioned and managed cluster configuration. For clusters created using the Autopilot mode, the cluster configuration options are made for you. Autopilot clusters are pre-configured with an optimized cluster configuration that is ready for production workloads.
Standard: Provides advanced configuration flexibility over the cluster's underlying infrastructure. For clusters created using the Standard mode, you determine the configurations needed for your production workloads.
In this case, click on Switch to Standard Cluster.
Cluster basics
Name: The cluster name must be unique within the project and the zone.
Location Type: Regional
In a regional cluster, the control plane is replicated across multiple zones in a region. Choose the relevant region in the drop-down menu under Region.
Release channel: Choose the release channel that determines the control plane version of Kubernetes for your cluster.
Select ‘Stable channel’ from the drop-down menu.
Node Pool Details
Node pools are groups of nodes within your cluster that share a common configuration.
Name: Select the default-node-pool and customize it with the necessary details.
Size: The number of nodes to create in the cluster. For this use case, the number of nodes is 1 (per zone).
Surge upgrade: Select the surge upgrade field and set the Max Surge value to 1.
Surge upgrades minimize interruptions to your nodes while performing cluster maintenance and allow you to control the number of nodes upgraded concurrently.
A. Nodes
Image type: Select the default node image from the dropdown list:
Container-Optimized OS with containerd (cos_containerd) (default)
Machine configuration: Under Machine family, select the Compute Optimized configuration.
Series: Select series C2 from the drop-down list.
Machine type: Under Compute Engine machine type, choose c2-standard-8 (8 vCPU, 32 GB memory) for the instances.
Boot disk type: Under this field, select Balanced persistent disk.
Boot disk size (GB): Select the boot size as 200.
By default, Google-managed encryption keys are used for boot disk encryption.
B. Node Networking
Click on the “Networking" section of the node pool settings.
Maximum pods per node: Enter 110 in the input field.
C. Node Security
Configure Service Account and Access Scopes:
Scroll down to the Security section within the node pool settings.
In the Service account field, select "Compute Engine default service account" from the drop-down menu.
Check the box next to Allow default access.
Enable Integrity Monitoring:
In the Shielded Options section, check the box next to Enable integrity monitoring.
Integrity monitoring helps ensure the security and trustworthiness of your nodes by monitoring their runtime state and detecting any potential compromises or unauthorized changes.
D. Node Metadata
Configure Kubernetes Labels:
Scroll down to the "Node metadata" section.
Locate the "Kubernetes labels" section.
In the "Key 1" field, enter lb/enabled.
In the "Value 1" field, enter true.
This label indicates that the node is enabled for load balancing.
Configure GCE Instance Metadata:
Locate the "
GCE instance metadata
" section.In the "Key 1" field, enter
disabled-legacy-endpoints.
In the "Value 1" field, enter
true
.
This metadata entry disables the legacy metadata endpoints on the GCE instances, improving the security of your cluster nodes.
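Once the cluster is up and kubectl is connected (see section 3.0), the node label entered above can be verified with a quick check:
# Each node should show lb/enabled=true in the label column
kubectl get nodes -L lb/enabled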
E. Cluster Networking
Configure Network and Subnet
Scroll down to the "Networking" section under the CLUSTER header.
In the "Network" field, select 'default' from the drop-down menu.
In the "Node subnet" field, select 'default' from the drop-down menu.
Select IPv4 Network Access Type
Locate the "
IPv4 network access
" setting.Select the 'Public Cluster' option. This setting configures your cluster to have public IP addresses, which allows external traffic to access the cluster's control plane and nodes.
Enable VPC-Native Traffic Routing
Locate the "
Advanced networking options”
setting.Check the box next to Enable VPC-native traffic routing (uses alias IP).
Set Maximum Pods per Node:
Enter '110' in the input field under "Maximum pods per node".
Enable HTTP Load Balancing
Locate the "
Enable HTTP load balancing
" setting.Check the box next to this setting to enable HTTP load balancing.
Select DNS Provider
Locate the "
DNS provider
" setting.Select the 'Kube-dns' option. Kube-dns is the default DNS provider for Kubernetes and is responsible for resolving DNS queries within your cluster.
F. Cluster Security
In the "Cluster Security
" section, locate the "Enable Shielded GKE Nodes" setting.
Check the box next to this setting to enable Shielded GKE Nodes.
By enabling this feature, you ensure that your cluster nodes use a verified, secure boot process and runtime environment, protecting your cluster from various types of attacks and vulnerabilities.
G. Cluster Features
Here, you can choose either of the two options:
i) Systems and Workloads
ii) System, Workloads, API Server, Scheduler, and Controller Manager
i) Systems and Workloads:
Enable Logging
In the "
Operations
" subsection, locate the "Enable Logging" setting.Check the box next to this setting to enable logging for your cluster.
Click on the drop-down menu below the "Components" setting.
Select "Systems and Workloads" from the list so that logs are collected for both system components and workload resources running in your cluster, giving comprehensive insight into your cluster's performance.
Alternatively, to select different components, see option (ii).
Enable Cloud Monitoring
In the "
Operations
" subsection, locate the "Enable Cloud Monitoring" setting.Check the box next to this setting to enable Cloud Monitoring for your cluster.
Click on the drop-down menu below the "Components" setting.
Select "System" from the list.
Enable Compute Engine Persistent Disk
In the "
Other
" subsection, locate the "Enable Compute Engine Persistent Disk CSI Driver" setting.Check the box next to this setting.
ii) System, Workloads, API Server, Scheduler, and Controller Manager:
Enable Logging
In the "
Operations
" subsection, locate the "Enable Logging" setting.Check the box next to this setting to enable logging for your cluster.
Click on the drop-down menu below the "Components" setting.
Select "System, Workloads, API Server, Scheduler, and Controller Manager" from the list. This configuration ensures that logs are collected for these specific components, providing comprehensive insights into your cluster's performance.
Enable Cloud Monitoring
In the "
Operations
" subsection, locate the "Enable Cloud Monitoring" setting.Check the box next to this setting to enable Cloud Monitoring for your cluster.
Click on the drop-down menu below the "Components" setting.
Select "System, API Server, Scheduler, and Controller Manager" from the list. This configuration ensures that Cloud Monitoring collects metrics from these specific cluster components, enabling you to monitor and analyze their performance and health.
Save and Create the Cluster
Review your cluster configuration to ensure it meets your requirements.
Click the "Create" button at the bottom of the GKE Cluster Creation page to start the cluster creation process.
Observe the status of your cluster, which is displayed in the "Status" column; it is updated once the cluster creation process is complete.
Click on the name of the cluster to view its details while it is being configured.
Once the cluster has been created, the status is reflected in the Overview; click the cluster name again to view its updated details.
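For reference, a broadly equivalent Standard cluster can also be created from the command line instead of the console. This is only a sketch: <CLUSTER_NAME>, <REGION>, and <PROJECT_ID> are placeholders, flag availability varies with your gcloud version, and settings not shown here (logging/monitoring components, DNS provider, HTTP load balancing) can be left at their console defaults or adjusted afterwards.
# Sketch: regional Standard cluster mirroring the console settings above
gcloud container clusters create <CLUSTER_NAME> \
  --project <PROJECT_ID> \
  --region <REGION> \
  --release-channel stable \
  --num-nodes 1 \
  --machine-type c2-standard-8 \
  --image-type COS_CONTAINERD \
  --disk-type pd-balanced \
  --disk-size 200 \
  --max-surge-upgrade 1 \
  --max-pods-per-node 110 \
  --enable-ip-alias \
  --enable-shielded-nodes \
  --node-labels lb/enabled=true \
  --metadata disable-legacy-endpoints=true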
3.0 Connecting to the Kubernetes cluster
There are two methods to connect to a Kubernetes cluster in GKE.
Using the command line in your local machine.
Using the Cloud Shell in Google Cloud
Method 1: Using the Command Line on Your Local Machine
Copy the Command-line access command displayed in the "CONNECT" section of your cluster's details page. The command will look like this:
gcloud container clusters get-credentials <CLUSTER_NAME> --zone <ZONE> --project <PROJECT_NAME>
Open a terminal on your local machine and paste the command, then press Enter.
This command configures your local kubectl installation to connect to your GKE cluster.
Method 2: Using the Cloud Shell in Google Cloud
On the cluster's details page, click on the 'CONNECT' button.
Click on the 'RUN IN CLOUD SHELL' button. This action will open a new cloud shell session in your browser, with the command to connect to your cluster pre-filled.
Press Enter in the cloud shell session to execute the pre-filled command. This command configures the cloud shell's kubectl installation to connect to your GKE cluster.
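Whichever method you use, a quick check confirms that kubectl now points at the GKE cluster:
# Show the control plane endpoint and list the cluster nodes
kubectl cluster-info
kubectl get nodes -o wide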
4.0 Create and Setup Linux Jump Box
Create Linux VM
Search for VM instances in the top search bar.
Inside VM instances, click on Create Instance.
Give the instance a name, select the series and machine type, and change the image and disk size as shown in the screenshot given below.
Click on Create.
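If you prefer the command line, the jump box can also be created with gcloud. The sketch below is illustrative only: the instance name, zone, project, machine type, image family, and disk size are assumptions, not values mandated by this guide; adjust them to match your environment.
# Sketch: small Debian jump box VM (all values are placeholders/assumptions)
gcloud compute instances create <JUMPBOX_NAME> \
  --project <PROJECT_ID> \
  --zone <ZONE> \
  --machine-type e2-small \
  --image-family debian-12 \
  --image-project debian-cloud \
  --boot-disk-size 50GB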
Set Up Linux VM
Install packages
# Install unzip and the latest stable kubectl
sudo apt-get update
sudo apt-get install unzip
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
sudo chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client
# Install Helm
wget https://get.helm.sh/helm-v3.3.4-linux-amd64.tar.gz
tar -xvf helm-v3.3.4-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/
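A quick way to confirm Helm landed on the PATH (kubectl was already verified by the version command above):
# Print the installed Helm client version
helm version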
Setup Google Cloud CLI
sudo apt-get update && wget https://dl.google.com/dl/cloudsdk/channels/rapid/downloads/google-cloud-sdk-356.0.0-linux-x86_64.tar.gz
tar -xvf google-cloud-sdk-356.0.0-linux-x86_64.tar.gz
cd google-cloud-sdk && ./install.sh
Initialize and authenticate the gcloud CLI
gcloud init
gcloud auth login
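If gcloud init did not already select the project that hosts the GKE cluster, it can be set explicitly (the project ID below is a placeholder):
# Make the LightBeam project the default for subsequent gcloud commands
gcloud config set project <PROJECT_ID>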
Configure and Verify GKE Cluster Access
Once kubectl is installed, configure it to connect to your GKE cluster by running the following command in your terminal:
gcloud container clusters get-credentials <cluster-name> --region <region> --project <project-name>
Replace <cluster-name>, <region>, and <project-name> with the appropriate values for your GKE cluster.
To verify that kubectl is configured correctly and can connect to the GKE cluster, run the following command in your terminal:
kubectl get nodes
This should return a list of the nodes in your GKE cluster.
Set the LightBeam namespace as the current context using the command:
kubectl config set-context --current --namespace lightbeam
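To confirm the context change took effect, you can print the namespace recorded in the current context:
# Should output: lightbeam
kubectl config view --minify --output 'jsonpath={..namespace}'; echo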
5.0 LightBeam Cluster Deployment
Once kubectl starts working with the GKE cluster provisioned above, install the LightBeam chart using the command below. The --spectra flag specifies the Spectra deployment; use the --privacy_ops flag to specify the privacy ops deployment.
export DOCKER_USERNAME="lbcustomers" DOCKER_REGISTRY_PASSWORD="<DOCKER_REGISTRY_TOKEN>" KBLD_REGISTRY_HOSTNAME="docker.io" KBLD_REGISTRY_USERNAME="lbcustomers" KBLD_REGISTRY_PASSWORD="<DOCKER_REGISTRY_TOKEN>"
./installer/lb-install.sh --install --spectra --values charts/lightbeam/values.yaml --final_values charts/lightbeam/values.yaml
Wait approximately 20 minutes for the installation to complete. After installation, copy the default username and password for the LightBeam instance.
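If you are unsure of the ingress address referenced below, it can usually be read from the cluster. Assuming the chart created an Ingress resource in the lightbeam namespace, the ADDRESS column of the following command is the value to use:
# List ingresses created by the LightBeam chart and their external addresses
kubectl get ingress -n lightbeam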
Copy the Address from the ingress and run the following commands:
kubectl patch cm/lightbeam-common-configmap -n lightbeam --type merge -p '{"data": {"AUTH_BASE_URL": "http://<INGRESS_ADDRESS>"}}'
kubectl delete pods -l app=lightbeam-api-gateway -n lightbeam
Wait for the API-gateway pod to be up and running; monitor the gateway state using the command:
kubectl get pods -n lightbeam -o wide | grep api-gateway
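As an alternative to polling with grep, kubectl can block until the pod reports Ready (the label selector matches the one used in the delete command above):
# Wait up to 5 minutes for the api-gateway pod to become Ready
kubectl wait --for=condition=ready pod -l app=lightbeam-api-gateway -n lightbeam --timeout=300s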
6.0 Configure Access to LightBeam UI using Jump Box IP
Copy the Address from the ingress and run the following commands
kubectl patch cm/lightbeam-common-configmap -n lightbeam --type merge -p '{"data": {"AUTH_BASE_URL": "http://<JUMP_BOX_PUBLIC_IP>"}}'
kubectl delete pods -l app=lightbeam-api-gateway -n lightbeam
Wait for the API-gateway pod to be up and running; monitor the gateway state using the command:
kubectl get pods -n lightbeam -o wide | grep api-gateway
Create the following script and systemd service
Create new file:
/usr/local/bin/lightbeam.sh
#!/usr/bin/env bash
# Forward ports 80/443 on the jump box to the in-cluster kong-proxy service
# and feed the systemd watchdog while the LightBeam health endpoint responds.
trap 'kill $(jobs -p)' EXIT
/usr/bin/kubectl port-forward service/kong-proxy -n lightbeam --address 0.0.0.0 80:80 443:443 --kubeconfig /root/.kube/config &
PID=$!
/bin/systemd-notify --ready
while true; do
    FAIL=0
    # Fail if the port-forward process has exited
    kill -0 $PID || FAIL=1
    # Fail if the health endpoint is unreachable or does not return 200/301
    status_code=$(curl -s -o /dev/null -w "%{http_code}" http://localhost/api/health)
    curl_rc=$?
    echo "LightBeam cluster health check: $status_code"
    if [[ $curl_rc -ne 0 || ( $status_code -ne 200 && $status_code -ne 301 ) ]]; then FAIL=1; fi
    # Pet the watchdog only when all checks pass
    if [[ $FAIL -eq 0 ]]; then /bin/systemd-notify WATCHDOG=1; fi
    sleep 1
done
Change permission of file:
chmod u+x /usr/local/bin/lightbeam.sh
Create new file:
/etc/systemd/system/lightbeam.service
[Unit]
Description=LightBeam Application
After=network-online.target
Wants=network-online.target systemd-networkd-wait-online.service
StartLimitIntervalSec=500
StartLimitBurst=10000
[Service]
Type=notify
Restart=always
RestartSec=1
TimeoutSec=5
WatchdogSec=5
ExecStart=/usr/local/bin/lightbeam.sh
[Install]
WantedBy=multi-user.target
Reload systemd, then enable and start the service:
systemctl daemon-reload
systemctl enable lightbeam
systemctl start lightbeam
systemctl status lightbeam
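If the service does not reach the active (running) state, the script's output, including the periodic health-check status codes, is available in the journal:
# Follow the lightbeam service logs
journalctl -u lightbeam -f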
Access the LightBeam UI using the public IP of the jump box VM and port 80:
http://<JUMPBOX_VM_PUBLIC_IP>:80
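Before opening a browser, you can verify from any machine with network access to the jump box that the port-forward is serving traffic; the health endpoint below is the same one the watchdog script polls:
# Expect an HTTP 200 (or 301) response
curl -I http://<JUMPBOX_VM_PUBLIC_IP>/api/health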
About LightBeam
LightBeam automates Privacy, Security, and AI Governance, so businesses can accelerate their growth in new markets. Leveraging generative AI, LightBeam has rapidly gained customers’ trust by pioneering a unique privacy-centric and automation-first approach to security. Unlike siloed solutions, LightBeam ties together sensitive data cataloging, control, and compliance across structured and unstructured data applications providing 360-visibility, redaction, self-service DSRs, and automated ROPA reporting ensuring ultimate protection against ransomware and accidental exposures while meeting data privacy obligations efficiently. LightBeam is on a mission to create a secure privacy-first world helping customers automate compliance against a patchwork of existing and emerging regulations.
For any questions or suggestions, please get in touch with us at: [email protected].