Installer Guide for Google Cloud (GKE)
There are 2 scenarios when deploying the LightBeam application:
The GKE cluster is already present on the customer account.
The GKE cluster is to be deployed by the LightBeam team.
We need a 3-node GKE cluster and one micro Linux VM as a jumpbox to access the GKE cluster and perform installation/upgrade of the LightBeam application.
To create a GKE cluster:
Create a new Google Cloud account: If you do not already have a Google Cloud account, create one at https://cloud.google.com/.
Enable billing: To use GKE, you need to enable billing for your Google Cloud account. Follow the instructions in the Google Cloud Console to enable billing.
Create a new project: In the Google Cloud Console, navigate to the Select a project drop-down menu at the top of the screen and click New Project. Give the project a name and click Create.
Enable the Kubernetes Engine API: In the Google Cloud Console, navigate to the APIs & Services section and search for Kubernetes Engine API. Click on the API and click Enable.
Create a new GKE cluster: In the Google Cloud Console, navigate to the Kubernetes Engine section and click on Clusters.
Click on CREATE.
When you create a cluster in GKE, you do so by using one of the following modes of operation:
Autopilot: Provides a fully-provisioned and managed cluster configuration. For clusters created using the Autopilot mode, the cluster configuration options are made for you. Autopilot clusters are pre-configured with an optimized cluster configuration that is ready for production workloads.
Standard: Provides advanced configuration flexibility over the cluster's underlying infrastructure. For clusters created using the Standard mode, you determine the configurations needed for your production workloads.
In this case, click on Switch to Standard Cluster.
Name: The cluster name must be unique within the project and the zone.
Location Type: Regional
In a regional cluster, the control plane is replicated across multiple zones in a region. Choose the relevant region in the drop-down menu under Region.
Release channel: Choose the release channel that controls the Kubernetes control plane version for your cluster. Select 'Stable channel' from the drop-down menu.
Node pools are groups of nodes within your cluster that share a common configuration.
Name: Select the default-node-pool and customize it with the necessary details.
Size: The number of nodes to create. For this use case, set the number of nodes to 1 (in a regional cluster this count applies per zone, so one node in each of the region's three default zones yields the required 3-node cluster).
Surge upgrade: Select the surge upgrade field and set the Max Surge value to 1.
Surge upgrades minimize interruptions to your nodes while performing cluster maintenance and allow you to control the number of nodes upgraded concurrently.
Image type: Select the default node image from the dropdown list: Container-Optimized OS with containerd (cos_containerd) (default)
Machine Configuration: Under Machine family, select the Compute Optimized configuration.
Series: Select series C2 from the drop-down list.
Machine type: Choose c2-standard-8 (8 vCPU, 32 GB memory) under Compute Engine machine type to use for the instances.
Boot disk type: Under this field, select Balanced persistent disk.
Boot disk size (GB): Select the boot size as 200.
By default, Google-managed encryption keys are used for boot disk encryption.
Click on the "Networking" section of the node pool settings.
Maximum pods per node: Enter 110 as the maximum number of pods per node in the input field.
Configure Service Account and Access Scopes:
Scroll down to the Security section within the node pool settings.
In the Service account field, select "Compute Engine default service account" from the drop-down menu.
Check the box next to Allow default access.
Enable Integrity Monitoring:
In the Shielded Options section, check the box next to Enable integrity monitoring. Integrity monitoring helps ensure the security and trustworthiness of your nodes by monitoring their runtime state and detecting any potential compromises or unauthorized changes.
Configure Kubernetes Labels:
Scroll down to the "Node metadata" section.
Locate the "Kubernetes labels" section.
In the "Key 1" field, enter lb/enabled.
In the "Value 1" field, enter true.
This label indicates that the node is enabled for load balancing.
Configure GCE Instance Metadata:
Locate the "GCE instance metadata" section.
In the "Key 1" field, enter disabled-legacy-endpoints.
In the "Value 1" field, enter true.
This metadata entry disables the legacy metadata endpoints on the GCE instances, improving the security of your cluster nodes.
Configure Network and Subnet
Scroll down to the "Networking" section under the CLUSTER header.
In the "Network" field, select 'default' from the drop-down menu.
In the "Node subnet" field, select 'default' from the drop-down menu.
Select IPv4 Network Access Type
Locate the "IPv4 network access" setting.
Select the 'Public Cluster' option. This setting configures your cluster to have public IP addresses, which allows external traffic to access the cluster's control plane and nodes.
Enable VPC-Native Traffic Routing
Locate the "Advanced networking options" setting.
Check the box next to Enable VPC-native traffic routing (uses alias IP).
Set Maximum Pods per Node:
Enter '110' in the input field under "Maximum pods per node".
Enable HTTP Load Balancing
Locate the "Enable HTTP load balancing" setting.
Check the box next to this setting to enable HTTP load balancing.
Select DNS Provider
Locate the "DNS provider" setting.
Select the 'Kube-dns' option. Kube-dns is the default DNS provider for Kubernetes and is responsible for resolving DNS queries within your cluster.
In the "Cluster Security" section, locate the "Enable Shielded GKE Nodes" setting.
Check the box next to this setting to enable Shielded GKE Nodes.
By enabling this feature, you ensure that your cluster nodes use a verified, secure boot process and runtime environment, protecting your cluster from various types of attacks and vulnerabilities.
Here, you can choose either of the two options:
i) System and Workloads
ii) System, Workloads, API Server, Scheduler, and Controller Manager
i) System and Workloads:
Enable Logging
In the "Operations" subsection, locate the "Enable Logging" setting.
Check the box next to this setting to enable logging for your cluster.
Click on the drop-down menu below the "Components" setting.
Select "System and Workloads" from the list to ensure that logs are collected for both system components and workload resources running in your cluster, giving comprehensive insight into its performance.
Alternatively, to select different components, see point (ii).
Enable Cloud Monitoring
In the "Operations" subsection, locate the "Enable Cloud Monitoring" setting.
Check the box next to this setting to enable Cloud Monitoring for your cluster.
Click on the drop-down menu below the "Components" setting.
Select "System" from the list.
Enable Compute Engine Persistent Disk
In the "Other" subsection, locate the "Enable Compute Engine Persistent Disk CSI Driver" setting.
Check the box next to this setting.
ii) System, Workloads, API Server, Scheduler, and Controller Manager:
Enable Logging
In the "Operations" subsection, locate the "Enable Logging" setting.
Check the box next to this setting to enable logging for your cluster.
Click on the drop-down menu below the "Components" setting.
Select "System, Workloads, API Server, Scheduler, and Controller Manager" from the list. This configuration ensures that logs are collected for these specific components, providing comprehensive insights into your cluster's performance.
Enable Cloud Monitoring
In the "Operations" subsection, locate the "Enable Cloud Monitoring" setting.
Check the box next to this setting to enable Cloud Monitoring for your cluster.
Click on the drop-down menu below the "Components" setting.
Select "System, API Server, Scheduler, and Controller Manager" from the list. This configuration ensures that Cloud Monitoring collects metrics from these specific cluster components, enabling you to monitor and analyze their performance and health.
Review your cluster configuration to ensure it meets your requirements.
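If you prefer the gcloud CLI over the console, the sketch below is a rough equivalent of the configuration described above (regional cluster, Stable release channel, one c2-standard-8 node per zone, 200 GB balanced boot disk, 110 pods per node, VPC-native routing, the lb/enabled node label, HTTP load balancing, Shielded GKE Nodes with integrity monitoring, and system/workload logging). The placeholders are yours to fill in, and each flag should be verified against your gcloud version before use.

gcloud container clusters create <cluster-name> \
    --project <project-name> \
    --region <region> \
    --release-channel stable \
    --num-nodes 1 \
    --machine-type c2-standard-8 \
    --disk-type pd-balanced \
    --disk-size 200 \
    --image-type COS_CONTAINERD \
    --max-surge-upgrade 1 \
    --default-max-pods-per-node 110 \
    --enable-ip-alias \
    --enable-shielded-nodes \
    --shielded-integrity-monitoring \
    --node-labels lb/enabled=true \
    --addons HttpLoadBalancing,GcePersistentDiskCsiDriver \
    --logging SYSTEM,WORKLOAD \
    --monitoring SYSTEM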
Click the "Create" button at the bottom of the GKE Cluster Creation page to start the cluster creation process.
Observe the status of your cluster, which will be displayed in the "Status" column. It will be updated once the cluster creation process is complete.
Click on the name of the cluster to view its details.
This will show the cluster details being configured.
Once the cluster has been created, the status will be reflected in the Overview.
Click on the cluster name to view the updated details of the cluster.
There are two methods to connect to a Kubernetes cluster in GKE.
Using the command line in your local machine.
Using the Cloud Shell in Google Cloud.
Method 1: Using the Command Line on Your Local Machine
Copy the Command-line access command displayed in the "CONNECT" section of your cluster's details page.
The command will look like this:
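gcloud container clusters get-credentials <cluster-name> --region <region> --project <project-id>

(The placeholders stand in for your cluster's actual name, region, and project ID; the console shows the fully populated command.)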
Open a terminal on your local machine and paste the command, then press Enter.
This command will configure your local kubectl installation to connect to your GKE cluster.
Method 2: Using the Cloud Shell in Google Cloud
On the cluster's details page, click on the 'CONNECT' button.
Click on the 'RUN IN CLOUD SHELL' button. This action will open a new cloud shell session in your browser, with the command to connect to your cluster pre-filled.
Press Enter in the cloud shell session to execute the pre-filled command. This command will configure your cloud shell's kubectl installation to connect to your GKE cluster.
Search for VM instances in the top search bar.
Inside VM instances, click on Create instance.
Give the instance a name, select the series and machine type, and change the image and disk size as shown in the screenshot below.
Click on Create.
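Alternatively, the jumpbox can be created from the CLI. The sketch below is illustrative: the instance name, zone, Ubuntu image family, and 50 GB boot disk are assumptions, while the e2-micro machine type matches the micro VM called for earlier.

gcloud compute instances create lightbeam-jumpbox \
    --project <project-name> \
    --zone <zone> \
    --machine-type e2-micro \
    --image-family ubuntu-2204-lts \
    --image-project ubuntu-os-cloud \
    --boot-disk-size 50GB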
Install packages
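The exact package list is not spelled out here; at a minimum the jumpbox needs curl plus Helm to install the LightBeam chart (kubectl is installed together with the Google Cloud CLI in the next step). A sketch assuming a Debian/Ubuntu jumpbox:

sudo apt-get update
sudo apt-get install -y apt-transport-https ca-certificates gnupg curl
# Helm, used later to install the LightBeam chart
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash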
Setup Google Cloud CLI
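A sketch of installing the Google Cloud CLI from Google's apt repository (Debian/Ubuntu assumed); kubectl and the GKE auth plugin can be installed from the same repository:

curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo gpg --dearmor -o /usr/share/keyrings/cloud.google.gpg
echo "deb [signed-by=/usr/share/keyrings/cloud.google.gpg] https://packages.cloud.google.com/apt cloud-sdk main" | sudo tee /etc/apt/sources.list.d/google-cloud-sdk.list
sudo apt-get update
sudo apt-get install -y google-cloud-cli kubectl google-cloud-cli-gke-gcloud-auth-plugin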
Initialize and authenticate gcloud cli
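For example, to initialize the CLI and authenticate with your Google account:

gcloud init
gcloud auth login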
Once kubectl is installed, configure it to connect to your GKE cluster by running the following command in your terminal:
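# General form of the command; replace the placeholders as described below
gcloud container clusters get-credentials <cluster-name> --region <region> --project <project-name>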
Replace <cluster-name>, <region>, and <project-name> with the appropriate values for your GKE cluster.
To verify that kubectl is configured correctly and can connect to the GKE cluster, run the following command in your terminal:
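kubectl get nodes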
This should return a list of the nodes in your GKE cluster.
Set the LightBeam namespace as the current context using the command:
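# Assumes the LightBeam components are deployed in the "lightbeam" namespace
kubectl config set-context --current --namespace=lightbeam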
Once kubectl is working with the GKE cluster provisioned above, install the LightBeam chart using the command. The --spectra flag specifies the spectra deployment; use the --privacy_ops flag to specify the privacy ops deployment.
Wait ~20 minutes for the installation to complete. After installation, copy the default username and password for the LightBeam instance.
Copy the Address from the ingress and run the following commands:
Wait for the API-gateway pod to be up and running; monitor the gateway state using the command:
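# Illustrative commands, assuming the "lightbeam" namespace
kubectl get ingress -n lightbeam                         # the ADDRESS column holds the ingress address
kubectl get pods -n lightbeam -w | grep api-gateway      # wait until the pod shows Running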
Create a new file: /usr/local/bin/lightbeam.sh
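The actual contents of this script are supplied by the LightBeam team. Purely as an illustration, a minimal sketch that keeps the UI reachable on port 80 of the jumpbox by port-forwarding to the in-cluster gateway service (the service name and namespace are assumptions) could look like:

#!/usr/bin/env bash
# Illustrative only - replace with the script provided by LightBeam.
# Assumes the gateway service is named "api-gateway" in the "lightbeam" namespace.
exec kubectl port-forward --address 0.0.0.0 -n lightbeam svc/api-gateway 80:80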
Change the file's permissions to make it executable:
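sudo chmod +x /usr/local/bin/lightbeam.sh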
Create a new file: /etc/systemd/system/lightbeam.service
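A minimal unit file sketch that runs the script at boot and restarts it on failure (adjust to match the script actually supplied):

[Unit]
Description=LightBeam UI access
After=network-online.target

[Service]
ExecStart=/usr/local/bin/lightbeam.sh
Restart=always

[Install]
WantedBy=multi-user.target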
Enable and start services:
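sudo systemctl daemon-reload
sudo systemctl enable lightbeam.service
sudo systemctl start lightbeam.service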
Access the LightBeam UI using the public IP of the jumpbox VM and port 80:
http://<JUMPBOX_VM_PUBLIC_IP>:80
LightBeam automates Privacy, Security, and AI Governance, so businesses can accelerate their growth in new markets. Leveraging generative AI, LightBeam has rapidly gained customers’ trust by pioneering a unique privacy-centric and automation-first approach to security. Unlike siloed solutions, LightBeam ties together sensitive data cataloging, control, and compliance across structured and unstructured data applications providing 360-visibility, redaction, self-service DSRs, and automated ROPA reporting ensuring ultimate protection against ransomware and accidental exposures while meeting data privacy obligations efficiently. LightBeam is on a mission to create a secure privacy-first world helping customers automate compliance against a patchwork of existing and emerging regulations.
For any questions or suggestions, please get in touch with us at: .