Katonic Agents
Node pool requirements
A. Single-Node EKS Cluster:
The EKS cluster must have node pools that produce worker nodes with the following specifications and distinct node labels:
SR NO. | POOL | MIN-MAX | INSTANCE | LABELS | TAINTS |
---|---|---|---|---|---|
1 | Compute | 1-10 | m5.2xlarge | katonic.ai/node-pool=compute | |
2 | Vectordb | 1-4 | m5.xlarge | katonic.ai/node-pool=vectordb | katonic.ai/node-pool=vectordb:NoSchedule |
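The Katonic installer can create these pools for you (see below); if you provision them yourself, the labels and taints can be expressed in your node-group tooling. Below is a minimal sketch of the Vectordb pool as an eksctl managed node group; the use of eksctl here is an assumption, and any tooling that applies the same labels and taints is equivalent (cluster name and region are illustrative):

apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: katonic-agents-platform-v5-0   # illustrative cluster name
  region: us-east-1                    # illustrative region
managedNodeGroups:
  - name: vectordb
    instanceType: m5.xlarge
    minSize: 1
    maxSize: 4
    labels:
      katonic.ai/node-pool: vectordb   # distinct pool label required by Katonic
    taints:
      - key: katonic.ai/node-pool
        value: vectordb
        effect: NoSchedule             # keeps non-vectordb workloads off these nodes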
B. Multi-Node EKS Cluster:
The EKS cluster must have at least four node pools that produce worker nodes with the following specifications and distinct node labels, plus an optional GPU pool:
SR NO. | POOL | MIN-MAX | INSTANCE | LABELS | TAINTS |
---|---|---|---|---|---|
1 | Platform | 2-4 | m5.large | katonic.ai/node-pool=platform | katonic.ai/node-pool=platform:NoSchedule |
2 | Compute | 1-10 | m5.2xlarge | katonic.ai/node-pool=compute | |
3 | Deployment | 1-10 | m5.2xlarge | katonic.ai/node-pool=deployment | katonic.ai/node-pool=deployment:NoSchedule |
4 | Vectordb | 1-4 | m5.xlarge | katonic.ai/node-pool=vectordb | katonic.ai/node-pool=vectordb:NoSchedule |
5 | GPU (Optional) | 0-5 | p2.xlarge | katonic.ai/node-pool=gpu-{GPU-type} | nvidia.com/gpu=gpu-{GPU-type}:NoSchedule |
Note: For example, the GPU type can be v100, A30, or A100.
Note: When backup_enabled = True, compute_nodes.min_count should be set to 2.
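Once the cluster is up, you can confirm that each pool carries the expected label and taint with standard kubectl, run against the target cluster:

# List nodes together with their katonic.ai/node-pool label
kubectl get nodes -L katonic.ai/node-pool
# Show the taint keys applied to each node
kubectl get nodes -o custom-columns='NAME:.metadata.name,TAINTS:.spec.taints[*].key'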
If you want Katonic to run with some components deployed as highly available ReplicaSets, you must use two availability zones. Every compute node pool must have corresponding ASGs in each AZ used by the other node pools; an isolated node pool in a single zone can cause volume-affinity issues.
To run node pools across multiple availability zones, you need duplicate ASGs in each zone with the same configuration, including the same labels, so that pods are scheduled in the zone where their required ephemeral volumes are available.
The easiest way to get suitable drivers onto GPU nodes is to use the EKS-optimized AMI distributed by Amazon as the machine image for the GPU node pool.
Additional ASGs with distinct katonic.ai/node-pool labels can be added to make other instance types available for Katonic executions.
The Katonic installer can set up all configurations of ASG and zones for the Katonic platform.
AWS Platform-Node Specifications
Platform nodes in the Katonic platform's AWS cloud deployments must fulfil the following hardware specification requirements according to the deployment type:
COMPONENT | SPECIFICATION |
---|---|
Node count | Min 2 |
Instance type | m5.large |
vCPUs | 4 |
Memory | 16 GB |
Boot disk size | 128 GB |
AWS Compute-Node Specifications
The following instance types are supported for Compute nodes in the Katonic platform's AWS cloud deployments:
Choose the type that best fits your requirements. AWS Elastic Kubernetes Service (EKS) is also supported for application nodes using the instance types listed below. The Katonic platform requires at least one Compute node for the Teams version. For the specification details of each type, refer to the AWS documentation.
Note: Supported compute node configurations
- m5.xlarge
- m5.2xlarge (default configuration)
- m5.4xlarge
- m5.8xlarge
- m5.12xlarge
- Boot disk: 128 GB
AWS Deployment-Node Specifications
The following instance types are supported for Deployment nodes in the Katonic platform's AWS cloud deployments:
Choose the type that best fits your requirements. AWS Elastic Kubernetes Service (EKS) is also supported for application nodes using the instance types listed below. The Katonic platform requires at least one Deployment node for the Teams version. For the specification details of each type, refer to the AWS documentation.
Note: Supported deployment node configurations
- m5.xlarge
- m5.2xlarge (default configuration)
- m5.4xlarge
- m5.8xlarge
- m5.12xlarge
- Boot disk: 128 GB
AWS Vectordb-Node Specifications
Vectordb nodes in the Katonic platform's AWS cloud deployments must fulfil the following hardware specification requirements:
COMPONENT | SPECIFICATION |
---|---|
Instance type | m5.xlarge |
AWS GPU-Node Specifications
The following instance types are supported for GPU nodes in the Katonic platform's AWS cloud deployments:
Choose the type that best fits your requirements. AWS Elastic Kubernetes Service (EKS) is also supported for application nodes using the instance types listed below. For the specification details of each type, refer to the AWS documentation.
Note: Supported GPU node configurations
- p2.xlarge (default configuration)
- p2.8xlarge
- p2.16xlarge
- Boot Disk: 512 GB
Additional node pools with distinct katonic.ai/node-pool labels can be added to make other instance types available for Katonic executions.
Prerequisites
To install and configure Katonic in your AWS account you must have:
Quay credentials from Katonic.
Required: a PEM-encoded public key certificate for your domain and the private key associated with that certificate.
AWS region with enough quota to create:
- At least 4 m5.2xlarge EC2 machines.
- p2.xlarge or larger (any P-series instance) VMs, if you want to use GPUs.
A Linux (Ubuntu/Debian) machine, prepared as follows:
a. A Linux (Ubuntu/Debian) machine with 4 GB RAM and 2 vCPUs. Skip to step b if you already have a machine with these specifications.
Note: After the platform is deployed successfully, the VM can be deleted.
b. Switch to the root user inside the machine.
c. AWS CLI must be installed and logged in to your AWS account using the aws configure command, with a user that has IAM policies required to create the resources listed above.
Commands for installing AWS CLI v2:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
The following tools should also be installed: Docker (used to run the installer commands below) and kubectl (used for post-installation verification).
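Before proceeding, you can confirm that the AWS CLI is installed and authenticated; aws sts get-caller-identity prints the account and IAM identity in use:

aws --version
aws configure                 # supply the access key, secret key, and default region
aws sts get-caller-identity   # verifies the credentials and shows the active IAM identity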
To install the Katonic Platform Agents version, follow the steps below:
1. Access the JumpHost and configure the AWS CLI.
2. Log in to Quay with the credentials described in the requirements section above.
docker login quay.io
3. Retrieve the Katonic installer image from Quay.
docker pull quay.io/katonic/katonic-installer:v5.0.9
4. Create a directory.
mkdir katonic
cd katonic
5. Add the PEM-encoded public key certificate and private key to the directory
Place the PEM-encoded public key certificate for your domain (with the .crt extension) and the private key associated with that certificate (with the .key extension) in the current directory (katonic).
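To catch certificate problems before installation, you can check that the certificate and key actually belong together; the moduli printed by the two commands below must match (file names are illustrative, so substitute your own .crt and .key):

openssl x509 -noout -modulus -in yourdomain.crt | openssl md5
openssl rsa -noout -modulus -in yourdomain.key | openssl md5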
6. The Katonic Installer can deploy the Katonic Platform Agents version in two ways:
- Creating a private EKS cluster and deploying the Katonic Platform Agents version.
- Installing the Katonic Platform on an existing private AWS Elastic Kubernetes Service cluster.
1. Creating a Private EKS Cluster and deploying the Katonic Platform Agents version
A. Single-Node EKS Cluster
Initialize the installer application to generate a template configuration file named katonic.yml.
docker run -it --rm --name generating-yaml -v $(pwd):/install quay.io/katonic/katonic-installer:v5.0.9 init aws katonic_agents single_node deploy_kubernetes private
Edit the configuration file with all necessary details about the target cluster, storage systems, and hosting domain. Read the following configuration reference:
PARAMETER | DESCRIPTION | VALUE |
---|---|---|
katonic_platform_version | Preset to the Katonic Platform version being installed. | katonic_agents |
deploy_on | Cluster to be deployed on | AWS |
create_k8s_cluster | Must be set to True if an EKS cluster is not already deployed | True |
single_node_cluster | Set "True" if opting for a single-node cluster | True or False |
eks_version | EKS version | eg. 1.30 (1.27 and above are supported) |
cluster_name | Name of the cluster to deploy | eg. katonic-agents-platform-v5-0 |
aws_region | AWS region name | eg. us-east-1 |
private_cluster | Set "True" if opting for private cluster | True |
vpc_id | Pass VPC ID if opting for private cluster | |
subnet_1_id | Pass one of the private subnet IDs if opting for private cluster | |
subnet_2_id | Pass another private subnet ID if opting for private cluster | |
jump_server_name | Name of the jump server/jump host created for the private cluster | |
enable_exposing_genai_applications_to_internet | Set "True" if opting to expose GenAI applications to the internet | False |
public_domain_for_genai_applications | Public FQDN of domain for genai applications that will be exposed to the internet | (eg. public-chatbots.google.com) |
internal_loadbalancer | Set "True" if opting for internal loadbalancer | False |
compute_nodes.instance_type | Compute node VM size | eg. m5.2xlarge |
compute_nodes.min_count | Minimum number of compute nodes; must be at least 1 | eg. 1 |
compute_nodes.max_count | Maximum number of compute nodes; must be greater than compute_nodes.min_count | eg. 4 |
compute_nodes.os_disk_size | Compute Nodes OS Disk Size | eg. 128 GB |
vectordb_nodes.instance_type | Vectordb Node VM size | eg. m5.xlarge |
vectordb_nodes.min_count | Minimum number of Vectordb nodes; must be at least 1 | eg. 1 |
vectordb_nodes.max_count | Maximum number of Vectordb nodes; must be greater than vectordb_nodes.min_count | eg. 4 |
vectordb_nodes.os_disk_size | Vectordb Nodes OS Disk Size | eg. 128 GB |
gpu_enabled | Add GPU nodepool | True or False |
gpu_nodes.instance_type | GPU node VM size | eg. p2.xlarge |
gpu_nodes.gpu_type | Enter the type of GPU available on the machine. | eg. v100,k80,none |
gpu_nodes.min_count | Minimum number of GPU nodes | eg. 1 |
gpu_nodes.max_count | Maximum number of GPU nodes | eg. 4 |
gpu_nodes.os_disk_size | GPU Nodes OS Disk Size | eg. 512 GB |
gpu_nodes.gpu_vRAM | Enter the GPU vRAM size per node | |
gpu_nodes.gpus_per_node | Enter GPU per node count | |
enable_gpu_workspace | Set it true if you want to use GPU Workspace | True or False |
shared_storage_create | Set it true if you want to use shared storage | True or False |
genai_nfs_size | NFS storage size for GenAI | 100Gi |
workspace_timeout_interval | Set the Timeout Interval in Hours | eg: "12" |
backup_enabled | Enable backups | True or False |
s3_bucket_name | Name of the S3 bucket | eg. katonic-backup |
s3_bucket_region | Region of the S3 bucket | eg. us-east-1 |
backup_schedule | Backup schedule (cron format) | 0 0 1 * * |
backup_expiration | Backup expiration (TTL) | 2160h0m0s |
use_custom_domain | Set this to True if you want to host the Katonic platform on your custom domain. Skip if use_katonic_domain: True | True or False |
custom_domain_name | A valid domain is expected. | eg. katonic.tesla.com |
use_katonic_domain | Set this to True if you want to host the Katonic platform on the Katonic Deploy Platform domain. Skip if use_custom_domain: True | True or False |
katonic_domain_prefix | One word, lowercase letters only, no special characters | eg. tesla |
enable_pre_checks | Set this to True if you want to perform the Pre-checks | True / False |
AD_Group_Management | Set "True" to enable sign-in using Azure AD | False |
AD_CLIENT_ID | Client ID of App registered for SSO in client's Azure or Identity Provider | |
AD_CLIENT_SECRET | Client Secret of App registered for SSO in client's Azure or any other Identity Provider | |
AD_AUTH_URL | Authorization URL endpoint of app registered for SSO. | |
AD_TOKEN_URL | Token URL endpoint of app registered for SSO. | |
quay_username | Username for Quay | |
quay_password | Password for Quay | |
adminUsername | Email for admin user | eg. john@katonic.ai |
adminPassword | Password for admin user | at least 1 special character, 1 uppercase letter, and 1 lowercase letter; minimum 8 characters |
adminFirstName | Admin first name | eg. john |
adminLastName | Admin last name | eg. musk |
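For orientation, a filled-in template might begin like the following hypothetical excerpt; the generated katonic.yml is the source of truth, and its exact keys and nesting may differ from this sketch:

katonic_platform_version: katonic_agents
deploy_on: AWS
create_k8s_cluster: True
single_node_cluster: True
eks_version: "1.30"
cluster_name: katonic-agents-platform-v5-0
aws_region: us-east-1
private_cluster: True
compute_nodes:
  instance_type: m5.2xlarge
  min_count: 1
  max_count: 4
  os_disk_size: 128   # GB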
B. Multi-Node EKS Cluster
Initialize the installer application to generate a template configuration file named katonic.yml.
docker run -it --rm --name generating-yaml -v $(pwd):/install quay.io/katonic/katonic-installer:v5.0.9 init aws katonic_agents multi_node deploy_kubernetes private
Edit the configuration file with all necessary details about the target cluster, storage systems, and hosting domain. Read the following configuration reference:
PARAMETER | DESCRIPTION | VALUE |
---|---|---|
katonic_platform_version | Preset to the Katonic Platform version being installed. | katonic_agents |
deploy_on | Cluster to be deployed on | AWS |
create_k8s_cluster | Must be set to True if an EKS cluster is not already deployed | True |
eks_version | EKS version | eg. 1.30 (1.27 and above are supported) |
cluster_name | Name of the cluster to deploy | eg. katonic-agents-platform-v5-0 |
aws_region | AWS region name | eg. us-east-1 |
private_cluster | Set "True" if opting for private cluster | True |
vpc_id | Pass VPC ID if opting for private cluster | |
subnet_1_id | Pass one of the private subnet IDs if opting for private cluster | |
subnet_2_id | Pass another private subnet ID if opting for private cluster | |
jump_server_name | Name of the jump server/jump host created for the private cluster | |
enable_exposing_genai_applications_to_internet | Set "True" if opting to expose GenAI applications to the internet | False |
public_domain_for_genai_applications | Public FQDN of domain for genai applications that will be exposed to the internet | (eg. public-chatbots.google.com) |
internal_loadbalancer | Set "True" if opting for internal loadbalancer | False |
compute_nodes.instance_type | Compute node VM size | eg. m5.2xlarge |
compute_nodes.min_count | Minimum number of compute nodes; must be at least 1 | eg. 1 |
compute_nodes.max_count | Maximum number of compute nodes; must be greater than compute_nodes.min_count | eg. 4 |
compute_nodes.os_disk_size | Compute Nodes OS Disk Size | eg. 128 GB |
vectordb_nodes.instance_type | Vectordb Node VM size | eg. m5.xlarge |
vectordb_nodes.min_count | Minimum number of Vectordb nodes; must be at least 1 | eg. 1 |
vectordb_nodes.max_count | Maximum number of Vectordb nodes; must be greater than vectordb_nodes.min_count | eg. 4 |
vectordb_nodes.os_disk_size | Vectordb Nodes OS Disk Size | eg. 128 GB |
gpu_enabled | Add GPU nodepool | True or False |
gpu_nodes.instance_type | GPU node VM size | eg. p2.xlarge |
gpu_nodes.gpu_type | Enter the type of GPU available on the machine. | eg. v100,k80,none |
gpu_nodes.min_count | Minimum number of GPU nodes | eg. 1 |
gpu_nodes.max_count | Maximum number of GPU nodes | eg. 4 |
gpu_nodes.os_disk_size | GPU Nodes OS Disk Size | eg. 512 GB |
gpu_nodes.gpu_vRAM | Enter the GPU vRAM size per node | |
gpu_nodes.gpus_per_node | Enter GPU per node count | |
enable_gpu_workspace | Set it true if you want to use GPU Workspace | True or False |
shared_storage_create | Set it true if you want to use shared storage | True or False |
genai_nfs_size | NFS storage size for GenAI | 100Gi |
workspace_timeout_interval | Set the Timeout Interval in Hours | eg: "12" |
backup_enabled | Enable backups | True or False |
s3_bucket_name | Name of the S3 bucket | eg. katonic-backup |
s3_bucket_region | Region of the S3 bucket | eg. us-east-1 |
backup_schedule | Backup schedule (cron format) | 0 0 1 * * |
backup_expiration | Backup expiration (TTL) | 2160h0m0s |
use_custom_domain | Set this to True if you want to host the Katonic platform on your custom domain. Skip if use_katonic_domain: True | True or False |
custom_domain_name | A valid domain is expected. | eg. katonic.tesla.com |
use_katonic_domain | Set this to True if you want to host the Katonic platform on the Katonic Deploy Platform domain. Skip if use_custom_domain: True | True or False |
katonic_domain_prefix | One word, lowercase letters only, no special characters | eg. tesla |
enable_pre_checks | Set this to True if you want to perform the Pre-checks | True / False |
AD_Group_Management | Set "True" to enable sign-in using Azure AD | False |
AD_CLIENT_ID | Client ID of App registered for SSO in client's Azure or Identity Provider | |
AD_CLIENT_SECRET | Client Secret of App registered for SSO in client's Azure or any other Identity Provider | |
AD_AUTH_URL | Authorization URL endpoint of app registered for SSO. | |
AD_TOKEN_URL | Token URL endpoint of app registered for SSO. | |
quay_username | Username for Quay | |
quay_password | Password for Quay | |
adminUsername | Email for admin user | eg. john@katonic.ai |
adminPassword | Password for admin user | at least 1 special character, 1 uppercase letter, and 1 lowercase letter; minimum 8 characters |
adminFirstName | Admin first name | eg. john |
adminLastName | Admin last name | eg. musk |
Installing the Katonic Platform Agents version
After configuring the katonic.yml file, run the following command to install the Katonic Platform Agents version:
docker run -it --rm --name install-katonic -v /root/.aws:/root/.aws -v $(pwd):/inventory quay.io/katonic/katonic-installer:v5.0.9
2. Installing the Katonic Platform on an existing private AWS Elastic Kubernetes Service cluster
The steps are similar to installing the Katonic Platform with a private AWS Elastic Kubernetes Service cluster; just edit the configuration file with all the details about the target cluster, storage systems, and hosting domain. Read the following configuration reference; these are the only parameters required when installing the Katonic Platform on an existing private EKS cluster.
Prerequisites
You will need to create an EBS gp3-based storage class named kfs. To do this, install and configure the EBS CSI driver in the EKS cluster, then refer to the documentation for instructions on creating the gp3-based storage class.
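A minimal sketch of such a storage class, assuming the EBS CSI driver is already installed (parameters such as IOPS, throughput, and encryption can be added per your requirements), applied with kubectl apply -f <file>:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: kfs                          # name expected by the Katonic installer
provisioner: ebs.csi.aws.com         # EBS CSI driver
parameters:
  type: gp3                          # gp3 volumes as required above
volumeBindingMode: WaitForFirstConsumer   # binds volumes in the pod's AZ
allowVolumeExpansion: true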
SINGLE NODE
Initialize the installer application to generate a template configuration file named katonic.yml.
docker run -it --rm --name generating-yaml -v $(pwd):/install quay.io/katonic/katonic-installer:v5.0.9 init aws katonic_agents single_node kubernetes_already_exists private
MULTI NODE
Initialize the installer application to generate a template configuration file named katonic.yml.
docker run -it --rm --name generating-yaml -v $(pwd):/install quay.io/katonic/katonic-installer:v5.0.9 init aws katonic_agents multi_node kubernetes_already_exists private
For both single node and multi-node clusters, the configuration template includes the following parameters:
PARAMETER | DESCRIPTION | VALUE |
---|---|---|
katonic_platform_version | Preset to the Katonic Platform version being installed. | katonic_agents |
deploy_on | Cluster to be deployed on | AWS |
single_node_cluster | Set "True" if opting for a single-node cluster | True or False |
cluster_name | Name of the deployed cluster | eg. katonic-agents-platform-v5-0 |
aws_region | Region where the EKS cluster is deployed | eg. us-east-1 |
private_cluster | Set "True" for a private cluster | True |
enable_exposing_genai_applications_to_internet | Set "True" if opting to expose GenAI applications to the internet | False |
public_domain_for_genai_applications | Public FQDN of domain for genai applications that will be exposed to the internet | (eg. public-chatbots.google.com) |
internal_loadbalancer | Set "True" if opting for internal loadbalancer | False |
genai_nfs_size | NFS storage size for GenAI | 100Gi |
workspace_timeout_interval | Set the Timeout Interval in Hours | eg: "12" |
backup_enabled | Enable backups | True or False |
s3_bucket_name | Name of the S3 bucket | eg. katonic-backup |
s3_bucket_region | Region of the S3 bucket | eg. us-east-1 |
backup_schedule | Backup schedule (cron format) | 0 0 1 * * |
backup_expiration | Backup expiration (TTL) | 2160h0m0s |
use_custom_domain | Set this to True if you want to host the Katonic platform on your custom domain. Skip if use_katonic_domain: True | True or False |
custom_domain_name | A valid domain is expected. | eg. katonic.tesla.com |
use_katonic_domain | Set this to True if you want to host the Katonic platform on the Katonic Deploy Platform domain. Skip if use_custom_domain: True | True or False |
katonic_domain_prefix | One word, lowercase letters only, no special characters | eg. tesla |
enable_pre_checks | Set this to True if you want to perform the Pre-checks | True / False |
AD_Group_Management | Set "True" to enable sign-in using Azure AD | False |
AD_CLIENT_ID | Client ID of App registered for SSO in client's Azure or Identity Provider | |
AD_CLIENT_SECRET | Client Secret of App registered for SSO in client's Azure or any other Identity Provider | |
AD_AUTH_URL | Authorization URL endpoint of app registered for SSO. | |
AD_TOKEN_URL | Token URL endpoint of app registered for SSO. | |
quay_username | Username for the Quay registry | |
quay_password | Password for the Quay registry | |
adminUsername | Email for admin user | eg. john@katonic.ai |
adminPassword | Password for admin user | at least 1 special character, 1 uppercase letter, and 1 lowercase letter; minimum 8 characters |
adminFirstName | Admin first name | eg. john |
adminLastName | Admin last name | eg. musk |
Note: In the katonic.yml template, the single_node_cluster parameter will be set to True for single-node clusters and omitted for multi-node clusters.
Installing the Katonic Platform Agents version
After configuring the katonic.yml file, run the following command to install the Katonic Platform Agents version:
docker run -it --rm --name install-katonic -v /root/.aws:/root/.aws -v $(pwd):/inventory quay.io/katonic/katonic-installer:v5.0.9
Installation Verification
The installation process can take up to 60 minutes to complete. The installer outputs verbose logs, along with commands for obtaining kubectl access to the deployed cluster, and surfaces any errors it encounters. After installation, you can use the following commands to check whether all applications are in a running state.
cd /root/katonic
aws eks --region $(cat /root/katonic/katonic.yml | grep aws_region | awk '{print $2}') update-kubeconfig --name $(cat /root/katonic/katonic.yml | grep cluster_name | awk '{print $2}')-$(cat /root/katonic/katonic.yml | grep random_value | awk '{print $2}')
kubectl get pods --all-namespaces
This shows the status of all pods created by the installation process. If you see any pods enter a crash loop or hang in a non-ready state, you can get logs from that pod by running:
kubectl logs $POD_NAME --namespace $NAMESPACE_NAME
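Two further standard checks are often useful: pod events usually explain scheduling or image-pull failures, and for a crash-looping pod the previous container's logs are more informative than the current ones:

kubectl describe pod $POD_NAME --namespace $NAMESPACE_NAME
kubectl logs $POD_NAME --namespace $NAMESPACE_NAME --previous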
If the installation completes successfully, you should see a message that says:
TASK [platform-deployment : Credentials to access Katonic Agents Platform] *******************************
ok: [localhost] => {
"msg": [
"Platform Domain: $domain_name",
"Username: $adminUsername",
"Password: $adminPassword"
]
}
However, the application will only be accessible via HTTPS at that FQDN if you have configured DNS for the name to point to an ingress load balancer with the appropriate SSL certificate that forwards traffic to your platform nodes.
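To find the load balancer hostname to point your DNS record at, you can list the LoadBalancer-type services in the cluster (the exact namespace and service name depend on your deployment):

kubectl get svc --all-namespaces | grep LoadBalancer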
Test and troubleshoot
To verify the successful installation of Katonic, perform the following tests:
If you encounter a 500 or 502 error, get access to your cluster and execute the following command:
kubectl rollout restart deploy nodelog-deploy -n application
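You can confirm the restart completed and the deployment is available again with:

kubectl rollout status deploy nodelog-deploy -n application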
Log in to the Katonic application and ensure that all the navigation panel options are operational. If this test fails, verify that Keycloak was set up properly.
Create a new project and launch a Jupyter/JupyterLab workspace. If this test fails, check that the default environment images have been loaded into the cluster.
Publish an app with Flask or Shiny. If this test fails, verify that the environment images have Flask and Shiny installed.
Deleting the Katonic platform from AWS
When you start the installation, a platform deletion script is generated in your current directory. To delete the platform, run the script:
./aws-cluster-delete.sh