Version: 0.5.x

In-cluster deployment

Deploy interLink in the local K8S cluster.


Deploy Kubernetes components

The deployment of the Kubernetes components is managed by the official Helm chart. Depending on the scenario you selected, there might be additional operations to perform.

  • Create a Helm values file:
values.yaml
nodeName: interlink-in-cluster

interlink:
  enabled: true
  address: http://localhost
  port: 3000
  logging:
    verboseLogging: true

plugin:
  enabled: true
  image: "ghcr.io/interlink-hq/interlink-sidecar-slurm/interlink-sidecar-slurm:0.5.1"
  address: "http://localhost"
  port: 4000
  privileged: true
  extraVolumeMounts:
    - name: plugin-data
      mountPath: /slurm-data
  envs:
    - name: SLURMCONFIGPATH
      value: "/etc/interlink/plugin.yaml"
    - name: SHARED_FS
      value: "true"
  config: |
    #Socket: "unix:///var/run/plugin.sock"
    ImagePrefix: "docker://"
    SidecarPort: 4000
    VerboseLogging: true
    ErrorsOnlyLogging: false
    DataRootFolder: "/slurm-data/"
    ExportPodData: true
    SbatchPath: "/usr/bin/sbatch"
    ScancelPath: "/usr/bin/scancel"
    SqueuePath: "/usr/bin/squeue"
    CommandPrefix: ""
    SingularityPrefix: ""
    Namespace: "vk"
    Tsocks: false
    TsocksPath: "$WORK/tsocks-1.8beta5+ds1/libtsocks.so"
    TsocksLoginNode: "login01"
    BashPath: /bin/bash

virtualNode:
  resources:
    CPUs: 4
    memGiB: 16
    pods: 50

extraVolumes:
  - name: plugin-data
    hostPath:
      path: /tmp/test
      type: DirectoryOrCreate
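Before installing anything, you can optionally render the chart locally against this values file to catch indentation or schema errors early. This is a sketch: it requires network access to pull the chart from the OCI registry, and X.X.X must be replaced with an actual released chart version.

```shell
# Render the chart manifests locally without touching the cluster.
# Substitute X.X.X with a released chart version before running.
helm template my-node \
  oci://ghcr.io/interlink-hq/interlink-helm-chart/interlink \
  --version "X.X.X" \
  --namespace interlink \
  --values values.yaml
```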

Then deploy the latest release of the official chart:

export INTERLINK_CHART_VERSION="X.X.X"
helm upgrade --install \
  --create-namespace \
  -n interlink \
  my-node \
  oci://ghcr.io/interlink-hq/interlink-helm-chart/interlink \
  --version $INTERLINK_CHART_VERSION \
  --values values.yaml
warning

Remember to pick a released version of the chart and put it in the INTERLINK_CHART_VERSION environment variable above.
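A small guard in your install script can catch a forgotten version before Helm runs. This is a minimal sketch using standard shell parameter expansion; the version value shown is just an example.

```shell
export INTERLINK_CHART_VERSION="0.5.1"   # example value: pin to an actual chart release

# Abort early with a clear message if the variable is empty or unset.
: "${INTERLINK_CHART_VERSION:?set INTERLINK_CHART_VERSION to a released chart version}"
echo "Using interLink chart version ${INTERLINK_CHART_VERSION}"
```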

Once the node shows as Ready, you are good to go!
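You can also wait for readiness from the command line instead of polling by eye (a sketch, assuming the nodeName from the values file above):

```shell
# Block until the virtual node reports the Ready condition (up to 5 minutes).
kubectl wait --for=condition=Ready node/interlink-in-cluster --timeout=300s

# Inspect the node's status and allocatable resources.
kubectl get node interlink-in-cluster
```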

note

You can find a demo pod to test your setup here.

To start debugging in case of problems, we suggest starting from the logs of the pod's containers!
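A first pass over those logs could look like the following. The pod name depends on your release name and chart version, so treat it as illustrative:

```shell
# List the pods deployed by the chart in the interlink namespace.
kubectl -n interlink get pods

# Tail recent logs from every container in the virtual-node pod
# (replace <pod-name> with a pod name from the listing above).
kubectl -n interlink logs <pod-name> --all-containers --tail=100
```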

Verify the setup

Test the complete setup:

# Check if node appears in Kubernetes
kubectl get nodes

# Deploy a test pod
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: test-tunnel
spec:
  nodeSelector:
    kubernetes.io/hostname: interlink-in-cluster
  tolerations:
    - key: virtual-node.interlink/no-schedule
      operator: Exists
  containers:
    - name: test
      image: busybox
      command: ["sleep", "3600"]
EOF

# Check pod status
kubectl get pod test-tunnel -o wide
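If the pod reaches Running, you can exercise it and then clean up (a sketch, assuming the test-tunnel pod created above):

```shell
# Run a command inside the test pod to confirm it is actually live.
kubectl exec test-tunnel -- echo "hello from the virtual node"

# Remove the test pod when done.
kubectl delete pod test-tunnel
```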