Deploy TiDB Operator and a TiDB Cluster on KubeSphere

TiDB is a cloud-native, open-source NewSQL database that supports Hybrid Transactional and Analytical Processing (HTAP) workloads. It features horizontal scalability, strong consistency, and high availability.

This tutorial demonstrates how to deploy TiDB Operator and a TiDB Cluster on KubeSphere.

Prerequisites

  • You need to have at least three schedulable nodes (see the check after this list).
  • You need to enable the OpenPitrix system.
  • You need to create a workspace, a project, and two user accounts (ws-admin and project-regular) for this tutorial. The account ws-admin must be granted the role of workspace-admin in the workspace, and the account project-regular must be invited to the project with the role of operator. If they are not ready, refer to Create Workspaces, Projects, Accounts and Roles.
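
To quickly confirm that you have enough schedulable nodes, you can run the following with kubectl (a minimal check; node names will differ in your environment):

    kubectl get nodes
    # Nodes marked SchedulingDisabled or carrying a NoSchedule taint are not schedulable;
    # inspect a node's taints with:
    kubectl describe node <node-name> | grep Taints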

Hands-on Lab

Step 1: Install TiDB Operator CRDs

  1. Log in to the KubeSphere web console as admin, open Kubectl from the Toolbox in the bottom-right corner, and run the following command to install the TiDB Operator CRDs:

    kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.6/manifests/crd.yaml
    
  2. The expected output is as follows:

    customresourcedefinition.apiextensions.k8s.io/tidbclusters.pingcap.com created
    customresourcedefinition.apiextensions.k8s.io/backups.pingcap.com created
    customresourcedefinition.apiextensions.k8s.io/restores.pingcap.com created
    customresourcedefinition.apiextensions.k8s.io/backupschedules.pingcap.com created
    customresourcedefinition.apiextensions.k8s.io/tidbmonitors.pingcap.com created
    customresourcedefinition.apiextensions.k8s.io/tidbinitializers.pingcap.com created
    customresourcedefinition.apiextensions.k8s.io/tidbclusterautoscalers.pingcap.com created
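
    To double-check that the CRDs were registered, you can run the following (the exact list depends on the TiDB Operator version):

    kubectl get crd | grep pingcap.com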
    

Step 2: Add an app repository

  1. Log out of KubeSphere and log back in as ws-admin. In your workspace, go to App Repos under Apps Management, and then click Add Repo.

    add-repo

  2. In the dialog that appears, enter pingcap for the app repository name and https://charts.pingcap.org for the PingCAP Helm repository URL. Click Validate to verify the URL; a green check mark appears next to the URL if it is available. Click OK to continue.

    add-pingcap-repo

  3. The repository appears in the list after it is successfully imported into KubeSphere.

    added-pingcap-repo
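
If you prefer the command line, the same chart repository can be added with Helm (an equivalent sketch assuming Helm 3; not required if you added it from the console above):

    helm repo add pingcap https://charts.pingcap.org
    helm repo update
    helm search repo pingcap    # list the charts available in this repository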

Step 3: Deploy TiDB Operator

  1. Log out of KubeSphere and log back in as project-regular. In your project, go to Applications under Application Workloads and click Deploy New Application.

    deploy-app

  2. In the dialog that appears, select From App Templates.

    from-app-templates

  3. Select pingcap from the drop-down list, then click tidb-operator.

    click-tidb-operator

    Note

    This tutorial only demonstrates how to deploy TiDB Operator and a TiDB cluster. You can also deploy other tools based on your needs.
  4. On the Chart Files tab, you can view the configuration directly in the console or download the default values.yaml file by clicking the icon in the upper-right corner. Under Versions, select a version number from the drop-down list and click Deploy.

    select-version

  5. On the Basic Info page, confirm the app name, app version, and deployment location. Click Next to continue.

    basic-info

  6. On the App Config page, you can either edit the values.yaml file or click Deploy to use the default configuration.

    check-config-file

  7. Wait for TiDB Operator to be up and running.

    tidb-operator-running

  8. Go to Workloads, and you can see the two Deployments created for TiDB Operator. You can also verify them from the command line, as shown below.

    tidb-deployment
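
    The Deployment names below assume the chart's defaults; replace <project-namespace> with the name of your project:

    kubectl -n <project-namespace> get deployments
    # Expect tidb-controller-manager and tidb-scheduler in the output.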

Step 4: Deploy a TiDB cluster

The process of deploying a TiDB cluster is similar to deploying TiDB Operator.

  1. Go to Applications under Application Workloads, click Deploy New Application again, and then select From App Templates.

    deploy-app-again

    from-app-templates-2

  2. From the PingCAP repository, click tidb-cluster.

    click-tidb-cluster

  3. On the Chart Files tab, you can view the configuration and download the values.yaml file. Click Deploy to continue.

    download-yaml-file

  4. On the Basic Info page, confirm the app name, app version, and deployment location. Click Next to continue.

    tidb-cluster-info

  5. Some TiDB components require persistent volumes. You can run the following command to view your storage classes.

    / # kubectl get sc
    NAME                       PROVISIONER     RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    csi-high-capacity-legacy   csi-qingcloud   Delete          Immediate           true                   71m
    csi-high-perf              csi-qingcloud   Delete          Immediate           true                   71m
    csi-ssd-enterprise         csi-qingcloud   Delete          Immediate           true                   71m
    csi-standard (default)     csi-qingcloud   Delete          Immediate           true                   71m
    csi-super-high-perf        csi-qingcloud   Delete          Immediate           true                   71m
    
  6. On the App Config page, change every default value of the field storageClassName from local-storage to the name of one of your storage classes. For example, based on the output above, you can change them to csi-standard, as in the excerpt below.

    tidb-cluster-config
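
    The relevant part of values.yaml then looks roughly like the following excerpt (illustrative only; the exact structure may vary between chart versions, and storageClassName appears under several components):

    # values.yaml (excerpt) -- point each component at an existing storage class
    pd:
      storageClassName: csi-standard
    tikv:
      storageClassName: csi-standard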

    Note

    Only the field storageClassName is changed here, to provide external persistent storage. If you want to deploy individual TiDB components, such as TiKV and Placement Driver (PD), to dedicated nodes, specify the field nodeAffinity.
  7. Click Deploy and you can see two apps in the list as shown below:

    tidb-cluster-app-running

Step 5: View TiDB cluster status

  1. Go to Workloads under Application Workloads, and verify that all TiDB cluster Deployments are up and running.

    tidb-cluster-deployments-running

  2. Switch to the StatefulSets tab, and you can see that TiDB, TiKV, and PD are up and running.

    tidb-statefulsets

    Note

    The TiKV and TiDB StatefulSets are created automatically, and it may take a while before they appear in the list.
  3. Click a StatefulSet to go to its detail page. Under the Monitoring tab, you can see its metrics plotted as line charts over time.

    TiDB metrics:

    tidb-metrics

    TiKV metrics:

    tikv-metrics

    PD metrics:

    pd-metrics

  4. In Pods under Application Workloads, you can see that the TiDB cluster contains two TiDB Pods, three TiKV Pods, and three PD Pods. You can list them from the command line as shown below.

    tidb-pod-list
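
    Pod names depend on the app name you chose when deploying the cluster; those in the comment are examples:

    kubectl -n <project-namespace> get pods
    # Expect Pods such as <app-name>-tidb-0, <app-name>-tikv-0, and <app-name>-pd-0.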

  5. In Volumes under Storage, you can see that TiKV and PD are using persistent volumes. You can also list the claims as shown below.

    tidb-storage-usage
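
    Each TiKV and PD Pod is backed by a PersistentVolumeClaim, which you can list with:

    kubectl -n <project-namespace> get pvc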

  6. Volume usage is also monitored. Click a volume item to go to its detail page. Here is an example for TiKV:

    tikv-volume-status

  7. On the Overview page of the project, you can see a summary of resource usage in the current project.

    tidb-project-resource-usage

Step 6: Access the TiDB cluster

  1. Go to Services under Application Workloads to view detailed information about all Services. As the Service type is set to NodePort by default, you can access the TiDB cluster from outside the Kubernetes cluster through the node IP address, as shown below.

    tidb-service
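
    Because TiDB speaks the MySQL protocol, you can connect with any MySQL client once you know a node IP address and the NodePort of the tidb Service (a sketch; by default the root user has an empty password unless you set one in values.yaml):

    kubectl -n <project-namespace> get svc
    # Find the tidb Service and note the NodePort mapped to port 4000, then:
    mysql -h <NodeIP> -P <NodePort> -u root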

  2. TiDB integrates Prometheus and Grafana to monitor the performance of the database cluster. For example, you can access Grafana at {NodeIP}:{NodePort} to view metrics.

    tidb-service-grafana

    tidb-grafana

    Note

    You may need to open the port in your security groups and configure related port forwarding rules depending on where your Kubernetes cluster is deployed.