In this final part of the Azure Arc series, we will deploy the data controller followed by PostgreSQL-Hyperscale.
Though there are multiple techniques available for deploying Azure Arc enabled data services, we are using the native Kubernetes deployment model.
This article assumes that you have a Kubernetes cluster running version 1.17 or above, with a storage class named local-storage configured. I am using PX-Essentials, the free storage option from Portworx by Pure Storage, as the storage layer. You are free to use any Kubernetes-compatible storage engine.
Azure Arc enabled data services rely on a data controller for lifecycle management. All the objects of this service are deployed as Custom Resource Definitions (CRDs). You need Kubernetes cluster administrator permissions for this deployment.
Installing the Data Controller
Let’s start by deploying the required CRDs:
```shell
kubectl create -f https://raw.githubusercontent.com/microsoft/azure_arc/master/arc_data_services/deploy/yaml/custom-resource-definitions.yaml
```
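If the command succeeds, the new definitions can be confirmed before moving on (a quick sanity check, assuming kubectl is pointed at the target cluster):

```shell
# the Arc data services CRDs live under the arcdata.microsoft.com group
kubectl get crds | grep arcdata.microsoft.com
```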
Azure Arc enabled data services are typically installed within a namespace called arc. Let's create that:
```shell
kubectl create namespace arc
```
The next step is to deploy a bootstrapper that handles incoming requests for creating, editing, and deleting custom resources:
```shell
kubectl create --namespace arc -f https://raw.githubusercontent.com/microsoft/azure_arc/master/arc_data_services/deploy/yaml/bootstrapper.yaml
```
You should now have the bootstrapper up and running in the arc namespace.
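You can verify this with a quick pod listing (the bootstrapper pod name carries a generated suffix):

```shell
# the bootstrapper pod should report a Running status
kubectl get pods -n arc
```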
We have to create a secret that holds the username and password of the data controller. On macOS, you can run the below commands to generate base64-encoded strings for the username and password:
```shell
# prints YWRtaW4=
echo "admin" | tr -d '\n' | base64
```
```shell
# prints UGFzc3dvcmRAMTIz
echo "Password@123" | tr -d '\n' | base64
```
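The tr step is only there to strip the trailing newline that echo emits; printf is a simpler way to get the same result:

```shell
# printf writes no trailing newline, so no tr is needed
printf 'admin' | base64          # YWRtaW4=
printf 'Password@123' | base64   # UGFzc3dvcmRAMTIz
```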
Take the values from the above commands to create a secret:
```yaml
apiVersion: v1
data:
  password: UGFzc3dvcmRAMTIz
  username: YWRtaW4=
kind: Secret
metadata:
  name: controller-login-secret
```
Save the specification as controller-login-secret.yaml and apply it:

```shell
kubectl create -f controller-login-secret.yaml
```
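To double-check the stored values, the secret can be read back and decoded (base64 --decode is accepted by both the macOS and GNU versions of the tool):

```shell
# read the username field back out of the secret and decode it;
# it should print admin
kubectl get secret controller-login-secret -o jsonpath='{.data.username}' | base64 --decode
```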
Download the data controller YAML file and modify it to reflect your connectivity and storage options:
```yaml
apiVersion: arcdata.microsoft.com/v1alpha1
kind: datacontroller
metadata:
  generation: 1
  name: arc
spec:
  credentials:
    controllerAdmin: controller-login-secret
    serviceAccount: sa-mssql-controller
  docker:
    imagePullPolicy: Always
    imageTag: public-preview-sep-2020
    registry: mcr.microsoft.com
    repository: arcdata
  security:
    allowDumps: true
    allowNodeMetricsCollection: true
    allowPodMetricsCollection: true
    allowRunAsRoot: false
  services:
  - name: controller
    port: 30080
    serviceType: LoadBalancer
  - name: serviceProxy
    port: 30777
    serviceType: LoadBalancer
  settings:
    ElasticSearch:
      vm.max_map_count: "-1"
    azure:
      connectionMode: Indirect
      location: westeurope
      resourceGroup:
      subscription:
    controller:
      displayName: arc
      enableBilling: "True"
      logs.rotation.days: "7"
      logs.rotation.size: "5000"
  storage:
    data:
      accessMode: ReadWriteOnce
      className: local-storage
      size: 15Gi
    logs:
      accessMode: ReadWriteOnce
      className: local-storage
      size: 10Gi
```
Update the template with an appropriate resource group, subscription ID, and storage class name. Apply the data controller specification:
```shell
kubectl apply -n arc -f data-controller.yaml
```
The controller is exposed through a LoadBalancer service. Find the IP address and port of the service:
```shell
kubectl get svc controller-svc-external -n arc
```
We can now log in to the controller with the azdata tool. Run the below commands to install the latest version of the Azure Arc enabled data services CLI:
```shell
brew tap microsoft/azdata-cli-release
brew update
brew install azdata-cli
```
Running azdata login will prompt us for the details.
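If you prefer a non-interactive login, azdata can also pick the credentials up from the environment; the sketch below assumes the AZDATA_USERNAME and AZDATA_PASSWORD variables described in the azdata documentation and the arc namespace used throughout this article:

```shell
# export the controller credentials so azdata skips the prompts
export AZDATA_USERNAME=admin
export AZDATA_PASSWORD=Password@123

# log in against the data controller deployed in the arc namespace
azdata login -ns arc
```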
Now that the controller is in place, we are ready to deploy PostgreSQL Hyperscale.
Installing PostgreSQL Hyperscale Instance
Start by downloading the YAML template file from the official Microsoft GitHub repository. Modify it based on the values of your storage class. Set the password value to a base64-encoded string.
The following specification has a secret holding the password Password@123 in base64-encoded form, with the storage class pointed to local-storage:
```yaml
apiVersion: v1
data:
  password: UGFzc3dvcmRAMTIz
kind: Secret
metadata:
  name: pgsql-login-secret
type: Opaque
---
apiVersion: arcdata.microsoft.com/v1alpha1
kind: postgresql-12
metadata:
  generation: 1
  name: pgsql
spec:
  engine:
    extensions:
    - name: citus
  scale:
    shards: 3
  scheduling:
    default:
      resources:
        limits:
          cpu: "4"
          memory: 4Gi
        requests:
          cpu: "1"
          memory: 2Gi
  service:
    type: LoadBalancer
  storage:
    backups:
      className: local-storage
      size: 10Gi
    data:
      className: local-storage
      size: 10Gi
    logs:
      className: local-storage
      size: 5Gi
```
Apply the specification with the below kubectl command:
```shell
kubectl apply -n arc -f pgsql.yaml
```
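Provisioning takes a few minutes; the state of the instance can also be tracked from the controller (assuming you are logged in with azdata):

```shell
# list the PostgreSQL Hyperscale server groups and their state;
# the pgsql instance should eventually report Ready
azdata arc postgres server list
```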
In a few minutes, you will see four new pods belonging to PostgreSQL Hyperscale added to the arc namespace:
```shell
kubectl get pods -l type=postgresql -n arc
```
The deployment is exposed through a service that can be used to access the database:
```shell
kubectl get svc pgsql-external-svc -n arc
```
We can also use azdata to get the PostgreSQL endpoint:
```shell
azdata arc postgres endpoint list -n pgsql
```
We can now log in to PostgreSQL using any client tool. The below screenshot shows the psql CLI accessing the database instance:
```shell
PGPASSWORD=Password@123 psql -h 10.0.0.203 -U postgres
```
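Since the citus extension is enabled in the specification, one way to confirm that the worker shards are wired up is to ask the coordinator for its active workers. The IP address below is the example endpoint from the previous step, and master_get_active_worker_nodes is the function exposed by the Citus releases current at the time of writing:

```shell
# ask the Citus coordinator which worker nodes it knows about;
# with shards: 3 in the spec, three workers should be listed
PGPASSWORD=Password@123 psql -h 10.0.0.203 -U postgres \
  -c "SELECT * FROM master_get_active_worker_nodes();"
```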
This tutorial walked you through the steps of deploying Azure Arc enabled data services on Kubernetes.
Portworx is a sponsor of InApps Technology.