WS1 UEM SCIM Adapter: 20.08 Release

Release Notes

New Features:

  1. Docker containers now exclusively supported for deployments
  2. Helm chart included to facilitate Kubernetes orchestration of adapter containers

Bugs Fixed:

  1. createGroup returned an unexpected error due to a missing return payload

Download:

Check out the official VMware Fling download here.

Deployment Method

In my previous GA release post, the preferred installation method centered on a bundled install script and a Bitnami Node.js appliance. That method has been deprecated in favor of the Helm chart included in version 20.08, which is much simpler to run and, because the adapter is stateless, scales aggressively.

Deploying the application with the Helm chart breaks down into four simple steps that will feel familiar to anyone already accustomed to a Kubernetes environment:

  1. Build the Docker container and push it to a registry
  2. Deploy an Nginx ingress controller
  3. Install the Jetstack cert-manager and create a cluster issuer
  4. Install the WS1 UEM SCIM Adapter Helm chart

You’ll also want to make sure you’ve created a DNS record pointing at the public IP of the ingress controller’s Kubernetes Service, the one fronting the application. For those without a Kubernetes environment, I highly recommend GKE, but AKS or EKS will do fine as well. YMMV…

Build Docker Container

The included Dockerfile can be shipped to any build service to handle building the container and pushing it to a registry for later use by Kubernetes. I have chosen to use the Alpine base image to keep the container lightweight. As an example, here’s how you would submit the Dockerfile to Azure Container Registry:

az acr build --image scimadapter/scimadapter:20.8.1 --registry scimadapter --file Dockerfile .

The “20.8.1” tag is mission critical, as it is predefined for you in several of the Helm chart properties used later.
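
If you’d rather build and push locally, a minimal equivalent with the Docker CLI (the cr.example.com registry host is an assumption here, matching the repository referenced later in values.yaml):

docker build --tag cr.example.com/scimadapter/scimadapter:20.8.1 --file Dockerfile .
docker push cr.example.com/scimadapter/scimadapter:20.8.1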

Nginx Ingress Controller

This part is pretty simple. First, add the stable Nginx ingress-controller repository to your Helm installation and update your local chart cache.
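
A minimal sketch, assuming the stable chart repository URL in use at the time of this release:

helm repo add stable https://kubernetes-charts.storage.googleapis.com
helm repo update

Then deploy the ingress-controller to your cluster: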

helm install nginx-ingress stable/nginx-ingress
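
Once the controller is up, note the external IP assigned to its Service; that’s the address the DNS record mentioned earlier should point to. With the release name above, the chart names the Service nginx-ingress-controller (this can vary by chart version):

kubectl get service nginx-ingress-controller --watch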

Jetstack Cert-Manager

See Jetstack’s instructions for installing cert-manager in your cluster.
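
As a sketch, pinning a release contemporary with the v1alpha2 API used below (check Jetstack’s docs for the current version):

kubectl apply -f https://github.com/jetstack/cert-manager/releases/download/v0.16.1/cert-manager.yaml

Once deployed, create a ClusterIssuer manifest declaring Let’s Encrypt as your issuer: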

apiVersion: cert-manager.io/v1alpha2
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: you@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress:
          class: nginx
    privateKeySecretRef:
      name: letsencrypt-prod

Then apply the ClusterIssuer manifest to the cluster:

kubectl apply -f ./cluster-issuer.yml
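
You can confirm the issuer registered with Let’s Encrypt before moving on:

kubectl get clusterissuer letsencrypt-prod

The READY column should show True once the ACME account has been registered.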

WS1 UEM SCIM Adapter

Lastly, edit the values.yaml in the Helm chart included in the Fling with the details unique to your environment. Namely:

  1. image.repository
  2. airwatchHost
  3. airwatchApi
  4. cert-manager.io/cluster-issuer
  5. hosts.host
  6. tls.hosts

For example:

# Default values for ws1scimadapter.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.

replicaCount: 2

image:
  # repository should include an absolute reference without the version tag, e.g. cr.example.com/repo/name
  repository: cr.example.com/scimadapter/scimadapter
  pullPolicy: Always

containerPort: 9000
logLevel: info

# host should not include a protocol prefix, e.g. api.example.com
airwatchHost: api.example.com
# base64-encoded API value for your environment (see the GA post for the exact credential expected)
airwatchApi: base64==

healthCheckUri: /api/ping

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: false
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name:

podSecurityContext: {}
  # fsGroup: 2000

securityContext: {}
  # capabilities:
  #   drop:
  #   - ALL
  # readOnlyRootFilesystem: true
  # runAsNonRoot: true
  # runAsUser: 1000

service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: nginx

    # set cluster-issuer to your preferred issuer name, e.g. letsencrypt-prod
    cert-manager.io/cluster-issuer: letsencrypt-prod

    # kubernetes.io/ingress.class: nginx
    # kubernetes.io/tls-acme: "true"
  
  # set to the hostname of your scimadapter, e.g. scim.example.com
  hosts:
    - host: scim.example.com
      paths:
        - /api

  tls:
    - secretName: scim-test-secret
      hosts:
        - scim.example.com

  #  - secretName: chart-example-tls
  #    hosts:
  #      - chart-example.local

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

nodeSelector: {}

tolerations: []

affinity: {}

Confirm you’re in the Helm chart working directory, then install the chart:

helm install ws1scimadapter .
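
If you’d rather not edit values.yaml in place, the same properties can be overridden at install time; for example, using the hypothetical values from above:

helm install ws1scimadapter . --set airwatchHost=api.example.com --set "ingress.hosts[0].host=scim.example.com"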

Give the pods a few seconds to show as ‘Running’, then return to my original GA post to set up Azure AD provisioning to the service.
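
Once they do, a quick smoke test against the health endpoint declared in values.yaml will confirm routing and TLS end to end (hostname assumed from the example above):

curl https://scim.example.com/api/ping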
