This chapter describes how to deploy SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS), using Amazon's Elastic Load Balancer to provide fault-tolerant access to your cluster.
The following are required to deploy and use SUSE Cloud Application Platform on EKS:
An Amazon AWS account with sufficient permissions. For details, refer to https://docs.aws.amazon.com/eks/latest/userguide/security-iam.html.
eksctl, a command line client to create and manage
Kubernetes clusters on Amazon EKS. See
https://docs.aws.amazon.com/eks/latest/userguide/getting-started-eksctl.html
for more information and installation instructions.
cf, the Cloud Foundry command line interface. For more information,
see https://docs.cloudfoundry.org/cf-cli/.
For SUSE Linux Enterprise and openSUSE systems, install using zypper.
tux > sudo zypper install cf-cli
For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.
tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64
For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html.
kubectl, the Kubernetes command line tool. For more
information, refer to
https://kubernetes.io/docs/reference/kubectl/overview/.
For SLE 12 SP3 or 15 SP1 systems, install the package kubernetes-client from the Public Cloud module.
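For example, assuming the Public Cloud module is enabled:
tux > sudo zypper install kubernetes-client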
For other systems, follow the instructions at https://kubernetes.io/docs/tasks/tools/install-kubectl/.
curl, the Client URL (cURL) command line tool.
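On SUSE systems, curl can be installed with zypper if it is not already present:
tux > sudo zypper install curl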
Now you can create an EKS cluster using eksctl. Be sure to
keep in mind the following minimum requirements of the cluster.
Node sizes are at least t3.xlarge.
The NodeVolumeSize must be a minimum of 100 GB.
The Kubernetes version is at least 1.14.
As a minimal example, the following command will create an EKS cluster. To
see additional configuration parameters, see eksctl create cluster --help.
tux > eksctl create cluster --name kubecf --version 1.14 \
--nodegroup-name standard-workers --node-type t3.xlarge \
--nodes 3 --node-volume-size 100 \
--region us-east-2 --managed \
--ssh-access --ssh-public-key /path/to/some_key.pub
Helm is a Kubernetes package manager used to install and manage SUSE Cloud Application Platform.
This requires installing the Helm client, helm, on your
remote management workstation. Cloud Application Platform requires Helm 3.
For more information regarding Helm, refer to the documentation at
https://helm.sh/docs/.
Make sure that you are installing and using Helm 3 and not Helm 2.
If your remote management workstation has the SUSE CaaS Platform package repository,
install helm by running
tux > sudo zypper install helm3
tux > sudo update-alternatives --set helm /usr/bin/helm3
Otherwise, helm can be installed by referring to the
documentation at https://helm.sh/docs/intro/install/.
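Once installed, you can confirm that the helm binary on your path is Helm 3 by checking the client version:
tux > helm version --short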
Some SUSE Cloud Application Platform instance groups, such as bits,
database, diego-cell, and
singleton-blobstore, require a storage class for persistent
data. To learn more about storage classes, see
https://kubernetes.io/docs/concepts/storage/storage-classes/.
By default, SUSE Cloud Application Platform will use the cluster's default storage class. To designate or change the default storage class, refer to https://kubernetes.io/docs/tasks/administer-cluster/change-default-storage-class/ for instructions.
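As a sketch, an existing storage class can be marked as the default by patching its annotation. The storage class name gp2 below is only an example; use a name reported by kubectl get storageclass on your cluster.
tux > kubectl patch storageclass gp2 \
--patch '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'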
In some cases, the default and predefined storage classes may not be suitable for certain workloads. If this is the case, operators can define their own custom StorageClass resource according to the specification at https://kubernetes.io/docs/concepts/storage/storage-classes/#the-storageclass-resource.
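The following is a minimal sketch of such a my-storage-class.yaml for an EKS cluster, assuming the in-tree AWS EBS provisioner; adjust the provisioner and parameters to your environment.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-storage-class
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Delete
allowVolumeExpansion: true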
With the storage class defined, run:
tux > kubectl create --filename my-storage-class.yaml
Then verify the storage class is available by running
tux > kubectl get storageclass
If operators do not want to use the default storage class or one does not
exist, a storage class must be specified by
setting the kube.storage_class value in your
kubecf-config-values.yaml configuration file to the name of the storage class as seen
in this example.
kube:
  storage_class: my-storage-class
Use this example kubecf-config-values.yaml as a template
for your configuration.
The format of the kubecf-config-values.yaml file has been restructured completely in
Cloud Application Platform 2.x. Do not re-use the Cloud Application Platform 1.x version of the file. Instead, see the
default file in the appendix in
Section A.1, “Complete suse/kubecf values.yaml File” and pick parameters according to
your needs.
When selecting a domain, SUSE Cloud Application Platform expects system_domain to
be either a subdomain or a root domain. Setting system_domain to
a top-level domain, such as suse, is not supported.
### Example deployment configuration file
### kubecf-config-values.yaml
system_domain: example.com
credentials:
cf_admin_password: changeme
uaa_admin_client_secret: alsochangeme
### This block is required due to the log-cache issue described below
properties:
log-cache:
log-cache:
memory_limit_percent: 3
### This block is required due to the log-cache issue described below
###
### The value for key may need to be replaced depending on
### how nodes in your cluster are labeled
###
### The value(s) listed under values may need to be
### replaced depending on how nodes in your cluster are labeled
operations:
inline:
- type: replace
path: /instance_groups/name=log-cache/env?/bosh/agent/settings/affinity
value:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: kubernetes.io/hostname
operator: In
values:
- LABEL_VALUE_OF_NODE
The log-cache component currently has a memory allocation issue where the available node memory is reported instead of the memory assigned to the container under cgroups. In such a situation, log-cache allocates memory based on the node values, causing a range of issues (OOMKills, performance degradation, and so on). To address this issue, node affinity must be used to tie log-cache to nodes of a uniform size, with the cache percentage then declared based on that size. A limit of 3% has been identified as sufficient.
In the node affinity configuration, the values for key and
values may need to be changed depending on how nodes in your
cluster are labeled. For more information on labels, see
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels.
Note that the diego-cell pods used by the Diego standard scheduler:
are privileged,
use large local emptyDir volumes (that is, they require node disk storage),
and set kernel parameters on the node.
For these reasons, diego-cell pods should not run next to other Kubernetes workloads. Where possible, place them on their own dedicated nodes.
This can be done by setting affinities and tolerations, as explained in the associated tutorial at https://kubecf.io/docs/deployment/affinities-and-tolerations/.
This section describes the process to secure traffic passing through your SUSE Cloud Application Platform deployment. This is achieved by using certificates to set up Transport Layer Security (TLS) for the router component. Providing certificates for the router traffic is optional. In a default deployment, without operator-provided certificates, generated certificates will be used.
Ensure the certificates you use have the following characteristics:
The certificate is encoded in the PEM format.
The certificate is signed by an external Certificate Authority (CA).
The certificate's Subject Alternative Names (SAN) include the domain
*.example.com, where example.com
is replaced with the system_domain in your
kubecf-config-values.yaml.
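As a quick sanity check, the SAN entries of a PEM certificate can be inspected with openssl. The file name router.crt below is only an example.
tux > openssl x509 -in router.crt -noout -text | grep -A 1 'Subject Alternative Name'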
The certificate used to secure your deployment is passed through the
kubecf-config-values.yaml configuration file. To specify
a certificate, set the value of the certificate and its corresponding private
key using the router.tls.crt and
router.tls.key Helm values in the
settings: section.
settings:
router:
tls:
crt: |
-----BEGIN CERTIFICATE-----
MIIEEjCCAfoCCQCWC4NErLzy3jANBgkqhkiG9w0BAQsFADBGMQswCQYDVQQGEwJD
QTETMBEGA1UECAwKU29tZS1TdGF0ZTEOMAwGA1UECgwFTXlPcmcxEjAQBgNVBAMM
CU15Q0Euc2l0ZTAeFw0xODA5MDYxNzA1MTRaFw0yMDAxMTkxNzA1MTRaMFAxCzAJ
...
xtNNDwl2rnA+U0Q48uZIPSy6UzSmiNaP3PDR+cOak/mV8s1/7oUXM5ivqkz8pEJo
M3KrIxZ7+MbdTvDOh8lQplvFTeGgjmUDd587Gs4JsormqOsGwKd1BLzQbGELryV9
1usMOVbUuL8mSKVvgqhbz7vJlW1+zwmrpMV3qgTMoHoJWGx2n5g=
-----END CERTIFICATE-----
key: |
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEAm4JMchGSqbZuqc4LdryJpX2HnarWPOW0hUkm60DL53f6ehPK
T5Dtb2s+CoDX9A0iTjGZWRD7WwjpiiuXUcyszm8y9bJjP3sIcTnHWSgL/6Bb3KN5
G5D8GHz7eMYkZBviFvygCqEs1hmfGCVNtgiTbAwgBTNsrmyx2NygnF5uy4KlkgwI
...
GORpbQKBgQDB1/nLPjKxBqJmZ/JymBl6iBnhIgVkuUMuvmqES2nqqMI+r60EAKpX
M5CD+pq71TuBtbo9hbjy5Buh0+QSIbJaNIOdJxU7idEf200+4anzdaipyCWXdZU+
MPdJf40awgSWpGdiSv6hoj0AOm+lf4AsH6yAqw/eIHXNzhWLRvnqgA==
-----END RSA PRIVATE KEY-----
This section describes how to use an ingress controller (see https://kubernetes.io/docs/concepts/services-networking/ingress/) to manage access to the services in the cluster. Using an ingress controller is optional. In a default deployment, load balancers are used instead.
Note that only the NGINX Ingress Controller has been verified to be compatible with Cloud Application Platform. Other Ingress controller alternatives may work, but compatibility with Cloud Application Platform is not supported.
Create a configuration file with the section below. The file is called
nginx-ingress.yaml in this example. When using
Eirini instead of Diego, replace the 2222 mapping with
2222: "kubecf/eirinix-ssh-proxy:2222".
tcp:
  2222: "kubecf/scheduler:2222"
  20000: "kubecf/tcp-router:20000"
  20001: "kubecf/tcp-router:20001"
  20002: "kubecf/tcp-router:20002"
  20003: "kubecf/tcp-router:20003"
  20004: "kubecf/tcp-router:20004"
  20005: "kubecf/tcp-router:20005"
  20006: "kubecf/tcp-router:20006"
  20007: "kubecf/tcp-router:20007"
  20008: "kubecf/tcp-router:20008"
Create the namespace.
tux > kubectl create namespace nginx-ingress
Install the NGINX Ingress Controller.
tux > helm install nginx-ingress suse/nginx-ingress \
--namespace nginx-ingress \
--values nginx-ingress.yaml
Monitor the progress of the deployment:
tux > watch --color 'kubectl get pods --namespace nginx-ingress'
After the deployment completes, the Ingress controller service will be deployed with either an external IP or a hostname.
Find the external IP or hostname.
tux > kubectl get services nginx-ingress-controller --namespace nginx-ingress
You will get output similar to the following.
NAME                        TYPE           CLUSTER-IP     EXTERNAL-IP      PORT(S)
nginx-ingress-controller    LoadBalancer   10.63.248.70   35.233.191.177   80:30344/TCP,443:31386/TCP
Set up DNS records corresponding to the controller service IP or hostname
and map it to the system_domain defined in your
kubecf-config-values.yaml.
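Once the DNS records have propagated, you can spot-check that names under the system domain resolve, for example with the host command (replace example.com with your system_domain):
tux > host api.example.com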
Obtain a PEM-formatted certificate that is associated with the
system_domain defined in your kubecf-config-values.yaml.
In your kubecf-config-values.yaml configuration file, enable the ingress feature and
set the tls.crt and tls.key for the
certificate from the previous step.
features:
ingress:
enabled: true
tls:
crt: |
-----BEGIN CERTIFICATE-----
MIIE8jCCAtqgAwIBAgIUT/Yu/Sv8AUl5zHXXEKCy5RKJqmYwDQYJKoZIhvcMOQMM
[...]
xC8x/+zB7XlvcRJRio6kk670+25ABP==
-----END CERTIFICATE-----
key: |
-----BEGIN RSA PRIVATE KEY-----
MIIE8jCCAtqgAwIBAgIUSI02lj2b2ImLy/zMrjNgW5d8EygwQSVJKoZIhvcYEGAW
[...]
to2WV7rPMb9W9fd2vVUXKKHTc+PiNg==
-----END RSA PRIVATE KEY-----
This feature requires SUSE Cloud Application Platform 2.0.1 or newer.
Operators can set affinity/anti-affinity rules to restrict how the scheduler determines the placement of a given pod on a given node. This can be achieved through node affinity/anti-affinity, where placement is determined by node labels (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#node-affinity), or pod affinity/anti-affinity, where pod placement is determined by labels on pods that are already running on the node (see https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#inter-pod-affinity-and-anti-affinity).
In SUSE Cloud Application Platform, a default configuration will have the following affinity/anti-affinity rules already in place:
Instance groups have anti-affinity against themselves. This applies to all
instance groups, including database, but not to the
bits, eirini, and
eirini-extensions subcharts.
The diego-cell and router instance
groups have anti-affinity against each other.
Note that to ensure an optimal spread of the pods across worker nodes, we
recommend running 5 or more worker nodes to satisfy both of the default
anti-affinity constraints. An operator can also specify custom affinity rules
via the sizing.INSTANCE_GROUP.affinity
Helm parameter; any affinity rules specified there overwrite the
default rule rather than merging with it.
To add or override affinity/anti-affinity settings, add a
sizing.INSTANCE_GROUP.affinity block to your
kubecf-config-values.yaml. Repeat as necessary for each instance group where
affinity/anti-affinity settings need to be applied. For information on
the available fields and valid values within the affinity:
block, see
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#affinity-and-anti-affinity.
Example 1, node affinity.
Using this configuration, the Kubernetes scheduler would place both the
asactors and asapi instance groups on a
node with a label where the key is
topology.kubernetes.io/zone and the value is
0.
sizing:
asactors:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: topology.kubernetes.io/zone
operator: In
values:
- 0
asapi:
affinity:
nodeAffinity:
requiredDuringSchedulingIgnoredDuringExecution:
nodeSelectorTerms:
- matchExpressions:
- key: topology.kubernetes.io/zone
operator: In
values:
- 0
Example 2, pod anti-affinity.
sizing:
api:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: quarks.cloudfoundry.org/quarks-statefulset-name
operator: In
values:
- sample_group
topologyKey: kubernetes.io/hostname
database:
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: quarks.cloudfoundry.org/quarks-statefulset-name
operator: In
values:
- sample_group
topologyKey: kubernetes.io/hostname
Example 1 above uses topology.kubernetes.io/zone as its
label, which is one of the standard labels that get attached to nodes by
default. The list of standard labels can be found at
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#built-in-node-labels.
In addition to the standard labels, custom labels can be specified as in Example 2. To use custom labels, follow the process described at https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector.
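For reference, a custom label can be attached to a node with kubectl; the node name and the disktype=ssd label below are placeholders:
tux > kubectl label nodes NODE_NAME disktype=ssd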
High availability mode is optional. In a default deployment, SUSE Cloud Application Platform is deployed in single availability mode.
There are two ways to make your SUSE Cloud Application Platform deployment highly available.
The first method is to set the high_availability parameter
in your deployment configuration file to true. The second
method is to create custom configuration files with your own sizing values.
The sizing: section in the Helm
values.yaml files for the kubecf chart
describes which roles can be scaled, and the scaling options for each role.
You may use helm inspect to read the
sizing: section in the Helm chart:
tux > helm inspect values suse/kubecf | less +/sizing:
Another way is to use Perl to extract the information for each role from
the sizing: section.
tux > helm inspect values suse/kubecf | \
perl -ne '/^sizing/..0 and do { print $.,":",$_ if /^ [a-z]/ || /high avail|scale|count/ }'
The default values.yaml files are also included in
this guide at Section A.1, “Complete suse/kubecf values.yaml File”.
high_availability Helm Property
One way to make your SUSE Cloud Application Platform deployment highly available is
to use the high_availability Helm property. In your
kubecf-config-values.yaml, set this property to
true. This changes the size of all roles to the minimum
required for a highly available deployment. Your configuration file,
kubecf-config-values.yaml, should include the following.
high_availability: true
When sizing values are specified, they take precedence over the high_availability property.
Another method to make your SUSE Cloud Application Platform deployment highly available is to explicitly configure the instance count of an instance group.
When sizing values are specified, they take precedence over the high_availability property.
To see the full list of configurable instance groups, refer to default
KubeCF values.yaml file in the appendix at
Section A.1, “Complete suse/kubecf values.yaml File”.
The following is an example High Availability configuration. The example values are not meant to be copied, as these depend on your particular deployment and requirements.
sizing:
adapter:
instances: 2
api:
instances: 2
asactors:
instances: 2
asapi:
instances: 2
asmetrics:
instances: 2
asnozzle:
instances: 2
auctioneer:
instances: 2
bits:
instances: 2
cc_worker:
instances: 2
credhub:
instances: 2
database:
instances: 1
diego_api:
instances: 2
diego_cell:
instances: 2
doppler:
instances: 2
eirini:
instances: 3
log_api:
instances: 2
nats:
instances: 2
router:
instances: 2
routing_api:
instances: 2
scheduler:
instances: 2
uaa:
instances: 2
tcp_router:
instances: 2
Cloud Foundry Application Runtime (CFAR) uses a blobstore (see https://docs.cloudfoundry.org/concepts/cc-blobstore.html) to store the source code that developers push, stage, and run. This section explains how to configure an external blobstore for the Cloud Controller component of your SUSE Cloud Application Platform deployment. Using an external blobstore is optional. In a default deployment, an internal blobstore is used.
SUSE Cloud Application Platform relies on ops files (see
https://github.com/cloudfoundry/cf-deployment/blob/master/operations/README.md)
provided by cf-deployment (see https://github.com/cloudfoundry/cf-deployment)
releases for external blobstore configurations. The default configuration for
the blobstore is singleton.
Currently SUSE Cloud Application Platform supports Amazon Simple Storage Service (Amazon S3, see https://aws.amazon.com/s3/) as an external blobstore.
Using the Amazon S3 service, create four buckets: one each for app packages, buildpacks, droplets, and resources. For instructions on how to create Amazon S3 buckets, see https://docs.aws.amazon.com/AmazonS3/latest/user-guide/create-bucket.html.
To grant proper access to the created buckets, configure an additional IAM role as described in the first step of https://docs.cloudfoundry.org/deploying/common/cc-blobstore-config.html#fog-aws-iam.
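As a sketch, the buckets can also be created with the AWS CLI, assuming it is installed and configured with sufficient permissions; the bucket names are placeholders matching the configuration below:
tux > aws s3 mb s3://APP-BUCKET-NAME --region us-east-1
tux > aws s3 mb s3://BUILDPACK-BUCKET-NAME --region us-east-1
tux > aws s3 mb s3://DROPLET-BUCKET-NAME --region us-east-1
tux > aws s3 mb s3://RESOURCE-BUCKET-NAME --region us-east-1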
Set the following in your kubecf-config-values.yaml file and replace the
example values.
features:
blobstore:
provider: s3
s3:
aws_region: "us-east-1"
blobstore_access_key_id: AWS-ACCESS-KEY-ID
blobstore_secret_access_key: AWS-SECRET-ACCESS-KEY
# User provided value for the blobstore admin password.
blobstore_admin_users_password: PASSWORD
# The following values are used as S3 bucket names. The buckets are automatically created if not present.
app_package_directory_key: APP-BUCKET-NAME
buildpack_directory_key: BUILDPACK-BUCKET-NAME
droplet_directory_key: DROPLET-BUCKET-NAME
resource_directory_key: RESOURCE-BUCKET-NAME
SUSE Cloud Application Platform can be configured to use an external database system, such as a data service offered by a cloud service provider or an existing high availability database server. In a default deployment, an internal single availability database is used.
To configure your deployment to use an external database, please follow the instructions below.
The current SUSE Cloud Application Platform release is compatible with the following types and versions of external databases:
MySQL 5.7
This section describes how to enable and configure your deployment to connect
to an external database. The configuration options are specified through
Helm values inside the kubecf-config-values.yaml. The
deployment and configuration of the external database itself is the
responsibility of the operator and beyond the scope of this documentation. It
is assumed that the external database has been deployed and is accessible.
Configuration of SUSE Cloud Application Platform to use an external database must be done during the initial installation and cannot be changed afterwards.
All the databases listed in the config snippet below need to exist before
installing KubeCF. One way of doing that is manually running
CREATE DATABASE IF NOT EXISTS
database-name for each database.
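For the example configuration below, this amounts to statements along the following lines, run against the external MySQL server (a sketch; adjust the database names to your configuration):
CREATE DATABASE IF NOT EXISTS uaa;
CREATE DATABASE IF NOT EXISTS cloud_controller;
CREATE DATABASE IF NOT EXISTS diego;
CREATE DATABASE IF NOT EXISTS `routing-api`;
CREATE DATABASE IF NOT EXISTS network_policy;
CREATE DATABASE IF NOT EXISTS network_connectivity;
CREATE DATABASE IF NOT EXISTS locket;
CREATE DATABASE IF NOT EXISTS credhub;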
The following snippet of the kubecf-config-values.yaml
contains an example of an external database configuration.
features:
embedded_database:
enabled: false
external_database:
enabled: true
require_ssl: false
ca_cert: ~
type: mysql
host: hostname
port: 3306
databases:
uaa:
name: uaa
password: root
username: root
cc:
name: cloud_controller
password: root
username: root
bbs:
name: diego
password: root
username: root
routing_api:
name: routing-api
password: root
username: root
policy_server:
name: network_policy
password: root
username: root
silk_controller:
name: network_connectivity
password: root
username: root
locket:
name: locket
password: root
username: root
credhub:
name: credhub
password: root
username: root
Download the SUSE Kubernetes charts repository with Helm:
tux > helm repo add suse https://kubernetes-charts.suse.com/
You may replace the example suse name with any
name. Verify with helm:
tux > helm repo list
NAME URL
stable https://kubernetes-charts.storage.googleapis.com
local http://127.0.0.1:8879/charts
suse https://kubernetes-charts.suse.com/
List your chart names, as you will need these for some operations:
tux > helm search repo suse
NAME CHART VERSION APP VERSION DESCRIPTION
suse/cf-operator 7.2.1+0.gaeb6ef3 2.1.1 A Helm chart for cf-operator, the k8s operator ....
suse/console 4.4.1 2.1.1 A Helm chart for deploying SUSE Stratos Console
suse/kubecf 2.7.13 2.1.1 A Helm chart for KubeCF
suse/metrics 1.3.0 2.1.1 A Helm chart for Stratos Metrics
suse/minibroker 1.2.0 A minibroker for your minikube
suse/nginx-ingress 0.28.4 0.15.0 An nginx Ingress controller that uses ConfigMap to store ...
...
This section describes how to deploy SUSE Cloud Application Platform on Amazon EKS.
KubeCF and cf-operator interoperate closely. Before you deploy a specific version combination, make sure they were confirmed to work. For more information see Section 3.4, “Releases and Associated Versions”.
First, create the namespace for the operator.
tux > kubectl create namespace cf-operator
Install the operator.
The value of global.operator.watchNamespace indicates the
namespace the operator will monitor for a KubeCF deployment. This
namespace should be separate from the namespace used by the operator. In
this example, this means KubeCF will be deployed into a namespace called
kubecf.
tux > helm install cf-operator suse/cf-operator \
--namespace cf-operator \
--set "global.singleNamespace.name=kubecf" \
--version 7.2.1+0.gaeb6ef3
Wait until cf-operator is successfully deployed before proceeding. Monitor
the status of your cf-operator deployment using the
watch command.
tux > watch --color 'kubectl get pods --namespace cf-operator'
Use Helm to deploy KubeCF.
Note that you do not need to manually create the namespace for KubeCF.
tux > helm install kubecf suse/kubecf \
--namespace kubecf \
--values kubecf-config-values.yaml \
--version 2.7.13
Monitor the status of your KubeCF deployment using the
watch command.
tux > watch --color 'kubectl get pods --namespace kubecf'
Find the value of EXTERNAL-IP for each of the public
services.
tux > kubectl get service --namespace kubecf router-public
tux > kubectl get service --namespace kubecf tcp-router-public
tux > kubectl get service --namespace kubecf ssh-proxy-public
Create DNS CNAME records for the public services.
For the router-public service, create a record
mapping the EXTERNAL-IP value to <system_domain>.
For the router-public service, create a record
mapping the EXTERNAL-IP value to *.<system_domain>.
For the tcp-router-public service, create a record
mapping the EXTERNAL-IP value to tcp.<system_domain>.
For the ssh-proxy-public service, create a record
mapping the EXTERNAL-IP value to ssh.<system_domain>.
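After the records are in place, each name should resolve to its load balancer hostname, which can be spot-checked with the host command, for example:
tux > host ssh.<system_domain>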
When all pods are fully ready, verify your deployment. See Section 3.2, “Status of Pods during Deployment” for more information.
Connect and authenticate to the cluster.
tux > cf api --skip-ssl-validation "https://api.<system_domain>"
# Use the cf_admin_password set in kubecf-config-values.yaml
tux > cf auth admin changeme
SUSE Cloud Application Platform can be integrated with identity providers to help manage authentication of users. Integrating SUSE Cloud Application Platform with other identity providers is optional. In a default deployment, a built-in UAA server (https://docs.cloudfoundry.org/uaa/uaa-overview.html) is used to manage user accounts and authentication.
The Lightweight Directory Access Protocol (LDAP) is an example of an identity provider that Cloud Application Platform integrates with. This section describes the necessary components and steps in order to configure the integration. See User Account and Authentication LDAP Integration for more information.
The following prerequisites are required in order to complete an LDAP integration with SUSE Cloud Application Platform.
cf, the Cloud Foundry command line interface. For more information,
see https://docs.cloudfoundry.org/cf-cli/.
For SUSE Linux Enterprise and openSUSE systems, install using zypper.
tux > sudo zypper install cf-cli
For SLE, ensure the SUSE Cloud Application Platform Tools Module has been added. Add the module using YaST or SUSEConnect.
tux > SUSEConnect --product sle-module-cap-tools/15.1/x86_64
For other systems, follow the instructions at https://docs.cloudfoundry.org/cf-cli/install-go-cli.html.
uaac, the Cloud Foundry uaa command line client
(UAAC). See
https://docs.cloudfoundry.org/uaa/uaa-user-management.html
for more information and installation instructions.
On SUSE Linux Enterprise systems, ensure the ruby-devel and gcc-c++
packages have been installed before installing the cf-uaac gem.
tux > sudo zypper install ruby-devel gcc-c++
An LDAP server and the credentials for a user/service account with permissions to search the directory.
Run the following commands to complete the integration of your Cloud Application Platform deployment and LDAP server.
Use UAAC to target your uaa server.
tux > uaac target --skip-ssl-validation https://uaa.example.com
Authenticate to the uaa server as
admin using the
uaa_admin_client_secret set in your
kubecf-config-values.yaml file.
tux > uaac token client get admin --secret PASSWORD
List the current identity providers.
tux > uaac curl /identity-providers --insecure
From the output, locate the default ldap entry and take
note of its id. The entry will be similar to the
following.
{
"type": "ldap",
"config": "{\"emailDomain\":null,\"additionalConfiguration\":null,\"providerDescription\":null,\"externalGroupsWhitelist\":[],\"attributeMappings\":{},\"addShadowUserOnLogin\":true,\"storeCustomAttributes\":true,\"ldapProfileFile\":\"ldap/ldap-search-and-bind.xml\",\"baseUrl\":\"ldap://localhost:389/\",\"referral\":null,\"skipSSLVerification\":false,\"userDNPattern\":null,\"userDNPatternDelimiter\":null,\"bindUserDn\":\"cn=admin,dc=test,dc=com\",\"userSearchBase\":\"dc=test,dc=com\",\"userSearchFilter\":\"cn={0}\",\"passwordAttributeName\":null,\"passwordEncoder\":null,\"localPasswordCompare\":null,\"mailAttributeName\":\"mail\",\"mailSubstitute\":null,\"mailSubstituteOverridesLdap\":false,\"ldapGroupFile\":null,\"groupSearchBase\":null,\"groupSearchFilter\":null,\"groupsIgnorePartialResults\":null,\"autoAddGroups\":true,\"groupSearchSubTree\":true,\"maxGroupSearchDepth\":10,\"groupRoleAttribute\":null,\"tlsConfiguration\":\"none\"}",
"id": "53gc6671-2996-407k-b085-2346e216a1p0",
"originKey": "ldap",
"name": "UAA LDAP Provider",
"version": 3,
"created": 946684800000,
"last_modified": 1602208214000,
"active": false,
"identityZoneId": "uaa"
},
Delete the default ldap identity provider. If the
default entry is not removed, adding another identity provider of type
ldap will result in a 409 Conflict
response. Replace the example id with one found in the
previous step.
tux > uaac curl /identity-providers/53gc6671-2996-407k-b085-2346e216a1p0 \
--request DELETE \
--insecure
Create your own LDAP identity provider. A 201 Created
response will be returned when the identity provider is successfully
created. See the
UAA
API Reference and
Cloud Foundry
UAA-LDAP Documentation for information regarding the request
parameters and additional options available to configure your identity
provider.
The following is an example of a uaac curl command and
its request parameters used to create an identity provider. Specify the
parameters according to your LDAP server's credentials and directory
structure. Ensure the user specified in the bindUserDn
has permissions to search the directory.
tux > uaac curl /identity-providers?rawConfig=true \
--request POST \
--insecure \
--header 'Content-Type: application/json' \
--data '{
"type" : "ldap",
"config" : {
"ldapProfileFile" : "ldap/ldap-search-and-bind.xml",
"baseUrl" : "ldap://ldap.example.com:389",
"bindUserDn" : "cn=admin,dc=example,dc=com",
"bindPassword" : "password",
"userSearchBase" : "dc=example,dc=com",
"userSearchFilter" : "uid={0}",
"ldapGroupFile" : "ldap/ldap-groups-map-to-scopes.xml",
"groupSearchBase" : "dc=example,dc=com",
"groupSearchFilter" : "member={0}"
},
"originKey" : "ldap",
"name" : "My LDAP Server",
"active" : true
}'
Verify the LDAP identity provider has been created. The output should now
contain an entry for the ldap type you created.
tux > uaac curl /identity-providers --insecure
Use the cf CLI to target your SUSE Cloud Application Platform deployment.
tux > cf api --skip-ssl-validation https://api.example.com
Log in as an administrator.
tux > cf login
API endpoint: https://api.example.com
Email> admin
Password>
Authenticating...
OK
Create users associated with your LDAP identity provider.
tux > cf create-user username --origin ldap
Creating user username...
OK
TIP: Assign roles with 'cf set-org-role' and 'cf set-space-role'.
Assign the user a role. Roles define the permissions a user has for a given org or space, and a user can be assigned multiple roles. See Orgs, Spaces, Roles, and Permissions for available roles and their corresponding permissions. The following example assumes that an org named Org and a space named Space have already been created.
tux > cf set-space-role username Org Space SpaceDeveloper
Assigning role RoleSpaceDeveloper to user username in org Org / space Space as admin...
OK
tux > cf set-org-role username Org OrgManager
Assigning role OrgManager to user username in org Org as admin...
OK
Verify the user can log into your SUSE Cloud Application Platform deployment using their associated LDAP server credentials.
tux > cf login
API endpoint: https://api.example.com
Email> username
Password>
Authenticating...
OK
API endpoint: https://api.example.com (API version: 2.115.0)
User: username@ldap.example.com
If the current capacity of your Cloud Application Platform deployment is insufficient for your workloads, you can expand the capacity using the procedure in this section.
These instructions assume you have followed the procedure in Chapter 6, Deploying SUSE Cloud Application Platform on Amazon Elastic Kubernetes Service (EKS) and have a running Cloud Application Platform deployment on Amazon EKS.
Get the current number of Kubernetes nodes in the cluster.
tux > eksctl get nodegroup --name standard-workers \
--cluster kubecf \
--region us-east-2
Scale the nodegroup to the desired node count.
tux > eksctl scale nodegroup --name standard-workers \
--cluster kubecf \
--nodes 4 \
--region us-east-2
Verify the new nodes are in a Ready state before
proceeding.
tux > kubectl get nodes
Add or update the following in your
kubecf-config-values.yaml file to increase the number of
diego-cell in your Cloud Application Platform deployment. Replace the
example value with the number required by your workflow.
sizing:
diego_cell:
instances: 5
Perform a helm upgrade to apply the change.
tux > helm upgrade kubecf suse/kubecf \
--namespace kubecf \
--values kubecf-config-values.yaml \
--version 2.7.13
Monitor progress of the additional diego-cell pods:
tux > watch --color 'kubectl get pods --namespace kubecf'