- 1 SUSE® OpenStack Cloud: Security Planning and Features
- 1.1 Security Planning
- 1.2 Security Features in SUSE OpenStack Cloud 9
- 1.3 Role-Based Access Control (RBAC) Support for neutron Networks
- 1.4 Network Security Group Logging and Auditing
- 1.5 Separate Service Administrator Role
- 1.6 Inter-service Password Enhancements
- 1.7 Data In Transit Protection
- 1.8 Data-at-Rest Protection Using Project-Based Encryption
- 1.9 CADF-Compliant Security Audit Logs
- 1.10 glance-API Rate Limit to Address CVE-2016-8611
- 2 Key Management with the barbican Service
- 3 Key Management Service Administration
- 3.1 Post-installation verification and administration
- 3.2 Updating the barbican Key Management Service
- 3.3 barbican Settings
- 3.4 Enable or Disable Auditing of barbican Events
- 3.5 Updating the barbican API Service Configuration File
- 3.6 Starting and Stopping the barbican Service
- 3.7 Changing or Resetting a Password
- 3.8 Checking Barbican Status
- 3.9 Updating Logging Configuration
- 4 Service Admin Role Segregation in the Identity Service
- 5 Role-Based Access Control in neutron
- 5.1 Creating a Network
- 5.2 Creating an RBAC Policy
- 5.3 Listing RBACs
- 5.4 Listing the Attributes of an RBAC
- 5.5 Deleting an RBAC Policy
- 5.6 Sharing a Network with All Tenants
- 5.7 Target Project (demo2) View of Networks and Subnets
- 5.8 Target Project: Creating a Port Using demo-net
- 5.9 Target Project Booting a VM Using Demo-Net
- 5.10 Limitations
- 6 Enabling Network Security Group Logging
- 7 Configuring keystone and horizon to use X.509 Client Certificates
- 8 Transport Layer Security (TLS) Overview
- 9 Preventing Host Header Poisoning
- 10 Encryption of Passwords and Sensitive Data
- 11 Encryption of Ephemeral Volumes
- 12 Refining Access Control with AppArmor
- 13 Data at Rest Encryption
- 14 glance-API Rate Limit (CVE-2016-8611)
- 15 Security Audit Logs
Copyright © 2006–2022 SUSE LLC and contributors. All rights reserved.
Except where otherwise noted, this document is licensed under the Creative Commons Attribution 3.0 License: https://creativecommons.org/licenses/by/3.0/legalcode.
For SUSE trademarks, see https://www.suse.com/company/legal/. All other third-party trademarks are the property of their respective owners. Trademark symbols (®, ™ etc.) denote trademarks of SUSE and its affiliates. Asterisks (*) denote third-party trademarks.
All information found in this book has been compiled with utmost attention to detail. However, this does not guarantee complete accuracy. Neither SUSE LLC, its affiliates, the authors nor the translators shall be held liable for possible errors or the consequences thereof.
1 SUSE® OpenStack Cloud: Security Planning and Features #
1.1 Security Planning #
SUSE® OpenStack Cloud is a complex system that has many options which affect the security of the system and the workloads running within it. Careful planning is required to ensure the correct options and features are chosen for your particular environment. You should read Security Boundaries and Threats to understand how OpenStack defines security domains. Ensure that the network layout you choose for your deployment provides adequate network separation for your specific requirements. Carefully examine the other chapters in the OpenStack Security Guide for further information. Take advantage of the security features described in the sections below.
OpenStack services by design run some commands with root privileges in order to provide functionality. Although services run as a non-root user they can escalate privilege to root to perform certain operations. If a service process is compromised to the extent that an attacker can execute arbitrary code as the service user, then it is likely that the attacker can elevate privilege to root and take over the system on which the service is running. When planning the security controls for an OpenStack deployment, you should consider the services to be running as root when deciding which security controls are appropriate for your environment. Timely installation of security patches is essential. Other controls such as Intrusion Prevention Systems and Web Application Firewalls are recommended.
1.2 Security Features in SUSE OpenStack Cloud 9 #
Enterprises need protection against security breaches, insider threats, and operational issues that increase the risk to sensitive data. SUSE OpenStack Cloud 9 provides capabilities that help you protect your data at rest and in transit, enable centralized key management, and meet compliance requirements.
In SUSE OpenStack Cloud 9, a number of security features are available to strengthen and harden your cloud deployment. Below is an overview of some of the features and brief descriptions. Follow the links to the relevant topics for instructions on setup, configuration, and use of these security features.
1.3 Role-Based Access Control (RBAC) Support for neutron Networks #
The RBAC feature for neutron networks enables better security as administrators can control who has access to specific networks. This is a significant improvement over the all-or-nothing approach to shared networks. This is beneficial from a security standpoint as some projects (or tenants) have stricter security policies. For example, a finance department must run workloads in isolation from other departments, and thus cannot share their neutron network resources. RBAC enables cloud admins to create granular security policies for sharing neutron resources with one or more tenants or projects using the standard CRUD (Create, Read, Update, Delete) model. More information can be found in Chapter 5, Role-Based Access Control in neutron.
1.4 Network Security Group Logging and Auditing #
Security logging and auditing provides the ability to discover and manage activities related to security in a cloud installation. Logging is a service plug-in for SUSE OpenStack Cloud that captures events for relevant resources such as security groups and firewalls. With this information, an administrator can investigate and take whatever remedial actions are necessary to maintain a secure system. For more information about enabling and managing Network Security Group logging, see Chapter 6, Enabling Network Security Group Logging.
1.5 Separate Service Administrator Role #
Each OpenStack service account has an optional role available to restrict the OpenStack functions each account can access. This feature enables cloud administrators to apply service-specific role-based, administration-level access to a specific UserID, with the ability to audit administration-level actions. This functionality provides better security by not only providing full visibility into administration-level activities via audit logs, but also by fulfilling compliance requirements. More information can be found in Section 4.1, “Overview”.
1.6 Inter-service Password Enhancements #
You can conveniently change the inter-service passwords used for authenticating communications between services in your SUSE OpenStack Cloud deployment, promoting better compliance with your organization’s security policies. The inter-service passwords that can be changed include (but are not limited to) keystone, MariaDB, RabbitMQ, Cloud Lifecycle Manager, monasca and barbican. Administrators can implement this feature by running the configuration processor to generate new passwords followed by Ansible playbook commands to change the credentials.
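For illustration, a minimal sketch of this workflow on the Cloud Lifecycle Manager follows; the exact credentials-change playbook varies by service (the barbican variant from Section 3.7 is shown here):
ardana > cd ~/openstack/ardana/ansible/
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml -e encrypt="" -e rekey=""
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts barbican-reconfigure-credentials-change.yml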
1.7 Data In Transit Protection #
With SUSE OpenStack Cloud 9, data transmission between internal API endpoints is encrypted using TLS v 1.2 to protect sensitive data against unauthorized disclosure and modification (spoofing and tampering attacks). Additionally, you can configure TLS using your own certificates, from a Certificate Authority of your choice, providing deployment flexibility. More information can be found in Section 8.2, “TLS Configuration”.
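As a quick sanity check that an endpoint really negotiates TLS 1.2, you can probe it with openssl from a node that can reach the internal network; the host and port below are placeholders for one of your internal API endpoints:
ardana > openssl s_client -connect <internal-api-host>:5000 -tls1_2 < /dev/null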
1.8 Data-at-Rest Protection Using Project-Based Encryption #
You can encrypt sensitive data-at-rest on a per-tenant or per-project basis, while storing and managing keys externally and centrally using Enterprise Secure Key Manager (ESKM). This capability requires the barbican API and OASIS KMIP (Key Management Interoperability Protocol) plug-ins for integration, and supports encryption of cinder block storage with SUSE OpenStack Cloud 9. More information can be found in Chapter 13, Data at Rest Encryption.
1.9 CADF-Compliant Security Audit Logs #
Security audit logs for critical services such as keystone, nova, cinder, glance, heat, neutron, and barbican are available in a standard CADF (Cloud Audit Data Federation) format. These logs contain information on events such as unauthorized logins, administration level access, unsuccessful login attempts, and anomalous deletion of VMs that are critical from a security threat monitoring standpoint. Audit logs are useful as a tool for risk mitigation, identifying suspicious or anomalous activity, and for fulfilling compliance requirements. For more information see Chapter 15, Security Audit Logs.
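For illustration only, a CADF event generally has the following shape; this is a hand-written sketch of the format, not output captured from SUSE OpenStack Cloud, and the identifiers are placeholders:
{
  "typeURI": "http://schemas.dmtf.org/cloud/audit/1.0/event",
  "eventType": "activity",
  "eventTime": "2019-04-26T15:17:43Z",
  "action": "authenticate",
  "outcome": "failure",
  "initiator": {"typeURI": "service/security/account/user", "id": "<user-id>"},
  "target": {"typeURI": "service/security/account/user", "id": "<keystone-endpoint-id>"},
  "observer": {"typeURI": "service/security", "id": "<keystone-service-id>"}
}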
1.10 glance-API Rate Limit to Address CVE-2016-8611 #
No limits are enforced within the glance service for either the v1 or v2 /images API POST method for authenticated users, resulting in possible denial of service through database table saturation. Further explanation and instructions for adding a rate-limiter are in Chapter 14, glance-API Rate Limit (CVE-2016-8611).
2 Key Management with the barbican Service #
2.1 barbican Service Overview #
barbican is an OpenStack key management service offering secure storage, provisioning, and management of key data. The barbican service provides management of secrets, keys, and certificates via multiple storage back-ends. Support for these back-ends is provided via a plug-in mechanism; for example, a Key Management Interoperability Protocol (KMIP) plug-in enables the use of a KMIP-compliant Hardware Security Module (HSM) device. barbican supports symmetric and asymmetric key generation using various algorithms. cinder and nova integrate with barbican for their encryption key generation and storage.
barbican has two types of core feature sets:
The barbican component, a Web Server Gateway Interface (WSGI) application that exposes a REST API for secrets/containers/orders.
barbican workers for asynchronous processing, used for various messaging-event-driven tasks related to certificate generation.
2.2 Key Features #
The major features of the barbican key management service are:
The ability to encrypt volumes/disks. In an OpenStack context, this means support for encrypting cinder volumes (volume encryption). cinder has its own key manager interface (KeyMgr) and can use python-barbicanclient as one of its implementations. By default in SUSE OpenStack Cloud 9, cinder uses barbican as its key manager when barbican is enabled. KeyMgr encrypts data in the virtualization host before writing data to the remote disk. There are three options available in SUSE OpenStack Cloud:
Tenant-based encryption for block volume storage using barbican for KMS
barbican with KMIP and PKCS11 and external KMS (certified with Micro Focus ESKM)
3PAR StoreServ Data-At-Rest Encryption
Storage and retrieval of secrets (passwords)
The ability to define and manage access policies for key material
Administrative functionality, and the ability to control the lifecycle of key material
A well-defined auditing ability in OpenStack services for key access and lifecycle events
Key management as a service for PaaS application(s) deployed on an OpenStack cloud
The ability to scale key management effectively and make it highly available (able to handle failover)
Warning: Do not delete the certificate container associated with your load balancer listeners before deleting the load balancers themselves. If you delete the certificate first, future operations on your load balancers and failover will stop working.
2.3 Installation #
New installations of SUSE OpenStack Cloud 9:
For new installations, no changes are needed for barbican to be enabled. When installing your cloud, you should use the input models which already define the necessary barbican components. When using the pre-defined input model files that come with SUSE OpenStack Cloud 9, nothing else needs to be done in those files.
Generate a master key.
Warning: Do not change your master key after deploying it to barbican.
If you decide to make configuration changes to your clean install of SUSE OpenStack Cloud 9, you will need to redeploy the barbican service. For details on available customization options, please see Chapter 3, Key Management Service Administration.
Master Key Configuration#
barbican currently supports databases and KMIP as its secret store back-ends. In OpenStack upstream additional back-ends are available, such as the PKCS11 and dogtag plug-ins, but they are not tested or supported by SUSE OpenStack Cloud.
In SUSE OpenStack Cloud, by default barbican is configured to use a database as a secret (keys) storage back-end. This back-end encrypts barbican-managed keys with a project-level key (KEK, Key Encryption Key) before storing them in the database. Project-level keys are encrypted using a master key. As part of the initial barbican configuration, you must generate and configure this master key.
When barbican is used with simple_crypto_plugin as its secret store back-end, its master key needs to be defined before initial deployment. If no key is specified before deployment, the default master key is used; this practice is discouraged.
Generate the master key using the provided Python generate_kek script on the Cloud Lifecycle Manager node:
python ~/openstack/ardana/ansible/roles/KEYMGR-API/templates/generate_kek
The master key is written to stdout by this command.
Set the master key in ~/openstack/my_cloud/config/barbican/barbican_deploy_config.yml. If there is an existing barbican_customer_master_key value, replace it with the master key you just generated.
Commit the change to the Git repository:
cd ~/openstack
git add -A
git commit -m "My config"
Run ready-deployment:
cd ~/openstack/ardana/ansible/
ansible-playbook -i hosts/localhost ready-deployment.yml
When the master key is set, continue with your cloud deployment.
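For reference, the resulting entry in barbican_deploy_config.yml looks something like the following; the value shown here is a placeholder, not a real key:
barbican_customer_master_key: '<GENERATED_MASTER_KEY>'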
Upgrade Master Key Configuration#
Check the master key.
If a master key is already defined, check ~/openstack/ardana/ansible/roles/barbican-common/vars/barbican_deploy_config.yml for the barbican_customer_master_key value. If the value does not have the prefix @ardana@, it is not encrypted. It is highly recommended to encrypt this value during the upgrade, as follows.
Set up the environment variable ARDANA_USER_PASSWORD_ENCRYPT_KEY, which contains the key used to encrypt the barbican master key. Before you run any playbooks, you need to export the encryption key in this environment variable:
export ARDANA_USER_PASSWORD_ENCRYPT_KEY=<USER_ENCRYPTION_KEY>
Encrypt the existing master key:
python roles/KEYMGR-API/templates/generate_kek <barbican_customer_master_key>
The encrypted master key is written to stdout.
Set this master key in the file ~/openstack/ardana/ansible/roles/barbican-common/vars/barbican_deploy_config.yml, replacing the existing barbican_customer_master_key value with the value you just generated.
Commit the change to the git repository, then run ready-deployment:
cd ~/openstack/ardana/ansible/
ansible-playbook -i hosts/localhost ready-deployment.yml
When the master key is set, continue with cloud deployment.
Changing the master key during the upgrade process is discouraged. Changing the master key will result in a read error for existing secrets as they were encrypted using the previous master key.
For a barbican deployment with a database back-end, the master key needs to be generated and configured before barbican is deployed for the first time. Once the master key is set, it must not be modified.
Changing the master key can result in read errors for existing secrets, because those secrets are stored in the database encrypted with the previous master key. Once a new master key is in use, barbican will not be able to read those secrets. It will also be unable to create new secrets within the affected projects, because each project key is encrypted using the previous master key.
KMIP Plug-in Support#
barbican has a KMIP plug-in to store encryption keys (called secrets in barbican service terminology) in an HSM device using the KMIP protocol. This plug-in has been tested against Micro Focus ESKM with KMIP server. To enable support for it, barbican needs to be configured with the corresponding plug-in connection details, and client certificate information needs to be defined in its configuration. The ESKM KMIP server uses a client certificate to validate a KMIP client connection established by the barbican server. As part of that KMIP configuration, playbooks provide a mechanism to upload your client certs to nodes hosting the barbican API server.
KMIP deployment instructions can be found in Section 13.1, “Configuring KMIP and ESKM”.
Installation and deployment of the Micro Focus ESKM or any other HSM device and dependent components is beyond the scope of this document. Please refer to the relevant documentation for your choice of product. For example, you can get more information on Micro Focus ESKM and related Data Security and Encryption Products at https://software.microfocus.com/en-us/products/eskm-enterprise-secure-key-management/overview.
2.4 Auditing barbican Events #
The barbican service supports auditing and uses Chapter 15, Security Audit Logs to generate auditing data in Cloud Auditing Data Federation (CADF) format. The SUSE OpenStack Cloud input model has a mechanism to enable and disable auditing on a per-service basis. When barbican auditing is enabled, it writes audit messages to an audit log file that is separate from the barbican internal logging. The base location of the audit log file is driven by the common auditing configuration.
Enabling and Disabling Auditing#
The auditing of barbican events can be enabled and disabled through the barbican reconfigure playbook. As part of the configuration of barbican, its audit messages can be directed to a log or to a messaging queue. By default, messages are written to the barbican log file. Once an architecture-level decision is made with regards to the default consumer of audit events (either logging or messaging), the barbican service can be configured to use it as the default option when auditing is enabled.
Auditing can be disabled or enabled by following these steps on the Cloud Lifecycle Manager node.
Edit the file ~/openstack/my_cloud/definition/cloudConfig.yml. All audit-related configuration is defined in the audit-settings section. You must use valid YAML syntax when specifying values.
Any service (including barbican) that is listed under enabled-services or disabled-services will override the default setting. To enable auditing, make sure that the barbican service name is in the enabled-services list of the audit-settings section, or is not present in the disabled-services list when default: is set to enabled.
The relevant section of cloudConfig.yml is shown below. The default: enabled setting applies to all services. If you want to disable (or enable) a few services, whichever is the opposite of the default global setting you used, you can do so in a disabled-services (or enabled-services) section below it. Here the enabled-services entry is commented out. You should only have either a default of enabled (or disabled) or a section of disabled (or enabled); there is no need to duplicate the setting.
audit-settings:
  default: enabled
  #enabled-services:
  #  - keystone
  #  - barbican
  disabled-services:
    - nova
    - barbican
    - keystone
    - cinder
    - ceilometer
    - neutron
When you are satisfied with your settings, copy the files to ~/openstack/my_cloud/definition/, and commit the changes in the git repository. For example, if you are using the Entry Scale KVM model, you would copy from ~/openstack/examples/entry-scale-kvm and commit:
cp -r ~/openstack/examples/entry-scale-kvm/* ~/openstack/my_cloud/definition/
cd ~/openstack
git add -A
git commit -m "My config"
Run the configuration processor and ready-deployment:
cd ~/openstack/ardana/ansible/
ansible-playbook -i hosts/localhost config-processor-run.yml
ansible-playbook -i hosts/localhost ready-deployment.yml
Run barbican-reconfigure:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts barbican-reconfigure.yml
2.5 barbican Key Management Service Bootstrap Data #
When the key management service is installed, some of the keystone-related initial data is bootstrapped as part of its initial deployment. The data added is primarily related to barbican user, roles, service and endpoint definitions, and barbican role assignments.
User, Roles, Service and Endpoint Definitions#
| Type | Name or key-value pair | Purpose | Comments |
|---|---|---|---|
| keystone User Account | barbican | barbican user account associated with administrative privileges. | Password is randomly generated and made available in the barbican client environment setup script, barbican.osrc. |
| keystone User Account | barbican_service | Service account used for keystone token validation by the barbican service. | Password is randomly generated and stored in the barbican paste configuration, barbican-api-paste.ini. |
| keystone Role | key-manager:creator | barbican-specific role with privileges to create, modify, list, and delete keys and certificates. | This role has the same privileges as the corresponding role defined in upstream barbican. |
| keystone Role | key-manager:admin | barbican-specific role that has administrative privileges. Privileges include modifications (update and delete) in a container's consumers, transport keys, certificate authorities (CA), assignment, and management of per-project CAs. | This role has the same privileges as the corresponding role defined in upstream barbican. |
| keystone Role | key-manager:observer | barbican-specific role with privileges limited to reading and listing keys and certificates. | This role has the same privileges as the corresponding role defined in upstream barbican. |
| keystone Role | key-manager:auditor | barbican-specific role with privileges limited to reading metadata of keys and certificates. This role does not allow reading and listing of actual keys and certificates. | This role has the same privileges as the corresponding role defined in upstream barbican. |
| keystone Role | key-manager:service-admin | barbican-specific role with privileges to modify the global preferred CA and modify default project quotas. | This role has the same privileges as the corresponding role defined in upstream barbican. |
| keystone Service | name: barbican, type: key-manager | barbican service definition. Service type is key-manager. | |
| keystone Endpoint | interface: internal, region: region1 | barbican internal endpoint. This is the load-balanced endpoint exposed for internal service usage. | |
| keystone Endpoint | interface: public, region: region1 | barbican public endpoint. This is the load-balanced endpoint exposed for external/public service usage. | |
Role Assignments#
| User name | Project name | Role name | Purpose |
|---|---|---|---|
| barbican | admin | key-manager:admin | User is assigned barbican administration privileges on the keystone-defined admin project. |
| barbican | admin | key-manager:service-admin | User is assigned barbican service administration privileges on the keystone-defined admin project. |
| barbican | admin | admin | User is assigned the keystone-defined administrative role on the admin project. |
| admin | admin | key-manager:admin | The keystone-defined admin user is assigned barbican administration privileges on the admin project. |
| admin | admin | key-manager:service-admin | As in the role assignment above, the barbican-specific service administrator role is assigned to allow modification of the global preferred CA and default quotas. |
| barbican_service | services | service | The barbican service account is given the service role on the services project. |
2.6 Known issues and workarounds #
Make sure that in your Certificate Signing Request (CSR) the Common Name matches the barbican_kmip_username value defined in roles/barbican-common/vars/barbican_deploy_config.yml. Otherwise you may see an internal server error message from barbican for a secret create request.
barbican does not return a clear error message with regards to client certificate setup and its connectivity with the KMIP server. During a secret create request, a general "Internal Server Error" is returned when the certificate is invalid or missing any of the necessary client certificate data (client certificate, key, and CA root certificate).
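To verify the Common Name before deploying, you can inspect the CSR (or the issued client certificate) with openssl; the file names below are placeholders for your actual files:
ardana > openssl req -in <client>.csr -noout -subject
ardana > openssl x509 -in <client-cert>.pem -noout -subject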
3 Key Management Service Administration #
3.1 Post-installation verification and administration #
In a production environment, you can verify your installation of the
barbican key management service by running the
barbican-status.yml Ansible playbook on the Cloud Lifecycle Manager node.
ansible-playbook -i hosts/verb_hosts barbican-status.yml
In any non-production environment, along with the playbook, you can also verify the service by storing and retrieving the secret from barbican.
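A minimal sketch of such a round trip with the barbican OpenStackClient plug-in follows; the secret name and payload are arbitrary test values:
ardana > source ~/barbican.osrc
ardana > openstack secret store --name test-secret --payload 'test-payload-123'
ardana > openstack secret list
ardana > openstack secret get --payload <secret-href-from-list-output>
Delete the test secret afterwards with openstack secret delete <secret-href>.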
3.2 Updating the barbican Key Management Service #
Some barbican features and service configurations can be changed. This
is done using the Cloud Lifecycle Manager Reconfigure Ansible playbook. For example, the log
level can be changed from INFO to DEBUG and vice-versa. If needed, this
change can be restricted to a set of nodes via the playbook's host limit
option. barbican administration tasks should be performed by an admin
user with a token scoped to the default domain via the keystone identity
API. These settings are preconfigured in the
barbican.osrc file. By default,
barbican.osrc is configured with the admin endpoint. If
the admin endpoint is not accessible from your network, change
OS_AUTH_URL to point to the public endpoint.
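For example, after sourcing the environment file you can override the authentication URL for the current shell; the public endpoint URL below is a placeholder for your deployment's actual value:
ardana > source ~/barbican.osrc
ardana > export OS_AUTH_URL=https://<public-keystone-host>:5000/v3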
3.3 barbican Settings #
The following barbican configuration settings can be changed:
Anything in the main barbican configuration file: /etc/barbican/barbican.conf
Anything in the main barbican worker configuration file: /etc/barbican/barbican-worker.conf
You can also update the following configuration options and enable the following features. For example, you can:
Change the verbosity of logs written to barbican log files (/var/log/barbican/).
Enable and disable auditing of the barbican key management service.
Edit barbican_secret_store plug-ins. The two options are:
store_crypto, used to store the secrets in the database
kmip_plugin, used to store the secrets in KMIP-enabled external devices
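As an illustration of what such a change looks like in barbican.conf, assuming the upstream [secretstore] option name enabled_secretstore_plugins, switching from the database back-end to KMIP would read:
[secretstore]
enabled_secretstore_plugins = kmip_plugin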
3.4 Enable or Disable Auditing of barbican Events #
Auditing of barbican key manager events can be disabled or enabled by following these steps on the Cloud Lifecycle Manager node.
Edit the file ~/openstack/my_cloud/definition/cloudConfig.yml.
All audit-related configuration is defined under the audit-settings section. Valid YAML syntax is required when specifying values.
A service name defined under enabled-services or disabled-services overrides the default setting (that is, default: enabled or default: disabled).
To enable auditing, make sure that the barbican service name is listed in the enabled-services list of the audit-settings section, or is not listed in the disabled-services list when default: is set to enabled.
To disable auditing for the barbican service specifically, make sure that the barbican service name is in the disabled-services list of the audit-settings section, or is not present in the enabled-services list when default: is set to disabled. You should not specify the service name in both lists. If it is specified in both, the enabled-services list takes precedence.
Commit the change to the git repository:
cd ~/openstack/ardana/ansible
git add -A
git commit -m "My config"
Run the config-processor-run and ready-deployment playbooks, followed by the barbican-reconfigure playbook:
cd ~/openstack/ardana/ansible/
ansible-playbook -i hosts/localhost config-processor-run.yml
ansible-playbook -i hosts/localhost ready-deployment.yml
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts barbican-reconfigure.yml
3.5 Updating the barbican API Service Configuration File #
The barbican API service configuration file (/etc/barbican/barbican.conf), located on each control plane server (controller node), is generated from the following template file located on the Cloud Lifecycle Manager node: /var/lib/ardana/openstack/my_cloud/config/barbican/barbican.conf.j2. Modify this template file as appropriate. This is a Jinja2 template, which expects certain template variables to be set. Do not change values inside double curly braces: {{ }}.
Once the template is modified, copy the files to ~/openstack/my_cloud/definition/, and commit the change to the local git repository:
cp -r ~/hp-ci/padawan/* ~/openstack/my_cloud/definition/
cd ~/openstack/ardana/ansible
git add -A
git commit -m "My config"
Then rerun the configuration processor and ready-deployment playbooks:
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost config-processor-run.yml
ansible-playbook -i hosts/localhost ready-deployment.yml
Finally, run the barbican-reconfigure playbook in the deployment area:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts barbican-reconfigure.yml
3.6 Starting and Stopping the barbican Service #
You can start or stop the barbican service from the Cloud Lifecycle Manager nodes by running the appropriate Ansible playbooks:
To stop the barbican service:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts barbican-stop.yml
To start the barbican service:
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts barbican-start.yml
3.7 Changing or Resetting a Password #
To change the password for the barbican administrator:
Copy the file as shown below:
cp ~/openstack/my_cloud/info/private_data_metadata_ccp.yml \ ~/openstack/change_credentials/
Then edit private_data_metadata_ccp.yml found here:
~/openstack/change_credentials/private_data_metadata_ccp.yml
Change credentials for the barbican admin user and/or barbican service user, and remove everything else. The file will look similar to this:
barbican_admin_password:
  value: 'testing_123'
  metadata:
  - clusters:
    - cluster1
    component: barbican-api
    cp: ccp
  version: '2.0'
barbican_service_password:
  value: 'testing_123'
  metadata:
  - clusters:
    - cluster1
    component: barbican-api
    cp: ccp
  version: '2.0'
The value entry is optional; it is used to set a user-chosen password. If left blank, the playbook will generate a random password.
Execute the following playbooks from ~/openstack/ardana/ansible/:
cd ~/openstack/ardana/ansible/
ansible-playbook -i hosts/localhost config-processor-run.yml -e encrypt="" -e rekey=""
ansible-playbook -i hosts/localhost ready-deployment.yml
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts barbican-reconfigure-credentials-change.yml
SSH to the controller and make sure the password has been properly updated.
/etc/barbican# vi barbican-api-paste.ini
3.8 Checking Barbican Status #
You can check the status of barbican by running the
barbican-status.yml Ansible playbook on the Cloud Lifecycle Manager node.
ansible-playbook -i hosts/verb_hosts barbican-status.yml
Make sure you remove/delete ~/openstack/change_credentials/private_data_metadata_ccp.yml after successfully changing the password.
3.9 Updating Logging Configuration #
All barbican logging is set to INFO by default. To change the level from the Cloud Lifecycle Manager, there are two options available:
Edit the barbican configuration file, barbican_deploy_config.yml, in the following directory:
~/openstack/my_cloud/config/barbican/
To change the log level entry (barbican_loglevel) to DEBUG, edit the entry:
barbican_loglevel = {{ openstack_loglevel | default('DEBUG') }}
To change the log level to INFO, edit the entry:
barbican_loglevel = {{ openstack_loglevel | default('INFO') }}
Edit the file ~/openstack/ardana/ansible/roles/KEYMGR-API/templates/api-logging.conf.j2 and update the log level accordingly.
Commit the change to the local git repository:
cd ~/openstack/ardana/ansible git add -A git commit -m "My config"
Run the config-processor-run and ready-deployment playbooks, followed by the barbican-reconfigure playbook:
ansible-playbook -i hosts/localhost config-processor-run.yml
ansible-playbook -i hosts/localhost ready-deployment.yml
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts barbican-reconfigure.yml
4 Service Admin Role Segregation in the Identity Service #
4.1 Overview #
Under the default OpenStack user policies, a user can have either member privilege or admin privilege. Admin privilege is assigned by creating a user account with the role of admin. However, the default admin role is too broad and often grants users more privilege than they need, giving them access to additional tasks and resources that they should not have.
Ideally, each user account should only be assigned privileges necessary to perform tasks they are required to perform. According to the widely accepted principle of least privilege, a user who needs to perform administrative tasks should have a user account with the privileges required to perform only those administrative tasks and no others. This prevents the granting of too much privilege while retaining the individual accountability of the user.
Service Administrator Roles is an alternative to the current one-size-fits-all admin role model and can help you institute different privileges for different administrators.
4.2 Pre-Installed Service Admin Role Components #
The main components of Service Administrator Roles are:
nova_admin role in the Identity service (keystone) and support in nova_policy.json
neutron_admin role in the Identity service and support in neutron_policy.json
cinder_admin role in the Identity service and support in cinder_policy.json
swiftoperator role in the Identity service, defined in the keystoneauth section of the proxy-server.conf file
glance_admin role in the Identity service and support in glance_policy.json
Warning: Changing glance_policy.json may Introduce a Security Issue
A security issue is described in the OpenStack Security Note OSSN-0075 (https://wiki.openstack.org/wiki/OSSN/OSSN-0075). It refers to a scenario where a malicious tenant is able to reuse deleted glance image IDs to share malicious images with other tenants in a manner that is undetectable to the victim tenant.
The default policy glance_policy.json that is shipped with SUSE OpenStack Cloud prevents this by ensuring only admins can deactivate/reactivate images:
"deactivate": "role:admin"
"reactivate": "role:admin"
It is suggested not to change these settings. If you do change them, please refer to OSSN-0075 (https://wiki.openstack.org/wiki/OSSN/OSSN-0075), which has details about the exact scope of the security issue.
The OpenStack admin user has broad capabilities to administer the cloud, including nova, neutron, cinder, swift, and glance. This is maintained to ensure backwards compatibility, but if separation of duties is desired among administrative staff, then the OpenStack roles may be partitioned across different administrators. For example, it is possible to have a set of network administrators with the neutron_admin role, a set of storage administrators with the cinder_admin and/or swiftoperator roles, and a set of compute administrators with the nova_admin and glance_admin roles.
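As a sketch of how such a partition is applied, a cloud admin could grant the predefined roles with standard OpenStackClient commands; the user names here are hypothetical:
ardana > openstack role add --user net-admin1 --project admin neutron_admin
ardana > openstack role add --user storage-admin1 --project admin cinder_admin
ardana > openstack role add --user compute-admin1 --project admin nova_admin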
4.3 Features and Benefits #
Service Administrator Roles offer the following features and benefits:
Support separation of duties through more granular roles
Are enabled by default
Are backwards compatible
Have predefined service administrator roles in the Identity service
Have predefined
policy.jsonfiles with corresponding service admin roles to facilitate quick and easy deployment
4.4 Roles #
The following are the roles defined in SUSE OpenStack Cloud 9. These roles serve as a way to group common administrative needs at the OpenStack service level. Each role represents administrative privilege into each service. Multiple roles can be assigned to a user. You can assign a Service Admin Role to a user once you have determined that the user is authorized to perform administrative actions and access resources in that service.
Pre-Installed Service Admin Roles
The following service admin roles exist by default:
- nova_admin role
Assign this role to users whose job function it is to perform nova compute-related administrative tasks.
- neutron_admin role
Assign this role to users whose job function it is to perform neutron networking-related administrative tasks.
- cinder_admin role
Assign this role to users whose job function it is to perform cinder storage-related administrative tasks.
- glance_admin role
Assign this role to users whose job function it is to perform glance image service-related administrative tasks.
5 Role-Based Access Control in neutron #
This topic explains how to achieve more granular access control for your neutron networks.
Previously in SUSE OpenStack Cloud, a network object was either private to a project or could be used by all projects. If the network's shared attribute was True, then the network could be used by every project in the cloud. If it was False, only the members of the owning project could use it. There was no way for the network to be shared by only a subset of the projects.
neutron Role Based Access Control (RBAC) solves this problem for networks. Now the network owner can create RBAC policies that give network access to target projects. Members of a targeted project can use the network named in the RBAC policy the same way as if the network was owned by the project. Constraints are described in Section 5.10, “Limitations”.
With RBAC you are able to let another tenant use a network that you created, but as the owner of the network, you need to create the subnet and the router for the network.
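For example, as the network owner you might create the subnet and router along these lines; the names and CIDR are illustrative only:
ardana > openstack subnet create --network demo-net --subnet-range 10.0.1.0/24 sb-demo-net
ardana > openstack router create demo-router
ardana > openstack router add subnet demo-router sb-demo-net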
To use RBAC, neutron configuration files do not need to be changed.
5.1 Creating a Network #
ardana > openstack network create demo-net
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2018-07-25T17:43:59Z |
| description | |
| dns_domain | |
| id | 9c801954-ec7f-4a65-82f8-e313120aabc4 |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| is_vlan_transparent | None |
| mtu | 1450 |
| name | demo-net |
| port_security_enabled | False |
| project_id | cb67c79e25a84e328326d186bf703e1b |
| provider:network_type | vxlan |
| provider:physical_network | None |
| provider:segmentation_id | 1009 |
| qos_policy_id | None |
| revision_number | 2 |
| router:external | Internal |
| segments | None |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2018-07-25T17:43:59Z |
+---------------------------+--------------------------------------+
5.2 Creating an RBAC Policy #
Here we will create an RBAC policy where a member of the project 'demo' shares the network with members of the project 'demo2'.
To create the RBAC policy, run:
ardana > openstack network rbac create --target-project DEMO2-PROJECT-ID \
--type network --action access_as_shared demo-net
Here is an example where the DEMO2-PROJECT-ID is 5a582af8b44b422fafcd4545bd2b7eb5:
ardana > openstack network rbac create --target-tenant 5a582af8b44b422fafcd4545bd2b7eb5 \
--type network --action access_as_shared demo-net
5.3 Listing RBACs #
To list all the RBAC rules/policies, execute:
ardana > openstack network rbac list
+--------------------------------------+-------------+--------------------------------------+
| ID | Object Type | Object ID |
+--------------------------------------+-------------+--------------------------------------+
| 0fdec7f0-9b94-42b4-a4cd-b291d04282c1 | network | 7cd94877-4276-488d-b682-7328fc85d721 |
+--------------------------------------+-------------+--------------------------------------+
5.4 Listing the Attributes of an RBAC #
To see the attributes of a specific RBAC policy, run:
ardana > openstack network rbac show POLICY-ID
For example:
ardana > openstack network rbac show 0fd89dcb-9809-4a5e-adc1-39dd676cb386
Here is the output:
+---------------+--------------------------------------+
| Field         | Value                                |
+---------------+--------------------------------------+
| action        | access_as_shared                     |
| id            | 0fd89dcb-9809-4a5e-adc1-39dd676cb386 |
| object_id     | c3d55c21-d8c9-4ee5-944b-560b7e0ea33b |
| object_type   | network                              |
| target_tenant | 5a582af8b44b422fafcd4545bd2b7eb5     |
| tenant_id     | 75eb5efae5764682bca2fede6f4d8c6f     |
+---------------+--------------------------------------+
5.5 Deleting an RBAC Policy #
To delete an RBAC policy, run openstack network rbac delete passing the policy id:
ardana > openstack network rbac delete POLICY-ID
For example:
ardana > openstack network rbac delete 0fd89dcb-9809-4a5e-adc1-39dd676cb386
Here is the output:
Deleted rbac_policy: 0fd89dcb-9809-4a5e-adc1-39dd676cb386
5.6 Sharing a Network with All Tenants #
Either the administrator or the network owner can make a network shareable by all tenants.
The administrator can make a tenant's network shareable by all tenants.
To make the network demo-shareall-net accessible by all tenants in the cloud, follow these steps:
Get a list of all projects:
ardana > source ~/service.osrc
ardana > openstack project list
which produces the list:
+----------------------------------+------------------+
| ID                               | Name             |
+----------------------------------+------------------+
| 1be57778b61645a7a1c07ca0ac488f9e | demo             |
| 5346676226274cd2b3e3862c2d5ceadd | admin            |
| 749a557b2b9c482ca047e8f4abf348cd | swift-monitor    |
| 8284a83df4df429fb04996c59f9a314b | swift-dispersion |
| c7a74026ed8d4345a48a3860048dcb39 | demo-sharee      |
| e771266d937440828372090c4f99a995 | glance-swift     |
| f43fb69f107b4b109d22431766b85f20 | services         |
+----------------------------------+------------------+
Get a list of networks:
ardana > openstack network list
This produces the following list:
+--------------------------------------+-------------------+----------------------------------------------------+
| id                                   | name              | subnets                                            |
+--------------------------------------+-------------------+----------------------------------------------------+
| f50f9a63-c048-444d-939d-370cb0af1387 | ext-net           | ef3873db-fc7a-4085-8454-5566fb5578ea 172.31.0.0/16 |
| 9fb676f5-137e-4646-ac6e-db675a885fd3 | demo-net          | 18fb0b77-fc8b-4f8d-9172-ee47869f92cc 10.0.1.0/24   |
| 8eada4f7-83cf-40ba-aa8c-5bf7d87cca8e | demo-shareall-net | 2bbc85a9-3ffe-464c-944b-2476c7804877 10.0.250.0/24 |
| 73f946ee-bd2b-42e9-87e4-87f19edd0682 | demo-share-subset | c088b0ef-f541-42a7-b4b9-6ef3c9921e44 10.0.2.0/24   |
+--------------------------------------+-------------------+----------------------------------------------------+
Set the network you want to share to a shared value of True:
ardana > openstack network set --share 8eada4f7-83cf-40ba-aa8c-5bf7d87cca8e
You should see the following output:
Updated network: 8eada4f7-83cf-40ba-aa8c-5bf7d87cca8e
Check the attributes of that network by running the following command using the ID of the network in question:
ardana > openstack network show 8eada4f7-83cf-40ba-aa8c-5bf7d87cca8e
The output will look like this:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2018-07-25T17:43:59Z                 |
| description               |                                      |
| dns_domain                |                                      |
| id                        | 8eada4f7-83cf-40ba-aa8c-5bf7d87cca8e |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | None                                 |
| is_vlan_transparent       | None                                 |
| mtu                       | 1450                                 |
| name                      | demo-net                             |
| port_security_enabled     | False                                |
| project_id                | cb67c79e25a84e328326d186bf703e1b     |
| provider:network_type     | vxlan                                |
| provider:physical_network | None                                 |
| provider:segmentation_id  | 1009                                 |
| qos_policy_id             | None                                 |
| revision_number           | 2                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| updated_at                | 2018-07-25T17:43:59Z                 |
+---------------------------+--------------------------------------+
As the owner of the demo-shareall-net network, view the RBAC attributes for demo-shareall-net (id=8eada4f7-83cf-40ba-aa8c-5bf7d87cca8e) by first getting an RBAC list:
ardana > echo $OS_USERNAME ; echo $OS_PROJECT_NAME
demo
demo
ardana > openstack network rbac list
This produces the list:
+--------------------------------------+--------------------------------------+
| id                                   | object_id                            |
+--------------------------------------+--------------------------------------+
| ...                                  |                                      |
| 3e078293-f55d-461c-9a0b-67b5dae321e8 | 8eada4f7-83cf-40ba-aa8c-5bf7d87cca8e |
+--------------------------------------+--------------------------------------+
View the RBAC information:
ardana > openstack network rbac show 3e078293-f55d-461c-9a0b-67b5dae321e8
+---------------+--------------------------------------+
| Field         | Value                                |
+---------------+--------------------------------------+
| action        | access_as_shared                     |
| id            | 3e078293-f55d-461c-9a0b-67b5dae321e8 |
| object_id     | 8eada4f7-83cf-40ba-aa8c-5bf7d87cca8e |
| object_type   | network                              |
| target_tenant | *                                    |
| tenant_id     | 1be57778b61645a7a1c07ca0ac488f9e     |
+---------------+--------------------------------------+
With network RBAC, the owner of the network can also make the network shareable by all tenants. First create the network:
ardana > echo $OS_PROJECT_NAME ; echo $OS_USERNAME
demo
demo
ardana > openstack network create test-net
The network is created:
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| availability_zone_hints   |                                      |
| availability_zones        |                                      |
| created_at                | 2018-07-25T18:04:25Z                 |
| description               |                                      |
| dns_domain                |                                      |
| id                        | a4bd7c3a-818f-4431-8cdb-fedf7ff40f73 |
| ipv4_address_scope        | None                                 |
| ipv6_address_scope        | None                                 |
| is_default                | False                                |
| is_vlan_transparent       | None                                 |
| mtu                       | 1450                                 |
| name                      | test-net                             |
| port_security_enabled     | False                                |
| project_id                | cb67c79e25a84e328326d186bf703e1b     |
| provider:network_type     | vxlan                                |
| provider:physical_network | None                                 |
| provider:segmentation_id  | 1073                                 |
| qos_policy_id             | None                                 |
| revision_number           | 2                                    |
| router:external           | Internal                             |
| segments                  | None                                 |
| shared                    | False                                |
| status                    | ACTIVE                               |
| subnets                   |                                      |
| tags                      |                                      |
| updated_at                | 2018-07-25T18:04:25Z                 |
+---------------------------+--------------------------------------+
Create the RBAC. It is important that the asterisk is surrounded by single-quotes to prevent the shell from expanding it to all files in the current directory.
ardana > openstack network rbac create --type network \
--action access_as_shared --target-project '*' test-net
Here are the resulting RBAC attributes:
+---------------+--------------------------------------+
| Field         | Value                                |
+---------------+--------------------------------------+
| action        | access_as_shared                     |
| id            | 0b797cc6-debc-48a1-bf9d-d294b077d0d9 |
| object_id     | a4bd7c3a-818f-4431-8cdb-fedf7ff40f73 |
| object_type   | network                              |
| target_tenant | *                                    |
| tenant_id     | 1be57778b61645a7a1c07ca0ac488f9e     |
+---------------+--------------------------------------+
5.7 Target Project (demo2) View of Networks and Subnets #
Note that the owner of the network and subnet is not the tenant named demo2. Both the network and subnet are owned by the tenant demo. Members of demo2 cannot create subnets of the network. They also cannot modify or delete subnets owned by demo.
As the tenant demo2, you can get a list of neutron networks:
ardana > openstack network list
+--------------------------------------+-----------+--------------------------------------------------+
| id                                   | name      | subnets                                          |
+--------------------------------------+-----------+--------------------------------------------------+
| f60f3896-2854-4f20-b03f-584a0dcce7a6 | ext-net   | 50e39973-b2e3-466b-81c9-31f4d83d990b             |
| c3d55c21-d8c9-4ee5-944b-560b7e0ea33b | demo-net  | d9b765da-45eb-4543-be96-1b69a00a2556 10.0.1.0/24 |
| ...                                  |           |                                                  |
+--------------------------------------+-----------+--------------------------------------------------+
And get a list of subnets:
ardana > openstack subnet list --network c3d55c21-d8c9-4ee5-944b-560b7e0ea33b
+--------------------------------------+---------+--------------------------------------+-------------+
| ID                                   | Name    | Network                              | Subnet      |
+--------------------------------------+---------+--------------------------------------+-------------+
| a806f28b-ad66-47f1-b280-a1caa9beb832 | ext-net | c3d55c21-d8c9-4ee5-944b-560b7e0ea33b | 10.0.1.0/24 |
+--------------------------------------+---------+--------------------------------------+-------------+
To show details of the subnet:
ardana > openstack subnet show d9b765da-45eb-4543-be96-1b69a00a2556
+-------------------+--------------------------------------------+
| Field | Value |
+-------------------+--------------------------------------------+
| allocation_pools | {"start": "10.0.1.2", "end": "10.0.1.254"} |
| cidr | 10.0.1.0/24 |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 10.0.1.1 |
| host_routes | |
| id | d9b765da-45eb-4543-be96-1b69a00a2556 |
| ip_version | 4 |
| ipv6_address_mode | |
| ipv6_ra_mode | |
| name | sb-demo-net |
| network_id | c3d55c21-d8c9-4ee5-944b-560b7e0ea33b |
| subnetpool_id | |
| tenant_id | 75eb5efae5764682bca2fede6f4d8c6f |
+-------------------+--------------------------------------------+
5.8 Target Project: Creating a Port Using demo-net #
The owner of the port is demo2. Members of the network owner project
(demo) will not see this port.
Running the following command:
ardana > openstack port create c3d55c21-d8c9-4ee5-944b-560b7e0ea33b
Creates a new port:
+-----------------------+-----------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+-----------------------------------------------------------------------------------------------------+
| admin_state_up | True |
| allowed_address_pairs | |
| binding:vnic_type | normal |
| device_id | |
| device_owner | |
| dns_assignment | {"hostname": "host-10-0-1-10", "ip_address": "10.0.1.10", "fqdn": "host-10-0-1-10.openstacklocal."} |
| dns_name | |
| fixed_ips | {"subnet_id": "d9b765da-45eb-4543-be96-1b69a00a2556", "ip_address": "10.0.1.10"} |
| id | 03ef2dce-20dc-47e5-9160-942320b4e503 |
| mac_address | fa:16:3e:27:8d:ca |
| name | |
| network_id | c3d55c21-d8c9-4ee5-944b-560b7e0ea33b |
| security_groups | 275802d0-33cb-4796-9e57-03d8ddd29b94 |
| status | DOWN |
| tenant_id | 5a582af8b44b422fafcd4545bd2b7eb5 |
+-----------------------+-----------------------------------------------------------------------------------------------------+
5.9 Target Project Booting a VM Using Demo-Net #
Here the tenant demo2 boots a VM that uses the demo-net shared network:
ardana > openstack server create --flavor 1 --image $OS_IMAGE \
--nic net-id=c3d55c21-d8c9-4ee5-944b-560b7e0ea33b demo2-vm-using-demo-net-nic
+--------------------------------------+------------------------------------------------+
| Property | Value |
+--------------------------------------+------------------------------------------------+
| OS-EXT-AZ:availability_zone | |
| OS-EXT-STS:power_state | 0 |
| OS-EXT-STS:task_state | scheduling |
| OS-EXT-STS:vm_state | building |
| OS-SRV-USG:launched_at | - |
| OS-SRV-USG:terminated_at | - |
| accessIPv4 | |
| accessIPv6 | |
| adminPass | sS9uSv9PT79F |
| config_drive | |
| created | 2016-01-04T19:23:24Z |
| flavor | m1.tiny (1) |
| hostId | |
| id | 3a4dc44a-027b-45e9-acf8-054a7c2dca2a |
| image | cirros-0.3.3-x86_64 (6ae23432-8636-4e...1efc5) |
| key_name | - |
| metadata | {} |
| name | demo2-vm-using-demo-net-nic |
| os-extended-volumes:volumes_attached | [] |
| progress | 0 |
| security_groups | default |
| status | BUILD |
| tenant_id | 5a582af8b44b422fafcd4545bd2b7eb5 |
| updated | 2016-01-04T19:23:24Z |
| user_id | a0e6427b036344fdb47162987cb0cee5 |
+--------------------------------------+------------------------------------------------+
Run openstack server list:
ardana > openstack server listSee the VM running:
+-------------------+-----------------------------+--------+------------+-------------+--------------------+
| ID                | Name                        | Status | Task State | Power State | Networks           |
+-------------------+-----------------------------+--------+------------+-------------+--------------------+
| 3a4dc...a7c2dca2a | demo2-vm-using-demo-net-nic | ACTIVE | -          | Running     | demo-net=10.0.1.11 |
+-------------------+-----------------------------+--------+------------+-------------+--------------------+
Run openstack port list:
ardana > openstack port list --device-id 3a4dc44a-027b-45e9-acf8-054a7c2dca2a
View the port:
+---------------------+------+-------------------+-------------------------------------------------------------------+
| id | name | mac_address | fixed_ips |
+---------------------+------+-------------------+-------------------------------------------------------------------+
| 7d14ef8b-9...80348f | | fa:16:3e:75:32:8e | {"subnet_id": "d9b765da-45...00a2556", "ip_address": "10.0.1.11"} |
+---------------------+------+-------------------+-------------------------------------------------------------------+Run openstack port show:
ardana > openstack port show 7d14ef8b-9d48-4310-8c02-00c74d80348f
+-----------------------+-----------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+-----------------------------------------------------------------------------------------------------+
| admin_state_up | True |
| allowed_address_pairs | |
| binding:vnic_type | normal |
| device_id | 3a4dc44a-027b-45e9-acf8-054a7c2dca2a |
| device_owner | compute:None |
| dns_assignment | {"hostname": "host-10-0-1-11", "ip_address": "10.0.1.11", "fqdn": "host-10-0-1-11.openstacklocal."} |
| dns_name | |
| extra_dhcp_opts | |
| fixed_ips | {"subnet_id": "d9b765da-45eb-4543-be96-1b69a00a2556", "ip_address": "10.0.1.11"} |
| id | 7d14ef8b-9d48-4310-8c02-00c74d80348f |
| mac_address | fa:16:3e:75:32:8e |
| name | |
| network_id | c3d55c21-d8c9-4ee5-944b-560b7e0ea33b |
| security_groups | 275802d0-33cb-4796-9e57-03d8ddd29b94 |
| status | ACTIVE |
| tenant_id | 5a582af8b44b422fafcd4545bd2b7eb5 |
+-----------------------+-----------------------------------------------------------------------------------------------------+
5.10 Limitations #
Note the following limitations of RBAC in neutron.
neutron network is the only supported RBAC neutron object type.
The "access_as_external" action is not supported – even though it is listed as a valid action by python-neutronclient.
The neutron-api server will not accept action value of 'access_as_external'. The
access_as_externaldefinition is not found in the specs.The target project users cannot create, modify, or delete subnets on networks that have RBAC policies.
The subnet of a network that has an RBAC policy cannot be added as an interface of a target tenant's router. For example, the command
openstack router add subnet tgt-tenant-router <sb-demo-net uuid>will error out.The security group rules on the network owner do not apply to other projects that can use the network.
A user in the target project can boot VMs with a VNIC attached to the shared network. The user of the target project can assign a floating IP (FIP) to the VM. The target project must have security group rules that allow SSH and/or ICMP for VM connectivity.
neutron RBAC creation and management are currently not supported in horizon. For now, the neutron CLI has to be used to manage RBAC rules.
An RBAC rule tells neutron whether a tenant can access a network (Allow). Currently there is no DENY action.
Port creation on a shared network fails if --fixed-ip is specified in the openstack port create command.
6 Enabling Network Security Group Logging #
Currently securitygroup uses an iptables-based firewall by
default. This section provides information for enabling Open vSwitch (OVS)
Network Security Group logging.
As a prerequisite, the system configuration must specify the native OVS
firewall driver. Under [securitygroup] in
~/openstack/my_cloud/config/neutron/ml2_conf.ini.j2,
change the firewall driver to firewall_driver = openvswitch.
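The resulting section in ml2_conf.ini.j2 would read:
[securitygroup]
firewall_driver = openvswitch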
Use the following steps to enable logging for
securitygroup.
Add log as a service_plugin in ~/openstack/my_cloud/config/neutron/neutron.conf.j2:
service_plugins = {{ neutron_service_plugins }},log
Add the log extension in the agent section of ~/openstack/my_cloud/config/neutron/ml2_conf.ini.j2:
[agent]
extensions = log
Add the log extension in the agent section of ~/openstack/my_cloud/config/neutron/openvswitch_agent.ini.j2. If other extensions are configured (such as qos), the log extension must be added manually or the functionality of the other extension will break.
[agent]
extensions = log
Configure the network_log section in ~/openstack/my_cloud/config/neutron/openvswitch_agent.ini.j2. If a custom file is configured to use for output logs, log file rotation must be done manually. Using a custom log file is optional. Set rate_limit and burst_limit according to the environment.
[network_log]
rate_limit = 100
burst_limit = 25
local_output_log_base = /var/log/neutron/security_group.log
Commit changes to git.
ardana >cd ~/openstack/ardana/ansible/ardana >git add -Aardana >git commit -m "Enable logging for security groups"Run configuration processor and ready deployment playbooks.
ardana >ansible-playbook -i hosts/localhost config-processor-run.ymlardana >ansible-playbook -i hosts/localhost ready-deployment.ymlFor a cloud that is already deployed, run the
neutron-reconfigure.ymlplaybook or follow cloud deployment steps.ardana >cd ~/scratch/ansible/next/ardana/ansible/ardana >ansible-playbook -i hosts/verb_hosts neutron-reconfigure.yml orardana >ansible-playbook -i hosts/verb_hosts site.yml
We recommend enabling logging for securitygroup and OVS-based firewall features during deployment.
After deployment, Network Security Group logging can be enabled with the following OpenStackClient commands:
ardana > source ~/service.osrc
ardana > openstack network loggable resources list
+-----------------+
| Supported types |
+-----------------+
| security_group  |
+-----------------+
ardana > openstack network log create --resource-type security_group \
  --event ALL --enable sg_log_admin
ardana > openstack network log show sg_log_admin
+-----------------+--------------------------------------+
| Field           | Value                                |
+-----------------+--------------------------------------+
| Description     |                                      |
| Enabled         | True                                 |
| Event           | ALL                                  |
| ID              | c9e7b763-3013-4a40-b697-c18f7cb9d588 |
| Name            | sg_log_admin                         |
| Resource        | None                                 |
| Target          | None                                 |
| Type            | security_group                       |
| created_at      | 2019-04-26T15:17:43Z                 |
| revision_number | 0                                    |
| updated_at      | 2019-04-26T15:17:43Z                 |
+-----------------+--------------------------------------+
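With logging enabled, events are written to the configured output log. As a quick check, you can generate some traffic and watch the log; this is a sketch in which the VM address 10.0.1.11 is just the example port from Section 5 and the log path assumes the local_output_log_base configured above:

# Generate traffic that a logged security group will evaluate (example address)
ping -c 3 10.0.1.11
# Watch ACCEPT/DROP events arrive in the configured output log
sudo tail -f /var/log/neutron/security_group.log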
7 Configuring keystone and horizon to use X.509 Client Certificates #
The keystone service supports X.509 SSL certificate authentication and authorization for accessing the horizon dashboard in SUSE OpenStack Cloud. This feature is disabled by default and must be manually configured and enabled by running a number of Ansible playbooks.
Enabling client SSL certificate authentication and authorization for the horizon dashboard is a non-core feature in SUSE OpenStack Cloud.
7.1 Keystone Configuration #
To configure and enable X.509 SSL authentication and authorization support for the keystone service, perform the following steps.
Create a new configuration file named x509auth.yml and place it in any directory on your deployer node. For example, run the following command to create the file in the /tmp directory:

touch /tmp/x509auth.yml
Edit the new file to include the following text. Note that YAML files are whitespace-sensitive. Preserve the indentation format of the following text.
keystone_x509auth_conf:
  identity_provider:
    id: intermediateca
    description: This is the trusted issuer HEX Id.
  mapping:
    id: x509_mapping1
    rules_file: /tmp/x509auth_mapping.json
  protocol:
    id: x509
    remote_id: intermediateca
  ca_file: /tmp/cacert.pem

The preceding example sets a number of configuration parameters for the X.509/keystone configuration. The following are detailed descriptions of each parameter.
identity_provider: This section identifies and describes an outside identity provider.
id: Any unique, readable string that identifies the identity provider.
description: A description of the identity provider.
mapping: This section describes a JSON-format file that maps X.509 client certificate attributes to a local keystone user.
id: Any unique, readable string that identifies the user-certificate mapping.
rules_file: The filepath to a JSON file that contains the client certificate attributes mapping.
protocol: This section sets the cryptographic protocol to be used.
id: The cryptographic protocol used for the certificate-based authentication/authorization.
remote_id: By default, this field expects the client certificate issuer's common name (CN) as a value. The expected value is set in the keystone.conf file, where the default setting is:

remote_id_attribute = SSL_CLIENT_I_DN_CN
ca_file: The file that contains the client certificate's related intermediate and root CA certificates.
Note: In the /tmp/x509auth.yml file, the ca_file value should be a file that contains both the root and signing CA certificates (often found in /home/pki/cacert.pem).

Create a JSON-formatted mapping file. To do so, edit the x509auth.yml file you created in Step 2 to reference this file in the mapping → rules_file parameter. You can create the file with the following example command:

touch /tmp/x509auth_mapping.json
Edit the JSON file you created in Step 3 to include the following content:
[
    {
        "local": [
            {
                "user": {
                    "name": "{0}",
                    "domain": { "name": "{1}" },
                    "type": "local"
                }
            }
        ],
        "remote": [
            { "type": "SSL_CLIENT_S_DN_CN" },
            { "type": "SSL_CLIENT_S_DN_O" },
            {
                "type": "SSL_CLIENT_I_DN",
                "any_one_of": [ ]
            }
        ]
    }
]

Enter the distinguished name(s) (DN) of the certificate issuer(s) that issued your client certificates into the any_one_of field in the remote block. The any_one_of field is a comma-separated list of all certificate issuers that you want the keystone service to trust.
All DNs in the any_one_of field must adhere to the following format: a descending list of DN elements, with each element separated by a forward slash (/).

The following is an example of a properly formatted DN for a certificate issuer named intermediateca.

/C=US/ST=California/O=EXAMPLE/OU=Engineering/CN=intermediateca/emailAddress=user@example.com
The following example file illustrates an x509auth_mapping.json file with the intermediateca certificate issuer added to the any_one_of field. Note that the DN string is in quotes.

[
    {
        "local": [
            {
                "user": {
                    "name": "{0}",
                    "domain": { "name": "{1}" },
                    "type": "local"
                }
            }
        ],
        "remote": [
            { "type": "SSL_CLIENT_S_DN_CN" },
            { "type": "SSL_CLIENT_S_DN_O" },
            {
                "type": "SSL_CLIENT_I_DN",
                "any_one_of": [
                    "/C=US/ST=California/O=EXAMPLE/OU=Engineering/CN=intermediateca/emailAddress=user@example.com"
                ]
            }
        ]
    }
]

The keystone service will trust all client certificates issued by any of the certificate issuers listed in the any_one_of field.
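If you are unsure of an issuer's exact DN, you can read it from any certificate that CA has signed. A minimal sketch, where sam.crt is a hypothetical signed client certificate; the -nameopt compat option prints the legacy slash-separated form expected by any_one_of:

# Print the issuer DN in slash-separated format
openssl x509 -in sam.crt -noout -issuer -nameopt compat
# issuer= /C=US/ST=California/O=EXAMPLE/OU=Engineering/CN=intermediateca/emailAddress=user@example.com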
Run the following commands to enable the new X.509/keystone settings.
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts keystone-reconfigure.yml -e@/tmp/x509auth.yml
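After the playbook completes, you can check that the identity provider, mapping, and protocol were registered. This is a sketch using standard OpenStackClient federation commands and the example identity provider name from above:

source ~/service.osrc
openstack identity provider list
openstack mapping list
openstack federation protocol list --identity-provider intermediateca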
7.2 HAProxy Configuration #
Because of the experimental nature of the HAProxy feature, it is important to minimize the risk of impacting other services. If you have implemented, or wish to implement the HAProxy feature alongside client SSL certificate login to the horizon dashboard in your cloud, please complete the following steps to make the necessary manual configuration changes.
You must perform the keystone configuration steps in the previous section before performing the following HAProxy configuration changes.
Locate and open the ~/openstack/ardana/ansible/roles/haproxy/templates/haproxy.cfg file.

Locate the following line in the haproxy.cfg file.

listen {{ network.vip }}-{{ port }}

Enter the following code block in the open space immediately preceding the listen {{ network.vip }}-{{ port }} line.

{%- if service == 'KEY_API' and port == '5000' %}
{% set bind_defaults = 'ca-file /etc/ssl/private/cacert.pem verify optional' %}
{%- endif %}

After entering the preceding code, your haproxy.cfg file should look like the following example.

{%- if network.terminate_tls is defined and network.terminate_tls and port == '80' %}
{% set port = '443' %}
{%- endif %}
{%- if service == 'KEY_API' and port == '5000' %}
{% set bind_defaults = 'ca-file /etc/ssl/private/cacert.pem verify optional' %}
{%- endif %}
listen {{ network.vip }}-{{ port }}
{%- set options = network.vip_options | default(vip_options_defaults) %}
{%- if options > 0 %}
{%- for option in options %}
    {{ option }}
{%- endfor %}
{%- endif %}
    bind {{ network.vip }}:{{ port }} {% if network.terminate_tls is defined and network.terminate_tls %} ssl crt {{ frontend_server_cert_directory }}/{{ network.cert_file }} {{ bind_defaults }} {% endif %}

Commit the changes to your local git repository.
git add -A
git commit -m "Added HAProxy configuration changes"
Run the configuration processor and ready-deployment Ansible playbooks.
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost config-processor-run.yml
ansible-playbook -i hosts/localhost ready-deployment.yml
Implement the HAProxy configuration changes.
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts FND-CLU-reconfigure.yml
7.3 Create CA and client certificates #
An X.509 client certificate can be issued from any certificate authority (CA). You can use the openssl command-line tool to generate certificate signing requests (CSRs). Once a CA has signed your CSR, the CA will return a signed certificate that you can use to authenticate to horizon.
Read more about openssl here: https://www.openssl.org/
Your cloud's load balancer will reject any self-signed client SSL certificates. Ensure that all client certificates are signed by a certificate authority that your cloud recognizes.
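For example, here is a minimal sketch of generating a key and CSR with openssl; the subject values are illustrative, and the CN and O must match the keystone username and domain as described in Section 7.6:

# Generate a 2048-bit key and a CSR whose CN/O identify the keystone user
openssl req -new -newkey rsa:2048 -nodes -keyout sam.key \
  -subj "/C=US/ST=California/O=EXAMPLE/CN=sam" -out sam.csr
# After the CA returns the signed certificate (sam.crt), bundle it with the
# key in PKCS#12 format so it can be imported into a web browser (Section 7.5)
openssl pkcs12 -export -inkey sam.key -in sam.crt -out sam.p12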
7.4 Horizon Configuration #
Complete the following steps to configure horizon to support SSL certificate authorization and authentication.
Edit the ~/openstack/ardana/ansible/roles/HZN-WEB/defaults/main.yml file and set the following parameter to True.

horizon_websso_enabled: True

Locate the last line in the ~/openstack/ardana/ansible/roles/HZN-WEB/defaults/main.yml file. The default configuration for this line should look like the following.

horizon_websso_choices:
  - {protocol: saml2, description: "ADFS Credentials"}

If your cloud does not have AD FS enabled, then replace the entry under the preceding horizon_websso_choices: parameter with the following.

- {protocol: x509, description: "X.509 SSL Certificate"}

The resulting block should look like the following.

horizon_websso_choices:
  - {protocol: x509, description: "X.509 SSL Certificate"}

If your cloud does have AD FS enabled, do not replace the default entry; simply add the following line to the existing horizon_websso_choices: block.

- {protocol: x509, description: "X.509 SSL Certificate"}

If your cloud has AD FS enabled, the final block of your ~/openstack/ardana/ansible/roles/HZN-WEB/defaults/main.yml should have the following entries.

horizon_websso_choices:
  - {protocol: x509, description: "X.509 SSL Certificate"}
  - {protocol: saml2, description: "ADFS Credentials"}
Run the following commands to add your changes to the local git repository, and reconfigure the horizon service, enabling the changes made in Step 1:
cd ~/openstack
git add -A
git commit -m "my commit message"
cd ~/openstack/ardana/ansible/
ansible-playbook -i hosts/localhost config-processor-run.yml
ansible-playbook -i hosts/localhost ready-deployment.yml
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts horizon-reconfigure.yml
7.5 Browser configuration #
To enable your web browser to present a certificate to the horizon dashboard upon login, you first need to import the certificate. The steps to complete this action will vary from browser to browser. Please refer to your browser's documentation for specific instructions.
Import the desired certificate into your web browser's certificate store.
After importing the certificate, verify that it appears in your browser's certificate manager.
7.6 User accounts #
For the keystone service to use X.509 certificates to grant users access to horizon, there must be a keystone user account associated with each certificate. keystone associates user accounts with certificates by matching the common name (CN) and organization (O) of a presented certificate with the username and domain of an existing keystone user.
When an X.509 certificate is presented to horizon for authentication/authorization, horizon passes the certificate information along to the keystone service. keystone attempts to match the CN and O of the certificate with the username and domain of an existing local user account. For this operation to be successful, there must be a keystone user account and domain that match the CN and O of the certificate.
For example, if a user named Sam presents a certificate to horizon with the following information,
CN=sam
O=EXAMPLE
Then there must be an existing keystone user account with the following values,
Username=sam
Domain=EXAMPLE
Further, Sam's client certificate must have been issued by one of the
certificate issuers listed in the
any_one_of field in the
x509auth_mapping.json file.
Also, when creating a local keystone user, you must assign the user account a project scope. Without a project scope, the authorization portion of the sign-on process will fail.
The following steps illustrate how to use the CLI to create a domain, create and manage a user, and assign a permissions role to the new user.
Create a new domain named EXAMPLE.

openstack domain create EXAMPLE

Create a new project named xyz under the EXAMPLE domain.

openstack project create --domain EXAMPLE xyz

Create a new user named sam in the EXAMPLE domain. Set the password and email for the new account.

openstack user create --domain EXAMPLE --password pass \
  --email sam@example.com --enable sam

Create a new role named role1.

openstack role create role1

Grant the new role, role1, to the new user sam from the EXAMPLE domain. Note that both the user account and domain must be referenced by their unique ID numbers rather than their friendly names.

openstack role add --user 04f3db9e7f3f45dc82e1d5f20b4acfcc \
  --domain 6b64021839774991b5e0df16077f11eb role1

Add the user sam to the newly-created project from Step 2. Note that the project and user account must be referenced by their respective unique ID numbers rather than their friendly names.

openstack role add --project 4e2ad14406b247c7a9fc0a48c0b1713e \
  --user 04f3db9e7f3f45dc82e1d5f20b4acfcc role1
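The unique IDs needed by the two role add commands above can be looked up with standard OpenStackClient queries, for example:

# Print only the ID of the user, domain, and project created above
openstack user show --domain EXAMPLE sam -f value -c id
openstack domain show EXAMPLE -f value -c id
openstack project show --domain EXAMPLE xyz -f value -c id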
7.7 How it works #
The SSL authentication and authorization process is detailed in the following steps.
User directs a web browser to the SUSE OpenStack Cloud horizon login landing page.
The user selects the "X.509 Certificate" login option from the dropdown menu.
horizon responds with an HTTP 302 redirect, redirecting the browser to the SSL-protected keystone (federated) authentication endpoint.
The browser then prompts the user to select the certificate to use for the login (if there is more than one certificate for the given trusted certificate authority (CA)).
The web browser establishes a 2-way SSL handshake with the keystone service.
keystone, utilizing federation mapping, maps the user to a federated persona and issues a (federated) unscoped token.
The token is then passed to the browser, along with JavaScript code that redirects the browser back to the horizon dashboard.
The browser then logs into the horizon dashboard using the newly issued unscoped token to authenticate the user.
horizon queries the keystone service for the list of federated projects the authenticated user has access to.
horizon then rescopes the token to the first project, granting the user authorization.
The login process is completed.
8 Transport Layer Security (TLS) Overview #
The Transport Layer Security (TLS) protocol, successor to SSL, provides the mechanisms to ensure authentication, non-repudiation, confidentiality, and integrity of user communications to and between the SUSE OpenStack Cloud services from internal and public endpoints.
OpenStack endpoints are HTTP (REST) services providing APIs to other OpenStack services on the management network. All traffic to OpenStack services coming in on the public endpoints and some traffic between services can be secured using TLS connections.
In SUSE OpenStack Cloud 9, the following TLS features are enabled:
API endpoints in the internal and admin VIPs can now be secured by TLS.
API endpoints can be provided with their own certificates (this is shown in the model examples) or they can simply use the default certificate.
The barbican key management service API can be secured by TLS from the load balancer to the service endpoint.
You can add multiple trust chains (certificate authority (CA) certificates).
Fully qualified domain names (FQDNs) can be used for public endpoints, and they can now be changed. The external name in the input model files (in ~/openstack/my_cloud/definition/data/network_groups.yml) is where the domain name is indicated and changed.

There are two monitoring alarms specific to certificates: 14 days to certificate expiration and 1 day to expiration.
TLS can be turned off/on for individual endpoints.
8.1 Comparing Clean Installation and Upgrade of SUSE OpenStack Cloud #
Clean install: all TLS-encrypted services are already listed under
tls-components in network_groups.yml
You just have to:
Add your self-signed CA cert and server cert (for testing)
Or add your public (or company) CA-signed server cert and the public (or company) CA cert (for production)
Upgrade: you do not have TLS enabled already on the internal endpoints, so you need to:
Add your self-signed CA cert and server cert (for testing)
Or add your public (or company) CA-signed server cert and the public (or company) CA cert (for production)
Add all the services to tls-components in
network_groups.yml
For information on enabling and disabling TLS, see Section 8.2, “TLS Configuration”.
For instructions on installing certificates, see Section 8.2, “TLS Configuration”.
8.2 TLS Configuration #
In SUSE OpenStack Cloud 9, you can provide your own certificate authority and certificates for internal and public virtual IP addresses (VIPs), and you should do so for any production cloud. The certificates automatically generated by SUSE OpenStack Cloud are useful for testing and setup, but you should always install your own for production use. For further information, see Book “Deployment Guide using Cloud Lifecycle Manager”, Chapter 41 “Configuring Transport Layer Security (TLS)”
8.3 Enabling TLS for MySQL Traffic #
MySQL traffic can be encrypted using TLS. For completely new SUSE OpenStack Cloud
deployments using the supplied input model example files, you will have to
uncomment the commented entries for
tls-component-endpoints:. For upgrades from a previous
version, you will have to add the entries to your input model files if you
have not already done so. This topic explains how to do both.
8.3.1 Enabling TLS on the database server for client access #
Edit network_groups.yml to either add mysql under tls-component-endpoints in your existing file from a previous version, or uncomment it if installing from scratch.

tls-component-endpoints:
  - mysql
After making the necessary changes, commit the changed file to git and run the config-processor-run and reconfigure Ansible playbooks:
cd ~/openstack
git add -A
git commit -m "My changed config"
cd ~/openstack/ardana/ansible/
ansible-playbook -i hosts/localhost config-processor-run.yml -e encrypt="<encryption key>" -e rekey=""
ansible-playbook -i hosts/localhost ready-deployment.yml
cd ~/scratch/ansible/next/ardana/ansible
Next, either run site.yml if you are installing a new system:

ansible-playbook -i hosts/verb_hosts site.yml
or run ardana-reconfigure.yml if you are reconfiguring an existing one:
ansible-playbook -i hosts/verb_hosts ardana-reconfigure.yml
8.3.2 MySQL replication over TLS #
MySQL replication over TLS is disabled by default. This is true even if you followed the instructions to enable MySQL TLS in the previous section; those steps only enable TLS for service interactions with the database.
Turning on MySQL replication over TLS
Using TLS connections for MySQL replication will incur a performance cost.
You should have already enabled TLS for MySQL client interactions in the previous section. If not, read Section 8.3.1, “Enabling TLS on the database server for client access”.
TLS for MySQL replication is not turned on by default. Therefore, you will need to follow a manual process. Again, the steps are different for new systems and upgrades.
8.3.3 Enabling TLS for MySQL replication on a new deployment #
Log in to the Cloud Lifecycle Manager node and, before running the configuration processor, edit the ~/openstack/my_cloud/config/mariadb/defaults.yml file.

Search for mysql_gcomms_bind_tls. You should find this section:

# TLS disabled for cluster
#mysql_gcomms_bind_tls: "{{ host.bind['FND_MDB'].mysql_gcomms.tls }}"
mysql_gcomms_bind_tls: False

Uncomment the appropriate line and comment out the other so the file looks like this:

# TLS enabled for cluster
mysql_gcomms_bind_tls: "{{ host.bind['FND_MDB'].mysql_gcomms.tls }}"
#mysql_gcomms_bind_tls: False

Follow the steps to deploy or reconfigure your cloud: Step 2 in Section 8.3.1, “Enabling TLS on the database server for client access”.
8.3.4 Enabling TLS for MySQL replication on an existing system #
If your cluster is already up, perform these steps to enable MySQL replication over TLS:
Edit the following two files: ~/openstack/my_cloud/config/mariadb/defaults.yml and ~/scratch/ansible/next/ardana/ansible/roles/FND-MDB/defaults/main.yml. Note that these files are identical: the first is a master file and the second is a scratch version used for the current deployment. Make the same changes as explained in Section 8.3.3, “Enabling TLS for MySQL replication on a new deployment”.

Then run the following command:

ansible-playbook -i hosts/verb_hosts tls-percona-reconfigure.yml
After this, your MySQL should come up and replicate over TLS. Follow this section again if you ever want to switch TLS off for MySQL replication. You must also repeat these steps if any lifecycle operation changes the mysql_gcomms_bind_tls option.
8.3.5 Testing whether a service is using TLS #
Almost all services that have a database are able to communicate over TLS. You can test whether a service, in this example the Identity service (keystone), is communicating with MySQL over TLS by executing the following steps:
Log in to a node that is a member of the database cluster, change to the root user (for example, with sudo -i), and run the mysql command.

root@<server>:~# mysql
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
Run:
mysql> select * from information_schema.user_statistics where user='keystone'\G
Note the results. TOTAL_SSL_CONNECTIONS should not be zero:
*************************** 1. row ***************************
                  USER: keystone
     TOTAL_CONNECTIONS: 316
CONCURRENT_CONNECTIONS: 0
        CONNECTED_TIME: 905790
             BUSY_TIME: 205
              CPU_TIME: 141
        BYTES_RECEIVED: 197137617
            BYTES_SENT: 801964
  BINLOG_BYTES_WRITTEN: 0
          ROWS_FETCHED: 972421
          ROWS_UPDATED: 6893
       TABLE_ROWS_READ: 1025866
       SELECT_COMMANDS: 660209
       UPDATE_COMMANDS: 3039
        OTHER_COMMANDS: 299746
   COMMIT_TRANSACTIONS: 0
 ROLLBACK_TRANSACTIONS: 295200
    DENIED_CONNECTIONS: 0
      LOST_CONNECTIONS: 83
         ACCESS_DENIED: 0
         EMPTY_QUERIES: 71778
 TOTAL_SSL_CONNECTIONS: 298
1 row in set (0.00 sec)
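You can also confirm from outside the database that the server offers TLS on its MySQL port. A sketch, assuming OpenSSL 1.1.1 or later (which added -starttls mysql) and substituting your database VIP:

# Fetch and display the server certificate offered on the MySQL port
openssl s_client -starttls mysql -connect <database-vip>:3306 </dev/null 2>/dev/null \
  | openssl x509 -noout -subject -dates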
8.4 Enabling TLS for RabbitMQ Traffic #
RabbitMQ traffic can be encrypted using TLS. To enable it, you will have to
add entries for tls-component-endpoints: in your input
model files if you have not already done so. This topic explains how.
Edit openstack/my_cloud/definition/data/network_groups.yml, adding rabbitmq to the tls-component-endpoints section:

tls-component-endpoints:
  - barbican-api
  - mysql
  - rabbitmq

Commit the changes:

cd ~/openstack
git add -A
git commit -m "My changed config"
Then run the typical deployment steps:
cd ~/openstack/ardana/ansible/
ansible-playbook -i hosts/localhost config-processor-run.yml -e encrypt="<encryption key>" -e rekey=""
ansible-playbook -i hosts/localhost ready-deployment.yml
Change directories:
cd ~/scratch/ansible/next/ardana/ansible
Then for a fresh TLS install run:
ansible-playbook -i hosts/verb_hosts site.yml
Or, to reconfigure an existing system run:
ansible-playbook -i hosts/verb_hosts ardana-reconfigure.yml
8.4.1 Testing #
On one of the RabbitMQ nodes, you can list the clients and their TLS status by running:
$ sudo rabbitmqctl -q list_connections ssl state ssl_protocol user name
You will see output like the following, where true indicates the client is using TLS for the connection and false, as shown here, indicates the connection is over TCP:
Listing connections ... rmq_barbican_user false
Another indicator is rabbit_use_ssl = True in the Oslo messaging section of client configurations. The clients that support TLS are as follows:
barbican
cinder
designate
eon
glance
heat
ironic
keystone
monasca
neutron
nova
octavia
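To check this setting for a given client, you can grep its configuration on the service node. A sketch, assuming barbican's configuration lives under /etc/barbican; adjust the path for other services:

sudo grep -r rabbit_use_ssl /etc/barbican/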
8.5 Troubleshooting TLS #
8.5.1 Troubleshooting TLS certificate errors when running playbooks with a limit #
Has the deployer been restarted after the original site installation, or is this a new deployer? If so, TLS certificates need to be bootstrapped before a playbook is run with limits. You can do this by running the following commands.
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts tls-reconfigure.yml --limit TLS-CA
8.5.2 Certificate Update Failure #
In general, a certificate update fails for one of two reasons: HAProxy has not been restarted, or the trust chain is not installed. The trust chain is the certificate of the CA that signed the server certificate.
8.5.3 Troubleshooting trust chain installation #
It is important to note that while SUSE OpenStack Cloud 9 allows you to add new trust chains, it is better to add all the required trust chains during the initial deployment. Trust chain changes can impact services.
However, this does not apply to certificates. There is a certificate-related issue whereby HAProxy is not restarted if certificate content has changed but the certificate file name remained the same. If you are having issues and you have replaced the content of an existing CA file with new content, create another CA file with a new name. Also make sure the CA file has a .crt extension.
Do not update both the certificate and the CA together. Add the CA first and then run a site deploy. Then update the certificate and run tls-reconfigure, FND-CLU-stop, FND-CLU-start, and then ardana-reconfigure. If you know which playbook failed, rerun it with -vv to get detailed error information. The configure, HAProxy restart, and reconfigure steps are included in Section 8.2, “TLS Configuration”.
You can run the following commands to see if client libraries see the CA you have added:
~/scratch/ansible/next/ardana/ansible$ ansible -i hosts/verb_hosts FND-STN -a 'sudo keytool -list -alias \
debian:username-internal-cacert-001.pem -keystore /usr/lib/jvm/java-7-openjdk-amd64/jre/lib/security/cacerts -storepass changeit'
padawan-ccp-c0-m1-mgmt | FAILED | rc=1 >>
sudo: keytool: command not found
padawan-ccp-comp0001-mgmt | FAILED | rc=1 >>
sudo: keytool: command not found
padawan-ccp-comp0003-mgmt | FAILED | rc=1 >>
sudo: keytool: command not found
padawan-ccp-comp0002-mgmt | FAILED | rc=1 >>
sudo: keytool: command not found
padawan-ccp-c1-m1-mgmt | success | rc=0 >>
debian:username-internal-cacert-001.pem, May 9, 2016, trustedCertEntry,
Certificate fingerprint (SHA1): E7:B2:6E:9E:00:FB:86:0F:E5:46:CD:B8:C5:67:13:53:4E:3D:8F:43
padawan-ccp-c1-m2-mgmt | success | rc=0 >>
debian:username-internal-cacert-001.pem, May 9, 2016, trustedCertEntry,
Certificate fingerprint (SHA1): E7:B2:6E:9E:00:FB:86:0F:E5:46:CD:B8:C5:67:13:53:4E:3D:8F:43
padawan-ccp-c1-m3-mgmt | success | rc=0 >>
debian:username-internal-cacert-001.pem, May 9, 2016, trustedCertEntry,
Certificate fingerprint (SHA1): E7:B2:6E:9E:00:FB:86:0F:E5:46:CD:B8:C5:67:13:53:4E:3D:8F:43

Java client libraries are used by monasca, so compute nodes will not have them; the first three errors are therefore expected. Check that the fingerprint is correct by checking the CA:
~/scratch/d002-certs/t002$ openssl x509 -in example-CA.crt -noout -fingerprint
SHA1 Fingerprint=E7:B2:6E:9E:00:FB:86:0F:E5:46:CD:B8:C5:67:13:53:4E:3D:8F:43
If they do not match, there likely was a name collision. Add the CA cert again with a new file name. If you get monasca errors but find that the fingerprints match, try stopping and restarting monasca.
ansible-playbook -i hosts/verb_hosts monasca-stop.yml
ansible-playbook -i hosts/verb_hosts monasca-start.yml
8.5.4 Expired TLS Certificates #
Use the following steps to re-create expired TLS certificates for MySQL Percona clusters.
Determine if the TLS certificates for MySQL / Percona have expired.
ardana > cd /etc/mysql/
ardana > openssl x509 -noout -enddate -in control-plane-1-mysql-internal-cert.pem

Regenerate the TLS certificates on the deployer.

ardana > cd ~/scratch/ansible/next/hos/ansible
ardana > ansible-playbook -i hosts/verb_hosts tls-reconfigure.yml --limit DEPLOYER_HOST

Distribute the regenerated TLS certificates to the MySQL Percona clusters.

ardana > cd ~/scratch/ansible/next/hos/ansible
ardana > ansible-playbook -i hosts/verb_hosts --extra-vars "mysql_certs_needs_regeneration=true" tls-percona-reconfigure.yml

Verify Percona cluster status on a controller node.
ardana >sudo mysql -e 'show status'
Use the following steps to re-create expired TLS certificates for RabbitMQ.
Determine if the SSL certificate for RabbitMQ has expired.

root # cd /etc/rabbitmq
root # openssl x509 -noout -text -in control-plane-1-rabbitmq.pem | grep After
        Not After : Nov  6 15:15:38 2018 GMT

Regenerate the TLS certificates on the deployer.

ardana > cd ~/scratch/ansible/next/hos/ansible
ardana > ansible-playbook -i hosts/verb_hosts tls-reconfigure.yml --limit DEPLOYER_HOST

Reconfigure RabbitMQ. The certificate will be re-created if the input model is correct.

ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts --extra-vars "rabbitmq_tls_certs_force_regeneration=true" rabbitmq-reconfigure.yml
8.5.5 Troubleshooting certificates #
Certificates can fail in SUSE OpenStack Cloud 9 for the following reasons.
Trust chain issue: this is dealt with in the previous section.
Wrong certificate: Compare the fingerprints. If they differ, then you have a wrong certificate somewhere.
Date range of the certificate is either in the future or expired: Check the dates and change certificates as necessary, observing the naming cautions above.
TLS handshake fails because the client does not support the ciphers the server offers: it is possible that you reused a certificate created for a different network model. Make sure the request files found under info/cert_req/ are used to create the certificate. If not, the service VIP names may not match.
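The first three causes can be checked quickly with openssl. A sketch, using hypothetical file names for the deployed server certificate and the CA you installed:

# Check validity dates and fingerprint of the server certificate and the CA
openssl x509 -in my-server-cert.crt -noout -dates -fingerprint
openssl x509 -in my-CA.crt -noout -dates -fingerprint
# Confirm the server certificate actually chains back to the installed CA
openssl verify -CAfile my-CA.crt my-server-cert.crt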
9 Preventing Host Header Poisoning #
Depending on the environment and context of your SUSE OpenStack Cloud deployment, it may be advisable to configure horizon to protect against Host header poisoning (see ref. #1 below) by using Django's ALLOWED_HOSTS setting (see ref. #2 below). To configure horizon to use the ALLOWED_HOSTS setting, take the following steps:
Edit the haproxy settings to reconfigure the health check for horizon to specify the allowed hostname(s). On an existing installation this must be done first, before configuring horizon itself: if horizon is configured to restrict the values of the Host header on incoming HTTP requests before haproxy is updated, the haproxy health checks will start to fail.
On your Cloud Lifecycle Manager node, make a backup copy of this file and then open /usr/share/ardana/input-model/2.0/services/horizon.yml
Find the line that contains "option httpchk" and modify it so it reads the following way:
- "option httpchk GET / HTTP/1.1\r\nHOST:\ my.example.com" # Note the escaped escape characters.
In this example, my.example.com is the hostname associated with the horizon VIP on the external API network. However, you are not restricted to just one allowed host. In addition, allowed hosts can contain wildcards (though not in the horizon.yml file; there you must have an actual resolvable hostname or a routable IP address). For this change to the haproxy health check, it is suggested that you use the hostname associated with the horizon VIP on the external API network.
Edit the template file that is used for horizon's local_settings.py configuration file.

While still on your Cloud Lifecycle Manager node, open ~/openstack/my_cloud/config/horizon/local_settings.py.

Change the line that sets the ALLOWED_HOSTS setting. This can be a list of hostnames and (V)IPs that eventually get routed to horizon. Wildcards are supported.
ALLOWED_HOSTS = ['my.example.com', '*.example.net', '192.168.245.6']
In the above example, any HTTP request received with a hostname not matching any in this list will receive an HTTP 400 reply.
Commit the change with a "git commit -a" command.
Run the configuration processor
cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost config-processor-run.yml
Enable the configuration. This can be done in one of a few ways: as part of a site deploy play, as part of an upgrade play, or by re-running the FND-CLU and horizon deploys on an existing deployment. If modifying an existing deployment, the FND-CLU deploy must be run first, since changing the ALLOWED_HOSTS setting in horizon first will cause the default health check to fail if it does not specify a Host header in the HTTP request sent to check the health of horizon's Apache virtual host.

cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/localhost ready-deployment.yml
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts horizon-deploy.yml
ansible-playbook -i hosts/verb_hosts FND-CLU-deploy.yml
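You can verify the new behavior from any machine that can reach the horizon VIP. A sketch, reusing the example hostname and VIP from the ALLOWED_HOSTS example above; -k skips certificate verification for test setups:

# Allowed Host header: expect the normal horizon response code
curl -k -o /dev/null -w "%{http_code}\n" -H "Host: my.example.com" https://192.168.245.6/
# Unlisted Host header: expect HTTP 400
curl -k -o /dev/null -w "%{http_code}\n" -H "Host: evil.example.org" https://192.168.245.6/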
References:
10 Encryption of Passwords and Sensitive Data #
In SUSE OpenStack Cloud, sensitive connection data is encrypted. The passwords that are encrypted include:
Inter-service passwords generated by the configuration processor (keystone, MariaDB, RabbitMQ and Cassandra passwords)
Secret keys generated by the configuration processor (MariaDB cluster-id, erlang cookie for RabbitMQ, horizon secret key, keystone admin token)
User-supplied passwords (IPMI passwords, Block Storage back-end passwords)
10.1 SSH Introduction #
| What is encrypted | Encryption mechanism | Is password changeable | Is encryption key changeable |
|---|---|---|---|
| Inter-service passwords and secret keys generated by the configuration processor (keystone, MariaDB, RabbitMQ and Cassandra passwords) | Uses PyCrypto libraries & Ansible vault for encryption | No | Yes. Passphrase for the encryption key will be prompted for when running an Ansible playbook. Can also use command |
| User-supplied passwords (IPMI passwords, Block Storage back-end passwords) | OpenSSL | Yes | Yes. The environment variable ARDANA_USER_PASSWORD_ENCRYPT_KEY must contain the key used to encrypt those passwords. |
Other protected data:
The SSH private key used by Ansible to connect to client nodes from the Cloud Lifecycle Manager is protected with a passphrase.
The swift swift-hash prefix and suffix values are encrypted.
All of the Ansible variables generated by the configuration processor are encrypted and held in Ansible Vault.
However, if a user wants to change the encryption keys, that can be done for all categories of passwords and secret keys listed above; the processes are documented in the sections that follow.
The SSH private key passphrase needs to be entered once before any Ansible plays are run against the cloud.
The configuration processor encryption key will be prompted for when the relevant Ansible play is run. Once the configuration processor output has been encrypted, all subsequent Ansible plays need --ask-vault-pass added to the command line so that the encryption key needed by Ansible is prompted for.
Finally, if user-supplied passwords have been encrypted (this process uses the OpenSSL library) then the environment variable ARDANA_USER_PASSWORD_ENCRYPT_KEY must contain the key used to encrypt those passwords.
If the ARDANA_USER_PASSWORD_ENCRYPT_KEY environment variable is null, the empty string, or not defined, then no encryption will be performed on your passwords when using the ardanaencrypt.py script.
The generated passwords are stored in Ansible inputs generated by the configuration processor and also in the persistent state information maintained by the configuration processor.
10.2 Protecting sensitive data on the Cloud Lifecycle Manager #
There are a number of mechanisms that can be used to protect sensitive data such as passwords, some Ansible inputs, and the SSH key used by Ansible on the Cloud Lifecycle Manager. See the installation documents for details. Remember to guard against exposure of your environment variables, for example through shoulder surfing.
The installation documents include instructions that show how to encrypt your data using the ardanaencrypt.py script. You may want to change the encryption keys used to protect your sensitive data in the future; the following shows you how:
SSH keys - Run the command below to change the passphrase used to protect the key:
ssh-keygen -f id_rsa -p
configuration processor Key - If you wish to change an encryption password that you have already used when running the configuration processor then enter the existing password at the first prompt and the new password at the second prompt when running the configuration processor playbook. See Book “Deployment Guide using Cloud Lifecycle Manager”, Chapter 24 “Installing Mid-scale and Entry-scale KVM” for more details.
IPMI passwords, if encrypted with ardanaencrypt.py - Rerun the utility, specifying a new encryption key when prompted. You will need to enter the plain-text passwords at the password prompt.
10.3 Interacting with Encrypted Files #
Once you have enabled encryption in your environment you may have a need to interact with these encrypted files at a later time. This section will show you how.
ardanaencrypt.py script password encryption
If you used the ardanaencrypt.py script to encrypt your IPMI or other passwords and need to view them later, you can do so with these steps.
You will want to ensure that the
ARDANA_USER_PASSWORD_ENCRYPT_KEY environment variable is set
prior to running these commands:
export ARDANA_USER_PASSWORD_ENCRYPT_KEY="<encryption_key>"
To view an encrypted password, use the command below, which will prompt you for the encrypted password value. It will then output the decrypted value:
./ardanaencrypt.py -d
Configuration processor encryption key
If you have used the encryption options available with the configuration processor, which uses Ansible vault, you can interact with the encrypted files using the following commands. Each command will prompt you for the password you used when setting up encryption initially.
To view an encrypted file in read-only mode, use this command:
ansible-vault view <filename>
To edit an encrypted file, use this command. This allows you to edit a decrypted version of the file without the need to decrypt and re-encrypt it:
ansible-vault edit <filename>
For other available commands, use the help file:
ansible-vault -h
11 Encryption of Ephemeral Volumes #
By default, ephemeral volumes are not encrypted. If you wish to enable this feature, you should use the following steps.
11.1 Enabling ephemeral volume encryption #
Before deploying the Compute nodes, you will need to change the disk configuration to create a new volume-group which will be used for your ephemeral disks. To do this, follow these steps:
Log in to the Cloud Lifecycle Manager.
Add details about the volume-group you will be using for your encrypted volumes. You have two options for this, you can either create a new volume-group or add the details for an already existing volume-group.
To create a new volume-group, add the following lines to your Compute disk configuration file. The location of the Compute disk configuration file is ~/openstack/my_cloud/definition/data/disks_compute.yml.

name: vg-comp
physical-volumes:
  - /dev/sdb

To utilize an existing volume-group, you can add the following lines to your nova.conf file, using the name of your volume-group:

[libvirt]
images_type = lvm
images_volume_group = <volume_group_name>

Note: The requirement here is to have free space available on a volume-group. The correct disk to use and the name for the volume group will depend on your environment's needs.

Modify the nova.conf file for the Compute and API nodes. Verify that the following entries exist; if they do not, add them and then restart the nova-compute and nova-api services:

[libvirt]
images_type = lvm
images_volume_group = vg-comp

[ephemeral_storage_encryption]
key_size = 256
cipher = aes-xts-plain64
enabled = True

[keymgr]
api_class = nova.keymgr.barbican.BarbicanKeyManager

[barbican]
endpoint_template = https://192.168.245.9:9311/v1
To restart the services, use the following commands:
sudo systemctl restart nova-compute
sudo systemctl restart nova-api
Assign the role in keystone using the CLI tool. Using the openstack client, you can assign the user the key-manager:creator role for the project.

Boot an instance with an ephemeral disk and verify that the disk is encrypted. Once the instance is active, it is possible to check on the Compute node whether the ephemeral disk is encrypted.

SSH into the Compute node, then run the following commands:

sudo dmsetup status
cryptsetup -v status <name_of_ephemeral_disk>
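For the role assignment above, a minimal sketch; the role name creator matches barbican's default policy, and the demo user and project names are placeholders:

# Grant the barbican creator role so the user can request encryption keys
openstack role add --user demo --project demo creator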
12 Refining Access Control with AppArmor #
AppArmor is a Mandatory Access Control (MAC) system as opposed to a discretionary access control system. It is a kernel-level security module for Linux that controls access to low-level resources based on rights granted via policies to a program rather than to a user role. It enforces rules at the lowest software layer (the kernel level) preventing software from circumventing resource restrictions that reside at levels above the kernel. With AppArmor, the final gatekeeper is closest to the hardware.
Controlling resource access per application versus per user role allows you to enforce rules based on specifically what a program can do versus trying to create user roles that are broad enough yet specific enough to apply to a group of users. In addition, it prevents the trap of having to predict all possible vulnerabilities in order to be secure.
AppArmor uses a hybrid of whitelisting and blacklisting rules, and its security policies can cascade, permitting inheritance from different or more general policies. Policies are enforced on a per-process basis.
AppArmor also lets you tie a process to a CPU core if you want, and set process priority.
AppArmor profiles are loaded into the kernel, typically on boot. They can run in either enforcement or complain modes. In enforcement mode, the policy is enforced and policy violation attempts are reported. In complain mode, policy violation attempts are reported but not prevented.
12.1 AppArmor in SUSE OpenStack Cloud 9 #
At this time, AppArmor is not enabled by default in SUSE OpenStack Cloud 9. However, we recommend enabling it for key virtualization processes on compute nodes. For more information, see https://documentation.suse.com/sles/15-SP1/single-html/SLES-security/#part-apparmor.
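To see which profiles are loaded on a node and in which mode, and to move a profile between modes while tuning it, the standard AppArmor utilities can be used. A sketch, assuming the apparmor-utils package is installed; the libvirtd profile path is an example:

# List loaded profiles and whether they are in enforce or complain mode
sudo aa-status
# Put a profile into complain mode while developing or tuning it
sudo aa-complain /etc/apparmor.d/usr.sbin.libvirtd
# Return it to enforce mode once the profile is stable
sudo aa-enforce /etc/apparmor.d/usr.sbin.libvirtd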
13 Data at Rest Encryption #
The data at rest encryption features in SUSE OpenStack Cloud 9 include the barbican key management service for safely storing encryption keys, and cinder volume encryption. This topic explains how to configure a back end for barbican key storage, and how to configure cinder volumes to be encrypted.
The barbican service in SUSE OpenStack Cloud 9 supports two types of back ends for safely storing encryption keys:
A native database back end
An HSM device (KMIP + Micro Focus ESKM)
Configuring the key management back-end key store
Using the Cloud Lifecycle Manager reconfigure playbook, you can configure one of two back ends for the barbican key management service:
Native database: This is the default configuration in SUSE OpenStack Cloud 9.
KMIP + Atalla ESKM: For a KMIP device, an SSL client certificate is needed as HSM devices generally require two-way SSL for security reasons. You will need a client certificate, a client private key and client root certificate authority recognized by your HSM device.
13.1 Configuring KMIP and ESKM #
To configure KMIP + Atalla ESKM in place of the default database, begin by providing certificate information by modifying the sample configuration file, barbican_kmip_plugin_config_sample.yml, on the Cloud Lifecycle Manager node:

~/openstack/ardana/ansible/roles/KEYMGR-API/files/samples/barbican_kmip_plugin_config_sample.yml
Copy this file to a temporary directory such as /tmp.
Edit the file to provide either client certificates as absolute file paths, as shown below, or by pasting certificate and key content directly into the file.
Note: File paths take precedence over content variables if both are provided.
To set the file path variables, open kmip_plugin_certs.yml for editing and set the paths to the cert files:

vi /tmp/kmip_plugin_certs.yml

# File paths take precedence over cert content if both are provided.
# Here file path refers to the local filesystem path where ansible is
# executed.
client_cert_file_path: /path/to/cert/file
client_key_file_path: /path/to/key/file
client_cacert_file_path: /path/to/cacert/file
Alternatively, set the content variables by opening /tmp/kmip_plugin_certs.yml and copying the certificates and keys directly into the file.

vi /tmp/kmip_plugin_certs.yml

# Following are samples you need to replace with your
# own content here or via the file path approach mentioned above.
client_cert_content: |
  -----BEGIN CERTIFICATE-----
  MIID0jCCArqgAwIBAgICAKQwDQYJKoZIhvcNAQELBQAwgZQxCzAJBgNVBAYTAlVT
  MQswCQYDVQQIEwJDTzEUMBIGA1UEBxMLRnQuIENvbGxpbnMxGDAWBgNVBAoTD0hl
  ...
  d2xldHQgUGFja2FyZDEMMAoGA1UECxMDQ1RMMRYwFAYDVQQDFA1LTUlQX0xvY2Fs
  L7x0qB6Zaf3IBkOZqf5bMfAQoKfxww==
  -----END CERTIFICATE-----

client_key_content: |
  -----BEGIN RSA PRIVATE KEY-----
  MIIEowIBAAKCAQEArjYVZzdsSMsk520UD1E94jl0/AZGLlsAB152dEP5E9C3mXzQ
  ZYvfApMh8PFc53gZwLBCb4joy1r8mZj/e7CwCUuo1cJHR9xnhwdK3RLeRbU3dfW8
  ...
  98DmYxBio8+wQWQdiAPRRthtnvhSWL67oYACPwvWUJJ+D18HfpWCEgCmBU3a8ZHc
  AaW8rRXtMZzuujGgAbA1hpf5z1lHuiG/X7/XMDVGiRALMyBbHV57
  -----END RSA PRIVATE KEY-----

client_cacert_content: |
  -----BEGIN CERTIFICATE-----
  MIIEmjCCA4KgAwIBAgIBADANBgkqhkiG9w0BAQsFADCBlDELMAkGA1UEBhMCVVMx
  CzAJBgNVBAgTAkNPMRQwEgYDVQQHEwtGdC4gQ29sbGluczEYMBYGA1UEChMPSGV3
  ...
  FAimEB/a2E+A0oxwuHmhMg0kOpDuXIWn4BW+Z6z5h1j3PFyg/CZ548Fz0XOgvXC7
  Ejpkd+5R+24HloruUV1R2EYvmlr8UMFX80og11u+
  -----END CERTIFICATE-----

Provide the certificate information to the barbican service using the barbican-reconfigure.yml playbook:

cd ~/openstack/ardana/ansible
ansible-playbook -i hosts/verb_hosts barbican-reconfigure.yml -e@/tmp/kmip_plugin_certs.yml
Provide HSM connection credentials for the barbican service. In this step, provide the KMIP plug-in connection details to the barbican service.

Open the file barbican_deploy_config.yml found here:

~/openstack/ardana/ansible/roles/barbican-common/vars/barbican_deploy_config.yml

Set the value of use_kmip_secretstore_plugin to True to use the KMIP plug-in, or False to use the default secret store plugin (store_crypto).

Next, add the KMIP client connection credentials and the KMIP server hostname and port to barbican_deploy_config.yml:

#######################################################################
#################### KMIP Plugin Configuration Section ################
#######################################################################
# Flag to reflect whether KMIP plugin is to be used as back end for
# storing secrets
use_kmip_secretstore_plugin: True

# Note: Connection username needs to match with 'Common Name' provided
# in client cert request (CSR).
barbican_kmip_username: userName
barbican_kmip_password: password
barbican_kmip_port: 1234
barbican_kmip_host: 111.222.333.444
Commit the changes to git:

cd ~/openstack/ardana/ansible
git add -A
git commit -m "My config"

and run the barbican-reconfigure.yml playbook in the deployment area:

ansible-playbook -i hosts/localhost ready-deployment.yml
cd ~/scratch/ansible/next/ardana/ansible
ansible-playbook -i hosts/verb_hosts barbican-reconfigure.yml
13.2 Configuring Cinder Volumes for Encryption #
The data-at-rest encryption model in SUSE OpenStack Cloud provides support for encrypting cinder volumes (Volume Encryption). These encrypted volumes are protected with an encryption key that can be stored in an HSM appliance.
Assuming the barbican and cinder services have been installed, you can configure a cinder volume type for encryption. Doing so will create a new cinder volume type, "LUKS," that can be selected when creating a new volume. Such volumes will be encrypted using a 256-bit AES key:
source ~/service.osrc
openstack role add --user admin --project admin cinder_admin
openstack volume type create LUKS
openstack volume type create \
  --cipher aes-xts-plain64 --key_size 256 --control_location \
  front-end LUKS nova.volume.encryptors.luks.LuksEncryptor
+--------------------------------------+-------------------------------------------+-----------------+----------+------------------+
| Volume Type ID                       | Provider                                  | Cipher          | Key Size | Control Location |
+--------------------------------------+-------------------------------------------+-----------------+----------+------------------+
| 99ed804b-7ed9-41a5-9c5e-e2002e9f9bb4 | nova.volume.encryptors.luks.LuksEncryptor | aes-xts-plain64 | 256      | front-end        |
+--------------------------------------+-------------------------------------------+-----------------+----------+------------------+
You should now be able to create a new volume with the type LUKS, which will request a new key from barbican. Once created, you can attach the new volume to an instance:
openstack volume create --size 1 --type LUKS --availability-zone nova testVolumeEncrypted
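Once the volume is available, it can be attached like any other volume; for example (the instance name is a placeholder):

# Attach the encrypted volume to a running instance
openstack server add volume myInstance testVolumeEncrypted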
The volume list (openstack volume show with the volume ID) should
now show that you have a new volume and that it is encrypted.
openstack volume show 2ebf610b-98bf-4914-aee1-9b866d7b1897
+---------------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2016-03-04T00:17:45.000000 |
| description | None |
| encrypted | True |
| id | 2ebf610b-98bf-4914-aee1-9b866d7b1897 |
| metadata | {} |
| migration_status | None |
| multiattach | False |
| name | testVolumeEncrypted |
| os-vol-host-attr:host | ha-volume-manager@lvm-1#LVM_iSCSI |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 5f3b093c603f4dc8bc377d04e5385d42 |
| os-volume-replication:driver_data | None |
| os-volume-replication:extended_status | None |
| replication_status | disabled |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| user_id | 3bdde5491e174a8aafcbc5a88e01cac7 |
| volume_type | LUKS |
+---------------------------------------+--------------------------------------+

When using an ESKM appliance as the back end, you can also confirm that key operations are going to your HSM via its admin portal.
UUID                                  Owner     Object Type   State      Creation Date
8d54f41d-91dd-4f5e-bcfe-964af8213a8c  barbican  SymmetricKey  PreActive  2016-03-02 13:58:58
13.3 For More Information #
For more information on data at rest security with ESKM, see Data Security Protection for SUSE OpenStack Cloud.
14 glance-API Rate Limit (CVE-2016-8611) #
Within the glance service, calls to the POST method within v1 or v2/images
create records in queued status. No limit is enforced
within the glance API on the number of images a single tenant may
create. The only limit is on the total amount of storage a single user may
consume. More information about this vulnerability is at https://nvd.nist.gov/vuln/detail/CVE-2016-8611
Therefore a user could maliciously or unintentionally fill multiple database tables (images, image_properties, image_tags, image_members) with useless image records, thereby causing a denial of service by lengthening transaction response times in the glance database.
This issue can be mitigated with a rate limiter to the glance-api haproxy
endpoints. Only POST requests are affected. Instance launch is not impacted.
The number of images that can be created in a 60 minute window is limited.
The default value is 600 connections per 60 minute window which should cover
most normal glance-api use cases. When the number of connections has been
exceeded, the user is locked out for the duration of the 60 minute
interval. The value for the number of connections per 60 minute period can be
overridden by editing the control_plane.yml file.
The following steps will implement the rate limiter patch.
Edit control_plane.yml, adding the glance_rate_limit entry shown below. Change the glance_rate_limit value if the default of 600 connections does not fit your situation.

- glance-api:
    ha_mode: false
    glance_stores: 'file'
    glance_default_store: 'file'
    glance_rate_limit: LIMIT

Commit the change to git.

ardana > git add -A
ardana > git commit -m "Change glance rate limit"

Run the playbooks.

ardana > cd ~/openstack/ardana/ansible
ardana > ansible-playbook -i hosts/localhost config-processor-run.yml
ardana > ansible-playbook -i hosts/localhost ready-deployment.yml
ardana > cd ~/scratch/ansible/next/ardana/ansible
ardana > ansible-playbook -i hosts/verb_hosts FND-CLU-reconfigure.yml
Access attempts are logged in
/var/log/haproxy.log. Users who exceed the limit will
see a message such as:
429 Too Many Requests
You have sent too many requests in a given amount of time.

HTTP/1.0 429 Too Many Requests
Cache-Control: no-cache
Connection: close
Content-Type: text/html
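To get a rough sense of how often the limiter is triggering, you can count 429 responses in the HAProxy log named above; this is only a sketch, since the exact log format depends on your HAProxy configuration:

sudo grep -c ' 429 ' /var/log/haproxy.log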
15 Security Audit Logs #
15.1 The need for auditing #
Enterprises need the ability to audit and monitor workflows and data in accordance with their strict corporate, industry or governmental policies and compliance requirements such as FIPS-140-2, PCI-DSS, HIPAA, SOX, or ISO. To meet this need, SUSE OpenStack Cloud supports CADF (Cloud Auditing Data Federation)-compliant security audit logs that can easily be integrated with your organization's Security Information and Event Management (SIEM) tools. Such auditing is valuable not only to meet regulatory compliance requirements, but also for correlating threat forensics.
Note that logs from existing OpenStack services can also be used for auditing purposes, even though they are not in a consistent, audit-friendly CADF format today. All logs can easily be integrated with a SIEM tool such as ArcSight or Splunk.
15.2 Audit middleware #
Audit middleware is python middleware logic that addresses the aforementioned logging shortcomings. Audit middleware constructs audit event data in easily consumed CADF format. This data can be mined to answer critical questions about activities over REST resources such as who made the request, when, why, and so forth.
Audit middleware supports delivery of audit data via the Oslo messaging notifier feature. Each service is configured to route data to an audit-specific log file.
The following are key aspects of auditing support in SUSE OpenStack Cloud 9:
Auditing is disabled by default and can be enabled only after SUSE OpenStack Cloud installation.
Auditing support has been added to eight SUSE OpenStack Cloud services (nova, cinder, glance, keystone, neutron, heat, barbican, and ceilometer).
Auditing has been added for interactions where REST API calls are invoked.
All audit events are recorded in a service-specific audit log file.
Auditing configuration is centrally managed and indicates for which services auditing is currently disabled or enabled.
Auditing can be enabled or disabled on a per-service basis.
15.3 Centralized auditing configuration #
In SUSE OpenStack Cloud, all auditing configuration is centrally managed and controlled via input model YAML files on the Cloud Lifecycle Manager node. The settings are configured in the file ~/openstack/my_cloud/definition/cloudConfig.yml in an audit-settings section; example settings are shown after the following table.
| Key | Value (default) | Type | Description | Expected value(s) | Comments |
|---|---|---|---|---|---|
| default | disabled | String | Flag to globally enable or disable auditing for all services. | disabled, enabled | A service's auditing behavior is determined by this default key value unless the service is listed explicitly in the enabled-services or disabled-services list. |
| enabled-services | [] (empty list) | yaml list | Setting to explicitly enable auditing for listed services regardless of the default flag setting. | nova, cinder, glance, keystone, neutron, heat, barbican, ceilometer | To enable a specific service, either add the service name to the enabled-services list (when default is set to disabled) or remove it from the disabled-services list. If a service name is present in both enabled-services and disabled-services, auditing will be enabled for that service. |
| disabled-services | nova, barbican, keystone, cinder, ceilometer, neutron | yaml list | Setting to explicitly disable auditing for listed services regardless of the default flag setting. | nova, cinder, glance, keystone, neutron, heat, barbican, ceilometer | To disable a specific service, either add the service name to the disabled-services list (when default is set to enabled) or remove it from the enabled-services list. |
Audit settings in cloudConfig.yml with default set to
disabled and services selectively enabled:
product:
version: 2
cloud:
....
....
# Disk space needs to be allocated to the audit directory before enabling
# auditing.
# keystone and nova have auditing enabled
# cinder, ceilometer, glance, neutron, heat, barbican have auditing disabled
audit-settings:
audit-dir: /var/audit
default: disabled
enabled-services:
- keystone
- nova
disabled-services:
- cinder
- ceilometer
Audit setting in cloudConfig.yml with default set to
enabled and services selectively disabled:
product:
version: 2
cloud:
....
....
# Disk space needs to be allocated to the audit directory before enabling
# auditing.
# keystone, nova, glance, neutron, heat, barbican have auditing enabled
# cinder, ceilometer have auditing disabled
audit-settings:
audit-dir: /var/audit
default: enabled
enabled-services:
- keystone
- nova
disabled-services:
- cinder
- ceilometer

Because auditing is disabled by default, you will need to follow the steps below to enable it:
Book “Operations Guide CLM”, Chapter 13 “Managing Monitoring, Logging, and Usage Reporting”, Section 13.2 “Centralized Logging Service”, Section 13.2.7 “Audit Logging Overview”, Section 13.2.7.1 “Audit Logging Checklist”
Book “Operations Guide CLM”, Chapter 13 “Managing Monitoring, Logging, and Usage Reporting”, Section 13.2 “Centralized Logging Service”, Section 13.2.7 “Audit Logging Overview”, Section 13.2.7.2 “Enable Audit Logging”
For instructions on backing up and restoring audit logs, see: Book “Operations Guide CLM”, Chapter 17 “Backup and Restore”, Section 17.3 “Manual Backup and Restore Procedures”, Section 17.3.4 “Audit Log Backup and Restore” .