Load Balancer: Octavia Administration

Removing load balancers

The following procedures demonstrate how to delete a load balancer that is in the ERROR, PENDING_CREATE, or PENDING_DELETE state.
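
Before editing the database directly, it is usually worth retrying a normal (cascading) delete once the underlying problem has been addressed; the manual procedures below are a last resort for records the API refuses to remove (a load balancer stuck in a PENDING_* state, for example, is treated as immutable). A minimal attempt, using the Octavia example ID from Procedure 1.3:

    tux > openstack loadbalancer delete --cascade d8ac085d-e077-4af2-b47a-bdec0c162928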

Procedure 1.2. Manually Deleting Load Balancers Created with Neutron LBaaSv2 (in an Upgrade/Migration Scenario)

  1. Query the Neutron service for the loadbalancer ID:

    tux > neutron lbaas-loadbalancer-list
    neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
    +--------------------------------------+---------+----------------------------------+--------------+---------------------+----------+
    | id                                   | name    | tenant_id                        | vip_address  | provisioning_status | provider |
    +--------------------------------------+---------+----------------------------------+--------------+---------------------+----------+
    | 7be4e4ab-e9c6-4a57-b767-da9af5ba7405 | test-lb | d62a1510b0f54b5693566fb8afeb5e33 | 192.168.1.10 | ERROR               | haproxy  |
    +--------------------------------------+---------+----------------------------------+--------------+---------------------+----------+
            
  2. Connect to the neutron database:

    Important

    The default database name depends on the lifecycle manager: Ardana uses ovs_neutron, while Crowbar uses neutron.

    Ardana:

    mysql> use ovs_neutron
            

    Crowbar:

    mysql> use neutron
            
  3. Get the pools and health monitors associated with the loadbalancer:

    mysql> select id, healthmonitor_id, loadbalancer_id from lbaas_pools where loadbalancer_id = '7be4e4ab-e9c6-4a57-b767-da9af5ba7405';
    +--------------------------------------+--------------------------------------+--------------------------------------+
    | id                                   | healthmonitor_id                     | loadbalancer_id                      |
    +--------------------------------------+--------------------------------------+--------------------------------------+
    | 26c0384b-fc76-4943-83e5-9de40dd1c78c | 323a3c4b-8083-41e1-b1d9-04e1fef1a331 | 7be4e4ab-e9c6-4a57-b767-da9af5ba7405 |
    +--------------------------------------+--------------------------------------+--------------------------------------+
            
  4. Get the members associated with the pool:

    mysql> select id, pool_id from lbaas_members where pool_id = '26c0384b-fc76-4943-83e5-9de40dd1c78c';
    +--------------------------------------+--------------------------------------+
    | id                                   | pool_id                              |
    +--------------------------------------+--------------------------------------+
    | 6730f6c1-634c-4371-9df5-1a880662acc9 | 26c0384b-fc76-4943-83e5-9de40dd1c78c |
    | 06f0cfc9-379a-4e3d-ab31-cdba1580afc2 | 26c0384b-fc76-4943-83e5-9de40dd1c78c |
    +--------------------------------------+--------------------------------------+
            
  5. Delete the pool members:

    mysql> delete from lbaas_members where id = '6730f6c1-634c-4371-9df5-1a880662acc9';
    mysql> delete from lbaas_members where id = '06f0cfc9-379a-4e3d-ab31-cdba1580afc2';
            
  6. Find and delete the listener associated with the loadbalancer:

    mysql> select id, loadbalancer_id, default_pool_id from lbaas_listeners where loadbalancer_id = '7be4e4ab-e9c6-4a57-b767-da9af5ba7405';
    +--------------------------------------+--------------------------------------+--------------------------------------+
    | id                                   | loadbalancer_id                      | default_pool_id                      |
    +--------------------------------------+--------------------------------------+--------------------------------------+
    | 3283f589-8464-43b3-96e0-399377642e0a | 7be4e4ab-e9c6-4a57-b767-da9af5ba7405 | 26c0384b-fc76-4943-83e5-9de40dd1c78c |
    +--------------------------------------+--------------------------------------+--------------------------------------+
    mysql> delete from lbaas_listeners where id = '3283f589-8464-43b3-96e0-399377642e0a';
            
  7. Delete the pool associated with the loadbalancer:

    mysql> delete from lbaas_pools where id = '26c0384b-fc76-4943-83e5-9de40dd1c78c';
            
  8. Delete the health monitor associated with the pool:

    mysql> delete from lbaas_healthmonitors where id = '323a3c4b-8083-41e1-b1d9-04e1fef1a331';
            
  9. Delete the loadbalancer statistics and the loadbalancer record itself (a script consolidating steps 3 through 9 is sketched after this procedure):

    mysql> delete from lbaas_loadbalancer_statistics where loadbalancer_id = '7be4e4ab-e9c6-4a57-b767-da9af5ba7405';
    mysql> delete from lbaas_loadbalancers where id = '7be4e4ab-e9c6-4a57-b767-da9af5ba7405';
            

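The SQL from steps 3 through 9 can be consolidated into a small script, analogous to the one at the end of Procedure 1.3. The following is an illustrative sketch only, not shipped with the product: it assumes the Ardana database name ovs_neutron (substitute neutron on Crowbar), derives the same IDs that the steps above query by hand, and expects the load balancer to look like the example above (at least one pool, each with a health monitor). Review what it selects before letting it delete anything.

    #!/bin/bash
    # Illustrative sketch: remove one neutron-lbaasv2 load balancer and its
    # dependent rows, mirroring steps 3-9 of the procedure above.

    if (( $# != 1 )); then
      echo "Please specify a loadbalancer ID"
      exit 1
    fi

    LB_ID=$1
    DB=ovs_neutron   # Ardana default; use 'neutron' on Crowbar

    set -u -e -x

    # Pool and health monitor IDs attached to the load balancer (step 3)
    readarray -t POOLS < <(mysql ${DB} --skip-column-names --execute \
      "select id from lbaas_pools where loadbalancer_id = '${LB_ID}';")
    readarray -t HEALTHMONITORS < <(mysql ${DB} --skip-column-names --execute \
      "select healthmonitor_id from lbaas_pools where loadbalancer_id = '${LB_ID}' and healthmonitor_id is not null;")

    # Members (steps 4-5), listeners (step 6), and pools (step 7)
    for p in "${POOLS[@]}"; do
      mysql ${DB} --execute "delete from lbaas_members where pool_id = '${p}';"
    done
    mysql ${DB} --execute "delete from lbaas_listeners where loadbalancer_id = '${LB_ID}';"
    mysql ${DB} --execute "delete from lbaas_pools where loadbalancer_id = '${LB_ID}';"

    # Health monitors (step 8)
    for hm in "${HEALTHMONITORS[@]}"; do
      mysql ${DB} --execute "delete from lbaas_healthmonitors where id = '${hm}';"
    done

    # Statistics and the load balancer record itself (step 9)
    mysql ${DB} --execute "delete from lbaas_loadbalancer_statistics where loadbalancer_id = '${LB_ID}';"
    mysql ${DB} --execute "delete from lbaas_loadbalancers where id = '${LB_ID}';"
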
Procedure 1.3. Manually Deleting Load Balancers Created with Octavia

  1. Query the Octavia service for the loadbalancer ID:

    tux > openstack loadbalancer list --column id --column name --column provisioning_status
    +--------------------------------------+---------+---------------------+
    | id                                   | name    | provisioning_status |
    +--------------------------------------+---------+---------------------+
    | d8ac085d-e077-4af2-b47a-bdec0c162928 | test-lb | ERROR               |
    +--------------------------------------+---------+---------------------+
            
  2. Query the Octavia service for the amphora IDs (this example uses the ACTIVE/STANDBY topology with one spare amphora):

    tux > openstack loadbalancer amphora list
    +--------------------------------------+--------------------------------------+-----------+--------+---------------+-------------+
    | id                                   | loadbalancer_id                      | status    | role   | lb_network_ip | ha_ip       |
    +--------------------------------------+--------------------------------------+-----------+--------+---------------+-------------+
    | 6dc66d41-e4b6-4c33-945d-563f8b26e675 | d8ac085d-e077-4af2-b47a-bdec0c162928 | ALLOCATED | BACKUP | 172.30.1.7    | 192.168.1.8 |
    | 1b195602-3b14-4352-b355-5c4a70e200cf | d8ac085d-e077-4af2-b47a-bdec0c162928 | ALLOCATED | MASTER | 172.30.1.6    | 192.168.1.8 |
    | b2ee14df-8ac6-4bb0-a8d3-3f378dbc2509 | None                                 | READY     | None   | 172.30.1.20   | None        |
    +--------------------------------------+--------------------------------------+-----------+--------+---------------+-------------+
            
  3. Query the Octavia service for the loadbalancer pools:

    tux > openstack loadbalancer pool list
    +--------------------------------------+-----------+----------------------------------+---------------------+----------+--------------+----------------+
    | id                                   | name      | project_id                       | provisioning_status | protocol | lb_algorithm | admin_state_up |
    +--------------------------------------+-----------+----------------------------------+---------------------+----------+--------------+----------------+
    | 39c4c791-6e66-4dd5-9b80-14ea11152bb5 | test-pool | 86fba765e67f430b83437f2f25225b65 | ACTIVE              | TCP      | ROUND_ROBIN  | True           |
    +--------------------------------------+-----------+----------------------------------+---------------------+----------+--------------+----------------+
            
  4. Connect to the octavia database:

    mysql> use octavia
            
  5. Delete any listeners, pools, health monitors, and members from the load balancer:

    mysql> delete from listener where load_balancer_id = 'd8ac085d-e077-4af2-b47a-bdec0c162928';
    mysql> delete from health_monitor where pool_id = '39c4c791-6e66-4dd5-9b80-14ea11152bb5';
    mysql> delete from member where pool_id = '39c4c791-6e66-4dd5-9b80-14ea11152bb5';
    mysql> delete from pool where load_balancer_id = 'd8ac085d-e077-4af2-b47a-bdec0c162928';
            
  6. Delete the amphora health records and mark the amphorae as DELETED in the database:

    mysql> delete from amphora_health where amphora_id = '6dc66d41-e4b6-4c33-945d-563f8b26e675';
    mysql> update amphora set status = 'DELETED' where id = '6dc66d41-e4b6-4c33-945d-563f8b26e675';
    mysql> delete from amphora_health where amphora_id = '1b195602-3b14-4352-b355-5c4a70e200cf';
    mysql> update amphora set status = 'DELETED' where id = '1b195602-3b14-4352-b355-5c4a70e200cf';
            
  7. Mark the load balancer as DELETED:

    mysql> update load_balancer set provisioning_status = 'DELETED' where id = 'd8ac085d-e077-4af2-b47a-bdec0c162928';
            
  8. The following script automates steps 2 through 7 for a given load balancer ID (a usage sketch follows the script):

    #!/bin/bash

    if (( $# != 1 )); then
      echo "Please specify a loadbalancer ID"
      exit 1
    fi

    LB_ID=$1

    set -u -e -x

    # Amphorae that belong to the load balancer (step 2)
    readarray -t AMPHORAE < <(openstack loadbalancer amphora list \
      --format value \
      --column id \
      --column loadbalancer_id \
      | grep "${LB_ID}" \
      | cut -d ' ' -f 1)

    # Pools attached to the load balancer (step 3)
    readarray -t POOLS < <(openstack loadbalancer show "${LB_ID}" \
      --format value \
      --column pools)

    # Remove listeners, health monitors, members, and pools (step 5)
    mysql octavia --execute "delete from listener where load_balancer_id = '${LB_ID}';"
    for p in "${POOLS[@]}"; do
      mysql octavia --execute "delete from health_monitor where pool_id = '${p}';"
      mysql octavia --execute "delete from member where pool_id = '${p}';"
    done
    mysql octavia --execute "delete from pool where load_balancer_id = '${LB_ID}';"

    # Clean up the amphorae (step 6)
    for a in "${AMPHORAE[@]}"; do
      mysql octavia --execute "delete from amphora_health where amphora_id = '${a}';"
      mysql octavia --execute "update amphora set status = 'DELETED' where id = '${a}';"
    done

    # Mark the load balancer itself as DELETED (step 7)
    mysql octavia --execute "update load_balancer set provisioning_status = 'DELETED' where id = '${LB_ID}';"
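
    A usage sketch (the script name delete_stale_lb.sh is arbitrary): make the script executable, pass it the ID of the broken load balancer, and check the listing afterwards. The load balancer should then either be gone from the output or show a provisioning_status of DELETED.

    tux > chmod +x delete_stale_lb.sh
    tux > ./delete_stale_lb.sh d8ac085d-e077-4af2-b47a-bdec0c162928
    tux > openstack loadbalancer list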